Snapshot tests (aka golden tests) are similar to unit tests except that the
expected result is stored in a separate file that is managed by testthat.
Snapshot tests are useful when the expected value is large, or when the
intent of the code is something that can only be verified by a human
(e.g. "this is a useful error message"). Learn more in
vignette("snapshotting").
- expect_snapshot() captures all messages, warnings, errors, and output from code.
- expect_snapshot_output() captures just output printed to the console.
- expect_snapshot_error() captures an error message and optionally checks its class.
- expect_snapshot_warning() captures a warning message and optionally checks its class.
- expect_snapshot_value() captures the return value.

(These functions supersede verify_output(), expect_known_output(),
expect_known_value(), and expect_known_hash().)
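For example, a minimal snapshot test might look like the sketch below. The test file and the greet() function are hypothetical, invented for illustration; only the testthat calls are real:

# tests/testthat/test-greet.R (hypothetical test file)
test_that("greet() produces a friendly message", {
  greet <- function(name) message("Hello, ", name, "!")

  # The message is recorded on the first run and compared
  # against the stored snapshot on every subsequent run.
  expect_snapshot(greet("Hadley"))
})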
Usage
expect_snapshot(
  x,
  cran = FALSE,
  error = FALSE,
  transform = NULL,
  variant = NULL,
  cnd_class = FALSE
)

expect_snapshot_output(x, cran = FALSE, variant = NULL)

expect_snapshot_error(x, class = "error", cran = FALSE, variant = NULL)

expect_snapshot_warning(x, class = "warning", cran = FALSE, variant = NULL)

expect_snapshot_value(
  x,
  style = c("json", "json2", "deparse", "serialize"),
  cran = FALSE,
  tolerance = testthat_tolerance(),
  ...,
  variant = NULL
)
Arguments
- x
Code to evaluate.
- cran
Should these expectations be verified on CRAN? By default, they are not, because snapshot tests tend to be fragile: they often rely on minor details of dependencies.
- error
Do you expect the code to throw an error? The expectation will fail (even on CRAN) if an unexpected error is thrown or the expected error is not thrown.
- transform
Optionally, a function to scrub sensitive or stochastic text from the output. It should take a character vector of lines as input and return a modified character vector as output (see the sketch after this list).
- variant
If not NULL, results will be saved in _snaps/{variant}/{test}.md, so variant must be a single string of alphanumeric characters suitable for use as a directory name.

You can use variants to deal with cases where the snapshot output varies and you want to capture and test the variations. Common use cases include variations by operating system, R version, or version of a key dependency. Variants are an advanced feature: when you use them, you'll need to think carefully about your testing strategy to ensure that all important variants are covered by automated tests, and ensure that you have a way to get snapshot changes out of your CI system and back into the repo.
- cnd_class
Whether to include the class of messages, warnings, and errors in the snapshot. Only the most specific class is included, i.e. the first element of class(cnd).
- class
Class of expected error or warning. The expectation will always fail (even on CRAN) if an error of this class isn't seen when executing x.
- style
Serialization style to use:
  - json uses jsonlite::fromJSON() and jsonlite::toJSON(). This produces the simplest output but only works for relatively simple objects.
  - json2 uses jsonlite::serializeJSON() and jsonlite::unserializeJSON(), which are more verbose but work for a wider range of types.
  - deparse uses deparse(), which generates a depiction of the object using R code.
  - serialize produces a binary serialization of the object using serialize(). This is all but guaranteed to work for any R object, but produces a completely opaque serialization.
- tolerance
Numerical tolerance: any differences (in the sense of base::all.equal()) smaller than this value will be ignored. The default tolerance is sqrt(.Machine$double.eps), unless long doubles are not available, in which case the test is skipped.
- ...
For expect_snapshot_value() only, passed on to waldo::compare() so you can control the details of the comparison.
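As a rough illustration of transform and style: the scrub_tempdir() helper below is hypothetical, invented for this sketch; only the testthat functions and arguments are real.

# A transform function scrubs volatile text before it is snapshotted.
# scrub_tempdir() is a hypothetical helper, not part of testthat.
scrub_tempdir <- function(lines) {
  gsub(tempdir(), "<tempdir>", lines, fixed = TRUE)
}

test_that("file report is stable across machines", {
  expect_snapshot(
    cat("Writing to ", tempdir(), "\n", sep = ""),
    transform = scrub_tempdir
  )
})

# expect_snapshot_value() records the return value itself; tolerance
# makes the comparison robust to floating-point noise.
test_that("summary statistics are stable", {
  expect_snapshot_value(
    list(mean = mean(1:10), sd = sd(1:10)),
    style = "json2",
    tolerance = 1e-8
  )
})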
Workflow
The first time that you run a snapshot expectation it will run x,
capture the results, and record them in tests/testthat/_snaps/{test}.md.
Each test file gets its own snapshot file, e.g. test-foo.R will get
_snaps/foo.md.

It's important to review the snapshot files and commit them to git. They are designed to be human readable, and you should always review new additions to ensure that the salient information has been captured. They should also be carefully reviewed in pull requests, to make sure that snapshots have updated in the expected way.
On subsequent runs, the result of x will be compared to the value stored
on disk. If it's different, the expectation will fail, and a new file
_snaps/{test}.new.md will be created. If the change was deliberate,
you can approve the change with snapshot_accept() and then the tests will
pass the next time you run them.
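A sketch of that approval step, assuming a changed snapshot in test-foo.R (the file name is illustrative):

# Review the pending _snaps/foo.new.md diff interactively; I believe
# this requires the shiny and diffviewer packages to be installed.
testthat::snapshot_review("foo")

# Or, if you've already inspected the .new.md file by hand, accept
# (or discard) all pending snapshots for that test file:
testthat::snapshot_accept("foo")
testthat::snapshot_reject("foo")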
Note that snapshotting can only work when executing a complete test file
(with test_file(), test_dir(), or friends) because there's otherwise
no way to figure out the snapshot path. If you run snapshot tests
interactively, they'll just display the current value.
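So to actually record snapshots for the examples above, run the whole file (the path is illustrative):

# Runs every test in the file, so testthat can resolve the snapshot
# path (tests/testthat/_snaps/foo.md).
testthat::test_file("tests/testthat/test-foo.R")

# Evaluating an expect_snapshot() call at the console instead just
# prints what would be recorded, without touching _snaps/.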