Snapshot tests (aka golden tests) are similar to unit tests except that the expected result is stored in a separate file that is managed by testthat. Snapshot tests are useful when the expected value is large, or when the intent of the code is something that can only be verified by a human (e.g. that an error message is helpful). Learn more in vignette("snapshotting").

• expect_snapshot() captures all messages, warnings, errors, and output from code.

• expect_snapshot_output() captures just output printed to the console.

• expect_snapshot_error() captures just error messages.

• expect_snapshot_value() captures the return value.

(These functions supersede verify_output(), expect_known_output(), expect_known_value(), and expect_known_hash().)
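As a sketch of how the first of these expectations might appear in a test file (the greet() function and its message are invented for illustration):

```r
# Hypothetical example, e.g. in tests/testthat/test-greet.R
greet <- function(name) message("Hello, ", name, "!")

test_that("greet() produces a friendly message", {
  # On the first run this records the message in a snapshot file;
  # on later runs the captured message is compared to the recorded one.
  expect_snapshot(greet("Hadley"))
})
```

Because expect_snapshot() captures messages, warnings, errors, and printed output alike, one call usually suffices for code with mixed side effects.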

expect_snapshot(x, cran = FALSE, error = FALSE)

expect_snapshot_output(x, cran = FALSE)

expect_snapshot_error(x, class = "error", cran = FALSE)

expect_snapshot_value(
  x,
  style = c("json", "json2", "deparse", "serialize"),
  cran = FALSE,
  ...
)

Arguments

x Code to evaluate.

cran Should these expectations be verified on CRAN? By default, they are not, because snapshot tests tend to be fragile, often relying on minor details of dependencies.

error Do you expect the code to throw an error? The expectation will fail (even on CRAN) if an unexpected error is thrown or the expected error is not thrown.

class Expected class of condition, e.g. use error for errors, warning for warnings, message for messages. The expectation will always fail (even on CRAN) if a condition of this class isn't seen when executing x.

style Serialization style to use:

• json uses jsonlite::fromJSON() and jsonlite::toJSON(). This produces the simplest output, but only works for relatively simple objects.

• json2 uses jsonlite::serializeJSON() and jsonlite::unserializeJSON(), which are more verbose but work for a wider range of types.

• deparse uses deparse(), which generates a depiction of the object using R code.

• serialize uses serialize(), which produces a binary serialization of the object. This is all but guaranteed to work for any R object, but produces a completely opaque output.

... For expect_snapshot_value() only: passed on to waldo::compare() so you can control the details of the comparison.
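A sketch of how the style argument might be chosen for different kinds of values (the object and test name here are illustrative):

```r
test_that("computed summary is stable", {
  x <- list(a = 1:3, b = "text")

  # "json" is a good default for simple lists of atomic vectors:
  # the snapshot stays human readable.
  expect_snapshot_value(x, style = "json")

  # "serialize" handles arbitrary R objects (e.g. those containing
  # functions or environments), at the cost of an opaque snapshot
  # that cannot be meaningfully reviewed by eye.
  expect_snapshot_value(x, style = "serialize")
})
```

A reasonable rule of thumb is to prefer the most readable style that round-trips the object faithfully, since reviewability is much of the point of snapshots.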

Workflow

The first time that you run a snapshot expectation it will run x, capture the results, and record them in tests/testthat/snap/{test}.json. Each test file gets its own snapshot file, e.g. test-foo.R will get snap/foo.json.

It's important to review the JSON files and commit them to git. They are designed to be human readable, and you should always review new additions to ensure that the salient information has been captured. They should also be carefully reviewed in pull requests, to make sure that snapshots have been updated in the expected way.

On subsequent runs, the result of x will be compared to the value stored on disk. If it's different, the expectation will fail, and a new file snap/{test}.new.json will be created. If the change was deliberate, you can approve the change with snapshot_accept() and then the tests will pass the next time you run them.
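A typical review cycle at the console might look like the following (the test-file name "foo" is illustrative; exact snapshot_accept() arguments may vary by testthat version):

```r
# Run the tests; a deliberately changed snapshot fails the expectation
# and writes a snap/{test}.new.json file alongside the old one.
testthat::test_dir("tests/testthat")

# After reviewing the difference and confirming it is intended,
# accept the new snapshot so subsequent runs pass.
testthat::snapshot_accept("foo")
```

Keeping this accept step manual is what makes snapshot tests safe: a change only becomes the new expected value after a human has looked at it.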

Note that snapshotting can only work when executing a complete test file (with test_file(), test_dir(), or friends) because there's otherwise no way to figure out the snapshot path. If you run snapshot tests interactively, they'll just display the current value.