Sometimes you have tests that you don’t want to run in certain circumstances. This vignette describes how to skip tests so that they aren’t run in undesired environments. Skipping is a relatively advanced topic because in most cases you want all your tests to run everywhere. The most common exceptions are:

  • You’re testing a web service that occasionally fails, and you don’t want to run the tests on CRAN. Or maybe the API requires authentication, and you can only run the tests when you’ve securely distributed some secrets.

  • You’re relying on features that not all operating systems possess, and want to make sure your code doesn’t run on a platform where it doesn’t work. This platform tends to be Windows, since, amongst other things, it lacks full UTF-8 support.

  • You’re writing your tests for multiple versions of R or multiple versions of a dependency and you want to skip when a feature isn’t available. You generally don’t need to skip tests if a suggested package is not installed. This is only needed in exceptional circumstances, e.g. when a package is not available on some operating system.

library(testthat)

Basics

testthat comes with a variety of helpers for the most common situations (a usage sketch follows the list):

  • skip_on_cran() skips tests on CRAN. This is useful for slow tests and tests that occasionally fail for reasons outside of your control.

  • skip_on_os() allows you to skip tests on a specific operating system. Generally, you should strive to avoid this as much as possible (so your code works the same on all platforms), but sometimes it’s just not possible.

  • skip_on_ci() skips tests on most continuous integration platforms (e.g. Travis, AppVeyor, GitHub Actions).
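For example, a test that exercises Unicode output and occasionally fails on CRAN might begin with a couple of these helpers (the test body here is purely illustrative):

test_that("box drawing uses unicode characters", {
  skip_on_cran()
  skip_on_os("windows")

  # U+2500 is a box-drawing character, a single character wide
  expect_equal(nchar("\u2500"), 1)
})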

You can also easily implement your own using either skip_if() or skip_if_not(), which both take an expression that should yield a single TRUE or FALSE.
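For example, you might skip slow tests unless an environment variable opts in (the variable name SLOW_TESTS below is hypothetical), or skip when the build of R lacks a needed capability:

test_that("simulation matches reference values", {
  # skip_if() skips when its condition is TRUE;
  # skip_if_not() skips when its condition is FALSE
  skip_if(Sys.getenv("SLOW_TESTS") == "", "SLOW_TESTS not set")
  skip_if_not(capabilities("long.double"))

  expect_equal(sqrt(2) ^ 2, 2)
})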

All reporters show which tests are skipped. As of testthat 3.0.0, ProgressReporter (used interactively) and CheckReporter (used inside R CMD check) also display a summary of skips across all tests. It looks something like this:

── Skipped tests  ───────────────────────────────────────────────────────
● No token (3)
● On CRAN (1)

You should keep an eye on this when developing interactively to make sure that you’re not accidentally skipping the wrong things.

Helpers

If you find yourself using the same skip_if()/skip_if_not() expression across multiple tests, it’s a good idea to create a helper function. This function should start with skip_ and live somewhere in your R/ directory.

skip_if_Tuesday <- function() {
  # $wday gives the day of the week as a number from 0-6,
  # starting from Sunday, so 2 is Tuesday
  if (as.POSIXlt(Sys.Date())$wday == 2) {
    skip("Not run on Tuesday")
  }
}
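You’d then call the helper at the top of each test that shouldn’t run on Tuesdays (the expectation below is just a placeholder):

test_that("weekday report renders", {
  skip_if_Tuesday()
  expect_s3_class(Sys.Date(), "Date")
})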

It’s important to test your skip helpers: if a helper skips more often than desired, the affected test code is never run, and the mistake is easy to miss. This is unlikely to happen locally (since you’ll see the skipped tests in the summary), but is quite possible in continuous integration.

For that reason, it’s a good idea to add a test that checks your skip is activated when you expect. Skips are a special type of condition, so you can test that they occur with expect_condition(). The basic code looks like this:

test_that("skips work", {
  # Test that a skip happens
  expect_error(skip("Hi"), class = "skip")

  # Test that a skip doesn't happen
  expect_error("Hi", NA, class = "skip")
})
#> Test passed 😀