Listing Tests

You can list all available tests:

$ tfb --list-tests

Running Tests

There are a number of options that can be specified:

# Run a verification for test beego
$ tfb --test beego --mode verify

# Run a test in debug mode, which skips verification and leaves the server up for testing endpoints
$ tfb --test beego --mode debug

# Run the default benchmark for the beego test
$ tfb --test beego

# Specify which test types are run during benchmark
$ tfb --test beego --type json
$ tfb --test beego --type db
$ tfb --test beego --type fortune

Note: The results directory must be removed after a test has run before that test can be run again.
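
For example, a minimal cleanup between runs might look like this (a sketch, assuming the default results location in the repository root):

# Remove prior results so the test can be run again
$ rm -rf results/

# Re-run the test
$ tfb --test beego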

Testing on both Windows and Linux

If your framework and platform can execute on both Windows and Linux, we encourage you to specify tests for both operating systems. This increases the amount of testing you should do before submitting your pull request, however, so we understand if you start with just one of the two. Travis-CI cannot automatically verify Windows-based tests, so you should verify your code manually.

The steps involved are:

  • Assuming you have implemented the Linux test already, add a new test permutation to your benchmark_config.json file for the Windows test (a sketch follows this list). When the benchmark script runs on Linux, it skips tests whose os is Windows, and vice versa.
  • Add the necessary tweaks to your setup file to start and stop your server on the new operating system.
  • Test on Windows and Linux to make sure everything works as expected.
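
As a rough sketch, the Windows permutation mentioned above might look like this in benchmark_config.json. The os key is the one the benchmark script checks; the other field names and values here are illustrative and should mirror your existing Linux test:

{
  "framework": "beego",
  "tests": [{
    "default": {
      "setup_file": "setup",
      "json_url": "/json",
      "os": "Linux"
    },
    "windows-default": {
      "setup_file": "setup_windows",
      "json_url": "/json",
      "os": "Windows"
    }
  }]
}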

Travis-CI

Enable Travis-CI on your project fork. This is as simple as going to travis-ci.org, using the Log in with GitHub button, and enabling Travis-CI for your fork. If you're submitting a new framework, be sure to add it to the .travis.yml file in the root of the project. When your development is done and your changes pass the Travis-CI verification, submit a pull request with confidence that it can be merged quickly. Read more about TFB and Travis-CI.
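
As a rough sketch, registering a new framework in .travis.yml might look like the entry below. The TESTDIR variable and the matrix layout are assumptions here, not a confirmed schema; mirror the existing entries in the file rather than this sketch:

# .travis.yml (sketch): one matrix entry per framework directory.
# The TESTDIR variable name is an assumption; copy an existing entry.
env:
  matrix:
    - "TESTDIR=Go/beego"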

Important Note About Travis-CI

If you make changes to configuration files, or to files outside your framework directory, Travis will run tests on all existing frameworks, so a failure in an unrelated framework may make it appear that your tests have failed. Be sure to check your Travis build by clicking the checkmark or red 'X' to dig into your specific test.


Finding output logs

Log file locations use the format results/latest/logs/wt/err.txt. The general structure is results/<run name>/<timestamp>/logs/<test name>/<file>. You can use the --name flag to change the <run name>. If you re-run the same test multiple times, you will get a different folder for each <timestamp>, although the latest folder will be kept up to date. The <test name> is simply the name of the test you ran, and <file> is either out.txt or err.txt (these are the logout and logerr arguments passed into each setup.py file).
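
For example, to find the error log for a run (the run name my-run is a placeholder, and this assumes the latest folder sits at the <timestamp> level described above):

# Run the test under an explicit run name
$ tfb --test beego --name my-run

# Inspect the error log from the most recent run of this name
$ cat results/my-run/latest/logs/beego/err.txt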

Note: If you're looking for logs from our official benchmark rounds, see Round 13, Round 12, and Round 11.