Benchmarking tests

Every night, a series of tests (called Regression tests) is run, in which results produced by Ashes are compared to results produced by other software or published in scientific journals. In addition, a report is generated for each test, showing the comparison between the benchmark results and the results from Ashes. Whenever the results produced by Ashes do not match the benchmark results, the test is considered failed and we receive a notification.
There are two main reasons for a test to fail:
  • a bug was introduced into the code
  • a new feature requires the benchmark results to be updated
Whenever this happens, we take a closer look at the code and either fix the bug or update the test to reflect the new feature. This process ensures that the results produced by Ashes remain correct.
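
To give an idea of what such a comparison involves, the sketch below checks an Ashes time series against a benchmark series within a relative tolerance. This is a minimal illustration only: the file names, column layout and tolerance are assumptions made for the example and do not describe the actual test infrastructure.

    # Minimal sketch of a benchmark comparison. File names, column layout and
    # the tolerance are illustrative assumptions, not the real test setup.
    import numpy as np

    def compare_to_benchmark(ashes_file: str, benchmark_file: str,
                             rel_tol: float = 0.02) -> bool:
        """Return True if the Ashes results stay within rel_tol of the benchmark."""
        ashes = np.loadtxt(ashes_file)          # assumed columns: time, quantity of interest
        benchmark = np.loadtxt(benchmark_file)  # same assumed layout

        # Interpolate the Ashes output onto the benchmark time stamps so the
        # two series can be compared point by point.
        ashes_on_grid = np.interp(benchmark[:, 0], ashes[:, 0], ashes[:, 1])

        # The test passes only if every sample agrees within the relative tolerance.
        return bool(np.allclose(ashes_on_grid, benchmark[:, 1], rtol=rel_tol))

    if __name__ == "__main__":
        passed = compare_to_benchmark("ashes_results.txt", "benchmark_results.txt")
        print("Test passed" if passed else "Test failed")

In practice a failed comparison also triggers the report generation and notification described above; the snippet only shows the pass/fail decision itself.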

This document presents all the benchmarking regression tests run every night, together with the sources from which the benchmark results were obtained. It also provides links to the reports for the tests run the night before the latest Ashes release.

If you have any questions or would like to see a particular test added, please contact us at support@simis.io.