With computer models as research objects, it becomes crucial to ensure their reliable operation. Implementing model tests not only helps to detect implementation errors, but also supports the maintainable growth of the software project, which larger models inevitably become.
Utopia facilitates model testing by making it easier to implement tests alongside a model and by automating how they are carried out. While this cannot solve the test oracle problem, it does remove many practical obstacles to testing.
The Utopia frontend can assist in defining model tests that operate on the output data. This makes it possible to test that, given some configuration, the model generates the expected output.
Since utopya provides access to performing and loading simulations, test definitions can be as simple as this:
import pytest
from utopya.testtools import ModelTest

# Set up a model test object, giving access to local configuration files
mtc = ModelTest("ForestFire", test_file=__file__)

def test_dynamics():
    """Test that the ForestFire dynamics are correct"""
    # Run the model with a custom configuration
    mv, dm = mtc.create_run_load(from_cfg="dynamics.yml")

    # For each simulation, check the output data
    for uni_no, uni in dm['multiverse'].items():
        data = uni['data/ForestFire/kind']

        # Need the number of cells to calculate the density
        num_cells = data.sizes['x'] * data.sizes['y']

        # All cells are trees at time step 0
        density = data.isel(time=0).sum() / num_cells
        assert density == 1.0

        # ...
While Python model tests are well-suited for testing the macroscopic behavior of a model, it is frequently necessary to test parts of the implementation directly. This requires tests on the C++ side.
Utopia integrates the widely-used Boost.Test library, which assists in defining unit tests for the model implementation.
On top of that, Utopia provides a set of testtools and fixtures, which simplify the implementation of configuration-based tests.
Additionally, through custom CMake functions, registering a test directly integrates it into the testing pipeline (see below).
Once model tests are defined, they must be carried out frequently enough to detect bugs and regressions.
Adhering to modern software engineering best practices, this is best achieved by embedding these tasks into an automatically triggered pipeline, such as GitLab CI/CD. As part of that pipeline, the framework and the models are built and their tests are carried out, even extending to the creation of plots. This frees model developers from having to carry out the tests themselves and provides a “ground truth” environment in which the tests ensure that a model runs as intended.
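As an illustration, such a GitLab CI/CD pipeline might be configured roughly as follows; the job names, stages, and commands here are hypothetical placeholders, not Utopia's actual pipeline definition.

```yaml
# .gitlab-ci.yml (sketch)
stages:
  - build
  - test

build:models:
  stage: build
  script:
    - cmake -B build -S .
    - cmake --build build

test:models:
  stage: test
  script:
    # Run the registered model tests, e.g. via CTest
    - cd build && ctest --output-on-failure
```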
Model development can conveniently happen in a fork or clone of the utopia GitLab project. However, in a scientific context, a separate project is often desired. While the Utopia framework is already designed to be used as a library in a separate source code repository, setting up this repository and the corresponding project infrastructure can be difficult.
To simplify this process, we are developing a template repository that can be used to conveniently implement models while also benefiting from automated test integration; this is work in progress.