Utopia features

Model testing

With computer models as research objects, it becomes crucial to ensure their reliable operation. Implementing model tests not only helps to detect implementation errors, but also supports the maintainable growth of the software project that larger models inevitably become.

Utopia facilitates the testing of models by making it easier to implement tests alongside a model and by automating how they are carried out. While this cannot solve the test oracle problem, it removes many practical difficulties.


The Utopia frontend assists in defining model tests that operate on the output data. This makes it possible to test that, given some configuration, the model generates the expected output.

With the help of pytest and utopya's capabilities for performing and loading simulations, test definitions can be as simple as this:

import pytest
from utopya.testtools import ModelTest

# Set up a model test object, giving access to local configuration files
mtc = ModelTest("ForestFire", test_file=__file__)

def test_dynamics():
    """Test that the ForestFire dynamics are correct"""

    # Run the model with a custom configuration
    mv, dm = mtc.create_run_load(from_cfg="dynamics.yml")

    # For each simulation, check the output data
    for uni_no, uni in dm['multiverse'].items():
        kind = uni['data/ForestFire/kind']

        # All cells are trees (state 1) at time step 0
        assert (kind.isel(time=0) == 1).all()

        # ...
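
The dynamics.yml referenced above is a run configuration that ships alongside the test. Its contents are not shown here; as a rough sketch, such a file might restrict the parameter space of the test run (the keys below the model name are illustrative assumptions, not the actual ForestFire parameters):

# dynamics.yml -- hypothetical sketch of a test configuration
parameter_space:
  num_steps: 8           # keep test runs short
  ForestFire:
    p_lightning: 0.      # e.g., disable random ignition for a deterministic check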

More details →

While Python model tests are well-suited to testing the macroscopic behavior of a model, it is frequently necessary to test parts of the implementation directly. This requires tests on the C++ side.

Utopia integrates the widely used Boost.Test library, which assists in defining unit tests for the model implementation. On top of that, Utopia provides a set of test tools and fixtures via the Utopia::TestTools namespace, which simplify the implementation of configuration-based tests.
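
For illustration, a plain Boost.Test unit test might look like the following sketch. The count_trees function is a hypothetical stand-in for an actual part of the model implementation; the Utopia::TestTools fixtures can further reduce such boilerplate, e.g. for loading test configuration files.

#include <algorithm>
#include <cstddef>
#include <vector>

#define BOOST_TEST_MODULE ForestFireUnitTests
#include <boost/test/included/unit_test.hpp>

/// Hypothetical helper from the model implementation (illustration only):
/// counts cells that are in the "tree" state (state value 1)
std::ptrdiff_t count_trees (const std::vector<int>& kind) {
    return std::count(kind.begin(), kind.end(), 1);
}

/// Check the helper against a hand-constructed grid state
BOOST_AUTO_TEST_CASE (test_count_trees) {
    const std::vector<int> kind{1, 0, 1, 1, 0};
    BOOST_TEST(count_trees(kind) == 3);
}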

Additionally, registering a test via custom CMake functions directly integrates it into the testing pipeline, see below.

More details →

Once model tests are defined, it needs to be ensured that they are carried out frequently enough to detect bugs and regressions.

Adhering to modern software engineering best practices, this is best achieved by embedding these tasks into an automatically triggered pipeline, such as GitLab CI/CD. As part of that pipeline, the framework and the models are built, their tests are carried out, and even plots are created. This frees model developers from having to run the tests themselves and provides a “ground truth” environment in which the tests ensure that a model runs as intended.
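
As a rough sketch, a corresponding GitLab CI/CD configuration might contain jobs along the following lines; the stage and job names, the build commands, and the directory layout are assumptions for illustration, not Utopia's actual pipeline definition.

# .gitlab-ci.yml -- hypothetical sketch of a build-and-test pipeline
stages:
  - build
  - test

build:models:
  stage: build
  script:
    - cmake -B build -S .       # configure the framework and models
    - cmake --build build       # build them
  artifacts:
    paths:
      - build/                  # hand the build directory on to the test stage

test:models:
  stage: test
  script:
    - cd build && ctest --output-on-failure   # run the registered tests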

More details →