Contract-Based Testing for Embedded Systems
Testing embedded systems becomes much harder as their complexity grows. Contract-Based Design, when extended and applied to testing, opens up a new perspective and very effective approaches.
What is the purpose of testing?
Testing is often reduced to developing and executing manual or automated test procedures. Let’s first look at the main goals of testing:
- Evaluating the functionality and quality of a system.
- Gaining knowledge for the elimination or avoidance of errors.
We therefore extend the concept of testing to all methods that support these two goals, for example reviews, formal verification, or runtime monitoring.
The challenge of testing
To achieve the goals, three questions in particular must be answered:
- How does one specify the required or prohibited behavior unambiguously and sufficiently completely?
- How does one generate tests from this that are highly likely to detect faulty behavior during development?
- How does one find errors that only occur during the use of the system, e.g. due to incorrect use or environmental conditions?
In many cases, these questions can be answered quite elegantly by using contracts.
Classic Contracts according to Bertrand Meyer
Design by Contract (DbC) was developed and introduced by Bertrand Meyer. DbC allows the definition of formal contracts for interfaces and behavior of software components. Pre- and post-conditions, invariants and side effects define the semantics of functions. Languages like Ada and Eiffel support the definition and validation of contracts.
Here is an example of the implementation of a very simple contract with pre- and post-conditions in the C language using asserts.
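The following minimal sketch illustrates the pattern; the Buffer type and the buffer_push function are hypothetical and chosen only to show how pre- and post-conditions can be expressed with asserts.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical buffer structure, used only to illustrate the idea. */
typedef struct {
    uint8_t  data[16];
    uint32_t count;
} Buffer;

/* Contract for buffer_push:
 * Precondition:  buf is valid and the buffer is not full.
 * Postcondition: the element count has grown by exactly one. */
void buffer_push(Buffer *buf, uint8_t value)
{
    assert(buf != NULL);                    /* precondition  */
    assert(buf->count < sizeof(buf->data)); /* precondition  */

    uint32_t old_count = buf->count;
    buf->data[buf->count++] = value;

    assert(buf->count == old_count + 1);    /* postcondition */
}
```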

This type of contract is defined at the component level, i.e. for the interfaces of functions, classes, or C modules. The resulting checks can therefore only be made at this level. Design-time checks require language support (e.g. Ada and Eiffel). Integration and system tests for embedded systems are thus hardly possible with this approach.

Contract extensions for embedded systems
We need extensions to contracts that make it possible to formally describe asynchronous, concurrent behavior at all levels of a system. An excellent method for this is interface state machines. The ordering and temporal behavior (temporal logic) of synchronous or asynchronous calls can be specified very precisely, and state machines are a method familiar to most embedded developers. The interface state machine defines which calls are permitted at any given time; any other call is forbidden. This reduces the number of allowed call sequences by orders of magnitude. If you can test compliance with the specified behavior or monitor it at runtime, the result is significantly simpler systems that contain far fewer errors.
The following example shows the specification of all permitted calls on a simple interface as an interface state machine. The following questions, for example, cannot be answered from the C API alone:
- Is calling function1 directly after initialize allowed?
- Do I have to call initialize again after stop?
If you specify the allowed sequence as an interface state machine, you can answer the questions quickly and clearly:
- After initialize you have to call start first. After that, calling function1 is allowed.
- No, after stop it is not necessary to call initialize. Only start may be called.
For a single call of each function, the number of permitted call sequences is reduced from 120 to 6; with 3 iterations it already drops from approx. 10¹² to 300,000.
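Expressed in code, such an interface state machine could look roughly like the following C sketch. The enum, the state variable, and the *_allowed guard functions are assumptions chosen for illustration; only the call rules come from the description above.

```c
#include <stdbool.h>

/* States of the interface state machine (names assumed for illustration). */
typedef enum { IF_UNINITIALIZED, IF_STOPPED, IF_RUNNING } IfState;

static IfState if_state = IF_UNINITIALIZED;

/* Each guard returns true if the call is currently permitted by the
 * contract and advances the interface state machine accordingly. */

bool initialize_allowed(void)
{
    if (if_state != IF_UNINITIALIZED) return false;
    if_state = IF_STOPPED;           /* initialize leads to the stopped state */
    return true;
}

bool start_allowed(void)
{
    if (if_state != IF_STOPPED) return false;
    if_state = IF_RUNNING;           /* start is required before function1 */
    return true;
}

bool function1_allowed(void)
{
    return if_state == IF_RUNNING;   /* permitted only while running */
}

bool stop_allowed(void)
{
    if (if_state != IF_RUNNING) return false;
    if_state = IF_STOPPED;           /* after stop, only start may follow */
    return true;
}
```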

Interface contracts are therefore an extremely effective way of formally specifying permitted and prohibited behavior at interfaces on all levels. The number of states to be tested can usually be reduced by many orders of magnitude. A pleasant side effect is the excellent, unambiguous documentation of the interaction sequences at the interface.
Checking the contracts
The specified contracts can be checked at three different times:
- At design time (time of programming or modeling).
- At test time (time of execution of unit, integration, and system tests).
- At runtime (system in productive operation).
Using the open source tool eTrice and the CaGe test language for miniHIL systems as examples, we show what such checks can look like.
Checking of contracts at design time (formal verification)
As part of the Contract Based Modeling & Design (CBMD) research project, the eTrice tool and the ROOM language (Real-Time Object-Oriented Modeling) were expanded to include contracts. The methods are already being used in practice.
In the following example, the interface specification of the ports (fct and terminal) between the components (structural model) is extended by a contract (interface state machine).
The actors each contain a model of their behavior as an implementation state machine. During modeling, the implementation state machines are constantly checked against the contract, and violations of the contract are displayed in the model. Generating the implementation from the model ensures that the correct behavior of the model is also implemented in the code.

The main advantage is the early detection and correction of errors and potential problems. Formal verification detects many issues that are very difficult to find through testing. One example is race conditions, which lead to sporadic, unclear, and hard-to-reproduce error situations.
A limitation of formal verification is that it requires a formal model of the application. If the application is not modeled but implemented directly in a programming language such as C, formal verification is difficult to realize.
Checking contracts at runtime (monitoring)
Another result of the CBMD project is the generation of contract monitors in eTrice/ROOM models. These monitors are built into the communication link between the components or to other systems and allow the checking of contracts at runtime. The following sub-functions are required for this purpose:
- Monitoring: detection of misbehavior at runtime.
- Filtering and containment: reaction to misbehavior, e.g. by blocking or correcting the erroneous call on the interface or by switching to a safe state.
- Root cause analysis and debugging: logging a sequence diagram showing the cause of the error and the exact sequence around the misbehavior.

In the example, the monitor generated from the contract has detected a misbehavior at runtime. In the generated sequence diagram, one can see the exact time, the sequence of events, and the cause.
The resulting systems are very robust against misbehavior at the interfaces. This is a decisive advantage, especially for systems with safety or security requirements. Another advantage of runtime monitoring is that even applications and interfaces that have not been modeled can be improved considerably with monitors.
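To make the idea tangible, here is a rough hand-written C sketch of such a monitor wrapped around function1. It reuses the interface state machine guard from the sketch above; the names and the logging are assumptions, since in practice the monitors and sequence diagrams are generated by the tooling.

```c
#include <stdbool.h>
#include <stdio.h>

/* Guard from the interface state machine sketch above (assumed name). */
extern bool function1_allowed(void);

/* The real implementation behind the monitored interface (stubbed here). */
static void function1_impl(void)
{
    /* ... actual work of function1 ... */
}

/* Monitored wrapper placed in the communication path to the component. */
void function1(void)
{
    if (function1_allowed()) {
        function1_impl();   /* contract satisfied: forward the call */
        return;
    }

    /* Monitoring and root cause analysis: record the violation so that
     * a sequence diagram around the misbehavior can be reconstructed. */
    printf("contract violation: function1 called in a forbidden state\n");

    /* Filtering and containment: the erroneous call is simply not
     * forwarded; alternatively the system could switch to a safe state. */
}
```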
Testing Contracts at Test Time (Test Case Generation)
Interface state machines are generally not very well suited for generating test cases, since they do not provide any information about the overall behavior of a system under test (SUT). Therefore, instead of interface state machines, we use state transition tests in the CaGe test language. State transition tests describe the expected black-box behavior (contract) of a SUT as a state transition diagram. From this, combinatorial test cases can be generated that exercise the allowed and forbidden paths in the SUT with very high coverage. State transition tests can also be used to test data combinatorics, e.g. for call or configuration parameters.

State transition testing is an effective method to achieve high test coverage quickly and in a structured way. For smaller problems or lower test coverage, you can derive test cases with pen and paper and implement the tests manually. For larger problems or higher coverage, it makes sense to use tools with test case generators.
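As an illustration of such a manually implemented test, the following C sketch drives one path through the state transition diagram of the interface example above. The sut_* functions are assumed to return whether a call was accepted; all names are assumptions, and the CaGe language itself is not shown here.

```c
#include <stdbool.h>
#include <stdio.h>

/* Assumed SUT API: each call returns true if it was accepted. */
extern bool sut_initialize(void);
extern bool sut_start(void);
extern bool sut_function1(void);
extern bool sut_stop(void);

typedef struct {
    const char *name;
    bool (*call)(void);
    bool        expected;   /* true: allowed path, false: forbidden path */
} Step;

/* One derived path through the state transition diagram:
 * initialize -> start -> function1 -> stop -> function1 (forbidden). */
static const Step path[] = {
    { "initialize", sut_initialize, true  },
    { "start",      sut_start,      true  },
    { "function1",  sut_function1,  true  },
    { "stop",       sut_stop,       true  },
    { "function1",  sut_function1,  false },   /* must be rejected */
};

int main(void)
{
    for (size_t i = 0; i < sizeof path / sizeof path[0]; ++i) {
        if (path[i].call() != path[i].expected) {
            printf("FAIL at step %zu (%s)\n", i, path[i].name);
            return 1;
        }
    }
    printf("PASS: path behaves as specified by the contract\n");
    return 0;
}
```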
So what do contracts do for testing?
Extended Contracts …
- enable the formal specification of interfaces not only at component level, but also at integration and system level
- can be used for synchronous and asynchronous systems (embedded systems)
- allow the exact specification of call sequences and timing constraints
- can be used in all development phases for verification (design, test, runtime)
- allow very high test coverage in many areas
- can be used for automatic generation of test cases and monitors