Open Test Framework
With the OTX development environment OTF (Open Test Framework) you can program in OTX. To ensure the quality of the programmed OTX code, so-called unit tests can be written in OTX. This is the task of the OTX UnitTest Extension.
Note: The task of unit tests is to safeguard the programmed test logic by checking it against the expected behavior.
The following main functions are supported by the OTX UnitTest Extension inside OTF:
This section gives a brief overview of the fundamentals of software testing. For a deeper insight into the topic of "software testing", please refer to the relevant specialist literature, e.g. ISO 29119 "Software Testing".
Related Topics:
Note: The OTX UnitTest Extension was developed by EMOTIVE and is not part of the OTX standard. However, this has no impact on the standard compliance of the delivered OTX sequences, as the unit test cases are not saved together with the OTX test logic.
Note: The UnitTest Extension has no influence on the normal OTX test logic and is stored separately from it.
The so-called "test pyramid" shows the different types of software verification.
There are essentially the following types of tests. The higher you get in the pyramid, the more expensive the tests become and the longer they take.
A Unit Test tests the functionality of the smallest units of a software. The aim is to check each individual component independently and in isolation from other data and external influences after each change, and thus ensure the quality of the software.
Note: Unit Tests are also known as regression tests.
A good unit test is based on the so-called FIRST principle, see table below. It has to be fast and independent. It must always lead to the same result when repeated. It must be self-validating, meaning it must either pass or fail. And ideally the test is written before the implementation.
Requirement | Description |
---|---|
F-ast | Fast test execution to run as often as possible |
I-ndependent | Independent to parallelize test execution |
R-epeatable | Must always lead to the same result when repeated |
S-elf-Validating | A test must either pass or fail |
T-imely | It is best to write the test before implementing the code (Test Driven Development) |
Note: Before writing a test, think about what really should be tested! Don't write senseless tests!
There are essentially two test methods for unit tests: black box and white box testing.
In OTX mostly black box tests are used. To write a good unit test the following two steps should be taken into account:
To write a test, the UnitTest extension provides the following elements, among others:
Element | Description |
---|---|
TestProcedure | A TestProcedure is the main element of a test and represents a single test step |
parallelizable | The parallelizable attribute marks the TestProcedure as "can be executed in parallel" |
repeat | The repeat attribute specifies if a TestProcedure should be executed multiple times (Default = 0) |
retry | The retry attribute specifies if a TestProcedure should be repeated in case of a failure (Default = 0) |
timeout | The timeout attribute specifies a timeout for the TestProcedure execution (Default = no timeout) |
TestCase | The TestCase specifies the selected parameters and the expected results of a TestProcedure |
expected | With expected the expected value of a certain out parameter can be set inside a TestCase |
exception | With exception the TestCase expects a certain Exception during test execution. Exception inheritance will be taken into account. |
ValueList | With the ValueList a TestCase parameter can be set to an arbitrary list of possible values. For each value in the list an implicit TestCase will be created, see sequential attribute. |
Range | With a Range a TestCase parameter can be set to a range of possible values (MinValue , MaxValue , Step ). For each value in the range an implicit TestCase will be created, see sequential attribute. |
ignored | The ignored attribute marks the TestCase as ignored for the TestProtocol , see Test Protocol Explorer (Default = not ignored ) |
sequential | A TestCase marked as sequential reduces the number of implicitly generated TestCases, see ValueList and Range. No combinations of TestCases are generated if multiple ValueList or Range elements are present in a TestCase (Default = not sequential). |
errorMessage | With errorMessage a certain message can be assigned to a TestCase which will be included in the report if the TestCase fails (Default = empty) |
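The expansion of ValueList and Range into implicit test cases, and the effect of the sequential attribute, can be sketched in Python. This is a hypothetical analogy, not the OTX API; in particular, the position-by-position pairing used for sequential is an assumption derived from the description above.

```python
# Hypothetical sketch (not the OTX API) of how ValueList and Range
# expand into implicit test cases, and how `sequential` changes this.
from itertools import product

def expand_range(min_value, max_value, step):
    """Range(MinValue, MaxValue, Step) -> list of concrete values."""
    values = []
    v = min_value
    while v <= max_value:
        values.append(v)
        v += step
    return values

def implicit_test_cases(param_values, sequential=False):
    """param_values: dict of parameter name -> list of possible values.
    Without `sequential`, every combination becomes an implicit TestCase;
    with `sequential`, the lists are paired position by position instead
    (an assumption about how the reduction works)."""
    names = list(param_values)
    if sequential:
        rows = zip(*(param_values[n] for n in names))
    else:
        rows = product(*(param_values[n] for n in names))
    return [dict(zip(names, row)) for row in rows]

params = {
    "Dividend": expand_range(0, 10, 5),  # Range -> [0, 5, 10]
    "Divisor": [1, 2],                   # ValueList
}
combinatorial = implicit_test_cases(params)                 # 3 * 2 = 6 cases
sequential = implicit_test_cases(params, sequential=True)   # paired -> 2 cases
```

The combinatorial expansion grows multiplicatively with each additional ValueList or Range, which is exactly what the sequential attribute is there to avoid.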
The following example shows the test of an integer division with the OTX UnitTest extension in OTL syntax.
First, an OTX procedure with input parameters Dividend and Divisor and output parameter Quotient must be marked as Test Procedure inside a Unit Test Project. In the OTL code above, this mark is expressed with [Test]. Within the procedure, the division is simply performed.
In the first test case (in the OTL code above see [TestCase(...)]) a normal division is described: 10 divided by 2 with the expected result 5. This corresponds to the Normal Division equivalence class. The second equivalence class is the Negative Division, for which three test cases are needed. The next equivalence class is the Division with Remainder: 11 divided by 3 gives 3 and not 4, since the remainder is always cut off. The next step is to go to the range borders and check the Division from Zero, which must always result in 0. Finally, it is checked whether an ArithmeticException is thrown when Dividing by Zero.
The attribute [Parallelizable] marks the test case as independent and thus parallelizable.
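The same equivalence classes can also be illustrated in plain Python as a hedged analogy, not OTL: the divide function below, its truncation behavior, and ArithmeticError standing in for OTX's ArithmeticException are illustrative assumptions.

```python
# Hedged Python analogy of the division example (not OTX/OTL code).
def divide(dividend: int, divisor: int) -> int:
    """Integer division where the remainder is always cut off
    (truncation towards zero, as described in the example)."""
    if divisor == 0:
        # Stands in for the ArithmeticException of the OTX example.
        raise ArithmeticError("division by zero")
    quotient = abs(dividend) // abs(divisor)
    if (dividend < 0) != (divisor < 0):
        quotient = -quotient
    return quotient

# One row per equivalence class: (Dividend, Divisor, expected Quotient)
cases = [
    (10, 2, 5),     # Normal Division
    (-10, 2, -5),   # Negative Division (three sign combinations)
    (10, -2, -5),
    (-10, -2, 5),
    (11, 3, 3),     # Division with Remainder: 3, not 4 - remainder cut off
    (0, 7, 0),      # Division from Zero: always 0
]
```

Each row corresponds to one [TestCase(...)] in the OTL example; the division-by-zero case is the one that expects an exception instead of a result.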
Note: A Test Procedure is like a normal Procedure and can contain any OTX code.
In the previous example, the expected behavior was checked implicitly in the TestCase using the keyword expected. However, the expected behavior can also be tested explicitly within the OTX logic. The following activities are available for this:
Activity | Description |
---|---|
Assert | This is an Action. Evaluates a condition. If it evaluates to false, the execution of the TestCase is stopped and the Test Result is set to FAILED. |
Assume | This is an Action. Evaluates a condition. If the condition of the assumption returns false, the Test Result of the Test Procedure is set to INCONCLUSIVE. The execution of the test procedure containing this assumption returns to the caller. No node after this node will be executed. |
Warning | This is an Action. Evaluates a condition. If the condition of the warning returns false, a warning for this Test Procedure is given back to the caller. The warning will be written into the test protocol. The execution of the test procedure containing this warning continues with the next node after the warning. |
AssertThrows | This is a CompoundNode. If the given exception is not thrown inside the flow, the result of the Test Procedure is set to FAILED. The execution of the test procedure containing this assertion returns to the caller. No node after this node will be executed. |
Fail | This is an EndNode. The Test Result of the Test Procedure is set to FAILED. The execution of the test procedure returns to the caller. No node after this node will be executed. |
Ignore | This is an EndNode. The Test Result of the Test Procedure is set to IGNORED. The execution of the test procedure returns to the caller. No node after this node will be executed. |
Inconclusive | This is an EndNode. The Test Result of the Test Procedure is set to INCONCLUSIVE. The execution of the test procedure returns to the caller. No node after this node will be executed. |
Pass | This is an EndNode. The Test Result of the Test Procedure is set to PASSED. The execution of the test procedure returns to the caller. No node after this node will be executed. |
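How these activities map onto test results can be sketched as a hypothetical Python model. This is not the OTX runtime; the function names and the control flow via an exception are assumptions made for illustration.

```python
# Hypothetical model (not the OTX runtime) of Assert, Assume, AssertThrows.
PASSED, FAILED, INCONCLUSIVE = "PASSED", "FAILED", "INCONCLUSIVE"

class TestStop(Exception):
    """Stops the test procedure and carries the resulting test result."""
    def __init__(self, result):
        self.result = result

def otx_assert(condition):
    # Assert: a false condition stops execution with FAILED.
    if not condition:
        raise TestStop(FAILED)

def otx_assume(condition):
    # Assume: a false condition stops execution with INCONCLUSIVE.
    if not condition:
        raise TestStop(INCONCLUSIVE)

def otx_assert_throws(exception_type, flow):
    # AssertThrows: the flow must raise the given exception, otherwise FAILED.
    try:
        flow()
    except exception_type:
        return
    raise TestStop(FAILED)

def run_test_procedure(body):
    """Runs a test procedure body and returns its test result."""
    try:
        body()
    except TestStop as stop:
        return stop.result
    return PASSED  # all assertions were fulfilled

result_ok = run_test_procedure(lambda: otx_assert(1 + 1 == 2))
result_failed = run_test_procedure(lambda: otx_assert(1 + 1 == 3))
result_inconclusive = run_test_procedure(lambda: otx_assume(False))
result_throws = run_test_procedure(
    lambda: otx_assert_throws(ZeroDivisionError, lambda: 1 / 0))
```

The key difference between Assert and Assume carries over directly: a violated Assert marks the test as FAILED, while a violated Assume only marks it as INCONCLUSIVE.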
The following example shows the explicit checking of expected behavior via Assert and AssertThrows:
Each TestCase execution results in one of the following test results. All test results will be written into a test protocol, see Test Protocol Explorer.
Test Result | Description |
---|---|
DISABLED | The test was disabled because the TestProcedure or the TestCase is marked as DISABLED. No subsequent elements will be executed. Use case: For incomplete or wrong test cases or test procedures. |
PASSED | The test was passed. The test procedure execution was successful. This only happens if all expected results of all test cases are fulfilled, all Assert and Assume activities fulfill their conditions, or a Pass end node was executed. |
FAILED | The test execution failed. This can only happen if an expected result of a test case does not match, an Assert activity does not fulfill its condition, or a Fail end node was executed. |
IGNORED | The test was ignored. IGNORED is the default result of a test procedure if it is not explicitly changed via a test case, an action, an end node, or a disabled attribute. A test case is explicitly set to ignored if the TestCase is marked as IGNORED or an Ignore end node was executed. Typical reasons: Corner Case (a known problem occurs, but only under test conditions that are not relevant for practical use cases), Wrong Specification (the test case is right but the specification is wrong), Unstable Test Case (the test case runs correctly stand-alone, but sometimes, not every time, fails inside the entire test suite, e.g. because the environment is unstable and the underlying problem is unknown). Note: A test case which is marked as IGNORED will be executed, but if it fails, the result is IGNORED and not FAILED. Note: This result has no influence on the overall result of the parent element. Note: An IGNORED test case should contain an understandable message explaining why it was ignored. |
INCONCLUSIVE | The test execution was inconclusive. This can only happen if an Assume activity does not fulfill its condition or an Inconclusive end node was executed. The behavior is similar to FAILED, except that an assumption, not an expected result, is violated. |
A TestProcedure should, if possible, be self-contained and executable independently of the current environment. The following so-called CallbackProcedures are supported for this purpose, see table below. If a CallbackProcedure is defined for an activity, for example the ConfirmDialog of the HMI extension, then the ConfirmDialog CallbackProcedure and not the actual activity is called when simulation is switched on (see StartSimulation and StopSimulation).
Note: Inside the CallbackProcedure, the desired behavior can be simulated using any OTX code.
The following main functions are supported:
The following table lists all activities that can be used for environment simulation:
Note: The simulation of the environment must be started via StartSimulation. The OTX runtime will then call the CallbackProcedure for the related actions. If no CallbackProcedure for the related action exists, a NOP operation will be performed. The simulation can be stopped via StopSimulation.
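The dispatch described in this note can be sketched as follows. This is a hypothetical Python model, not the OTX runtime; the class and method names are assumptions.

```python
# Hypothetical model (not the OTX runtime) of CallbackProcedure dispatch:
# when simulation is started, registered callbacks replace the real
# activities; actions without a callback fall through to a NOP.
class Simulation:
    def __init__(self):
        self.callbacks = {}
        self.started = False

    def register(self, action_name, callback):
        """Registers a CallbackProcedure for the named action."""
        self.callbacks[action_name] = callback

    def start(self):   # StartSimulation
        self.started = True

    def stop(self):    # StopSimulation
        self.started = False

    def execute(self, action_name, real_action, *args):
        if self.started:
            callback = self.callbacks.get(action_name)
            if callback is None:
                return None              # no CallbackProcedure -> NOP
            return callback(*args)       # simulated behavior
        return real_action(*args)        # normal execution

sim = Simulation()
# Simulate the ConfirmDialog of the HMI extension: always confirm.
sim.register("ConfirmDialog", lambda message: "OK")
sim.start()
result = sim.execute("ConfirmDialog", lambda message: input(message), "Continue?")
nop = sim.execute("OpenScreen", lambda name: None, "MainScreen")  # NOP case
```

With simulation started, the real ConfirmDialog (which would block on user input) is never reached; the callback answers immediately, which is exactly what makes such tests fast and repeatable.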
The following example shows the simulation of the ConfirmDialog action of the HMI extension:
The following more complex example shows the simulation of the OpenScreen action of the HMI extension together with the detection of parameter changes and the raising of events.
PDU Simulation means simulation of the diagnostic communication, see picture below. The OTF supports a built-in PDU simulation. For a complete simulation, the diagnostic communication of the DiagCom extension must be simulated independently of the environment. This is the task of the PduSimulation extension. The PduSimulation extension extends OTX to simulate the diagnostic communication at the level of a PDU. PDU means ProtocolDataUnit and denotes the HexByteField of a Request or a Response. This makes it possible to individually influence the diagnostic communication for each test case or to explicitly generate errors.
Note: With the PduSimulation extension a TestCase can be written and executed independently of the underlying vehicle communication. No hardware is necessary!
Related Topics:
The following main functions are supported:
- VariantIdentification and VariantSelection
- TesterPresent
- BatteryVoltage (KL30) and IgnitionState (KL15) can be set
- Import of DiagRA-S simulation files
- Requests and responses with wildcards and placeholders (e.g. Request = "36 X1 ..", Response = "76 X1")
)Note: The system requirement for using the extension is that the used D-PDU API supports the simulation option.
Note: The extension may only be used within UnitTest projects.
The following table lists all related activities:
Activity | Type | Description |
---|---|---|
LoadSimulation | Action | Loads or imports a PDU simulation from a file |
ResetSimulation | Action | Resets the simulation to the state directly after loading |
Start | Action | Starts the PDU simulation |
Stop | Action | Stops the PDU simulation |
SetVariantName | Action | Sets the value of ecuVariantName |
SetDoIPLogicalAddress | Action | Sets the logical address of the gateway or DoIP-Edge-Node to the given byte field |
SetDoIPVin | Action | Sets the given string as the VIN for DoIP communication |
SetBatteryVoltage | Action | Sets the current battery voltage in millivolts as a Float value |
SetIgnitionState | Action | Sets the current state of the ignition clamp |
SetPdus | Action | Sets one or more responses to a certain request (can contain wildcards and placeholders) |
IsStarted | Term | Checks whether the simulation was started or not |
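The wildcard and placeholder matching used by SetPdus (e.g. Request = "36 X1 .." answered with Response = "76 X1") can be sketched in Python. This is a hypothetical model; the exact matching rules of the PduSimulation extension are an assumption.

```python
# Hypothetical sketch (not the PduSimulation API) of matching a request
# PDU against patterns with placeholders ("X1") and a wildcard ("..").
def match(pattern, request):
    """pattern/request: space-separated hex bytes. 'Xn' binds the byte at
    that position as a placeholder, '..' accepts any remaining bytes.
    Returns a dict of placeholder bindings, or None if no match."""
    p_bytes, r_bytes = pattern.split(), request.split()
    bindings = {}
    for i, p in enumerate(p_bytes):
        if p == "..":
            return bindings              # wildcard: rest of request accepted
        if i >= len(r_bytes):
            return None                  # request shorter than pattern
        if p.startswith("X"):
            bindings[p] = r_bytes[i]     # placeholder captures this byte
        elif p.upper() != r_bytes[i].upper():
            return None                  # fixed byte mismatch
    return bindings if len(p_bytes) == len(r_bytes) else None

def respond(rules, request):
    """rules: list of (request_pattern, response_template) pairs, as set
    up via SetPdus. Placeholders bound in the request reappear in the
    response."""
    for pattern, template in rules:
        bindings = match(pattern, request)
        if bindings is not None:
            response = template
            for name, value in bindings.items():
                response = response.replace(name, value)
            return response
    return None   # no rule matched

# The example from the text: Request = "36 X1 ..", Response = "76 X1".
rules = [("36 X1 ..", "76 X1")]
```

Here the block counter byte captured by X1 in the request is echoed back in the positive response, so one rule covers every block of a download sequence.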
The following OTL code example shows the simulation of the diagnostic communication via the PduSimulation extension.
Once the Unit Test Project has been created, it can be executed in various ways: within the OTF in the Test Explorer, or outside it using the included stand-alone application Unit Test Execution. The tests can be integrated into your own CI/CD chain using the Unit Test Execution Console Application.
Using different Test Contexts, the same Unit Test Project can be executed under different environment settings.
Related Topics: