Chi-Tech
Tests are extremely important to the overall vision of the project. They can be tests to see if a simulation behaves in a certain way, tests to check that the input language is handled appropriately, tests to check that operations produce the expected results, and even checks for certain error or warning behavior. In a nutshell: they ensure that you know when something breaks.
The test system is contained in the test directory of the project.
├── doc
├── external
├── framework
├── modules
├── resources
├── test            <-- Here
├── tutorials
├── CMakeLists.txt
├── LICENSE
├── README.md
└── configure.sh
Within this directory we have the run_tests script.
test
├── bin             <-- Test executable in here
├── framework
├── modules
├── src             <-- main.cc in here
├── CMakeLists.txt
└── run_tests       <-- Primary script
The test sources, contained throughout the test directory, are compiled along with the regular project but do not form part of the library (they form a separate executable). We can do this because Chi-Tech's compile time is very short; additionally, the benefit is that we get compiler errors if we break any interfaces.
The entry point for the test executable is the main.cc contained in the test/src directory. All other test sources are then added to the executable and linked together using Static Registration.
The executable is called ChiTech_test and is contained in the bin directory.
A test comprises up to 4 things:
- A .lua input file that will initiate the test.
- A .json configuration file specifying one or more tests with associated checks. Our convention is to have just one of these per folder and to name it YTests.json (the Y always places it at the bottom of the directory listing).
- A .cc file implementing specific unit-testing code.
- A .gold file, if the test involves a gold-file check.

Example test:
Here we have example_test.lua, which contains only a single line:
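As a sketch (the function name shown here is hypothetical; the actual name is whatever example_test.cc registers), the file boils down to a single call:

```lua
-- Hypothetical registered name; the real name is whatever example_test.cc
-- registers via static registration.
chi_unit_tests.ExampleTest()
```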
This line executes a wrapped function defined in example_test.cc.
The YTests.json has the following syntax:
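A minimal sketch consistent with the description that follows (the StrCompare key string is made up for illustration):

```json
[
  {
    "comment": "Illustrative sketch only; the key string below is made up",
    "file": "example_test.lua",
    "num_procs": 1,
    "checks": [
      { "type": "StrCompare", "key": "ExampleTest" },
      { "type": "ErrorCode", "error_code": 0 }
    ]
  }
]
```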
Here we introduce our JSON test-configuration notation: [...] defines an array of Test Blocks, each of which is a JSON object {...}. For this example the test had the following parameters:
- "comment" = Free to use to annotate a test. Does not get used internally.
- "file" = The name of the .lua file that initiates the test.
- "num_procs" = The number of MPI processes to use.
- "checks" = An array of checks [..Checks..], where we have the following checks:
  - A StrCompare check, which checks for the presence of a key string.
  - An ErrorCode check, which checks for a specified exit code. In this case 0, meaning successful execution.

A Test Block specifies a specific test. Parameters:
"file"
: The name of the .lua
that initiates the test."num_procs"
: The number of mpi processes to use."checks"
: An array of checks [..Checks..]
"args"
: An array of arguments to pass to the executable"weight_class"
: An optional string, either "short", "intermediate" or "long" used to filter different tests length. The default is "short". "long" should be used for tests >2min."outfileprefix"
: Optional parameter. Will default to "file"
but can be to change the output file name (outfileprefix+".out") so that the same input file can be used for different tests."skip"
: Optional parameter. Must be non-empty string stating the reason the test was skipped. The presence of this string causes the test to be skipped.All other keys are ignored so feel free to peruse something like "comment"
to annotate the test.
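As a sketch, a Test Block that exercises several of the optional parameters might look like this (the file name, prefix, and argument values are made up, and the exact form expected for "args" entries is an assumption):

```json
{
  "comment": "Illustrative only; values are made up",
  "file": "example_test.lua",
  "num_procs": 3,
  "weight_class": "intermediate",
  "outfileprefix": "example_test_3proc",
  "args": ["use_gpu=false"],
  "checks": [
    { "type": "ErrorCode", "error_code": 0 }
  ]
}
```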
Currently we have the following checks:
KeyValuePairCheck
Looks for a key with a floating point value right after it.
Parameters:
- "type": "KeyValuePair"
- "key": The key-string to look for.
- "goldvalue": Float value.
- "tol": Tolerance on the goldvalue.
- "skip_lines_until": Optional. Do not check lines in the output file until this string is encountered, e.g. "skip_lines_until": "LinearBoltzmann::KEigenvalueSolver execution". This is useful if a simulation is expected to have multiples of the key-string but you only want the last one.
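As an illustration, a KeyValuePair entry inside a "checks" array could look like this (the key string and numbers are made up):

```json
{
  "comment": "Illustrative values only",
  "type": "KeyValuePair",
  "key": "Final k-eigenvalue:",
  "goldvalue": 1.0,
  "tol": 1.0e-6
}
```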
StrCompareCheck
Can do one of two things: 1) looks for the presence of the key and returns success if it is present, or 2) if the "wordnum" parameter is also present, checks whether the "wordnum"-th word equals the word specified by the "gold" value.
Parameters:
- "type": "StrCompare"
- "key": The key-string to look for.
- "wordnum": Optional. If supplied then "gold" needs to be specified.
- "gold": Golden word.
- "skip_lines_until": Optional. Do not check lines in the output file until this string is encountered, e.g. "skip_lines_until": "LinearBoltzmann::KEigenvalueSolver execution". This is useful if a simulation is expected to have multiples of the key-string but you only want the last one.
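A sketch of the second form, with a word comparison (the key, word number, and gold word are made up):

```json
{
  "comment": "Illustrative values only",
  "type": "StrCompare",
  "key": "Outer iteration",
  "wordnum": 4,
  "gold": "CONVERGED"
}
```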
FloatCompareCheck
On the line containing the key, compares the "wordnum"-th word against the specified gold-value.
Parameters:
- "type": "FloatCompare"
- "key": The key-string to look for.
- "wordnum": The word number on the line containing the key that will be used in the check.
- "gold": Golden value (float).
- "tol": The floating point tolerance to use.
- "skip_lines_until": Optional. Do not check lines in the output file until this string is encountered, e.g. "skip_lines_until": "LinearBoltzmann::KEigenvalueSolver execution". This is useful if a simulation is expected to have multiples of the key-string but you only want the last one.
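An illustrative FloatCompare entry (all values are made up):

```json
{
  "comment": "Illustrative values only",
  "type": "FloatCompare",
  "key": "Total absorption rate",
  "wordnum": 5,
  "gold": 3.139725,
  "tol": 1.0e-8
}
```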
IntCompareCheck
Integer version of the FloatCompareCheck above. On the line containing the key, compares the "wordnum"-th word against the specified gold-value.
Parameters:
- "type": "IntCompare"
- "key": The key-string to look for.
- "wordnum": The word number on the line containing the key that will be used in the check.
- "gold": Golden value (int).
- "skip_lines_until": Optional. Do not check lines in the output file until this string is encountered, e.g. "skip_lines_until": "LinearBoltzmann::KEigenvalueSolver execution". This is useful if a simulation is expected to have multiples of the key-string but you only want the last one.
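An illustrative IntCompare entry (values are made up):

```json
{
  "comment": "Illustrative values only",
  "type": "IntCompare",
  "key": "Number of global cells",
  "wordnum": 5,
  "gold": 512
}
```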
ErrorCodeCheck
Compares the return/error code of the test with a specified value.
Parameters:
- "type": "ErrorCode"
- "error_code": The return code required to pass.
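For example, requiring a clean exit:

```json
{
  "comment": "0 means the run must exit successfully",
  "type": "ErrorCode",
  "error_code": 0
}
```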
GoldFileCheck
Compares the contents of the test output to a golden output file.
Parameters:
- "type": "GoldFile"
- "scope_keyword": Optional. Restricts the gold comparison to the section of the respective gold/output file that lies between the keywords <scope_keyword>_BEGIN and <scope_keyword>_END.
- "candidate_filename": Optional. If supplied, this check will use this file rather than the test's output file. For example, if zorba.csv is provided then zorba.csv will be compared against zorba.csv.gold.
- "skiplines_top": Number of lines at the top of both the gold and comparison file to skip in the comparison check.
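An illustrative GoldFile entry, reusing the zorba.csv example from above (the scope keyword and line count are made up):

```json
{
  "comment": "Illustrative; zorba.csv is compared against zorba.csv.gold",
  "type": "GoldFile",
  "scope_keyword": "FLUX_DATA",
  "candidate_filename": "zorba.csv",
  "skiplines_top": 2
}
```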
The tests are executed by running the run_tests script, for example:
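A sketch of a basic invocation, assuming it is issued from the repository root (the job count is arbitrary, and depending on your setup you may need to invoke the script via your Python interpreter):

```bash
# Illustrative only: run every test under test/ with 8 parallel jobs.
./test/run_tests -d test -j 8
```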
or
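for instance by pointing the script explicitly at a particular test executable with --exe (a sketch; the path follows the bin directory shown earlier but may differ for your build):

```bash
# Illustrative only: the executable path may differ for your build layout.
./test/run_tests -d test -j 8 --exe test/bin/ChiTech_test
```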
Example portion of the output:
The run_tests script can be executed on any folder within the test directory. This is a really useful feature: it means you can restrict your testing to specific areas of the code. For example, if you know you only made changes to a specific module then there is no need to rerun the framework tests.
The run_tests script has a number of useful arguments:
options:
  -h, --help            show this help message and exit
  -d DIRECTORY, --directory DIRECTORY
                        The test directory to process
  -t TEST, --test TEST  A specific test to run
  --exe EXE             The executable to use for testing
  -j JOBS, --jobs JOBS  Allow N jobs at once
  -v VERBOSE, --verbose VERBOSE
                        Controls verbose failure
The functionality here allows one to execute only a subset of tests. For example, to only execute the framework tests we can restrict run_tests to the framework folder.
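A sketch of such an invocation (assuming it is issued from the repository root; the job count is arbitrary):

```bash
# Illustrative only: restrict the run to the framework tests.
./test/run_tests -d test/framework -j 4
```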
If you are interested in a specific test, you can narrow things down even further with the -t option.
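For example (the test name here is purely hypothetical, and the exact form -t expects may differ):

```bash
# Illustrative only: "example_test.lua" is a hypothetical test name.
./test/run_tests -d test/framework -t example_test.lua -j 4
```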