Chi-Tech
devman_01_07_TestSystem.h
/**\page DevManTestSystem The Test System

\tableofcontents

Tests are extremely important to the overall vision of the project. They can
be tests to see if a simulation behaves in a certain way, tests to check that
input language is handled appropriately, tests to check that operations produce
the expected result, and even checks for certain error or warning behavior. In
a nutshell... it ensures that you know when something breaks.

The test system is contained in the `test` directory of the project.
\verbatim
├── doc
├── external
├── framework
├── modules
├── resources
├── test <-- Here
├── tutorials
├── CMakeLists.txt
├── LICENSE
├── README.md
└── configure.sh
\endverbatim

Within this directory we have the `run_tests` script.
\verbatim
test
├── bin <-- Test executable in here
├── framework
├── modules
├── src <-- main.cc in here
├── CMakeLists.txt
└── run_tests <-- Primary script
\endverbatim

\section DevManTestSystem_sec1 1 Compiling the tests
The test sources, contained throughout the `test` directory, are compiled along
with the regular project but do not form part of the library (they build into a
separate executable). We can afford this because ChiTech's compile time is very
short, and additionally we get compiler errors if we break any interfaces.

The entry point for the test executable is the `main.cc` contained in the
`test/src` directory. Thereafter all other test sources are added to the
executable and linked together using \ref DevManStaticRegistration.

The executable is called `ChiTech_test` and is contained in the `bin` directory.

\section DevManTestSystem_sec2 2 What is a test?
A test comprises up to five things:
- A directory in which the test resides. Multiple tests can be in the same
  directory.
- A `.lua` input file that will initiate the tests.
- A `.json` configuration file specifying one or more tests with associated
  checks. Our convention is to have just one of these per folder and to
  name it `YTests.json` (the Y always places it at the bottom of the directory).
- Optional. A `.cc` file implementing specific unit-testing code.
- Optional. A `.gold` file if the test involves a gold-file check.

Example test:
```
test/example/
├── example_test.lua
├── example_test.cc
└── YTests.json
```
Here we have `example_test.lua` that contains only a single line:
\include test/example/example_test.lua
which executes a wrapped function defined in `example_test.cc`
\include test/example/example_test.cc

The `YTests.json` has the following syntax:
\include test/example/YTests.json

Here we introduce our JSON test configuration options:
- The JSON file starts with a square-bracket wrap `[...]` defining an array of
  <B>Test Blocks</B>.
- Each <B>Test Block</B> is wrapped with curly braces `{...}`. For this example
  the test has the following parameters:
  - `"comment"` = Free to use to annotate a test. Does not get used internally.
  - `"file"` = The name of the `.lua` file that initiates the test.
  - `"num_procs"` = The number of MPI processes to use.
  - `"checks"` = An array of checks `[..Checks..]`, where we have the following
    checks:
    - `StrCompare` check. Checks for the presence of a key string.
    - `ErrorCode` check. Checks for a specified exit code. In this case 0,
      meaning successful execution.

\subsection DevManTestSystem_sec2_1 2.1 Test-blocks documentation
A test block specifies a single test.
Parameters:
- `"file"` : The name of the `.lua` file that initiates the test.
- `"num_procs"` : The number of MPI processes to use.
- `"checks"` : An array of checks `[..Checks..]`.
- `"args"` : An array of arguments to pass to the executable.
- `"weight_class"` : An optional string, either "short", "intermediate" or
  "long", used to filter tests by expected runtime. The default is "short".
  "long" should be used for tests >2min.
- `"outfileprefix"` : Optional parameter. Defaults to the value of `"file"` but
  can be used to change the output file name (outfileprefix+".out") so that the
  same input file can be used for different tests.
- `"skip"` : Optional parameter. Must be a non-empty string stating the reason
  the test was skipped. The presence of this string causes the test to be
  skipped.

All other keys are ignored, so feel free to use something like `"comment"` to
annotate the test.

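For illustration, a test block exercising several of the optional parameters
might look like the following (the file name, argument, and prefix here are
hypothetical, not taken from the repository):
```json
{
  "comment": "Same input as transport_test.lua, but on 4 processes",
  "file": "transport_test.lua",
  "num_procs": 4,
  "args": ["master_export=false"],
  "weight_class": "intermediate",
  "outfileprefix": "transport_test_np4",
  "checks": [{"type": "ErrorCode", "error_code": 0}]
}
```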
\subsection DevManTestSystem_sec2_2 2.2 Checks documentation
Currently we have the following checks:
- \ref DevManTestSystem_sec2_2_1
- \ref DevManTestSystem_sec2_2_2
- \ref DevManTestSystem_sec2_2_3
- \ref DevManTestSystem_sec2_2_4
- \ref DevManTestSystem_sec2_2_5
- \ref DevManTestSystem_sec2_2_6
\subsubsection DevManTestSystem_sec2_2_1 2.2.1 KeyValuePairCheck
Looks for a key with a floating point value right after it.\n
Parameters:
- `"type"` : "KeyValuePair"
- `"key"` : The key-string to look for
- `"goldvalue"` : Float value
- `"tol"` : Tolerance on the goldvalue
- `"skip_lines_until"` : Optional. Do not check lines in the output file until
  this string is encountered.
e.g. `"skip_lines_until": "LinearBoltzmann::KEigenvalueSolver execution"`. This
is useful if a simulation is expected to have multiple occurrences of the
key-string but you only want the last one.

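As an illustration (the key and values here are invented for the example), a
`KeyValuePair` check might be configured as:
```json
{
  "type": "KeyValuePair",
  "key": "k_eff=",
  "goldvalue": 1.17593,
  "tol": 1.0e-6,
  "skip_lines_until": "LinearBoltzmann::KEigenvalueSolver execution"
}
```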
\subsubsection DevManTestSystem_sec2_2_2 2.2.2 StrCompareCheck
Can do one of two things: 1) looks for the presence of the key and returns
success if it is present, or 2) if the `"wordnum"` parameter is supplied,
checks whether the `"wordnum"`-th word on the line containing the key equals
the word specified by the `"gold"` value.\n
Parameters:
- `"type"` : "StrCompare"
- `"key"` : The key-string to look for
- `"wordnum"` : Optional. If supplied then "gold" needs to be specified.
- `"gold"` : Golden word
- `"skip_lines_until"` : Optional. Do not check lines in the output file until
  this string is encountered.
e.g. `"skip_lines_until": "LinearBoltzmann::KEigenvalueSolver execution"`. This
is useful if a simulation is expected to have multiple occurrences of the
key-string but you only want the last one.

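For example, a hypothetical `StrCompare` check using the word-comparison form
(the key and gold word are invented for illustration):
```json
{
  "type": "StrCompare",
  "key": "Solver converged:",
  "wordnum": 3,
  "gold": "true"
}
```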
\subsubsection DevManTestSystem_sec2_2_3 2.2.3 FloatCompareCheck
On the line containing the key, compares the `"wordnum"`-th word against the
specified gold-value.\n
Parameters:
- `"type"` : "FloatCompare"
- `"key"` : The key-string to look for
- `"wordnum"` : The word number on the line containing the key that will be
  used in the check.
- `"gold"` : Golden value (`float`)
- `"tol"` : The floating point tolerance to use
- `"skip_lines_until"` : Optional. Do not check lines in the output file until
  this string is encountered.
e.g. `"skip_lines_until": "LinearBoltzmann::KEigenvalueSolver execution"`. This
is useful if a simulation is expected to have multiple occurrences of the
key-string but you only want the last one.

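To clarify the mechanics, here is a minimal Python sketch of this kind of
check. It is not the actual `run_tests` implementation; in particular it
assumes `wordnum` is 1-based and counts whitespace-separated words from the
start of the line:
```python
def float_compare(lines, key, wordnum, gold, tol, skip_lines_until=None):
    """Sketch of a FloatCompare-style check over output-file lines."""
    checking = skip_lines_until is None
    for line in lines:
        if not checking:
            # Ignore everything until the marker string appears.
            if skip_lines_until in line:
                checking = True
            continue
        if key in line:
            words = line.split()
            try:
                # Assumption: wordnum is 1-based from the start of the line.
                value = float(words[wordnum - 1])
            except (IndexError, ValueError):
                return False
            return abs(value - gold) <= tol
    return False  # key never encountered
```
So for an output line `k_eff = 1.12345 in 10 iterations`, a check with
`"key": "k_eff"`, `"wordnum": 3` would compare the word `1.12345` against the
gold value.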
\subsubsection DevManTestSystem_sec2_2_4 2.2.4 IntCompareCheck
Integer version of \ref DevManTestSystem_sec2_2_3. On the line containing the
key, compares the `"wordnum"`-th word against the specified gold-value.\n
Parameters:
- `"type"` : "IntCompare"
- `"key"` : The key-string to look for
- `"wordnum"` : The word number on the line containing the key that will be
  used in the check.
- `"gold"` : Golden value (`int`)
- `"skip_lines_until"` : Optional. Do not check lines in the output file until
  this string is encountered.
e.g. `"skip_lines_until": "LinearBoltzmann::KEigenvalueSolver execution"`. This
is useful if a simulation is expected to have multiple occurrences of the
key-string but you only want the last one.

\subsubsection DevManTestSystem_sec2_2_5 2.2.5 ErrorCodeCheck
Compares the return/error code of the test with a specified value.\n
Parameters:
- `"type"` : "ErrorCode"
- `"error_code"` : The return code required to pass

\subsubsection DevManTestSystem_sec2_2_6 2.2.6 GoldFileCheck
Compares the contents of the test output to a golden output file.\n
Parameters:
- `"type"` : "GoldFile"
- `"scope_keyword"` : Optional. Restricts the gold comparison to the section of
  the respective gold/output file that lies between the keywords
  `<scope_keyword>_BEGIN` and `<scope_keyword>_END`.
- `"candidate_filename"` : Optional. If supplied, this check will use this file
  rather than the test's output file. For example, if `zorba.csv` is provided
  then `zorba.csv` will be compared against `zorba.csv.gold`.
- `"skiplines_top"` : Optional. Number of lines at the top of both the gold and
  comparison file to skip in the comparison check.

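For instance, a hypothetical `GoldFileCheck` that compares a side-output file
within a scoped region (the scope keyword and line count are invented; the
`zorba.csv` name follows the example above):
```json
{
  "type": "GoldFile",
  "scope_keyword": "FLUX_TABLE",
  "candidate_filename": "zorba.csv",
  "skiplines_top": 2
}
```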
\section DevManTestSystem_sec3 3 Running the test system
The tests are run by executing the `run_tests` script, for example:
```
> test/run_tests -j 8 --directory test
```
or
```
> test/run_tests -j 8 -d test
```
Example portion of the output:
```
Description of what you are looking at:
[XX]<test directory>/<lua filename>.lua.......[annotations]Passed/Failed
[XX] = number of mpi processes
[annotations] = error messages preventing a test from running
[lua file missing] = A test pointing to a .lua file was indicated in a .json file but the actual lua file is missing.
[Gold file missing] = Test with a GoldFile check has no .gold file in the gold/ directory. If the input is Input.lua
                      then there needs to be a Input.lua.gold file in the gold directory. If this was the first run
                      then copy Input.lua.out from the out/ directory and use that as the gold.
[Python error] = A python error occurred. Run with -v 1 to see the error.

[ 1]test/framework/chi_misc_utils/chi_misc_utils_test_00.lua......................................................Passed
[ 1]test/framework/parameters/params_test_00.lua..................................................................Passed
[ 1]test/framework/parameters/params_test_01a.lua.................................................................Passed
[ 1]test/framework/parameters/params_test_01b.lua.................................................................Passed
[ 1]test/framework/parameters/params_test_01c.lua.................................................................Passed
[ 2]test/framework/chi_data_types/chi_data_types_test_00.lua......................................................Passed
[ 1]test/framework/parameters/params_test_01d.lua.................................................................Passed
[ 4]test/framework/chi_mesh/ReadWavefrontObj1.lua.................................................................Passed
[ 1]test/framework/tutorials/fv_test1.lua.........................................................................Passed
[ 1]test/framework/tutorials/fv_test2.lua.........................................................................Passed
[ 1]test/framework/tutorials/tutorial_06_wdd.lua..................................................................Passed
[ 4]test/framework/tutorials/pwlc_test1.lua.......................................................................Passed
[ 1]test/framework/chi_math/chi_math_test_00.lua..................................................................Passed

```
The `run_tests` script can be executed on any folder within the `test` directory.
This is a really great feature: it means that you can restrict your testing
to specific areas of the code. For example, if you know you only made changes
to a specific module then there is no need to rerun the framework tests.

The `run_tests` script has a number of useful arguments:
\verbatim
options:
  -h, --help            show this help message and exit
  -d DIRECTORY, --directory DIRECTORY
                        The test directory to process
  -t TEST, --test TEST  A specific test to run
  --exe EXE             The executable to use for testing
  -j JOBS, --jobs JOBS  Allow N jobs at once
  -v VERBOSE, --verbose VERBOSE
                        Controls verbose failure
\endverbatim

The functionality here allows one to execute only a subset of tests. For
example, to only execute the framework tests we can do
```
> test/run_tests -j 8 -d test/framework
```

If you are interested in a specific test you can narrow the tests even further:
```
> test/run_tests -j 8 -d test/framework -t params_test_00.lua
```
*/