# Test Harness

-   MOOSE provides an extensible Test Harness for executing your code with different input files.
-   Each kernel (or logical set of kernels) you write should have test cases that verify the code's correctness.
-   The test system is flexible: it supports many kinds of checks, such as testing for expected error conditions.
-   Additionally, you can create your own "Tester" classes that extend the Test Harness.

[](---)
# Tests setup

-   Related tests should be grouped into an individual directory and have a consistent naming convention.
-   We recommend organizing tests in a hierarchy similar to your application source (e.g. kernels, BCs, materials, etc.).
-   Tests are discovered dynamically by matching file-name patterns (see the layout below).

```puppet
tests/
  kernels/
    my_kernel_test/
      my_kernel_test.e   [input mesh]
      my_kernel_test.i   [input file]
      tests              [test specification file]
      gold/              [gold standard folder - validated solution]
      out.e              [solution]
```

[](---)
# A quick look at the test specification file

-   Same format as the standard MOOSE input file

```puppet
[Tests]
   [./my_kernel_test]
     type    = Exodiff
     input   = my_kernel_test.i
     exodiff = my_kernel_test_out.e
   [../]

   [./kernel_check_exception]
     type       = RunException
     input      = my_kernel_exception.i
     expect_err = 'Bad stuff happened with variable \w+'
   [../]
[]
```

[](---)
# Testers provided in MOOSE

[image:152]

[](.footnotesize[)

-   RunApp: Runs a MOOSE-based application with the specified options.
-   Exodiff: Compares Exodus output files against the gold files within specified tolerances.
-   CSVDiff: Compares CSV output files against the gold files within specified tolerances.
-   RunException: Runs an application and verifies that the expected error message is produced.
-   CheckFiles: Checks for the existence of specific files after a completed run.

[](])
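
For illustration, the spec blocks for the file-comparison and file-existence testers look much like the `Exodiff` example shown earlier; the sketch below uses hypothetical input and output file names and assumes the `csvdiff` and `check_files` parameters:

```puppet
[Tests]
   [./csv_output_check]
     type    = CSVDiff
     input   = my_postprocessor_test.i          # hypothetical input file
     csvdiff = my_postprocessor_test_out.csv    # CSV output compared against gold/
   [../]

   [./output_files_exist]
     type        = CheckFiles
     input       = my_checkpoint_test.i         # hypothetical input file
     check_files = 'my_checkpoint_test_out.e'   # files that must exist after the run
   [../]
[]
```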

[](---)
# Adding Additional Testers (Advanced)

-   Inherit from `Tester` and override:
    -   checkRunnable(): Decide whether the test is able to run in the current environment
    -   prepare(): Perform any setup needed right before the test starts
    -   getCommand(): Return the command to run (tests may execute in parallel with one another)
    -   processResults(): Examine the output to decide whether the test passed
-   NO REGISTRATION REQUIRED!
    -   Drop the `Tester` object in "<Your App>/scripts/TestHarness/testers"

[](---)
# Options available to each Tester

-   Run `./run_tests --dump` to see all available parameters; a few common ones are listed here and combined in the sketch after this list:
    -   input: The name of the input file
    -   exodiff: The list of output filenames to compare
    -   abs\_zero: Absolute zero tolerance for exodiff
    -   rel\_err: Relative error tolerance for exodiff
    -   prereq: Name of the test that needs to complete before running this test
    -   min\_parallel: Minimum number of processors to use for a test (default: 1)
    -   max\_parallel: Maximum number of processors to use for a test
    -   . . .
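
A sketch of how several of these parameters might be combined in a test specification (the test names and file names are hypothetical):

```puppet
[Tests]
   [./coarse_solution]
     type     = Exodiff
     input    = coarse_solution.i     # hypothetical input file
     exodiff  = coarse_solution_out.e
     abs_zero = 1e-9                  # absolute zero tolerance for exodiff
     rel_err  = 1e-5                  # relative error tolerance for exodiff
   [../]

   [./parallel_restart]
     type         = Exodiff
     input        = parallel_restart.i    # hypothetical input file
     exodiff      = parallel_restart_out.e
     prereq       = coarse_solution       # run only after coarse_solution finishes
     min_parallel = 2                     # use at least 2 processors
     max_parallel = 4                     # use at most 4 processors
   [../]
[]
```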

[](---)
# Running your tests

`./run_tests [options]`

option | description
-|-
`-j <n>` | run `n` jobs at a time
`--dbg` | run tests in debug mode (debug binary)
`FOLDER NAME` | run only the tests under the named folder
`-h` | help
`--heavy` | run regular tests plus those marked 'heavy'
`-a` | write a separate log file for each failed test
`--group=GROUP` | run all the tests in a user-defined group
`--not_group=GROUP` | opposite of the `--group` option
`-q` | quiet (don't print the output of FAILED tests)
`-p <n>` | request to run each test with `n` processors
[](---)
# Other Notes on Tests

-   Individual tests should run relatively quickly (the $$\sim$$2-second rule)
-   Outputs or other generated files should not be checked into the version control repository.
    -   Do not check in the solution file that is created in the test directory when running the test!
-   The MOOSE developers rely on application tests when refactoring to verify correctness
    -   Poor test coverage = Higher code failure rate