Linux Kernel Selftests
The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small tests to exercise individual code paths in the kernel. Tests are intended to be run after building, installing and booting a kernel.
You can find additional information on the Kselftest framework and on how to write new tests using it on the Kselftest wiki:
https://kselftest.wiki.kernel.org/
On some systems, hot-plug tests could hang forever waiting for cpu and memory to be ready to be offlined. A special hot-plug target is created to run the full range of hot-plug tests. In default mode, hot-plug tests run in safe mode with a limited scope. In limited mode, cpu-hotplug test is run on a single cpu as opposed to all hotplug capable cpus, and memory hotplug test is run on 2% of hotplug capable memory instead of 10%.
kselftest runs as a userspace process. Tests that can be written/run in userspace may wish to use the Test Harness. Tests that need to be run in kernel space may wish to use a Test Module.
Running the selftests (hotplug tests are run in limited mode)
To build the tests:
$ make -C tools/testing/selftests
To run the tests:
$ make -C tools/testing/selftests run_tests
To build and run the tests with a single command, use:
$ make kselftest
Note that some tests will require root privileges.
Kselftest supports saving output files in a separate directory and then running tests. Two syntaxes are supported for placing output files in a separate directory. In both cases the working directory must be the root of the kernel source tree. This also applies to the "Running a subset of selftests" section below.
To build, save output files in a separate directory with O=
$ make O=/tmp/kselftest kselftest
To build, save output files in a separate directory with KBUILD_OUTPUT
$ export KBUILD_OUTPUT=/tmp/kselftest; make kselftest
The O= assignment takes precedence over the KBUILD_OUTPUT environment variable.
By default, the above commands run the tests and print a full pass/fail report. Kselftest supports a "summary" option to make it easier to understand the test results. When the summary option is specified, the detailed individual results for each test can be found in the /tmp/testname file(s). This also applies to the "Running a subset of selftests" section below.
To run kselftest with summary option enabled
$ make summary=1 kselftest
Running a subset of selftests
You can use the "TARGETS" variable on the make command line to specify a single test to run, or a list of tests to run.
To run only tests targeted for a single subsystem:
$ make -C tools/testing/selftests TARGETS=ptrace run_tests
You can specify multiple tests to build and run:
$ make TARGETS="size timers" kselftest
To build, save output files in a separate directory with O=
$ make O=/tmp/kselftest TARGETS="size timers" kselftest
To build, save output files in a separate directory with KBUILD_OUTPUT
$ export KBUILD_OUTPUT=/tmp/kselftest; make TARGETS="size timers" kselftest
See the top-level tools/testing/selftests/Makefile for the list of all possible targets.
Running the full range of hotplug selftests
To build the hotplug tests:
$ make -C tools/testing/selftests hotplug
To run the hotplug tests:
$ make -C tools/testing/selftests run_hotplug
Note that some tests will require root privileges.
Install selftests
You can use the kselftest_install.sh tool to install selftests in the default location, which is tools/testing/selftests/kselftest, or in a user-specified location.
To install selftests in default location:
$ cd tools/testing/selftests
$ ./kselftest_install.sh
To install selftests in a user-specified location:
$ cd tools/testing/selftests
$ ./kselftest_install.sh install_dir
Running installed selftests
Kselftest install as well as the Kselftest tarball provide a script named "run_kselftest.sh" to run the tests.
You can simply do the following to run the installed Kselftests. Please note some tests will require root privileges:
$ cd kselftest
$ ./run_kselftest.sh
Contributing new tests
In general, the rules for selftests are:
Do as much as you can if you're not root;
Don't take too long;
Don't break the build on any architecture, and
Don't cause the top-level "make run_tests" to fail if your feature is unconfigured.
Contributing new tests (details)
Use TEST_GEN_XXX if the binaries or files are generated during compilation.
TEST_PROGS and TEST_GEN_PROGS are the executables that are tested by default.
TEST_CUSTOM_PROGS should be used by tests that require custom build rules and prevent common build rule use.
TEST_PROGS are for test shell scripts. Please ensure the shell script has its exec bit set; otherwise, lib.mk's run_tests will generate a warning.
TEST_CUSTOM_PROGS and TEST_PROGS will be run by the common run_tests.
TEST_PROGS_EXTENDED and TEST_GEN_PROGS_EXTENDED are executables that are not tested by default. TEST_FILES and TEST_GEN_FILES are files used by the tests.
First use the headers inside the kernel source and/or git repo, and then the system headers. Headers for the kernel release as opposed to headers installed by the distro on the system should be the primary focus to be able to find regressions.
If a test needs specific kernel config options enabled, add a config file in the test directory to enable them.
e.g.: tools/testing/selftests/android/config
Test Module
Kselftest tests the kernel from userspace. Sometimes things need testing from within the kernel; one method of doing this is to create a test module. We can tie the module into the kselftest framework by using a shell script test runner. kselftest_module.sh is designed to facilitate this process. There is also a header file provided to assist writing kernel modules that are for use with kselftest:
tools/testing/selftests/kselftest_module.h
tools/testing/selftests/kselftest_module.sh
How to use
Here we show the typical steps to create a test module and tie it into kselftest. We use kselftests for lib/ as an example.
Create the test module
Create the test script that will run (load/unload) the module e.g.
tools/testing/selftests/lib/printf.sh
Add a line to the config file, e.g.
tools/testing/selftests/lib/config
Add the test script to the Makefile, e.g.
tools/testing/selftests/lib/Makefile
Verify it works:
# Assumes you have booted a fresh build of this kernel tree
cd /path/to/linux/tree
make kselftest-merge
make modules
sudo make modules_install
make TARGETS=lib kselftest
Example Module
A bare bones test module might look like this:
// SPDX-License-Identifier: GPL-2.0+
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "../tools/testing/selftests/kselftest_module.h"
KSTM_MODULE_GLOBALS();
/*
* Kernel module for testing the foobinator
*/
static int __init test_function(void)
{
...
}
static void __init selftest(void)
{
KSTM_CHECK_ZERO(do_test_case("", 0));
}
KSTM_MODULE_LOADERS(test_foo);
MODULE_AUTHOR("John Developer <jd@fooman.org>");
MODULE_LICENSE("GPL");
Example test script
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0+
$(dirname $0)/../kselftest_module.sh "foo" test_foo
Test Harness
The kselftest_harness.h file contains useful helpers to build tests. The test harness is for userspace testing; for kernel space testing see the Test Module section above.
The tests in tools/testing/selftests/seccomp/seccomp_bpf.c can be used as an example.
Example
#include "../kselftest_harness.h"
TEST(standalone_test) {
do_some_stuff;
EXPECT_GT(10, stuff) {
stuff_state_t state;
enumerate_stuff_state(&state);
TH_LOG("expectation failed with state: %s", state.msg);
}
more_stuff;
ASSERT_NE(some_stuff, NULL) TH_LOG("how did it happen?!");
last_stuff;
EXPECT_EQ(0, last_stuff);
}
FIXTURE(my_fixture) {
mytype_t *data;
int awesomeness_level;
};
FIXTURE_SETUP(my_fixture) {
self->data = mytype_new();
ASSERT_NE(NULL, self->data);
}
FIXTURE_TEARDOWN(my_fixture) {
mytype_free(self->data);
}
TEST_F(my_fixture, data_is_good) {
EXPECT_EQ(1, is_my_data_good(self->data));
}
TEST_HARNESS_MAIN
Helpers
TH_LOG(fmt, ...)
Parameters:
  fmt: format string
  ...: optional arguments
Description:
  TH_LOG(format, ...)
  Optional debug logging function available for use in tests. Logging may be enabled or disabled by defining TH_LOG_ENABLED, e.g. #define TH_LOG_ENABLED 1. If no definition is provided, logging is enabled by default.
  If there is no way to print an error message for the process running the test (e.g. it is not allowed to write to stderr), it is still possible to get the number of the ASSERT_* that the test failed on. This behavior can be enabled by writing _metadata->no_print = true; before the check sequence that is unable to print. When an error occurs, instead of printing an error message and calling abort(3), the test process calls _exit(2) with the assert number as argument, which is then printed by the parent process.
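For illustration, here is a minimal standalone sketch of how TH_LOG and _metadata->no_print might be used together. The test name, the use of /dev/null, and the log messages are assumptions made for this example, and the include path assumes the test sits one directory below tools/testing/selftests/ as in the example above.

#include <fcntl.h>
#include <unistd.h>

#include "../kselftest_harness.h"

TEST(th_log_example)
{
        /* Hypothetical check: open a file and log the descriptor. */
        int fd = open("/dev/null", O_RDONLY);

        TH_LOG("opened /dev/null, fd = %d", fd);

        /* Suppose stderr were unusable here: with no_print set, a failing
         * ASSERT_* reports its number via _exit() instead of printing. */
        _metadata->no_print = true;
        ASSERT_LE(0, fd) TH_LOG("open(/dev/null) failed");

        close(fd);
}

TEST_HARNESS_MAIN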
TEST(test_name) - Defines the test function and creates the registration stub
Parameters:
  test_name: test name
Description:
  TEST(name) { implementation }
  Defines a test by name. Names must be unique and tests must not be run in parallel. The implementation containing block is a function and scoping should be treated as such. Returning early may be performed with a bare "return;" statement.
  EXPECT_* and ASSERT_* are valid in a TEST() { } context.
TEST_SIGNAL(test_name, signal)
Parameters:
  test_name: test name
  signal: signal number
Description:
  TEST_SIGNAL(name, signal) { implementation }
  Defines a test by name and the expected termination signal. Names must be unique and tests must not be run in parallel. The implementation containing block is a function and scoping should be treated as such. Returning early may be performed with a bare "return;" statement.
  EXPECT_* and ASSERT_* are valid in a TEST() { } context.
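As a minimal sketch (the test name is hypothetical), a test can declare the signal it is expected to die with; the harness treats the test as passing only if the test process terminates with that signal:

#include <signal.h>

#include "../kselftest_harness.h"

TEST_SIGNAL(terminates_with_sigusr1, SIGUSR1)
{
        /* The default disposition of SIGUSR1 terminates the process,
         * which is exactly what this test expects. */
        raise(SIGUSR1);
}

TEST_HARNESS_MAIN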
FIXTURE_DATA(datatype_name) - Wraps the struct name so we have one less argument to pass around
Parameters:
  datatype_name: datatype name
Description:
  FIXTURE_DATA(datatype name)
  This call may be used when the type of the fixture data is needed. In general, this should not be needed unless self is being passed to a helper directly.
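For example, building on the my_fixture example from the Example section above, a hypothetical helper can take the fixture data directly by naming its type with FIXTURE_DATA():

/* Hypothetical helper that receives the fixture data directly. */
static int data_is_usable(FIXTURE_DATA(my_fixture) *self)
{
        return self->data != NULL && self->awesomeness_level >= 0;
}

TEST_F(my_fixture, helper_takes_self)
{
        EXPECT_TRUE(data_is_usable(self));
}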
FIXTURE(fixture_name) - Called once per fixture to set up the data and register
Parameters:
  fixture_name: fixture name
Description:
  FIXTURE(datatype name) {
    type property1;
    ...
  };
  Defines the data provided to TEST_F()-defined tests as self. It should be populated and cleaned up using FIXTURE_SETUP() and FIXTURE_TEARDOWN().
FIXTURE_SETUP(fixture_name) - Prepares the setup function for the fixture. _metadata is included so that EXPECT_* and ASSERT_* work correctly.
Parameters:
  fixture_name: fixture name
Description:
  FIXTURE_SETUP(fixture name) { implementation }
  Populates the required "setup" function for a fixture. An instance of the datatype defined with FIXTURE_DATA() will be exposed as self for the implementation.
  ASSERT_* are valid for use in this context and will preempt the execution of any dependent fixture tests.
  A bare "return;" statement may be used to return early.
FIXTURE_TEARDOWN(fixture_name) - _metadata is included so that EXPECT_* and ASSERT_* work correctly.
Parameters:
  fixture_name: fixture name
Description:
  FIXTURE_TEARDOWN(fixture name) { implementation }
  Populates the required "teardown" function for a fixture. An instance of the datatype defined with FIXTURE_DATA() will be exposed as self for the implementation to clean up.
  A bare "return;" statement may be used to return early.
TEST_F(fixture_name, test_name) - Emits test registration and helpers for fixture-based test cases
Parameters:
  fixture_name: fixture name
  test_name: test name
Description:
  TEST_F(fixture, name) { implementation }
  Defines a test that depends on a fixture (e.g., is part of a test case). Very similar to TEST() except that self is the setup instance of the fixture's datatype exposed for use by the implementation.
  Warning: use of ASSERT_* here will skip TEARDOWN.
TEST_HARNESS_MAIN() - Simple wrapper to run the test harness
Description:
  TEST_HARNESS_MAIN
  Use once to append a main() to the test file.
Operators
Operators for use in TEST() and TEST_F().
ASSERT_* calls will stop test execution immediately.
EXPECT_* calls will emit a failure warning, note it, and continue.
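The difference is easiest to see in a small standalone sketch (the values are hypothetical): the EXPECT_* failure is recorded and execution continues, while the ASSERT_* failure ends the test at that point, so the final check never runs.

#include "../kselftest_harness.h"

TEST(expect_vs_assert)
{
        int ret = -1;              /* hypothetical result under test */

        EXPECT_EQ(0, ret);         /* failure is noted, test keeps going */
        ASSERT_EQ(0, ret);         /* failure stops this test here */
        EXPECT_EQ(0, ret + 1);     /* never reached */
}

TEST_HARNESS_MAIN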
In each operator below, the first argument ("expected") is the expected value and the second ("seen") is the measured value.

ASSERT_EQ(expected, seen): expected == seen
ASSERT_NE(expected, seen): expected != seen
ASSERT_LT(expected, seen): expected < seen
ASSERT_LE(expected, seen): expected <= seen
ASSERT_GT(expected, seen): expected > seen
ASSERT_GE(expected, seen): expected >= seen
ASSERT_NULL(seen): NULL == seen
ASSERT_TRUE(seen): seen != 0
ASSERT_FALSE(seen): seen == 0
ASSERT_STREQ(expected, seen): !strcmp(expected, seen)
ASSERT_STRNE(expected, seen): strcmp(expected, seen)

EXPECT_EQ(expected, seen): expected == seen
EXPECT_NE(expected, seen): expected != seen
EXPECT_LT(expected, seen): expected < seen
EXPECT_LE(expected, seen): expected <= seen
EXPECT_GT(expected, seen): expected > seen
EXPECT_GE(expected, seen): expected >= seen
EXPECT_NULL(seen): NULL == seen
EXPECT_TRUE(seen): seen != 0
EXPECT_FALSE(seen): seen == 0
EXPECT_STREQ(expected, seen): !strcmp(expected, seen)
EXPECT_STRNE(expected, seen): strcmp(expected, seen)
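As a closing sketch (the strings and test name are hypothetical), the string operators compare contents with strcmp() rather than comparing pointers:

#include "../kselftest_harness.h"

TEST(string_operators)
{
        const char *greeting = "hello";

        EXPECT_STREQ("hello", greeting);    /* passes: strcmp() returns 0 */
        EXPECT_STRNE("goodbye", greeting);  /* passes: strcmp() is nonzero */
}

TEST_HARNESS_MAIN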