Re: [PATCH v5] kselftest: Add basic test for probing the rust sample modules

From: Shuah Khan
Date: Mon Mar 04 2024 - 15:19:05 EST


On 3/1/24 09:28, Miguel Ojeda wrote:
On Fri, Mar 1, 2024 at 4:22 PM Laura Nao <laura.nao@xxxxxxxxxxxxx> wrote:

Adding --first-time (you meant --first-time, right?) definitely makes
sense, thanks for the pointer. I think the case where the modules are
built-in should be treated as a skip, the same as when they are not
there at all.

Yeah, I meant `--first-time`, sorry.

I didn't see other tests using it, so I am not sure if there is a
reason not to do that (ditto for adding `MODULES` etc. to `config` and
whether we should fail/skip in certain cases) -- I guess Shuah will
let us know.

So something like this:

for sample in "${rust_sample_modules[@]}"; do
-       if ! /sbin/modprobe -n -q "$sample"; then
+       if ! /sbin/modprobe -n -q --first-time "$sample"; then
                ktap_skip_all "module $sample is not found in /lib/modules/$(uname -r)"
                exit "$KSFT_SKIP"
        fi

will cover both cases.

What about the other calls to `modprobe`?

I think it's safe to assume no other module will depend on the sample
rust modules, so is there any other reason unloading the modules
might fail apart from MODULE_UNLOAD not being enabled? If not, then I

I was thinking more in general terms: that we would like to catch the
case where unloading does not work as expected.

Especially since these "simple samples" are, in part, testing that the
basic infrastructure for Rust modules works. So I would say it is
important to check whether module unloading failed.

For instance, if something is very broken, a Rust module could in
principle fail unloading even if `MODULE_UNLOAD=y` and even if C
modules unload without issue.
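
For instance, something along these lines (only a rough sketch, reusing
the `rust_sample_modules` array and the ktap helpers from the patch)
would report a failed unload as a test failure instead of ignoring it:

for sample in "${rust_sample_modules[@]}"; do
        if /sbin/modprobe -q "$sample"; then
                if /sbin/modprobe -q -r "$sample"; then
                        ktap_test_pass "$sample"
                else
                        # Loading worked but unloading did not: report a
                        # failure so a broken unload path is not missed.
                        ktap_test_fail "$sample (unload failed)"
                fi
        else
                ktap_test_fail "$sample"
        fi
done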

I can't simply skip all the tests like this, though:

for sample in "${rust_sample_modules[@]}"; do
        if /sbin/modprobe -q "$sample"; then
-               /sbin/modprobe -q -r "$sample"
+               if ! /sbin/modprobe -q -r "$sample"; then
+                       ktap_skip_all "Failed to unload module $sample, please enable CONFIG_MODULE_UNLOAD"
+                       exit "$KSFT_SKIP"
+               fi
                ktap_test_pass "$sample"
        else
                ktap_test_fail "$sample"

as the test plan has already been printed by then.
I'll need to rework the script a bit so that the test is skipped when
module removal fails.
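
Something along these lines could work (just a rough sketch on top of
the current script, assuming the helpers' `ktap_print_header` and
`ktap_set_plan` are used as in the patch): do a load/unload check on one
sample before the plan is printed, and skip everything if removal does
not work:

ktap_print_header

# Before printing the test plan, check that module removal works at
# all; if it does not, skip the whole test.
probe="${rust_sample_modules[0]}"
if /sbin/modprobe -q "$probe" && ! /sbin/modprobe -q -r "$probe"; then
        ktap_skip_all "cannot unload $probe, CONFIG_MODULE_UNLOAD may be disabled"
        exit "$KSFT_SKIP"
fi

ktap_set_plan "${#rust_sample_modules[@]}"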

Perhaps Shuah prefers to merge this first and then improve it on top
-- I don't know. I didn't mean to trigger a rework :)

Especially since it is unclear what the "pattern" to follow here is --
perhaps this is another case that calls for a wider cleanup across more
tests, like the ktap helpers I suggested (thanks for implementing
those, by the way!).

If we need more granularity in the feedback provided to the user (i.e.
an indication of which particular options are missing), then I guess we
could check the current kernel config (/proc/config.gz) and skip the
entire test if any required option is missing. However, this adds an
extra dependency on CONFIG_IKCONFIG=y and CONFIG_IKCONFIG_PROC=y.
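
For example, only as a sketch (the exact option names here are
illustrative, not a definitive list), such a check could look like:

# Needs CONFIG_IKCONFIG=y and CONFIG_IKCONFIG_PROC=y on the running kernel.
required_configs="CONFIG_MODULES CONFIG_MODULE_UNLOAD CONFIG_RUST"

if [ ! -r /proc/config.gz ]; then
        ktap_skip_all "/proc/config.gz not available, cannot check the kernel config"
        exit "$KSFT_SKIP"
fi

for config in $required_configs; do
        if ! zcat /proc/config.gz | grep -q "^${config}=y"; then
                ktap_skip_all "$config is not enabled"
                exit "$KSFT_SKIP"
        fi
done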

Any advice on the best approach here?

I guess this also depends on what the tests are supposed to do etc., so
let's see what Shuah says.

Kselftest should run all the tests it can and skip only the ones it
can't run when configuration or other dependencies aren't met. The
reason is to avoid false failures.


Kselftest exit codes are predefined
(https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/kselftest.h?h=v6.8-rc6#n74),
so if we use `set -e` and source a missing file we end up returning "1",
as if the test had run and failed. With this check we're sure to return
a value that makes sense in the event the helpers file ever gets moved.
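
Something along these lines (a sketch; `KTAP_HELPERS` is just the
variable holding the path to the helpers script):

# KSFT_SKIP is 4 in kselftest.h; it has to be hardcoded here because
# the file that would normally define it is the one we failed to find.
if [ -e "$KTAP_HELPERS" ]; then
        . "$KTAP_HELPERS"
else
        echo "$KTAP_HELPERS file not found [SKIP]"
        exit 4
fi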

Yeah, definitely. I was thinking here about just failing if something
does not work as expected, i.e. speaking more generally (which is why I
mentioned other languages too).

By "failing" here I didn't mean reporting the test as failing; I see
it as something in the layer above. That is, if the helpers file is
ever moved or is not installed for whatever reason, then it is the
test infrastructure that failed. So I would have expected that "skip"
is due to a reason related to the test itself rather than something
unexpected related to the infrastructure, but I guess it may be part
of the "skip" meaning in kselftests. So it depends on what is supposed
to mean in kselftests, which I don't know.


I applied this to linux-kselftest next for Linux 6.9-rc1.

We can make improvements if any on top of this.

thanks,
-- Shuah