User triggerable kernel panic.

From: Darren Austin
Date: Wed Oct 19 2016 - 10:17:57 EST


Hi,
I'm not sure if this is the best place to report an issue I've discovered
with the kernel and the 'fsc' mount option - please let me know if there is
some other mailing list or person I should be notifying about this.

The bug appears (at least for me) when using an NFS server and a client
which mounts an export from that server with the 'fsc' option (whether or
not the fscache daemon is running). It seems easiest to trigger using the
'nano' editor, but other commands will trigger it randomly as well.

I've tested this bug on the Ubuntu 16.10 kernel (4.8.0) and with the 4.8.2
kernel from http://kernel.ubuntu.com/~kernel-ppa/mainline/. The latter
purports to be built from the unmodified kernel source and simply packaged
in a .deb.

I can repeatedly reproduce this bug on my system, so it's definitely not a
one-off - it causes a kernel panic and complete lock-up every time.

The NFS share I tested with is exported with options:
rw,async,insecure,insecure_locks,no_root_squash,anongid=99,anonuid=99,no_subtree_check
(though the export options don't seem to matter for triggering the bug)
and mounted on the client with options:
vers=4,hard,intr,acl,rw,fsc
via the Linux automounter (but the issue persists when mounted from fstab or
mounted manually).
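
In case it helps, here's roughly what that setup looks like as an
/etc/exports entry and an equivalent manual mount command - the hostname,
export path and mount point below are just placeholders, not the actual
ones from my setup:

  # on the server, in /etc/exports (path and client spec are examples only):
  /srv/export  *(rw,async,insecure,insecure_locks,no_root_squash,anongid=99,anonuid=99,no_subtree_check)

  # on the client, the same options I used, mounted by hand instead of via autofs:
  mount -t nfs -o vers=4,hard,intr,acl,rw,fsc server:/srv/export /mnt/nfs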

Reproducing the bug is quite simple (a condensed command sequence is
sketched after these steps)...
1) Set up the server export and client mount as detailed above.
2) From the console (or in a terminal; but I only tested this once) in the
directory where you've mounted the NFS share, run:
nano testfile.txt
and write some text to the file.
3) Save the file and exit (Ctrl+X is how I did it).
4) When back at the prompt, immediately hit the up arrow on the keyboard (to
load the last typed command into the buffer) and hit enter.
5) Watch as the pretty text of the panic scrolls by :)
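
Condensed into a terminal session, the whole thing looks roughly like this
(the mount point is just an example):

  cd /mnt/nfs               # the NFS share mounted with 'fsc'
  nano testfile.txt         # type a few lines, save and exit with Ctrl+X
  # back at the shell prompt, press the up arrow and Enter straight away:
  nano testfile.txt         # the panic hits at (or shortly after) this point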

With the help of people on the nano-dev mailing list, I figured out that
it's the 'fsc' option which causes the panic - repeated tests without that
option active do not trigger it. However, this is /not/ a nano-specific bug
- it can be triggered by any command used on the mount. And besides, nano
shouldn't be able to take down the system :)
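
For the record, ruling out 'fsc' was as simple as remounting the share
without that option and repeating the steps above (again, the paths here
are just examples):

  umount /mnt/nfs
  mount -t nfs -o vers=4,hard,intr,acl,rw server:/srv/export /mnt/nfs
  # the same nano sequence as above - no panic without 'fsc'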

If anyone could double-check this and see if they can reproduce it, it would
be gratefully appreciated - I'd like to know it's not just me!

If any further information is required, please don't hesitate to reply.

Darren.