Re: [PATCH 2/4] lib: add error_report_notify to collect debugging tools' reports

From: Alexander Potapenko
Date: Thu Jan 14 2021 - 04:52:32 EST


On Thu, Jan 14, 2021 at 1:06 AM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Wed, 13 Jan 2021 10:16:55 +0100 Alexander Potapenko <glider@xxxxxxxxxx> wrote:
>
> > With the introduction of various production error-detection tools, such as
> > MTE-based KASAN and KFENCE, the need arises to efficiently notify the
> > userspace OS components about kernel errors. Currently, no facility exists
> > to notify userspace about a kernel error from such bug-detection tools.
> > The problem is obviously not restricted to the above bug detection tools,
> > and applies to any error reporting mechanism that does not panic the
> > kernel; this series, however, will only add support for KASAN and KFENCE
> > reporting.
> >
> > All such error reports appear in the kernel log. But, when such errors
> > occur, userspace would normally need to read the entire kernel log and
> > parse the relevant errors. This is error prone and inefficient, as
> > userspace needs to continuously monitor the kernel log for error messages.
> > On certain devices, this is unfortunately not acceptable. Therefore, we
> > need to revisit how reports are propagated to userspace.
> >
> > The library added, error_report_notify (CONFIG_ERROR_REPORT_NOTIFY),
> > solves the above by using the error_report_start/error_report_end tracing
> > events and exposing the last report and the total report count to the
> > userspace via /sys/kernel/error_report/last_report and
> > /sys/kernel/error_report/report_count.
> >
> > Userspace apps can call poll(POLLPRI) on those files to get notified about
> > the new reports without having to watch dmesg in a loop.
>
> It would be nice to see some user-facing documentation for this, under
> Documentation/. How to use it, what the shortcomings are, etc.

Good point, will do.

> For instance... what happens when userspace is slow reading
> /sys/kernel/error_report/last_report? Does that file buffer multiple
> reports? Does the previous one get overwritten? etc. Words on how
> this obvious issue is handled...

Yes, a report can be overwritten by a newer one while userspace is
reading it. The recommended way to detect this is to read the value
of /sys/kernel/error_report/report_count before and after reading the
report: if the two values differ, the report may be torn and should
be re-read.

> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.debug
> > @@ -209,6 +209,20 @@ config DEBUG_BUGVERBOSE
> > of the BUG call as well as the EIP and oops trace. This aids
> > debugging but costs about 70-100K of memory.
> >
> > +config ERROR_REPORT_NOTIFY
> > + bool "Expose memory error reports to the userspace"
>
> There's really nothing "memory" specific about this? Any kernel
> subsystem could use it?

Indeed. Perhaps it's better to emphasize "production" here, because
users of debugging tools are more or less happy with dmesg output.

>
> > + depends on TRACING
> > + help
> > + When enabled, captures error reports from debugging tools (such as
> > + KFENCE or KASAN) using console tracing, and exposes reports in
> > + /sys/kernel/error_report/: the file last_report contains the last
> > + report (with maximum report length of PAGE_SIZE), and report_count,
> > + the total report count.
> > +
> > + Userspace programs can call poll(POLLPRI) on those files to get
> > + notified about the new reports without having to watch dmesg in a
> > + loop.
>
> So we have a whole new way of getting debug info out of the kernel. I
> fear this will become a monster. And anticipating that, we should make
> darn sure that the interface is right, and is extensible.

Let me elaborate a bit on the problem we are trying to solve here.
It is specific to Android, but other Linux-based systems may require
something similar.
There is a userspace daemon that collects kernel and userspace crash
reports from the device (if its owner has opted into that), and we
want that daemon to also collect non-fatal error reports.

There are several issues with that:
- there is currently no way to synchronously notify userspace about
an error, so the daemon would have to actively monitor the kernel log
(or some other file, e.g. /proc/sys/kernel/tainted) for certain
strings;
- once we figure out there is an error report available, the daemon
has to find its beginning and end, and also filter out the lines
that do not belong to that report;
- all of this requires letting the daemon see the whole dmesg output,
which may contain sensitive data accidentally printed by the kernel.

So, first of all, our solution had to provide a poll()-based
interface to avoid reading files in a loop, and that interface should
fire every time an error report is printed.
Adding ftrace tracepoints to every tool at the points where reports
start and end is perhaps the least invasive option, and it also
allows multiple subscribers (plus free tracing!).
Then, since the notification library was already in the business of
trace probes, we thought it made sense to capture the whole report,
assuming that every dmesg line from the same task/CPU between the
report probes belongs to that report.
This drastically reduces the amount of data the userspace daemon has
access to (only the report instead of the whole dmesg) and removes
the need for active polling.

A potential pinch point is the report size, which cannot exceed
PAGE_SIZE (4K) if we want it to live in sysfs.
It turned out certain reports didn't fit within that limit when taken
as-is, but stripping the timestamps and the task IDs printed by
CONFIG_PRINTK_CALLER saved us 1.5-2K in those cases.

Given that the information we expose is a subset of what dmesg
provides, I wouldn't call it a "whole new way", though.
Existing users will probably stick to dmesg unless they want to be
notified of new errors.