Re: [PATCH] livepatch: Avoid CPU hogging with cond_resched

From: Petr Mladek
Date: Mon Jan 03 2022 - 11:06:36 EST


On Wed 2021-12-29 20:16:50, David Vernet wrote:
> Adding modules + BPF list and maintainers to this thread.
>
> David Vernet <void@xxxxxxxxxxxxx> wrote on Wed [2021-Dec-29 13:56:47 -0800]:
> > When initializing a 'struct klp_object' in klp_init_object_loaded(), and
> > performing relocations in klp_resolve_symbols(), klp_find_object_symbol()
> > is invoked to look up the address of a symbol in an already-loaded module
> > (or vmlinux). This, in turn, calls kallsyms_on_each_symbol() or
> > module_kallsyms_on_each_symbol() to find the address of the symbol that is
> > being patched.
> >
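For context, the lookup dispatch in klp_find_object_symbol() looks
roughly like this (paraphrased from kernel/livepatch/core.c; the
struct fields are from memory and not a verbatim copy):

	struct klp_find_arg args = {
		.objname = objname,	/* NULL means vmlinux */
		.name	 = name,	/* symbol being resolved */
		.pos	 = sympos,	/* disambiguates duplicate symbols */
	};

	if (objname)
		module_kallsyms_on_each_symbol(klp_find_callback, &args);
	else
		kallsyms_on_each_symbol(klp_find_callback, &args);

Either way, a single lookup walks the whole vmlinux symbol table or
the symbol tables of all loaded modules, which is why these loops can
run for a long time without rescheduling.
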
> > It turns out that symbol lookups often take up the most CPU time when
> > enabling and disabling a patch, and may hog the CPU and cause other tasks
> > on that CPU's runqueue to starve -- even in paths where interrupts are
> > enabled. For example, under certain workloads, enabling a KLP patch with
> > many objects or functions may cause ksoftirqd to be starved, and thus for
^^^^^^^^^^^^^^^^^^^^^^^^^
This suggests that a single kallsyms_on_each_symbol() call is not the
real problem, and that cond_resched() might be called unnecessarily
often there. I wonder if it would be enough to add cond_resched()
into the two loops calling klp_find_object_symbol().
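
Something like this completely untested sketch (the loop shape is from
my reading of klp_init_object_loaded(); details may differ):

	klp_for_each_func(obj, func) {
		ret = klp_find_object_symbol(obj->name, func->old_name,
					     func->old_sympos,
					     (unsigned long *)&func->old_func);
		if (ret)
			return ret;

		/* Yield between per-function lookups instead. */
		cond_resched();
	}

The same could be done in the relocation loop in klp_resolve_symbols().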

That said, kallsyms_on_each_symbol() is a slow path and there might be
many symbols to walk. So it might be the right place after all.

I am just thinking out loud. I do not have a strong opinion, and I am
fine with any cond_resched() location that solves the problem. Feel
free to use:

Acked-by: Petr Mladek <pmladek@xxxxxxxx>

Best Regards,
Petr


> > interrupts to be backlogged and delayed. This may end up causing TCP
> > retransmits on the host where the KLP patch is being applied, and in
> > general, may cause any softirqs serviced by ksoftirqd to be delayed while
> > the patch is being applied.
> >
> > So as to ensure that kallsyms_on_each_symbol() does not end up hogging the
> > CPU, this patch adds a call to cond_resched() in kallsyms_on_each_symbol()
> > and module_kallsyms_on_each_symbol(), which are invoked when doing a symbol
> > lookup in vmlinux and a module respectively. Without this patch, if a
> > live-patch is applied on a 36-core Intel host with heavy TCP traffic, a
> > ~10x spike is observed in TCP retransmits while the patch is being applied.
> > Additionally, collecting sched events with perf indicates that ksoftirqd is
> > awakened ~1.3 seconds before it's eventually scheduled. With the patch, no
> > increase in TCP retransmit events is observed, and ksoftirqd is scheduled
> > shortly after it's awakened.
> >
> > Signed-off-by: David Vernet <void@xxxxxxxxxxxxx>
> > ---
> > kernel/kallsyms.c | 1 +
> > kernel/module.c | 2 ++
> > 2 files changed, 3 insertions(+)
> >
> > diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
> > index 0ba87982d017..2a9afe484aec 100644
> > --- a/kernel/kallsyms.c
> > +++ b/kernel/kallsyms.c
> > @@ -223,6 +223,7 @@ int kallsyms_on_each_symbol(int (*fn)(void *, const char *, struct module *,
> > ret = fn(data, namebuf, NULL, kallsyms_sym_address(i));
> > if (ret != 0)
> > return ret;
> > + cond_resched();
> > }
> > return 0;
> > }
> > diff --git a/kernel/module.c b/kernel/module.c
> > index 40ec9a030eec..c96160f7f3f5 100644
> > --- a/kernel/module.c
> > +++ b/kernel/module.c
> > @@ -4462,6 +4462,8 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
> > mod, kallsyms_symbol_value(sym));
> > if (ret != 0)
> > goto out;
> > +
> > + cond_resched();
> > }
> > }
> > out:
> > --
> > 2.30.2
> >