Re: general protection fault in perf_misc_flags

From: Nick Desaulniers
Date: Fri Sep 25 2020 - 20:32:28 EST


On Fri, Sep 25, 2020 at 5:22 AM Dmitry Vyukov <dvyukov@xxxxxxxxxx> wrote:
>
> On Wed, Sep 23, 2020 at 5:20 PM Dmitry Vyukov <dvyukov@xxxxxxxxxx> wrote:
> >
> > On Wed, Sep 23, 2020 at 12:34 PM Borislav Petkov <bp@xxxxxxxxx> wrote:
> > >
> > > On Wed, Sep 23, 2020 at 11:24:48AM +0200, Dmitry Vyukov wrote:
> > > > 3. Run syzkaller locally with custom patches.
> > >
> > > Let's say I wanna build the kernel with clang-10 using your .config and
> > > run it in a vm locally. What are the steps in order to reproduce the
> > > same workload syzkaller runs in the guest on the GCE so that I can at
> > > least try get as close as possible to reproducing locally?
> >
> > It's a random fuzzing workload. You can get this workload by running
> > syzkaller locally:
> > https://github.com/google/syzkaller/blob/master/docs/linux/setup_ubuntu-host_qemu-vm_x86-64-kernel.md

These are virtualized guests, right? Has anyone played with getting
`rr` working to record traces of guests in QEMU?

I had seen the bug that generated this blog post on GitHub:
https://julialang.org/blog/2020/09/rr-memory-magic/

That way, even if syzkaller didn't have a reproducer binary, it would
at least have a replayable trace.

Boris, one question I have: doesn't the kernel mark the pages backing
executable code read-only at some point? If that were the case, then I
don't see how the instruction stream could be modified. I guess static
key patching would have to undo that permission mapping before
patching.
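
For reference, a minimal sketch of the kind of branch site that gets
patched, assuming the usual <linux/jump_label.h> API (the key and
function names below are made up):

#include <linux/jump_label.h>

/* Hypothetical slow-path callee, just so the sketch is complete. */
static void my_slow_feature(void) { }

/* Hypothetical key; defaults to false, so the branch compiles to a nop. */
DEFINE_STATIC_KEY_FALSE(my_feature_key);

void my_hot_path(void)
{
	/* Emitted as a 5-byte nop until the key is enabled. */
	if (static_branch_unlikely(&my_feature_key))
		my_slow_feature();
}

void my_feature_enable(void)
{
	/*
	 * Rewrites the nop at the branch site into a jmp. On x86 this
	 * goes through text_poke_bp(), which, as I understand it, pokes
	 * the bytes through a temporary writable alias rather than
	 * flipping the permissions on the read-only kernel text mapping.
	 */
	static_branch_enable(&my_feature_key);
}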

You're right that the corrupted length is shorter than what I would
have expected from static key patching (an x86 jump-label patch
rewrites a 5-byte nop/jmp, while an errant int store only touches 4
bytes). That could very well be a write through a dangling int
pointer...

> >
> > The exact clang compiler syzbot used is available here:
> > https://github.com/google/syzkaller/blob/master/docs/syzbot.md#crash-does-not-reproduce
>
> I've marked all other similar ones a dup of this one. Now you can see
> all manifestations on the dashboard:
> https://syzkaller.appspot.com/bug?extid=ce179bc99e64377c24bc
>
> Another possible debugging vector on this:
> The location of crashes does not seem to be completely random and
> evenly spread across kernel code. I think there are many more static
> branches (mm, net), but we have 3 crashes in vdso and 9 in paravirt
> code + these 6 crashes in perf_misc_flags which looks a bit like an
> outlier (?). What's special about paravirt/vdso?..



--
Thanks,
~Nick Desaulniers