Re: [RFC] de-asmify the x86-64 system call slowpath

From: Linus Torvalds
Date: Thu Feb 06 2014 - 16:29:23 EST


On Wed, Feb 5, 2014 at 8:33 PM, Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> Doing the gang-lookup is hard, since it's all abstracted away, but the
> attached patch kind of tries to do what I described.
>
> This patch probably doesn't work, but something *like* this might be
> worth playing with.

Interesting. Here are some pte fault statistics with and without the patch.

I added a few new count_vm_event() counters - PTEFAULT, PTEFILE,
PTEANON, PTEWP and PTESPECULATIVE - for handle_pte_fault,
do_linear_fault, do_anonymous_page, do_wp_page and the "let's
speculatively fill the page tables" case, respectively.
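
The counters are just the obvious plumbing, something like this
(a sketch - the enum names match the counters above, but the exact
placement and hook sites are hand-waved, not the actual diff):

    /* include/linux/vm_event_item.h: new vm_event_item entries */
    enum vm_event_item {
            ...
            PTEFAULT, PTEFILE, PTEANON, PTEWP, PTESPECULATIVE,
            ...
    };

    /* mm/memory.c: bump the matching counter on entry to each path */
    static int handle_pte_fault(struct mm_struct *mm, ...)
    {
            count_vm_event(PTEFAULT);
            ...
    }

plus the matching strings in vmstat_text[] so they show up in
/proc/vmstat.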

This is what the statistics look like for me doing a "make -j" of a
fully built allmodconfig build:

 5007450 ptefault
 3272038 ptefile
 1470242 pteanon
  265121 ptewp
       0 ptespeculative

where obviously the ptespeculative count is zero, and I was wrong
about anon faults being most common - the file mapping faults really
are the most common for this load (it's fairly fork/exec heavy, I
guess).

This is what happens with that patch I posted:

 2962090 ptefault
 1195130 ptefile
 1490460 pteanon
  276479 ptewp
 5690724 ptespeculative

about 2M page faults went away, and the numbers make sense (ie they
got removed from the ptefile column - the other number changes are
just noise).

Now, we filled 5.7M extra page table entries to do that (that
ptespeculative number), so the "hitrate" for the speculative filling
was basically about 35% (roughly 2M avoided faults for 5.7M fills).
Which doesn't sound crazy - the code basically populates the 8 aligned
pages around the faulting address.
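
To be concrete, "the 8 aligned pages" means the naturally aligned
8-page window containing the faulting address, something like this
(variable names illustrative, not the exact patch code):

    unsigned long start = address & ~((8UL << PAGE_SHIFT) - 1);
    unsigned long end = start + (8UL << PAGE_SHIFT);
    unsigned long addr;

    for (addr = start; addr < end; addr += PAGE_SIZE) {
            if (addr == (address & PAGE_MASK))
                    continue;       /* the real fault fills this one */
            /* prefill the pte for addr if the page is already in
             * the page cache, and silently skip it otherwise */
    }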

Now, because I didn't make this easily dynamically configurable I have
no good way to really test timing, but the numbers say that at least
the concept works.

Whether the reduced number of page faults and presumably better
locality for the speculative prefilling makes up for the fact that 65%
of the prefilled entries never get used is very debatable. But I think
it's a somewhat interesting experiment, and the patch was certainly
not hugely complicated.

I should add a switch to turn this on/off and then do many builds in
sequence to get some kind of idea of whether it actually changes
performance. But if 5% of the overall time was literally spent on the
*exception* part of the page fault (ie not counting all the work we do
in the kernel), I think it's worth looking at this.
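
The switch could be as dumb as a boot parameter - a sketch, with a
made-up "spec_fill=" name:

    static int speculative_fill __read_mostly = 1;

    static int __init setup_spec_fill(char *str)
    {
            speculative_fill = simple_strtoul(str, NULL, 0);
            return 1;
    }
    __setup("spec_fill=", setup_spec_fill);

with the fault path checking speculative_fill before doing the
prefill, so back-to-back builds with spec_fill=0/1 become trivial.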

Linus