Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

From: Mel Gorman
Date: Wed Jun 10 2015 - 05:00:16 EST


On Wed, Jun 10, 2015 at 10:33:32AM +0200, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@xxxxxxx> wrote:
>
> > Linear mapped reader on a 4-node machine with 64G RAM and 48 CPUs
> >
> >                                        4.1.0-rc6         4.1.0-rc6
> >                                          vanilla      flushfull-v6
> > Ops lru-file-mmap-read-elapsed    162.88 (  0.00%)  120.81 ( 25.83%)
> >
> >                4.1.0-rc6       4.1.0-rc6
> >                  vanilla  flushfull-v6r5
> > User              568.96          614.68
> > System           6085.61         4226.61
> > Elapsed           164.24          122.17
> >
> > This is showing that the readers completed 25.83% faster with 30% less
> > system CPU time. From vmstats, it is known that the vanilla kernel was
> > interrupted roughly 900K times per second during the steady phase of the
> > test and the patched kernel was interrupted 180K times per second.
> >
> > The impact is lower on a single socket machine.
> >
> >                                        4.1.0-rc6         4.1.0-rc6
> >                                          vanilla      flushfull-v6
> > Ops lru-file-mmap-read-elapsed     25.43 (  0.00%)   20.59 ( 19.03%)
> >
> >                4.1.0-rc6       4.1.0-rc6
> >                  vanilla    flushfull-v6
> > User               59.14           58.99
> > System            109.15           77.84
> > Elapsed            27.32           22.31
> >
> > It's still a noticeable improvement with vmstat showing interrupts went
> > from roughly 500K per second to 45K per second.
>
> Btw., I tried to compare your previous (v5) pfn-tracking numbers with these
> full-flushing numbers, and found that the IRQ rate appears to be the same:
>

That's expected because the number of IPIs sent is the same. What
changes is the bookkeeping required to track the PFNs and the amount
of work done within the IPI itself.
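
To make that concrete, here is a minimal, self-contained userspace
sketch of the difference in the IPI handler's work. flush_one_pfn() and
flush_all() are hypothetical stand-ins for the architecture's
invalidation primitives, not the kernel's actual code:

#include <stdio.h>

/* Hypothetical stand-ins for the architecture's TLB primitives. */
static void flush_one_pfn(unsigned long pfn)
{
	printf("invalidate the TLB entry mapping pfn %lu\n", pfn);
}

static void flush_all(void)
{
	printf("invalidate the entire TLB\n");
}

/* v5-style handler: the IPI payload carries the tracked PFNs and the
 * handler invalidates each entry individually. */
static void ipi_flush_tracked(const unsigned long *pfns, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++)
		flush_one_pfn(pfns[i]);
}

/* v6-style handler: the same number of IPIs arrive, but there is no
 * PFN bookkeeping; the handler simply flushes everything. */
static void ipi_flush_full(void)
{
	flush_all();
}

int main(void)
{
	unsigned long pfns[] = { 100, 101, 102 };

	ipi_flush_tracked(pfns, 3);	/* same IPI count ... */
	ipi_flush_full();		/* ... different per-IPI work */
	return 0;
}

Either way every relevant CPU receives one IPI, which is why the
interrupt rates match; v6 only drops the cost of building and walking
the PFN list.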

> > > From vmstats, it is known that the vanilla kernel was interrupted roughly 900K
> > > times per second during the steady phase of the test and the patched kernel
> > > was interrupts 180K times per second.
>
> > > It's still a noticeable improvement with vmstat showing interrupts went from
> > > roughly 500K per second to 45K per second.
>
> ... is that because the batching limit in the pfn-tracking case was high enough to
> not be noticeable in the vmstat?
>

It's simply that the single-socket machine has fewer cores and less
activity overall.

> In the full-flushing case (v6 without patch 4) the batching limit is 'infinite',
> we'll batch as long as possible, right?
>

No, because we must flush before pages are freed, the maximum batch is
bounded by SWAP_CLUSTER_MAX. If we freed a page before the flush then
in theory the page could be reallocated and a stale TLB entry could
allow access to unrelated data. It would be almost impossible to
trigger corruption this way, but it's a concern.
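
As a rough sketch of that ordering constraint (send_flush_ipis() and
free_one_page() are hypothetical stand-ins for the real flush and free
paths; SWAP_CLUSTER_MAX really is 32 in the kernel):

#include <stdio.h>

#define SWAP_CLUSTER_MAX	32	/* reclaim batch size */

struct page;				/* opaque for this sketch */

/* Hypothetical stand-ins for the real flush and free paths. */
static void send_flush_ipis(void)
{
	printf("IPI every CPU that used the mm and flush its TLB\n");
}

static void free_one_page(struct page *page)
{
	printf("return page %p to the allocator\n", (void *)page);
}

static void release_batch(struct page **pages, unsigned int nr)
{
	unsigned int i;

	/*
	 * The flush must complete before any page in the batch is
	 * freed.  Freeing first would let the allocator hand a page
	 * to someone else while a stale TLB entry on another CPU
	 * still maps it.
	 */
	send_flush_ipis();

	for (i = 0; i < nr; i++)
		free_one_page(pages[i]);
}

int main(void)
{
	struct page *batch[SWAP_CLUSTER_MAX] = { 0 };

	/* At most SWAP_CLUSTER_MAX pages accumulate before a flush. */
	release_batch(batch, SWAP_CLUSTER_MAX);
	return 0;
}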

--
Mel Gorman
SUSE Labs