Re: [RFC][PATCH v2 00/21] PMEM NUMA node and hotness accounting/migration

From: Fengguang Wu
Date: Mon Jan 28 2019 - 21:00:51 EST


Hi Jonathan,

Thanks for showing the gap on tracking hot accesses from devices.

On Mon, Jan 28, 2019 at 05:42:39PM +0000, Jonathan Cameron wrote:
> On Wed, 2 Jan 2019 12:21:10 +0000
> Jonathan Cameron <jonathan.cameron@xxxxxxxxxx> wrote:
>
> > On Fri, 28 Dec 2018 20:52:24 +0100
> > Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> >
> > > [Ccing Mel and Andrea]
> > >
>
> Hi,
>
> I just wanted to highlight this section as I didn't feel we really
> addressed this in the earlier conversation.
>
> > * Hot pages may not be hot just because the host is using them a lot. It
> >   would be very useful to have a means of adding information available
> >   from accelerators beyond simple accessed bits (dreaming ;) One problem
> >   here is translation caches (ATCs), as they won't normally result in any
> >   updates to the page accessed bits. The Arm SMMU v3 spec for example
> >   makes it clear (though it's kind of obvious) that the ATS request is
> >   the only opportunity to update the accessed bit. The nasty option here
> >   would be to periodically flush the ATC to force accessed bit updates
> >   via repeats of the ATS request (ouch). That option only works if the
> >   IOMMU supports updating the accessed flag (optional on SMMU v3, for
> >   example).

If ATS-based accessed bit updates are supported, we could trigger them
when closing the /proc/PID/idle_pages file. We already do TLB flushes
at that time. For example,

[PATCH 15/21] ept-idle: EPT walk for virtual machine

    ept_idle_release():
        kvm_flush_remote_tlbs(kvm);

[PATCH 17/21] proc: introduce /proc/PID/idle_pages

    mm_idle_release():
        flush_tlb_mm(mm);
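
An ATC flush could hook into the same release path. Below is only a
rough sketch of that idea: iommu_sva_flush_atc() is a made-up
placeholder, since no such kernel API exists today:

    #include <linux/fs.h>
    #include <linux/sched/mm.h>
    #include <asm/tlbflush.h>

    static int mm_idle_release(struct inode *inode, struct file *file)
    {
        struct mm_struct *mm = file->private_data;

        /* CPU side: make the cleared accessed bits take effect */
        flush_tlb_mm(mm);

        /* Device side (hypothetical): invalidate ATC entries for this
         * mm, so the next device access must go through ATS again and
         * thus sets the accessed bit in the shared page tables */
        iommu_sva_flush_atc(mm);

        mmput(mm);
        return 0;
    }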

The flush cost is close to the minimum necessary in our current usage
model, where the user space scan+migration daemon runs a loop like this:

    loop:
        walk page table N times:
            open, read, close /proc/PID/idle_pages
            (the kernel flushes the TLB on file close)
            sleep for a short interval
        sort and migrate hot pages
        sleep for a while
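
In user-space C the scan step could look roughly like the sketch
below. The record format of /proc/PID/idle_pages is defined by patch
17; decode_and_accumulate() just stands in for whatever parsing the
daemon does with it:

    #include <fcntl.h>
    #include <unistd.h>

    /* Placeholder for the daemon's actual idle_pages parsing. */
    void decode_and_accumulate(const char *buf, ssize_t len);

    static void scan_pid(const char *idle_pages_path, int nr_walks)
    {
        char buf[4096];
        ssize_t n;

        for (int i = 0; i < nr_walks; i++) {
            int fd = open(idle_pages_path, O_RDONLY);

            if (fd < 0)
                return;
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                decode_and_accumulate(buf, n);
            close(fd);          /* kernel flushes the TLB here */
            usleep(100 * 1000); /* sleep for a short interval */
        }
    }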

> If we ignore the IOMMU hardware update issue, which will simply need to
> be addressed by future hardware if these techniques become common, how
> do we address the Address Translation Cache issue without potentially
> causing big performance problems by flushing the cache just to force an
> accessed bit update?
>
> These devices are frequently used with PRI and Shared Virtual
> Addressing, and can be accessing most of your memory without you having
> any visibility of it in the page tables (as they aren't walked if your
> ATC is well matched in size to your use case).
>
> A classic example would be accelerated DB walkers like the CCIX demo
> Xilinx has shown at a few conferences. The whole point of those is that
> most of the time only your large set of database walkers is using your
> memory, and they have translations cached for a good part of what they
> are accessing. Flushing that cache could hurt a lot.
> Pinning pages hurts for all the normal flexibility reasons.
>
> The last thing we want is to be migrating pages that can be very hot
> but in an invisible fashion.

If there is some other way to get hotness for special device memory,
the user space daemon could be extended to cover that, perhaps by
querying another new kernel interface.

By driving hotness accounting and migration from user space, we gain
this kind of flexibility. From the daemon's point of view,
/proc/PID/idle_pages provides one common way to get "accessed" bits and
hence hotness, but the daemon does not need to depend solely on it.
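
The migration step already has a user space path, too: once hot pages
are identified, the daemon can move them with the existing
move_pages(2) syscall. A minimal sketch (error handling trimmed;
dram_node is assumed to be the target DRAM NUMA node):

    #include <numaif.h>  /* move_pages(); link with -lnuma */
    #include <stdlib.h>

    static long migrate_hot(int pid, void **pages, unsigned long count,
                            int dram_node)
    {
        int *nodes = malloc(count * sizeof(*nodes));
        int *status = malloc(count * sizeof(*status));
        long ret;

        for (unsigned long i = 0; i < count; i++)
            nodes[i] = dram_node; /* target node for every hot page */

        /* MPOL_MF_MOVE_ALL requires CAP_SYS_NICE */
        ret = move_pages(pid, count, pages, nodes, status,
                         MPOL_MF_MOVE_ALL);
        free(nodes);
        free(status);
        return ret;
    }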

Thanks,
Fengguang