Re: Hugepages demand paging V2 [0/8]: Discussion and overview

From: Ray Bryant
Date: Wed Oct 27 2004 - 18:08:03 EST


Christoph Lameter wrote:
On Tue, 26 Oct 2004, Robin Holt wrote:


Sorry for being a stickler here, but the BTE is really part of the
I/O Interface portion of the shub. That portion has a separate clock
frequency from the memory controller (unfortunately slower). The BTE
can therefore zero at a slightly slower speed than the processor. It does
not, as you pointed out, trash the CPU cache.

One other feature of the BTE is that it can operate asynchronously from
the CPU. This could be used, during a clock interrupt, to schedule
additional huge page zero filling on multiple nodes at the same time.
This could result in a huge speed boost on machines that have multiple
memory-only nodes. That has not been tested thoroughly. We have done
considerable testing of the page zero functionality as well as the
error handling.
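
A minimal sketch of what Robin describes, just to make the idea concrete;
the per-node unzeroed-page lists and bte_zero_async() below are hypothetical
stand-ins, not existing SN2 or kernel interfaces:

#include <linux/mm.h>
#include <linux/list.h>

extern struct list_head hugepage_unzeroed_list[MAX_NUMNODES];      /* hypothetical */
extern int bte_zero_async(struct page *page, unsigned long bytes); /* hypothetical */

/*
 * Called from the clock tick: start one BTE zero on each node that still
 * has an unzeroed huge page queued, so the BTEs on several nodes run in
 * parallel while the CPUs keep doing other work.
 */
static void kick_hugepage_zeroing(void)
{
	int nid;

	for (nid = 0; nid < MAX_NUMNODES; nid++) {
		struct page *page;

		if (list_empty(&hugepage_unzeroed_list[nid]))
			continue;
		page = list_entry(hugepage_unzeroed_list[nid].next,
				  struct page, lru);
		if (bte_zero_async(page, HPAGE_SIZE) == 0)	/* queued ok */
			list_del(&page->lru);
	}
}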


If the hugepage patch supported some way of redirecting the clearing of a
huge page, then we could:

1. Set the huge pte to not-present so that we get a fault on access.
2. Run the BTE clearer.
3. On receiving a huge fault, check whether the BTE has finished.

This would parallelize the clearing of huge pages. But is that really more
efficient? There may be complexity involved in allowing multiple pages to be
cleared at once, and tracking the clears in progress is additional overhead.
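
To make those three steps concrete, here is a minimal sketch of the fault
side; the bte_zero_done() completion check and the pte helpers named here
are hypothetical stand-ins for whatever the hugetlb fault path and the SN2
BTE layer actually provide:

#include <linux/mm.h>
#include <linux/sched.h>

extern int bte_zero_done(struct page *page);	/* hypothetical completion check */

static int hugetlb_zero_fault(struct mm_struct *mm, struct vm_area_struct *vma,
			      unsigned long addr, pte_t *ptep, struct page *page)
{
	/*
	 * Step 1 happened earlier: the huge pte was left not-present.
	 * Step 2 queued the BTE zero of 'page'.  Step 3: on the fault,
	 * wait (or poll) until the BTE reports completion before the
	 * page becomes visible to the application.
	 */
	while (!bte_zero_done(page))
		cpu_relax();

	/* stand-ins for the real hugetlb pte helpers */
	set_huge_pte(mm, vma, ptep, make_huge_pte(vma, page));
	return VM_FAULT_MINOR;
}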




I'm personally of the opinion that using the BTE to "speculatively" clear
hugetlb pages in advance of when they are requested is not a good thing [tm].
One never knows if those pages will ever be requested, and in the meantime
tasks that need the BTE will be delayed by the speculative use.
But that is a personal bias :-), with no data to back it up.

AFAIK, it is faster to clear the page with the processor anyway, since the
processor runs at a faster clock. Yes, it destroys the processor cache,
but the application has clearly indicated that it wants the page NOW, please
(because it has faulted on it), and delivering the page to the application
as quickly as possible sounds like a good thing. I'm not sure reloading
the processor cache at this point is a cost we care about, given that the
application is likely just starting up anyway. I figure hugetlb pages are
allocated once and stay around a long, long time, so I'm not sure optimizing
to minimize cache damage is the correct way to go here.

The only obvious win is for memory-only nodes, which have a BTE but no CPUs.
There it is probably faster to use the local BTE than a remote CPU to clear
the page.
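
As a rough sketch of that heuristic (bte_zero_page() is a hypothetical
wrapper around the BTE; the clear_highpage() loop is the ordinary
CPU-driven zeroing of a huge page):

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/cpumask.h>
#include <linux/topology.h>

extern void bte_zero_page(struct page *page);	/* hypothetical BTE wrapper */

static void zero_huge_page(struct page *page, int nid)
{
	int i;

	if (cpus_empty(node_to_cpumask(nid))) {
		/* Memory-only node: its local BTE beats a remote CPU. */
		bte_zero_page(page);
	} else {
		/* A CPU on the node is faster, cache pollution or not. */
		for (i = 0; i < HPAGE_SIZE / PAGE_SIZE; i++)
			clear_highpage(&page[i]);
	}
}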

Does that make any sense?

--
Best Regards,
Ray
-----------------------------------------------
Ray Bryant
512-453-9679 (work) 512-507-7807 (cell)
raybry@xxxxxxx raybry@xxxxxxxxxxxxx
The box said: "Requires Windows 98 or better",
so I installed Linux.
-----------------------------------------------
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/