Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot

From: David Rientjes
Date: Fri Nov 24 2023 - 15:00:11 EST


On Fri, 24 Nov 2023, David Hildenbrand wrote:

> On 24.11.23 20:44, David Rientjes wrote:
> > On Thu, 23 Nov 2023, David Hildenbrand wrote:
> >
> > > On 23.11.23 14:30, Gang Li wrote:
> > > > From: Gang Li <ligang.bdlg@xxxxxxxxxxxxx>
> > > >
> > > > Inspired by these patches [1][2], this series aims to speed up the
> > > > initialization of hugetlb during the boot process through
> > > > parallelization.
> > > >
> > > > It is particularly effective in large systems. On a machine equipped
> > > > with 1TB of memory and two NUMA nodes, the time for hugetlb
> > > > initialization was reduced from 2 seconds to 1 second.
> > >
> > > Sorry to say, but why is that a scenario worth adding complexity for /
> > > optimizing for? You don't cover that, so there is a clear lack in the
> > > motivation.
> > >
> > > 2 vs. 1 second on a 1 TiB system is usually really just noise.
> > >
> >
> > The cost will continue to grow over time, so I presume that Gang is trying
> > to get out in front of the issue even though it may not be a large savings
> > today.
> >
> > Running single boot tests, with the latest upstream kernel, allocating
> > 1,440 1GB hugetlb pages on a 1.5TB AMD host appears to take 1.47s.
> >
> > But allocating 11,776 1GB hugetlb pages on a 12TB Intel host takes 65.2s
> > today with the current implementation.
>
> And there, the 65.2s won't be noise because that 12TB system is up by a snap
> of a finger? :)
>

In this single boot test, total boot time was 373.78s, so 1GB hugetlb
allocation is 17.4% of that.
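
(For anyone reproducing the measurement: boot-time allocation of 1GB
pages is requested on the kernel command line, e.g. something along
the lines of

    default_hugepagesz=1G hugepagesz=1G hugepages=11776

for the 12TB host. The exact parameters here are an illustration
rather than copied from the test setup; the point is that all of that
allocation currently runs serially during early boot.)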

Would love to see what the numbers would look like if 1GB pages were
supported by this series as well.