Re: [PATCH v7 13/20] x86/virt/tdx: Allocate and set up PAMTs for TDMRs

From: Dave Hansen
Date: Mon Nov 28 2022 - 11:39:33 EST


On 11/24/22 03:46, Huang, Kai wrote:
> On Wed, 2022-11-23 at 14:57 -0800, Dave Hansen wrote:
>> On 11/20/22 16:26, Kai Huang wrote:
>>> Use alloc_contig_pages() since PAMT must be a physically contiguous area
>>> and it may be potentially large (~1/256th of the size of the given TDMR).
>>> The downside is that alloc_contig_pages() may fail at runtime. One (bad)
>>> mitigation is to launch a TD guest early during system boot to get those
>>> PAMTs allocated early, but the only way to fix this is to add a boot
>>> option to allocate or reserve PAMTs during kernel boot.
>>
>> FWIW, we all agree that this is a bad permanent way to leave things.
>> You can call me out here as proposing that this wart be left in place
>> while this series is merged and is a detail we can work on afterward
>> with new module params, boot options, Kconfig or whatever.
>
> Sorry, do you mean to call this out in the cover letter, or in this changelog?

Cover letter would be best. But, a note in the changelog that it is
imperfect and will be improved on later would also be nice.

>>> +	/*
>>> +	 * Fall back to allocating the TDMR's metadata from node 0 when
>>> +	 * no TDX memory block can be found. This should never happen
>>> +	 * since TDMRs originate from TDX memory blocks.
>>> +	 */
>>> +	WARN_ON_ONCE(1);
>>
>> That's probably better as a pr_warn() or something. A backtrace and all
>> that jazz seems a bit overly dramatic for this.
>
> How about below?
>
> pr_warn("TDMR [0x%llx, 0x%llx): unable to find local NUMA node for PAMT
> allocation, fallback to use node 0.\n");

I actually try to make these somewhat mirror the code. For instance, if
you are searching using *just* the start TDMR address, then the message
should only talk about the start address. Also, it's not trying to find
a *node* per se. It's trying to find a 'tmb'. So, if someone wanted to
debug this problem, they would actually want to dump out the tmbs.
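
Maybe something more along these lines instead (just a sketch, reusing
the tdmr_start() helper from your patch and printing only the address
the loop actually searches on):

	pr_warn("TDMR [0x%llx, ...): No tmb found, falling back to node 0 for PAMT allocation.\n",
		tdmr_start(tdmr));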

But, back to the loop that this message describes:

> +	/* Find the first memory region covered by the TDMR */
> +	list_for_each_entry(tmb, &tdx_memlist, list) {
> +		if (tmb->end_pfn > (tdmr_start(tdmr) >> PAGE_SHIFT))
> +			return tmb->nid;
> +	}

That loop is funky. It's not obvious at *all* why it even works.

1. A 'tmb' describes "real" memory. It never covers holes.
2. This code is trying to find *a* 'tmb' to place a structure in. It
needs real memory to place this, of course.
3. A 'tdmr' may include holes and may not have a 'tmb' at either its
start or end address.
4. A 'tdmr' is expected to cover one or more 'tmb's. If there are no
'tmb's, then the TDMR is going to be marked as all reserved and is
effectively wasted.
5. A 'tdmr' may cover more than one NUMA node. If this happens, it is
OK to get memory from any of those nodes for that tdmr's PAMT.

I'd include this comment on the loop:

A TDMR must cover at least part of one TMB. That TMB will end
after the TDMR begins. But, that TMB may have started before
the TDMR. Find the next 'tmb' that _ends_ after this TDMR
begins. Ignore 'tmb' start addresses. They are irrelevant.

Maybe even a little ASCII diagram about the different tmb configurations
that this can find:

      |      TDMR1      |      TDMR2      |
  |---tmb---|
              |tmb|
               |------tmb-------|
                                 |------tmb-------|

I'd also include this on the function:

/*
 * Locate a NUMA node which should hold the allocation of the @tdmr
 * PAMT. This node will have some memory covered by the TDMR. The
 * relative amount of memory covered is not considered.
 */
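
Putting all of that together, this is roughly the shape I'd expect the
helper to end up with. The tdmr_get_nid() name and the struct names are
just my shorthand from skimming the patch, so please don't read this as
a literal diff:

/*
 * Locate a NUMA node which should hold the allocation of the @tdmr
 * PAMT. This node will have some memory covered by the TDMR. The
 * relative amount of memory covered is not considered.
 */
static int tdmr_get_nid(struct tdmr_info *tdmr)
{
	struct tdx_memblock *tmb;

	/*
	 * A TDMR must cover at least part of one TMB. That TMB will end
	 * after the TDMR begins. But, that TMB may have started before
	 * the TDMR. Find the next 'tmb' that _ends_ after this TDMR
	 * begins. Ignore 'tmb' start addresses. They are irrelevant.
	 */
	list_for_each_entry(tmb, &tdx_memlist, list) {
		if (tmb->end_pfn > (tdmr_start(tdmr) >> PAGE_SHIFT))
			return tmb->nid;
	}

	/*
	 * Fall back to node 0 when no 'tmb' is found. This should never
	 * happen since TDMRs originate from the 'tmb' list.
	 */
	pr_warn("TDMR [0x%llx, ...): No tmb found, falling back to node 0 for PAMT allocation.\n",
		tdmr_start(tdmr));

	return 0;
}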