Re: [PATCH v5 12/13] KVM: Optimize gfn lookup in kvm_zap_gfn_range()

From: Maciej S. Szmigiero
Date: Thu Oct 21 2021 - 17:45:13 EST


On 21.10.2021 18:30, Sean Christopherson wrote:
> On Thu, Oct 21, 2021, Maciej S. Szmigiero wrote:
>> On 21.10.2021 01:47, Sean Christopherson wrote:
>>> In this case, I would honestly just drop the helper. It's really hard to express
>>> what this function does in a name that isn't absurdly long, and there's exactly
>>> one user at the end of the series.

>> The "upper bound" is a common name for a binary search operation that
>> finds the first node whose key is strictly greater than the searched key.

> Ah, that I did not know (obviously). But I suspect that detail will be lost on
> other readers as well, even if they are familiar with the terminology.
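For readers who haven't met the term: an "upper bound" search returns the first element strictly greater than the key. A self-contained sketch over a sorted array (illustrative userspace code, not taken from the patch, where the same rule is applied to rb-tree nodes):

```c
#include <assert.h>
#include <stddef.h>

/*
 * "Upper bound" binary search: return the index of the first element
 * strictly greater than @key, or @n if every element is <= @key.
 */
static size_t upper_bound(const int *arr, size_t n, int key)
{
	size_t lo = 0, hi = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (arr[mid] <= key)
			lo = mid + 1;	/* arr[mid] is not strictly greater */
		else
			hi = mid;	/* arr[mid] is a candidate answer */
	}
	return lo;
}
```

So for {1, 3, 5, 7} and key 3 the result is index 2 (the element 5), and a key at or past the largest element yields n.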

>> It can be integrated into its caller but I would leave a comment there
>> describing what kind of operation that block of code does to aid in
>> understanding the code.

> Yeah, completely agree a comment would be wonderful.

👍

>> Although, to be honest, I don't quite get the reason for doing this
>> considering that you want to put a single "rb_next()" call into its own
>> helper for clarity below.

> The goal is to make the macro itself easy to understand, even if the reader may
> not understand the underlying details. The bare rb_next() forces the reader to
> pause to think about exactly what "node" is, and perhaps even dive into the code
> for the other helpers.
>
> With something like this, a reader that doesn't know the memslots details can
> get a good idea of the basic gist of the macro without having to even know the
> type of "node". Obviously someone writing code will need to know the type, but
> for readers bouncing around code it's a detail they don't need to know.
>
>	#define kvm_for_each_memslot_in_gfn_range(node, slots, start, end)	\
>		for (node = kvm_get_first_node(slots, start);			\
>		     !kvm_is_valid_node(slots, node, end);			\
>		     node = kvm_get_next_node(node))

> Hmm, on that point, having the caller do
>
>	memslot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
>
> is more than a bit odd, and as is the case with the bare rb_next(), bleeds
> implementation details into code that really doesn't care about implementation
> details. Eww, and looking closer, the caller also needs to grab slots->node_idx.
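For readers unfamiliar with the idiom: container_of() recovers a pointer to the enclosing structure from a pointer to one of its embedded members by subtracting the member's offset. A toy userspace model (the type and helper names below are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace re-creation of the kernel's container_of() idiom. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct toy_node { int color; };

struct toy_slot {
	unsigned long base_gfn;
	struct toy_node gfn_node[2];	/* one node per tree, as in the rework */
};

/* Given a node embedded at index @idx, get back the containing slot. */
static struct toy_slot *toy_slot_of(struct toy_node *node, int idx)
{
	return idx ? container_of(node, struct toy_slot, gfn_node[1])
		   : container_of(node, struct toy_slot, gfn_node[0]);
}
```

The index juggling is exactly the detail the caller of the bare macro would otherwise have to carry around, which is the argument for hiding it in an iterator.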

> So while I would love to avoid an opaque iterator, adding one would be a net
> positive in this case. E.g.
>
>	/* Iterator used for walking memslots that overlap a gfn range. */
>	struct kvm_memslot_iter {
>		struct rb_node *node;
>		struct kvm_memory_slot *memslot;
>		struct kvm_memslots *slots;
>		gfn_t start;
>		gfn_t end;
>	};
>
>	static inline void kvm_memslot_iter_start(struct kvm_memslot_iter *iter,
>						  struct kvm_memslots *slots,
>						  gfn_t start, gfn_t end)
>	{
>		...
>	}
>
>	static inline bool kvm_memslot_iter_is_valid(struct kvm_memslot_iter *iter)
>	{
>		/*
>		 * If this slot starts beyond or at the end of the range, so
>		 * does every following one.
>		 */
>		return iter->node && iter->memslot->base_gfn < iter->end;
>	}
>
>	static inline void kvm_memslot_iter_next(struct kvm_memslot_iter *iter)
>	{
>		iter->node = rb_next(iter->node);
>		if (!iter->node)
>			return;
>
>		iter->memslot = container_of(iter->node, struct kvm_memory_slot,
>					     gfn_node[iter->slots->node_idx]);
>	}
>
>	/* Iterate over each memslot *possibly* intersecting [start, end) range */
>	#define kvm_for_each_memslot_in_gfn_range(iter, slots, start, end)	\
>		for (kvm_memslot_iter_start(iter, slots, start, end);		\
>		     kvm_memslot_iter_is_valid(iter);				\
>		     kvm_memslot_iter_next(iter))


The iterator-based for_each implementation looks pretty nice (love the
order and consistency that higher-level abstractions bring to code) -
will change the code to use iterators instead.

It also solves the kvm_is_valid_node() naming issue below.
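Outside the kernel, the shape of that iterator can be modeled with a sorted array standing in for the rb-tree. Everything below is an illustrative toy, not the patch's code; in particular the simplified start logic skips the binary search and the check for a preceding overlapping slot:

```c
#include <assert.h>
#include <stddef.h>

struct toy_slot {
	unsigned long base_gfn;
};

struct toy_iter {
	const struct toy_slot *slot;	/* current position */
	const struct toy_slot *stop;	/* one past the last entry */
	unsigned long end;		/* end of the queried gfn range */
};

static void toy_iter_start(struct toy_iter *iter, const struct toy_slot *slots,
			   size_t nr, unsigned long start, unsigned long end)
{
	iter->slot = slots;
	iter->stop = slots + nr;
	iter->end = end;
	/*
	 * Simplified: advance to the first slot at or after @start.  The
	 * real code binary-searches and must also consider the preceding
	 * slot, which may still overlap the range.
	 */
	while (iter->slot < iter->stop && iter->slot->base_gfn < start)
		iter->slot++;
}

/* Mirrors the is_valid check: a slot starting at/past @end ends the walk. */
static int toy_iter_is_valid(const struct toy_iter *iter)
{
	return iter->slot < iter->stop && iter->slot->base_gfn < iter->end;
}

static void toy_iter_next(struct toy_iter *iter)
{
	iter->slot++;
}

#define toy_for_each_slot_in_range(iter, slots, nr, start, end)	\
	for (toy_iter_start(iter, slots, nr, start, end);	\
	     toy_iter_is_valid(iter);				\
	     toy_iter_next(iter))
```

The caller just declares an iterator and loops; the "where am I in the tree" bookkeeping stays behind the three helpers, which is the readability win being discussed.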

> Ugh, this got me looking at kvm_zap_gfn_range(), and that thing is a trainwreck.
> There are three calls to kvm_flush_remote_tlbs_with_address(), two of which should
> be unnecessary, but become necessary because the last one is broken. *sigh*
>
> That'd also be a good excuse to extract the rmap loop to a separate helper. Then
> you don't need to constantly juggle the 80 char limit and variable collisions
> while you're modifying this mess. I'll post the attached patches separately
> since the first one (two?) should go into 5.15. They're compile tested only
> at this point, but hopefully I've had enough coffee and they're safe to base
> this series on top of (note, they're based on kvm/queue, commit 73f122c4f06f ("KVM:
> cleanup allocation of rmaps and page tracking data")).

All right, will make sure that a respin is based on a kvm tree with these
commits in.

>>> The kvm_for_each_in_gfn prefix is _really_ confusing. I get that these are all
>>> helpers for "kvm_for_each_memslot...", but it's hard not to think these are all
>>> iterators on their own. I would gladly sacrifice namespacing for readability in
>>> this case.

>> "kvm_for_each_memslot_in_gfn_range" was your proposed name here:
>> https://lore.kernel.org/kvm/YK6GWUP107i5KAJo@xxxxxxxxxx/
>>
>> But no problem renaming it.

> Oh, I was commenting on the inner helpers. The macro name itself is great. ;-)

>>> @@ -882,12 +875,16 @@ struct rb_node *kvm_for_each_in_gfn_first(struct kvm_memslots *slots, gfn_t star
>>>  	return node;
>>>  }
>>>
>>> -static inline
>>> -bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *node, gfn_t end)
>>> +static inline bool kvm_is_last_node(struct kvm_memslots *slots,
>>> +				    struct rb_node *node, gfn_t end)

>> kvm_is_last_node() is a bit misleading since this function is supposed
>> to return true even on the last node, only returning false one node past
>> the last one (or when the tree runs out of nodes).

> Good point. I didn't love the name when I suggested it either. What about
> kvm_is_valid_node()?

kvm_is_valid_node() sounds a bit too generic to me, but since the code is being
rewritten to be iterator-based this issue goes away.

Thanks,
Maciej