Re: [PATCH 03/15] x86/irq: Use bitfields exclusively in posted interrupt descriptor

From: Sean Christopherson
Date: Tue Jan 30 2024 - 20:49:25 EST


On Fri, Jan 26, 2024, Jacob Pan wrote:
> From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>
> Mixture of bitfields and types is weird and really not intuitive, remove
> types and use bitfields exclusively.

I agree it's weird, and maybe not immediately intuitive, but that doesn't mean
there isn't a good reason for the code being the way it is, i.e. "it's weird" isn't
sufficient justification for touching this type of code.

Bitfields almost always generate inferior code when accessing a subset of the
overall thing. And even worse, there may be subtle side effects, and I really
don't want to find out the hard way whether or not they are benign.

E.g. before this change, setting the notification vector is:

movb $0xf2,0x62(%rsp)

whereas after this change it becomes:

mov %eax,%edx
and $0xff00fffd,%edx
or $0xf20000,%edx
mov %edx,0x60(%rsp)

Writing extra bytes _shouldn't_ be a problem, as KVM needs to atomically write
the entire control chunk no matter what, but changing this without very good cause
scares me.
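For what it's worth, the "atomically write the entire control chunk" pattern is
roughly the sketch below (user-space C with compiler atomics standing in for the
kernel's cmpxchg-based helpers; the function name is illustrative, and NV sits
at bits 16:23 of control per the layout, i.e. the u8 after the u16):

```c
#include <stdint.h>

/*
 * Update the notification vector by rewriting the whole 64-bit control
 * chunk with a compare-exchange loop, so concurrent updates to the other
 * control fields aren't clobbered.  Illustrative only; the kernel uses
 * its own cmpxchg helpers.
 */
static inline void pi_update_nv(uint64_t *control, uint8_t vector)
{
	uint64_t old = __atomic_load_n(control, __ATOMIC_RELAXED);
	uint64_t new;

	do {
		/* NV occupies control bits 16:23. */
		new = (old & ~(0xffull << 16)) | ((uint64_t)vector << 16);
	} while (!__atomic_compare_exchange_n(control, &old, new, 1,
					      __ATOMIC_RELEASE,
					      __ATOMIC_RELAXED));
}
```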

If we really want to clean things up, my very strong vote is to remove the
bitfields entirely. SN is the only bit that's accessed without going through an
accessor, and those should be easy enough to fixup one by one (and we can add
more non-atomic accessors/mutators if it makes sense to do so).

E.g. end up with

/* Posted-Interrupt Descriptor */
struct pi_desc {
	u32 pir[8];	/* Posted interrupt requested */
	union {
		struct {
			u16 notification_bits;
			u8 nv;
			u8 rsvd_2;
			u32 ndst;
		};
		u64 control;
	};
	u32 rsvd[6];
} __aligned(64);
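and the SN accessors would then look something like the sketch below (a
user-space mock with compiler builtins in place of the kernel's bitops; the
POSTED_INTR_* values mirror the existing defines, and the helper names are
illustrative):

```c
#include <stdint.h>
#include <stddef.h>

#define POSTED_INTR_ON	0	/* Outstanding Notification: control bit 0 */
#define POSTED_INTR_SN	1	/* Suppress Notification: control bit 1 */

/* User-space mock of the proposed bitfield-free layout. */
struct pi_desc {
	uint32_t pir[8];	/* Posted interrupt requested */
	union {
		struct {
			uint16_t notification_bits;
			uint8_t nv;
			uint8_t rsvd_2;
			uint32_t ndst;
		};
		uint64_t control;
	};
	uint32_t rsvd[6];
} __attribute__((aligned(64)));

/* nv is byte 2 of control, i.e. byte 34 of the descriptor, which is
 * consistent with the movb $0xf2,0x62(%rsp) shown above for a
 * descriptor based at 0x40(%rsp). */
_Static_assert(sizeof(struct pi_desc) == 64, "PID is one cache line");
_Static_assert(offsetof(struct pi_desc, nv) == 34, "nv is control byte 2");

/* Illustrative accessor/mutator pair, standing in for test_bit()/
 * set_bit() style helpers operating on the control qword. */
static inline int pi_test_sn(const struct pi_desc *pi)
{
	return !!(pi->control & (1ull << POSTED_INTR_SN));
}

static inline void pi_set_sn(struct pi_desc *pi)
{
	__atomic_fetch_or(&pi->control, 1ull << POSTED_INTR_SN,
			  __ATOMIC_RELAXED);
}
```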