Re: [PATCH v2 4/4] x86/percpu: Use C for percpu read/write accessors

From: Linus Torvalds
Date: Wed Oct 04 2023 - 15:42:44 EST


Unrelated reaction..

On Wed, 4 Oct 2023 at 12:24, Uros Bizjak <ubizjak@xxxxxxxxx> wrote:
>
> the code improves from:
>
> 65 8b 05 00 00 00 00 mov %gs:0x0(%rip),%eax
> a9 00 00 0f 00 test $0xf0000,%eax
>
> to:
>
> 65 f7 05 00 00 00 00 testl $0xf0000,%gs:0x0(%rip)
> 00 00 0f 00

Funky.

Why does gcc generate that full-width load from memory, and not demote
it to a byte test?

IOW, it should not be

65 f7 05 00 00 00 00 testl $0xf0000,%gs:0x0(%rip)
00 00 0f 00

after optimizing it, it should be three bytes shorter at

65 f6 05 00 00 00 00 testb $0xf,%gs:0x0(%rip)
0f

instead (this is "objdump", so it doesn't show that the relocation
entry has changed by +2 to compensate).

Now, doing the access narrowing is a bad idea for stores (because it
can cause subsequent loads to have conflicts in the store buffer), but
for loads it should always be a win to narrow the access.
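The store-side hazard can be sketched in C (all names here are hypothetical, not from the patch): if the compiler narrowed the assignment below to a one-byte store, a later full-width load of the same word could no longer be forwarded from a single store-buffer entry, stalling the load. The result is identical either way; only the performance differs, which is why narrowing is only an unconditional win for loads.

```c
#include <assert.h>

/* Hypothetical sketch of the store-narrowing hazard discussed above. */
static unsigned int word;

static void set_low(unsigned char b)
{
	/* Full-width read-modify-write. A compiler could narrow this to
	 * a one-byte store (smaller encoding), but then the full-width
	 * load in get_all() may hit a store-to-load forwarding stall. */
	word = (word & ~0xffu) | b;
}

static unsigned int get_all(void)
{
	return word;	/* full-width load of the same location */
}
```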

I wonder why gcc doesn't do it. This is not related to __seg_gs - I
tried it with regular memory accesses as well, and gcc still kept
those as 32-bit accesses.
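A reduced test case for the pattern (variable and function names are mine, not from the patch) might look like this: the mask 0xf0000 covers only bits 16..19, which all live in byte 2 of the word, so the "mov + testl" pair could in principle be narrowed to a single "testb $0xf, flags+2(%rip)".

```c
#include <assert.h>

/* Hypothetical reduced test case: a 32-bit flag word tested against
 * a mask that fits entirely within one byte (bits 16..19). */
static unsigned int flags;

static int high_nibble_set(void)
{
	/* gcc emits a full 32-bit load followed by testl $0xf0000;
	 * the access could be narrowed to testb $0xf on byte 2. */
	return (flags & 0xf0000) != 0;
}
```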

And no, the assembler can't optimize that operation either, since I
think changing the testl to a testb would change the 'P' bit in the
resulting eflags, so this is a "the compiler could pick a better
instruction choice" thing.

I'm probably missing some reason why gcc wouldn't do this. But clang
does seem to do this obvious optimization.

Linus