Re: [PATCH 2/3] x86: mm: Change tlb_flushall_shift for IvyBridge

From: Alex Shi
Date: Mon Dec 16 2013 - 03:26:44 EST


On 12/14/2013 10:19 PM, Peter Zijlstra wrote:
> On Fri, Dec 13, 2013 at 10:11:05AM +0800, Alex Shi wrote:
>> BTW,
>> A bewitching idea is still attracting me:
>> https://lkml.org/lkml/2012/5/23/148
>> even though it was sentenced to death by HPA:
>> https://lkml.org/lkml/2012/5/24/143
>>
>> The idea is that flushing the TLB from just one thread is enough for SMT/HT,
>> since the TLB appears to be shared within a core on Intel CPUs. The benefit is
>> unconditional, and if my memory is right, kbuild testing improved by about
>> 1~2% on average.
>>
>> So would you accept some ugly quirks to do this lazy TLB flush on CPUs
>> known to work this way?
>> Forgive me if it's stupid.
>
> I think there's a further problem with that patch -- aside from it being
> right from a hardware point of view.
>
> We currently rely on the tlb flush IPI to synchronize with lockless page
> table walkers like gup_fast().

I am sorry if I am missing something. :)

But if my understanding is correct, in the gup_fast example wait_split_huge_page
will never hit the BUG_ON(). The TLB flush IPI is still sent out to clear
_PAGE_SPLITTING on each CPU core; this patch just stops the repeated TLB flush
on the other SMT sibling of the same core. If only one SMT sibling is affected,
the flush is still executed on it.

#define wait_split_huge_page(__anon_vma, __pmd)			\
	do {								\
		pmd_t *____pmd = (__pmd);				\
		anon_vma_lock_write(__anon_vma);			\
		anon_vma_unlock_write(__anon_vma);			\
		BUG_ON(pmd_trans_splitting(*____pmd) ||			\
		       pmd_trans_huge(*____pmd));			\
	} while (0)
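
(For illustration only, a minimal sketch of what the flush IPI is synchronizing
against; the function below is hypothetical, not the real arch/x86 gup_fast()
code. The point is that the lockless walker runs with local interrupts
disabled, so the IPI cannot be acknowledged until the walk has left its
critical section.)

#include <linux/irqflags.h>
#include <linux/mm_types.h>

static int lockless_walk_sketch(struct mm_struct *mm, unsigned long addr,
				struct page **pages)
{
	unsigned long flags;
	int ret = 0;

	/*
	 * Interrupts off: this CPU cannot service the TLB flush IPI here,
	 * and the splitting side (which has already set _PAGE_SPLITTING)
	 * waits for every targeted CPU to acknowledge that IPI before it
	 * goes on.  So nothing read below can be split under us.
	 */
	local_irq_save(flags);
	/* ... walk pgd/pud/pmd/pte, back off if pmd_trans_splitting() ... */
	local_irq_restore(flags);

	return ret;
}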

>
> By not sending an IPI to all CPUs you can get into trouble and crash the
> kernel.
>
> We absolutely must keep sending the IPI to all relevant CPUs; we can
> choose not to actually do the flush on some CPUs, but we must keep
> sending the IPI.
>
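
(For what it's worth, a rough sketch of the compromise described above: keep
delivering the IPI to every CPU in mm_cpumask(), but skip the redundant flush
on an SMT sibling whose core-shared TLB was already flushed. Everything here is
hypothetical -- the per-core generation counter, its setup, and the
serialization between siblings are omitted or simplified.)

#include <linux/atomic.h>
#include <linux/percpu.h>
#include <asm/tlbflush.h>

/* Hypothetical state shared by the SMT siblings of one core. */
struct core_flush_state {
	atomic_t	done_gen;	/* last flush generation completed */
};

static DEFINE_PER_CPU(struct core_flush_state *, core_flush);

/* IPI handler sketch: info carries the flush generation number. */
static void flush_tlb_sibling_sketch(void *info)
{
	long gen = (long)info;
	struct core_flush_state *cfs = this_cpu_read(core_flush);

	/*
	 * The IPI is always delivered and acknowledged, which preserves
	 * the synchronization with gup_fast(); only the flush work is
	 * skipped when the sibling already flushed the shared TLB.
	 *
	 * Racy check-then-set: a real patch would have to serialize the
	 * two siblings; this only shows the idea.
	 */
	if (atomic_read(&cfs->done_gen) >= gen)
		return;

	atomic_set(&cfs->done_gen, gen);
	__flush_tlb_all();
}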


--
Thanks
Alex