Re: [PATCH 4/6] mm: hugetlb_vmemmap: add missing smp_wmb() before set_pte_at()

From: Miaohe Lin
Date: Mon Aug 22 2022 - 21:43:05 EST


On 2022/8/22 18:23, Muchun Song wrote:
>
>
>> On Aug 22, 2022, at 16:45, Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>>
>> On 2022/8/20 16:12, Muchun Song wrote:
>>>
>>>
>>>> On Aug 16, 2022, at 21:05, Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>>>>
>>>> The memory barrier smp_wmb() is needed to make sure that preceding stores
>>>> to the page contents become visible before the below set_pte_at() write.
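
For context, the ordering that hunk enforces is the usual initialize-then-publish
pattern; a rough sketch with illustrative variable names, not the actual diff:

	/* Initialize the new page's contents first (copy_page()/memset()/...). */
	copy_page(to, from);

	/*
	 * Then order those stores before the pte store that publishes the
	 * mapping, so a CPU that observes the new pte also observes the
	 * initialized contents.
	 */
	smp_wmb();
	set_pte_at(&init_mm, addr, ptep, mk_pte(page, PAGE_KERNEL));
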
>>>
>>> I found another place where there is a similar case. See kasan_populate_vmalloc_pte() in
>>> mm/kasan/shadow.c.
>>
>> Thanks for your report.
>>
>>>
>>> Should we fix it as well?
>>
>> I'm not familiar with kasan yet, but I think a memory barrier is needed here, or memory
>> corruption can't be detected until the contents are visible. Would smp_mb__after_atomic()
>> before set_pte_at() be enough? What's your opinion?
>
> I didn’t see any atomic operation between set_pte_at() and memset(), so I don’t think
> smp_mb__after_atomic() is feasible if we really need to insert a barrier. I suggest

Oh, it should be smp_mb__after_spinlock(), i.e. something like the below:

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 0e3648b603a6..38e503c89740 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -277,6 +277,7 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,

spin_lock(&init_mm.page_table_lock);
if (likely(pte_none(*ptep))) {
+ smp_mb__after_spinlock();
set_pte_at(&init_mm, addr, ptep, pte);
page = 0;
}
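
FWIW, the idea is that spin_lock() + smp_mb__after_spinlock() acts as a full barrier,
so the memset() of the freshly allocated shadow page (done before the lock is taken)
is ordered before the pte store that publishes it. Roughly (abridged from
kasan_populate_vmalloc_pte() as I recall it, not a verbatim copy):

	page = __get_free_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* Initialize the shadow page contents before it is mapped. */
	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	spin_lock(&init_mm.page_table_lock);
	if (likely(pte_none(*ptep))) {
		/*
		 * spin_lock() + smp_mb__after_spinlock() orders the memset()
		 * above before the pte becomes visible via set_pte_at().
		 */
		smp_mb__after_spinlock();
		set_pte_at(&init_mm, addr, ptep, pte);
		page = 0;
	}
	spin_unlock(&init_mm.page_table_lock);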

Does this make sense to you?

> you send an RFC patch to the KASAN maintainers; they are more familiar with this than
> us.

Sounds like a good idea. Will do it.

Thanks,
Miaohe Lin