[PATCH] csky: pgtable: Invalidate stale I-cache lines in update_mmu_cache

From: guoren
Date: Tue Aug 08 2023 - 20:27:21 EST


From: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>

The final icache flush is done in update_mmu_cache, which runs after
set_pte_at. Thus, when CPU0 sets the pte, the other CPUs can see the new
mapping before the icache flush broadcast reaches them, while their VIPT
I-caches may still hold stale lines for that virtual address. Once their
address translation picks up the new mapping, they execute the stale
I-cache data instead of the fresh data written back from the D-cache.
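A rough timeline of the window (illustrative only; the exact call sites
depend on the generic mm fault path, which calls set_pte_at before
update_mmu_cache):

    CPU0 (fault handler)                    CPU1 (same mm)
    ------------------------------------    --------------------------------
    set_pte_at(mm, address, ptep, pte);
                                            /* TLB refill sees the new pte */
                                            /* I-fetch hits a stale VIPT line */
    update_mmu_cache(vma, address, ptep);
        icache_inv_range(...);              /* broadcast arrives too late */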

The csky instruction cache is VIPT, so it needs the original virtual
address to invalidate the virtually indexed entries of the cache ways.
The current implementation uses a temporary mapping, kmap_atomic, which
returns a different virtual address for the invalidation. As a result,
the cache lines indexed by the original virtual address may still sit in
the I-cache.

So force the I-cache invalidation in update_mmu_cache using the original
virtual address, and prevent flush_dcache from having to handle the EXEC
page case. This bug was detected on a 4*c860 SMP system, and this patch
passes the stress test.
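For reference, with this change the tail of update_mmu_cache() looks
roughly like the sketch below (context above the hunk is elided; `page`
is looked up from the pte as in the existing code, and the comments only
illustrate which address each operation needs):

    addr = (unsigned long) kmap_atomic(page);

    /* invalidate via the user virtual address so the VIPT index matches */
    icache_inv_range(address, address + PAGE_SIZE);
    /* write back the page contents through the kmap alias */
    dcache_wb_range(addr, addr + PAGE_SIZE);

    kunmap_atomic((void *) addr);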

Signed-off-by: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>
Signed-off-by: Guo Ren <guoren@xxxxxxxxxx>
---
arch/csky/abiv2/cacheflush.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 9923cd24db58..500eb8f69397 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -27,11 +27,9 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,

addr = (unsigned long) kmap_atomic(page);

+ icache_inv_range(address, address + PAGE_SIZE);
dcache_wb_range(addr, addr + PAGE_SIZE);

- if (vma->vm_flags & VM_EXEC)
- icache_inv_range(addr, addr + PAGE_SIZE);
-
kunmap_atomic((void *) addr);
}

--
2.36.1