Re: [PATCH 1/2] mm: swap: update inuse_pages after all cleanups are done

From: Yosry Ahmed
Date: Tue Jan 23 2024 - 04:54:52 EST


> Alternatively, we may just hold the spinlock in try_to_unuse() when we
> check si->inuse_pages at the end. This will also ensure that any calls
> to swap_range_free() have completed. Let me know what you prefer.

To elaborate, I mean replacing this patch and the memory barriers with
the diff below.

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 2fedb148b9404..9b932ecbd80a8 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2046,6 +2046,7 @@ static int try_to_unuse(unsigned int type)
 	struct swap_info_struct *si = swap_info[type];
 	struct folio *folio;
 	swp_entry_t entry;
+	unsigned int inuse;
 	unsigned int i;
@@ -2123,8 +2124,14 @@ static int try_to_unuse(unsigned int type)
 	 * and even shmem_writepage() could have been preempted after
 	 * folio_alloc_swap(), temporarily hiding that swap. It's easy
 	 * and robust (though cpu-intensive) just to keep retrying.
+	 *
+	 * Read si->inuse_pages with the lock held to make sure that cleanups in
+	 * swap_range_free() are completed when we read si->inuse_pages == 0.
 	 */
-	if (READ_ONCE(si->inuse_pages)) {
+	spin_lock(&si->lock);
+	inuse = si->inuse_pages;
+	spin_unlock(&si->lock);
+	if (inuse) {
 		if (!signal_pending(current))
 			goto retry;
 		return -EINTR;
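
To spell out why the lock is sufficient, here is a rough standalone sketch of
the ordering argument, using made-up helper names (free_side() /
all_swap_unused()) rather than the real mm/swapfile.c functions: the freeing
side performs its cleanups and the inuse_pages update entirely under si->lock,
so a reader that takes the same lock cannot observe inuse_pages == 0 while a
cleanup is still in flight.

/* Sketch only -- hypothetical helpers, not the upstream code. */

/* freeing side, simplified; upstream the lock is taken by the caller */
static void free_side(struct swap_info_struct *si, unsigned int nr)
{
	spin_lock(&si->lock);
	/* per-entry cleanups (swap cache shadows, memcg uncharge, ...) */
	si->inuse_pages -= nr;	/* order vs. the cleanups no longer matters */
	spin_unlock(&si->lock);
}

/* try_to_unuse() side, simplified */
static bool all_swap_unused(struct swap_info_struct *si)
{
	unsigned int inuse;

	spin_lock(&si->lock);	/* waits for any in-flight free_side() */
	inuse = si->inuse_pages;
	spin_unlock(&si->lock);

	return !inuse;
}

With this scheme it does not matter whether the counter update happens before
or after the cleanups inside the critical section, which is why the memory
barriers from the original patch would no longer be needed.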