Re: [PATCH 11/24] mm/swap: also handle swapcache lookup in swapin_readahead

From: Kairui Song
Date: Fri Nov 24 2023 - 03:44:03 EST


On Wed, Nov 22, 2023 at 12:07 AM Chris Li <chrisl@xxxxxxxxxx> wrote:


>
> On Sun, Nov 19, 2023 at 11:48 AM Kairui Song <ryncsn@xxxxxxxxx> wrote:
> >
> > From: Kairui Song <kasong@xxxxxxxxxxx>
> >
> > No feature change, just prepare for later commits.
>
> You need to have a proper commit message explaining why this change
> needs to happen. "Preparing" is too generic; it does not give any
> real information.
> For example, it seems you want to reduce one swap cache lookup
> because swapin_readahead() already has one?
>
> I am a bit puzzled by this patch. It shuffles a lot of sensitive
> code, but I do not see the value. It seems like this patch should be
> merged with the later patch that depends on it so the two can be
> judged together.
>
> >
> > Signed-off-by: Kairui Song <kasong@xxxxxxxxxxx>
> > ---
> > mm/memory.c | 61 +++++++++++++++++++++++--------------------------
> > mm/swap.h | 10 ++++++--
> > mm/swap_state.c | 26 +++++++++++++--------
> > mm/swapfile.c | 30 +++++++++++-------------
> > 4 files changed, 66 insertions(+), 61 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index f4237a2e3b93..22af9f3e8c75 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3786,13 +3786,13 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
> > vm_fault_t do_swap_page(struct vm_fault *vmf)
> > {
> > struct vm_area_struct *vma = vmf->vma;
> > - struct folio *swapcache, *folio = NULL;
> > + struct folio *swapcache = NULL, *folio = NULL;
> > + enum swap_cache_result cache_result;
> > struct page *page;
> > struct swap_info_struct *si = NULL;
> > rmap_t rmap_flags = RMAP_NONE;
> > bool exclusive = false;
> > swp_entry_t entry;
> > - bool swapcached;
> > pte_t pte;
> > vm_fault_t ret = 0;
> >
> > @@ -3850,42 +3850,37 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > if (unlikely(!si))
> > goto out;
> >
> > - folio = swap_cache_get_folio(entry, vma, vmf->address);
> > - if (folio)
> > - page = folio_file_page(folio, swp_offset(entry));
> > - swapcache = folio;
>
> Is the motivation that swapin_readahead() already does a swap cache
> lookup, so you remove the lookup here?

Yes, the cache lookup is moved into swapin_readahead and shared there,
which also makes it possible for that lookup to return a shadow when
the entry is not a cached page, so another shadow lookup can be saved
on the sync (ZRAM) swapin path. This improves ZRAM performance by ~4%
with a 10G ZRAM, and should improve it further as the cache tree grows
larger.
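
Something like this (just a rough sketch of the idea, not this patch's
code; the extra shadow argument to swap_cache_get_folio and its reuse
below are hypothetical):

        void *shadow = NULL;

        /* one tree walk: return the folio on a cache hit, or the
         * shadow entry found when the entry is not a cached page */
        folio = swap_cache_get_folio(entry, vmf->vma, vmf->address,
                                     &shadow);
        if (folio) {
                page = folio_file_page(folio, swp_offset(entry));
                cache_result = SWAP_CACHE_HIT;
                goto done;
        }

        si = swp_swap_info(entry);
        mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
        if (swap_use_no_readahead(si, swp_offset(entry))) {
                page = swapin_no_readahead(entry, gfp_mask, mpol, ilx,
                                           vmf->vma->vm_mm);
                /* reuse the shadow for refault detection instead of
                 * doing a second lookup on the sync swapin path */
                if (page && shadow)
                        workingset_refault(page_folio(page), shadow);
                cache_result = SWAP_CACHE_BYPASS;
        }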

>
> > -
> > - if (!folio) {
> > - page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
> > - vmf, &swapcached);
> > - if (page) {
> > - folio = page_folio(page);
> > - if (swapcached)
> > - swapcache = folio;
> > - } else {
> > + page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
> > + vmf, &cache_result);
> > + if (page) {
> > + folio = page_folio(page);
> > + if (cache_result != SWAP_CACHE_HIT) {
> > + /* Had to read the page from swap area: Major fault */
> > + ret = VM_FAULT_MAJOR;
> > + count_vm_event(PGMAJFAULT);
> > + count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
> > + }
> > + if (cache_result != SWAP_CACHE_BYPASS)
> > + swapcache = folio;
> > + if (PageHWPoison(page)) {
>
> There is a lot of code shuffling here. From the diff it is hard to
> tell whether it is doing the same thing as before.
>
> > /*
> > - * Back out if somebody else faulted in this pte
> > - * while we released the pte lock.
> > + * hwpoisoned dirty swapcache pages are kept for killing
> > + * owner processes (which may be unknown at hwpoison time)
> > */
> > - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> > - vmf->address, &vmf->ptl);
> > - if (likely(vmf->pte &&
> > - pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> > - ret = VM_FAULT_OOM;
> > - goto unlock;
> > + ret = VM_FAULT_HWPOISON;
> > + goto out_release;
> > }
> > -
> > - /* Had to read the page from swap area: Major fault */
> > - ret = VM_FAULT_MAJOR;
> > - count_vm_event(PGMAJFAULT);
> > - count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
> > - } else if (PageHWPoison(page)) {
> > + } else {
> > /*
> > - * hwpoisoned dirty swapcache pages are kept for killing
> > - * owner processes (which may be unknown at hwpoison time)
> > + * Back out if somebody else faulted in this pte
> > + * while we released the pte lock.
> > */
> > - ret = VM_FAULT_HWPOISON;
> > - goto out_release;
> > + vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> > + vmf->address, &vmf->ptl);
> > + if (likely(vmf->pte &&
> > + pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> > + ret = VM_FAULT_OOM;
> > + goto unlock;
> > }
> >
> > ret |= folio_lock_or_retry(folio, vmf);
> > diff --git a/mm/swap.h b/mm/swap.h
> > index a9a654af791e..ac9136eee690 100644
> > --- a/mm/swap.h
> > +++ b/mm/swap.h
> > @@ -30,6 +30,12 @@ extern struct address_space *swapper_spaces[];
> > (&swapper_spaces[swp_type(entry)][swp_offset(entry) \
> > >> SWAP_ADDRESS_SPACE_SHIFT])
> >
> > +enum swap_cache_result {
> > + SWAP_CACHE_HIT,
> > + SWAP_CACHE_MISS,
> > + SWAP_CACHE_BYPASS,
> > +};
>
> Does any function later care about CACHE_BYPASS?
>
> Again, it is better to introduce it together with the function that
> uses it. Don't introduce it "just in case I might use it later".

Yes, callers in shmem will also need to know whether the page is
cached in the swap cache, and they need a value to indicate the bypass
case. I can add some comments here to document the intended usage.
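
For example, the later shmem caller would do roughly this (a
hypothetical sketch of the upcoming usage, not code from this series;
error handling is elided):

        page = swapin_readahead(swap, gfp, &vmf, &result);
        if (!page)
                return -ENOMEM;

        if (result == SWAP_CACHE_BYPASS) {
                /* not in the swap cache: shmem has to charge the
                 * folio and add it to the page cache by itself */
        } else {
                /* SWAP_CACHE_HIT or SWAP_CACHE_MISS: the folio is
                 * backed by the swap cache, take the usual path */
        }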

>
> > +
> > void show_swap_cache_info(void);
> > bool add_to_swap(struct folio *folio);
> > void *get_shadow_from_swap_cache(swp_entry_t entry);
> > @@ -55,7 +61,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> > struct mempolicy *mpol, pgoff_t ilx);
> > struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> > - struct vm_fault *vmf, bool *swapcached);
> > + struct vm_fault *vmf, enum swap_cache_result *result);
> >
> > static inline unsigned int folio_swap_flags(struct folio *folio)
> > {
> > @@ -92,7 +98,7 @@ static inline struct page *swap_cluster_readahead(swp_entry_t entry,
> > }
> >
> > static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
> > - struct vm_fault *vmf, bool *swapcached)
> > + struct vm_fault *vmf, enum swap_cache_result *result)
> > {
> > return NULL;
> > }
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index d87c20f9f7ec..e96d63bf8a22 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -908,8 +908,7 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > * @entry: swap entry of this memory
> > * @gfp_mask: memory allocation flags
> > * @vmf: fault information
> > - * @swapcached: pointer to a bool used as indicator if the
> > - * page is swapped in through swapcache.
> > + * @result: a return value to indicate swap cache usage.
> > *
> > * Returns the struct page for entry and addr, after queueing swapin.
> > *
> > @@ -918,30 +917,39 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > * or vma-based(ie, virtual address based on faulty address) readahead.
> > */
> > struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > - struct vm_fault *vmf, bool *swapcached)
> > + struct vm_fault *vmf, enum swap_cache_result *result)
> > {
> > + enum swap_cache_result cache_result;
> > struct swap_info_struct *si;
> > struct mempolicy *mpol;
> > + struct folio *folio;
> > struct page *page;
> > pgoff_t ilx;
> > - bool cached;
> > +
> > + folio = swap_cache_get_folio(entry, vmf->vma, vmf->address);
> > + if (folio) {
> > + page = folio_file_page(folio, swp_offset(entry));
> > + cache_result = SWAP_CACHE_HIT;
> > + goto done;
> > + }
> >
> > si = swp_swap_info(entry);
> > mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
> > if (swap_use_no_readahead(si, swp_offset(entry))) {
> > page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
> > - cached = false;
> > + cache_result = SWAP_CACHE_BYPASS;
> > } else if (swap_use_vma_readahead(si)) {
> > page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
> > - cached = true;
> > + cache_result = SWAP_CACHE_MISS;
> > } else {
> > page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
> > - cached = true;
> > + cache_result = SWAP_CACHE_MISS;
> > }
> > mpol_cond_put(mpol);
> >
> > - if (swapcached)
> > - *swapcached = cached;
> > +done:
> > + if (result)
> > + *result = cache_result;
> >
> > return page;
> > }
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 01c3f53b6521..b6d57fff5e21 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -1822,13 +1822,21 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> >
> > si = swap_info[type];
> > do {
> > - struct folio *folio;
> > + struct page *page;
> > unsigned long offset;
> > unsigned char swp_count;
> > + struct folio *folio = NULL;
> > swp_entry_t entry;
> > int ret;
> > pte_t ptent;
> >
> > + struct vm_fault vmf = {
> > + .vma = vma,
> > + .address = addr,
> > + .real_address = addr,
> > + .pmd = pmd,
> > + };
>
> Is this code move caused by skipping the swap cache lookup here?

Yes.

>
> This is very sensitive code related to swap cache races. It needs
> very careful review. Better not to shuffle it without a good reason.

Thanks for the suggestion, I'll try to avoid this kind of shuffling,
but the cache lookup is moved into swapin_readahead, so some changes
in the original callers are unavoidable...
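
Concretely, the minimal change a caller like unuse_pte_range() needs
is to build the fault context that the lookup inside swapin_readahead
now consumes, roughly (a sketch based on the hunk above):

        struct vm_fault vmf = {
                .vma          = vma,
                .address      = addr,
                .real_address = addr,
                .pmd          = pmd,
        };

        /* the swap cache lookup now happens inside
         * swapin_readahead(), so the caller no longer does its own
         * lookup first */
        page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, &vmf,
                                NULL);
        folio = page ? page_folio(page) : NULL;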