Re: 6.6.8 stable: crash in folio_mark_dirty

From: Matthew Wilcox
Date: Sun Dec 31 2023 - 08:07:31 EST


On Sun, Dec 31, 2023 at 09:28:46AM +0800, Hillf Danton wrote:
> On Sat, Dec 30, 2023 at 10:23:26AM -0500, Genes Lists <lists@xxxxxxxxxxxx> wrote:
> > Apologies in advance, but I cannot git bisect this since machine was
> > running for 10 days on 6.6.8 before this happened.
> >
> > Dec 30 07:00:36 s6 kernel: ------------[ cut here ]------------
> > Dec 30 07:00:36 s6 kernel: WARNING: CPU: 0 PID: 521524 at mm/page-writeback.c:2668 __folio_mark_dirty (??:?)
> > Dec 30 07:00:36 s6 kernel: CPU: 0 PID: 521524 Comm: rsync Not tainted 6.6.8-stable-1 #13 d238f5ab6a206cdb0cc5cd72f8688230f23d58df
> > Dec 30 07:00:36 s6 kernel: block_dirty_folio (??:?)
> > Dec 30 07:00:36 s6 kernel: unmap_page_range (??:?)
> > Dec 30 07:00:36 s6 kernel: unmap_vmas (??:?)
> > Dec 30 07:00:36 s6 kernel: exit_mmap (??:?)
> > Dec 30 07:00:36 s6 kernel: __mmput (??:?)
> > Dec 30 07:00:36 s6 kernel: do_exit (??:?)
> > Dec 30 07:00:36 s6 kernel: do_group_exit (??:?)
> > Dec 30 07:00:36 s6 kernel: __x64_sys_exit_group (??:?)
> > Dec 30 07:00:36 s6 kernel: do_syscall_64 (??:?)
>
> See what comes out if race is handled.
> Only for thoughts.

I don't think this can happen. Look at the call trace:
block_dirty_folio() is called from unmap_page_range(), which means the
page is still in the page tables. We unmap the pages in a folio from
the page tables before we set folio->mapping to NULL. Look at
invalidate_inode_pages2_range() for example:

		unmap_mapping_pages(mapping, indices[i],
				(1 + end - indices[i]), false);
		folio_lock(folio);
		folio_wait_writeback(folio);
		if (folio_mapped(folio))
			unmap_mapping_folio(folio);
		BUG_ON(folio_mapped(folio));
		if (!invalidate_complete_folio2(mapping, folio))

... and invalidate_complete_folio2() is where we set ->mapping to NULL
in __filemap_remove_folio() -> page_cache_delete().