Re: [RFC PATCH] mm/filemap: avoid buffered read/write race to read inconsistent data

From: Jan Kara
Date: Tue Dec 12 2023 - 08:37:19 EST


On Tue 12-12-23 21:16:16, Baokun Li wrote:
> On 2023/12/12 20:41, Jan Kara wrote:
> > On Tue 12-12-23 17:36:34, Baokun Li wrote:
> > > The following concurrency may cause the data read to be inconsistent with
> > > the data on disk:
> > >
> > > cpu1                          cpu2
> > > ------------------------------|------------------------------
> > >                               // Buffered write 2048 from 0
> > >                               ext4_buffered_write_iter
> > >                                generic_perform_write
> > >                                 copy_page_from_iter_atomic
> > >                                 ext4_da_write_end
> > >                                  ext4_da_do_write_end
> > >                                   block_write_end
> > >                                    __block_commit_write
> > >                                     folio_mark_uptodate
> > > // Buffered read 4096 from 0         smp_wmb()
> > > ext4_file_read_iter                  set_bit(PG_uptodate, folio_flags)
> > >  generic_file_read_iter        i_size_write // 2048
> > >   filemap_read                 unlock_page(page)
> > >    filemap_get_pages
> > >     filemap_get_read_batch
> > >     folio_test_uptodate(folio)
> > >      ret = test_bit(PG_uptodate, folio_flags)
> > >      if (ret)
> > >       smp_rmb();
> > >       // Ensure that the data in page 0-2048 is up-to-date.
> > >
> > >                               // New buffered write 2048 from 2048
> > >                               ext4_buffered_write_iter
> > >                                generic_perform_write
> > >                                 copy_page_from_iter_atomic
> > >                                 ext4_da_write_end
> > >                                  ext4_da_do_write_end
> > >                                   block_write_end
> > >                                    __block_commit_write
> > >                                     folio_mark_uptodate
> > >                                      smp_wmb()
> > >                                      set_bit(PG_uptodate, folio_flags)
> > >                               i_size_write // 4096
> > >                               unlock_page(page)
> > >
> > >    isize = i_size_read(inode) // 4096
> > >    // Read the latest isize 4096, but without smp_rmb(), there may be
> > >    // Load-Load disorder resulting in the data in the 2048-4096 range
> > >    // in the page is not up-to-date.
> > >    copy_page_to_iter
> > >    // copyout 4096
> > >
> > > In the concurrency above, we read the updated i_size, but there is no
> > > read barrier to ensure that the data in the page matches that i_size at
> > > this point, so we may copy stale page contents out. Hence we add the
> > > missing read memory barrier to fix this.
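[The release/acquire pairing described above can be sketched in plain C11 atomics. This is a simplified single-file model, not the kernel code: `data`, `uptodate`, and `isize` are stand-ins for the folio contents, PG_uptodate, and inode->i_size, and the acquire fence plays the role of the added smp_rmb().]

```c
#include <stdatomic.h>

static int data;                 /* models the folio bytes in 2048-4096   */
static atomic_int uptodate;      /* models PG_uptodate                    */
static atomic_long isize;        /* models inode->i_size                  */

/* Writer side: __block_commit_write() + i_size_write() */
static void writer(void)
{
    data = 42;
    /* smp_wmb() + set_bit(): publish the data before the flag */
    atomic_store_explicit(&uptodate, 1, memory_order_release);
    atomic_store_explicit(&isize, 4096, memory_order_relaxed);
}

/* Reader side: filemap_read(); returns -1 if the folio is not uptodate */
static long reader(int *out)
{
    /* folio_test_uptodate(): acquire pairs with the writer's release */
    if (!atomic_load_explicit(&uptodate, memory_order_acquire))
        return -1;

    long sz = atomic_load_explicit(&isize, memory_order_relaxed);
    /*
     * The fix: an smp_rmb() equivalent between the i_size load and the
     * data load.  Without this fence, a weakly ordered CPU may hoist
     * the data load above the isize load and copy out stale bytes for
     * the 2048-4096 range while still reporting isize == 4096.
     */
    atomic_thread_fence(memory_order_acquire);

    *out = data;                 /* copy_page_to_iter() */
    return sz;
}
```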
> > >
> > > This is a Load-Load reordering issue, which only occurs on some weak
> > > mem-ordering architectures (e.g. ARM64, ALPHA), but not on strong
> > > mem-ordering architectures (e.g. X86). And theoretically the problem
> > AFAIK x86 can also reorder loads vs loads so the problem can in theory
> > happen on x86 as well.
>
> According to what I read in perfbook at the link below,
>
>  Loads Reordered After Loads does not happen on x86.
>
> PDF sheet 562 corresponds to page 550,
>
>    Table 15.5: Summary of Memory Ordering
>
> https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook-1c.2023.06.11a.pdf

Indeed. I stand corrected! Thanks for the link.

Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR