Re: [RFC 00/23] Enable block size > page size in XFS

From: Ritesh Harjani (IBM)
Date: Tue Sep 19 2023 - 07:56:57 EST


Pankaj Raghav <p.raghav@xxxxxxxxxxx> writes:

>>>>
>>>> As it is, I'd really prefer stuff that adds significant XFS
>>>> functionality that we need to test to be based on a current Linus
>>>> TOT kernel so that we can test it without being impacted by all
>>>> the random unrelated breakages that regularly happen in linux-next
>>>> kernels....
>>>
>>> That's understandable! I just rebased onto Linus' tree; this only
>>> has the bs > ps support with a 4k sector size:
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git/log/?h=v6.6-rc2-lbs-nobdev
>>
>
> I think this tree doesn't have some of the last-minute changes I did before I sent the RFC. I will
> sync with Luis offline regarding that.
>
>>
>>> I just did a cursory build / boot / fsx test with a 16k block size / 4k
>>> sector size on this tree only. I haven't run fstests on it.
>>
>> W/ 64k block size, generic/042 fails (maybe just a test block size
>> thing), generic/091 fails (data corruption on read after ~70 ops)
>> and then generic/095 hung with a crash in iomap_readpage_iter()
>> during readahead.
>>
>> Looks like a null folio was passed to ifs_alloc(), which implies the
>> iomap_readpage_ctx didn't have a folio attached to it. Something
>> isn't working properly in the readahead code, which would also
>> explain the quick fsx failure...
>>
>
> Yeah, I have noticed this as well. This is the main crash scenario I see when
> running xfstests, and hopefully we will be able to fix it soon.
>
> In general, we have had better results with 16k block size than 64k block size. I still don't
> know why, but the ifs_alloc crash happens in generic/451 with 16k block size.
>
>
>>> Just a heads up: using a 512-byte sector size will fail for now; it's a
>>> regression we have to fix. Likewise, 1k and 2k block sizes will also
>>> regress on fsx right now. These are regressions we are aware of but
>>> haven't had time yet to bisect / fix.
>>
>> I'm betting that the recently added sub-folio dirty tracking code
>> got broken by this patchset....
>>
>
> Hmm, this crossed my mind as well. I am assuming I can only really test the sub-folio dirty
> tracking code on a system whose page size is greater than the block size? Or are there
> some tests that can already exercise this? CCing Ritesh as well.
>

Sorry, I haven't looked into this series yet. I will spend some time reading
it and will also give fstests a spin.

But to answer your question on how to test the sub-folio dirty tracking
code [1][2] with XFS: just use a block size smaller than the page size in the
mkfs options and run fstests. A number of tests check data correctness for
various kinds of writes. There are a few ways to set this up (a config sketch
follows the list below):

1. Test a 1k block size on a 4k page size machine (anything with bs < ps works).
2. Test a 4k block size on a 64k page size machine, if you have one (again bs < ps).
3. Or enable large folio support and test bs < folio size.
   (With large folios, the system starts instantiating folios larger than 4k on a
   4k page size machine, so the block size automatically becomes smaller than
   the folio size.)
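
For options 1 and 2 above, a minimal fstests configuration sketch (the device
names and test selection are only examples):

<snip>
# local.config in the fstests checkout
export FSTYP=xfs
export TEST_DEV=/dev/vdb
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/vdc
export SCRATCH_MNT=/mnt/scratch
export MKFS_OPTIONS="-b size=1k"	# any block size < page size
# then run, e.g.: ./check -g quick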

You will need CONFIG_TRANSPARENT_HUGEPAGE enabled, along with willy's series
which enables large folios in the buffered write path [3]
(this is already in Linux 6.6-rc1).
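
A quick way to confirm THP in your running kernel config (the path may vary
by distro; /proc/config.gz works too if enabled):

<snip>
grep CONFIG_TRANSPARENT_HUGEPAGE /boot/config-$(uname -r)

The dependency is visible in the helper below: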

<snip>
/*
 * Large folio support currently depends on THP. These dependencies are
 * being worked on but are not yet fixed.
 */
static inline bool mapping_large_folio_support(struct address_space *mapping)
{
	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
	       test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
}
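
Note that a filesystem also has to opt in per inode via
mapping_set_large_folios(); XFS already does this when setting up the inode,
so nothing extra is needed there.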

<links>
[1]: https://lore.kernel.org/linux-xfs/20230725122932.144426-1-ritesh.list@xxxxxxxxx/
[2]: https://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git/commit/?h=for-next&id=4ce02c67972211be488408c275c8fbf19faf29b3
[3]: https://lore.kernel.org/all/ZLVrEkVU2YCneoXR@xxxxxxxxxxxxxxxxxxxx/

Hope this helps!

-ritesh