[PATCH v2 0/8] ext4: Convert inode preallocation list to an rbtree

From: Ojaswin Mujoo
Date: Fri Oct 14 2022 - 16:36:53 EST


This patch series aims to improve the performance and scalability of
inode preallocation by converting the inode preallocation linked list
to an rbtree. I've run the xfstests quick group on this series and plan
to run the auto group as well to confirm we have no regressions.

** Shortcomings of existing implementation **

Right now, we add all the inode preallocations (PAs) to a per-inode
linked list, ei->i_prealloc_list. To prevent the list from growing
indefinitely during heavy sparse workloads, its length was capped at 512
in commit 27bc446e2, and trimming logic was added to trim the list
whenever it grew over this threshold. This was discussed in detail in
the following lore thread [1].

[1] https://lore.kernel.org/all/d7a98178-056b-6db5-6bce-4ead23f4a257@xxxxxxxxx/

But from our testing, we noticed that the current implementation still
had scalability issues: performance degraded as the number of PAs stored
in the list grew. Most of the degradation was seen in
ext4_mb_normalize_request() and ext4_mb_use_preallocated(), since these
functions iterate over the inode PA list.

** Improvements in this patchset **

To counter the above shortcomings, this patch series modifies the inode
PA list to an rbtree, which:

- improves the performance of the functions discussed above due to the
faster lookups.

- improves scalability by reducing the lookup complexity from O(n) to
O(log n). This also removes the need for the trimming logic.

As a result, the RCU implementation also needed to change, since
lockless lookups of rbtrees have issues such as skipping subtrees.
Hence, RCU was replaced with a read-write lock for the inode PAs. More
information can be found in patch 7, which has the core changes.

** Performance Numbers **

Performance numbers were collected with and without these patches, using an
nvme device. Details of tests/benchmarks used are as follows:

Test 1: 200,000 1KiB sparse writes (fio)
Test 2: Fill 5GiB with random writes, 1KiB burst size (fio)
Test 3: Same as test 2, but do 4 sequential writes before jumping to a
random offset (fio)
Test 4: Fill an 8GB FS with 2KiB files, 64 threads in parallel (fsmark)

+--------+-----------------+-----------------+-----------------+-----------------+
|        |            nodelalloc             |             delalloc              |
+--------+-----------------+-----------------+-----------------+-----------------+
|        | Unpatched       | Patched         | Unpatched       | Patched         |
+--------+-----------------+-----------------+-----------------+-----------------+
| Test 1 | 11.8 MB/s       | 23.3 MB/s       | 27.2 MB/s       | 63.7 MB/s       |
| Test 2 | 1617 MB/s       | 1740 MB/s       | 2223 MB/s       | 2208 MB/s       |
| Test 3 | 1715 MB/s       | 1823 MB/s       | 2346 MB/s       | 2364 MB/s       |
| Test 4 | 14284 files/s   | 14347 files/s   | 13762 files/s   | 13882 files/s   |
+--------+-----------------+-----------------+-----------------+-----------------+

In test 1, we see a 100 to 200% increase in performance due to the high
number of sparse writes, highlighting the bottleneck in the unpatched
kernel. Further, running "perf diff patched.data unpatched.data" for
test 1 shows the following:

2.83% +29.67% [kernel.vmlinux] [k] _raw_spin_lock
...
+3.33% [ext4] [k] ext4_mb_normalize_request.constprop.30
0.25% +2.81% [ext4] [k] ext4_mb_use_preallocated

Here we can see that the biggest difference is in the _raw_spin_lock()
function of the unpatched kernel, which is called from
ext4_mb_normalize_request() as seen here:

32.47% fio [kernel.vmlinux] [k] _raw_spin_lock
|
---_raw_spin_lock
|
--32.22%--ext4_mb_normalize_request.constprop.30

This is coming from the spin_lock(&pa->pa_lock) that is taken for each
PA we iterate over in ext4_mb_normalize_request(). Since with an rbtree
we visit O(log n) PAs rather than n, this spin lock is taken much less
frequently, as is evident in the perf data.

We also see some improvement in the other tests; however, since they
don't exercise the PA traversal path as heavily as test 1, the gains
are smaller.

** Summary of patches **

- Patch 1-5: Abstractions/Minor optimizations
- Patch 6: Split common inode & locality group specific fields to a union
- Patch 7: Core changes to move inode PA logic from list to rbtree
- Patch 8: Remove the trim logic as it is not needed

** Changes since PATCH v1 **
- Fixed a styling issue
- Merged ext4_mb_rb_insert() and ext4_mb_pa_cmp()

** Changes since RFC v3 **
- Changed while loops to for loops in patch 7
- Fixed some data types
- Made the rbtree comparison logic more intuitive. The
rbtree insertion function is still kept separate from the
comparison function for reusability.

** Changes since RFC v2 **
- Added a function definition that was deleted during v2 rebase

** Changes since RFC v1 **
- Rebased over ext4 dev branch which includes Jan's patchset [1]
that changed some code in mballoc.c

[1] https://lore.kernel.org/all/20220908091301.147-1-jack@xxxxxxx/

Ojaswin Mujoo (8):
ext4: Stop searching if PA doesn't satisfy non-extent file
ext4: Refactor code related to freeing PAs
ext4: Refactor code in ext4_mb_normalize_request() and
ext4_mb_use_preallocated()
ext4: Move overlap assert logic into a separate function
ext4: Abstract out overlap fix/check logic in
ext4_mb_normalize_request()
ext4: Convert pa->pa_inode_list and pa->pa_obj_lock into a union
ext4: Use rbtrees to manage PAs instead of inode i_prealloc_list
ext4: Remove the logic to trim inode PAs

Documentation/admin-guide/ext4.rst | 3 -
fs/ext4/ext4.h | 5 +-
fs/ext4/mballoc.c | 418 ++++++++++++++++++-----------
fs/ext4/mballoc.h | 17 +-
fs/ext4/super.c | 4 +-
fs/ext4/sysfs.c | 2 -
6 files changed, 274 insertions(+), 175 deletions(-)

--
2.31.1