Re: [PATCH v3 0/6] Composefs: an opportunistically sharing verified image filesystem

From: Gao Xiang
Date: Wed Feb 01 2023 - 06:22:30 EST




On 2023/2/1 18:01, Gao Xiang wrote:


On 2023/2/1 17:46, Alexander Larsson wrote:

...


                                   | uncached(ms)| cached(ms)
----------------------------------|-------------|-----------
composefs (with digest)           | 326         | 135
erofs (w/o -T0)                   | 264         | 172
erofs (w/o -T0) + overlayfs       | 651         | 238
squashfs (compressed)             | 538         | 211
squashfs (compressed) + overlayfs | 968         | 302


Clearly erofs with sparse files is the best fs now for the ro-fs +
overlay case. But still, we can see that the additional cost of the
overlayfs layer is not negligible.
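To put numbers on "not negligible", the extra cost of the overlayfs layer can be pulled straight out of the uncached column of the table above (a quick sketch; the millisecond figures are copied from the table, nothing else is measured here):

```python
# Uncached times (ms) from the benchmark table above.
erofs = 264
erofs_overlay = 651
squashfs = 538
squashfs_overlay = 968

# Absolute cost added by stacking overlayfs on top.
erofs_overhead = erofs_overlay - erofs            # 387 ms
squashfs_overhead = squashfs_overlay - squashfs   # 430 ms

# Relative slowdown, in percent.
erofs_slowdown = round(erofs_overhead / erofs * 100)
print(erofs_overhead, squashfs_overhead, erofs_slowdown)
```

So in this run overlayfs roughly doubles the uncached cost for the erofs case.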

According to Amir this could be helped by a special composefs-like mode
in overlayfs, but it's unclear what performance that would reach, and
we're then talking about net-new development that further complicates
the overlayfs codebase. It's not clear to me which alternative is easier
to develop/maintain.

Also, the difference between cached and uncached here is less than in
my tests. Probably because my test image was larger. With the test
image I use, the results are:

                                   | uncached(ms)| cached(ms)
----------------------------------|-------------|-----------
composefs (with digest)           | 681         | 390
erofs (w/o -T0) + overlayfs       | 1788        | 532
squashfs (compressed) + overlayfs | 2547        | 443


I gotta say it is weird though that squashfs performed better than
erofs in the cached case. May be worth looking into. The test data I'm
using is available here: https://my.owndrive.com/index.php/s/irHJXRpZHtT3a5i

As another wild guess, cached performance is just vfs stuff.

I think the performance difference may be due to ACLs (since neither
composefs nor squashfs supports ACLs).  I already asked Jingbo
to collect more perf data to analyze this, but he's busy with other
work right now.

Again, my overall point is quite simple, as always: currently
composefs is a read-only filesystem built around massive symlink-like
files. It behaves as a subset of all generic read-only filesystems,
just for this specific use case.

In fact, there are many options to improve this (much as Amir
said before):
  1) improve overlayfs, and then it can be used with any local fs;

  2) enhance erofs to support this (even without on-disk change);

  3) introduce fs/composefs;

In addition to option 1), option 2) has many benefits as well, since
your manifest files could then also carry real regular files on top of
the composefs model.

(adding a few words...)

My first response at the time (on Slack) was to kindly request
Giuseppe to ask on the fsdevel mailing list whether this new overlay
model and its use cases are feasible; if so, I'm more than happy to
integrate this into EROFS (in a cooperative way) in several ways:

- just use the EROFS symlink layout and open such files in a stacked way;

or (now)

- just recognize the overlayfs "trusted.overlay.redirect" xattr in
EROFS itself and open the target file, so that such an image can be
used both by EROFS alone and by EROFS + overlayfs.

If that had happened, I think the overlayfs "metacopy" option could
also have been demonstrated by other fs community people later (since
I'm not an overlayfs expert), but I'm not sure why these alternatives
eventually became impossible and are not even mentioned at all.

Or, if you folks really don't want to use EROFS for whatever reason
(EROFS is completely open source, and used and contributed to by many
vendors), you could extend squashfs, ext4, or other existing local fses
for this new use case (they wouldn't need any on-disk change either,
for example by using some xattrs); I don't think it's really hard.

And as you said in the other reply, "
On the contrary, erofs lookup is very similar to composefs. There is
nothing magical about it, we're talking about pre-computed, static
lists of names. What you do is you sort the names, put them in a
compact seek-free form, and then you binary search on them. Composefs
v3 has some changes to make larger directories slightly more efficient
(no chunking), but the general performance should be comparable.
" yet core EROFS was a 2017-2018 stuff since we're addressed common
issues of generic read-only use cases.
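The lookup scheme described in that quote (pre-computed, sorted, seek-free name lists searched by binary search) can be sketched in a few lines. This is an illustrative userspace sketch, not the actual EROFS or composefs on-disk code; the names are made up:

```python
import bisect

# Built once at image-creation time: a sorted, static directory.
names = sorted(["usr", "etc", "var", "bin", "lib", "home"])

def lookup(name):
    """Binary-search the static name list; O(log n) comparisons,
    no chunk walking and no seeking."""
    i = bisect.bisect_left(names, name)
    return i if i < len(names) and names[i] == name else -1

print(lookup("etc"))   # found: its index in the sorted list
print(lookup("tmp"))   # -1: not present
```

Both filesystems do essentially this against an on-disk layout, which is why their lookup performance should be comparable.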

Also, consider wanting to read all dir data and pin those pages in
memory at once: if you run into an AI dataset with (typically) 10
million samples or more in a single dir, you will suffer on devices
with limited memory. Those are precisely EROFS's original target users.
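A rough back-of-the-envelope for that 10-million-entry case (the 256-byte average per pinned entry is an assumption for illustration, covering name plus dirent/dcache overhead, not a measured number):

```python
entries = 10_000_000
avg_bytes = 256  # assumed average footprint per pinned entry

total_mib = entries * avg_bytes / (1024 ** 2)
print(round(total_mib))  # roughly 2.4 GiB pinned for one directory
```

Even if the real per-entry cost is a few times smaller, pinning a whole directory of that size is clearly untenable on memory-constrained devices.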

I'm not sure kernel filesystem upstream works like this. (Also, a few
days ago I heard of another in-kernel filesystem called "tarfs", which
implements tar in ~500 LOC (maybe), from the confidential container
folks, but I don't really know how an unaligned, unseekable archive
format designed for tape, like tar, can work effectively without
block-aligned data.)
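The alignment problem is easy to see: tar pads everything to 512-byte records, so member data starts right after each 512-byte header and almost never lands on a filesystem-block (e.g. 4 KiB) boundary. A quick userspace check (illustrative; the file names and sizes are made up):

```python
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tf:
    for name in ("a.txt", "b.txt"):
        data = b"x" * 100            # small payload, padded to 512 bytes
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))

buf.seek(0)
members = tarfile.open(fileobj=buf).getmembers()
for m in members:
    # offset_data: where the payload starts. 512-byte aligned,
    # but not 4 KiB block aligned.
    print(m.name, m.offset_data, m.offset_data % 4096)
```

The first payload sits at offset 512, the second at 1536: neither is block-aligned, so a block-mapped read path would need copying or re-padding.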

Anyway, that is all I can offer right now for your use cases.


Even if you folks still go for 3), I'm not sure that codebase will
really only receive bugfixes and no new features, as I said. So
eventually, I still think it will be another read-only fs much like
EROFS with the compression parts cut out.


Thanks,
Gao Xiang

