Question about unpacking large initramfs (>2G)

From: Nazerke Turtayeva
Date: Fri Mar 08 2024 - 22:11:44 EST


Dear kernel community,

Recently I have been testing LLMs for RISC-V on Linux plus Buildroot plus
OpenSBI and the Spike ISA simulator. However, since my rootfs ends up
being fairly large (3.6GB at the moment), my Linux boot fails with
"Initramfs unpacking failed: write error". I tried to debug this
problem last week, but got confused by the complexity of the code :(.

Following my earlier debugging attempts, I suspect xwrite()'s 2G-4K
write limit is the main reason unpack_to_rootfs(), write_buffer() and
do_copy() fail. In more detail, I see the expected initrd_start and
initrd_end values being reserved early in the boot process and passed
as arguments to unpack_to_rootfs(); within that function, however,
body_len is assigned seemingly arbitrary values below 2G until the
very last moment, when it suddenly tries to write almost 3GB of data
at once.

To work around this problem, I considered unpacking my large initramfs
to rootfs in several smaller chunks (a rough sketch of what I mean is
included after the questions below). Another idea was to understand
how the internal FSM works and keep body_len from being assigned
arbitrarily large values. However, after testing these two ideas I end
up with a "junk at the end of compressed archive" error. As a result,
I have the following questions:

1) Do you by chance have any recommendations on how I can enable safe
unpacking of my large rootfs?
2) Is it actually possible?
3) Or could the errors be coming from the Spike simulator's side? What
in particular should I be looking for?
4) If the error does come from the seemingly arbitrary assignment of
body_len values, could you explain how the write_buffer() FSM works?
I got a bit confused by that part of the code :(
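
For reference, below is a rough, untested sketch of the chunked-write
loop I had in mind. To be clear, this is not the actual init/initramfs.c
code, only an illustration of the idea; I am assuming that the per-call
limit corresponds to MAX_RW_COUNT and that kernel_write() is the right
helper to use at this point:

static ssize_t __init xwrite_chunked(struct file *file, const char *p,
                                     size_t count, loff_t *pos)
{
        ssize_t written = 0;

        /* Never hand more than MAX_RW_COUNT (2G-4K) to a single write. */
        while (count) {
                size_t chunk = min_t(size_t, count, MAX_RW_COUNT);
                ssize_t rv = kernel_write(file, p, chunk, pos);

                if (rv <= 0)
                        return written ? written : rv;

                p += rv;
                written += rv;
                count -= rv;
        }

        return written;
}

If the in-tree xwrite() already loops like this, then my suspicion is
probably wrong and I would appreciate pointers on where else to look.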

I hope for your kind understanding!

Thanks,
Best wishes