Re: [PATCH 5/8] pstore: Fix long-term implicit conversions in the compression routines

From: Guilherme G. Piccoli
Date: Sat Oct 08 2022 - 12:04:13 EST


On 08/10/2022 12:53, Ard Biesheuvel wrote:
> [...]
> So one thing I don't understand about these changes is why we need
> them in the first place.
>
> The zbufsize routines are all worst case routines, which means each
> one of those will return a value that exceeds the size parameter.
>
> We only use compression for dmesg, which compresses quite well. If it
> doesn't compress well, there is probably something wrong with the
> input data, so preserving it may not be as critical. And if
> compressing the data makes it bigger, can't we just omit the
> compression for that particular record?
>
> In summary, while adding zbufsize to the crypto API seems a reasonable
> thing to do, I don't see why we'd want to make use of it in pstore -
> can't we just use the decompressed size as the worst case compressed
> size for all algorithms, and skip the compression if it doesn't fit?
>
> Or am I missing something here?

In a way (and if I understand you correctly - please let me know if
not!) you are making a lot of sense: why not just use the maximum size
(i.e., the full decompressed size + header) as the worst case in
pstore, and skip the highly specialized per-algorithm worst-case
routines, right?

This is exactly what 842 (the software compression driver) does today.
If that sounds interesting and Kees agrees, and if nobody else plans on
doing it, I could work on that.
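
Something like the below is what I have in mind (untested, purely
illustrative - the names are placeholders, and I'm assuming the
underlying driver fails when the destination buffer is too small, as
the deflate one does):

#include <linux/crypto.h>

/*
 * Compress 'size' bytes from 'buf' into 'zbuf', which is also 'size'
 * bytes long. Returns true only if compression succeeded *and*
 * actually shrank the data; otherwise the caller stores the record
 * uncompressed.
 */
static bool pstore_try_compress(struct crypto_comp *tfm, const void *buf,
				unsigned int size, void *zbuf)
{
	unsigned int zlen = size;	/* worst case we accept: no gain */

	if (crypto_comp_compress(tfm, buf, size, zbuf, &zlen))
		return false;

	return zlen < size;
}

On failure the caller would just copy the plain text and clear the
record's 'compressed' flag, so nothing is lost - we'd only skip the
space saving for that (likely pathological) record.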

Extra question (maybe silly on my part?): is it possible for the
_compressed_ data to end up bigger than the original? Isn't there any
"protection" in the compress APIs against that? In that case, the
compression attempt would be a pure waste of time / CPU cycles heheh
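
FWIW, a quick userspace experiment with zlib suggests the answer to
the first part is "yes": feed deflate incompressible (random) input
and the output comes out slightly larger than the input, which matches
compressBound() returning more than the source length. Hypothetical
test, nothing to do with the patch itself (build with -lz):

#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

int main(void)
{
	unsigned char src[4096], dst[8192];
	uLongf dlen = sizeof(dst);
	size_t i;

	/* Pseudo-random input compresses badly, if at all. */
	srand(0);
	for (i = 0; i < sizeof(src); i++)
		src[i] = rand();

	if (compress2(dst, &dlen, src, sizeof(src),
		      Z_BEST_COMPRESSION) != Z_OK)
		return 1;

	/* Expect out > in: deflate falls back to stored blocks. */
	printf("in: %zu out: %lu bound: %lu\n",
	       sizeof(src), dlen, compressBound(sizeof(src)));
	return 0;
}

So AFAICT any "protection" has to live in the caller: the kernel
crypto API just returns an error if the destination buffer is too
small, it doesn't fall back to storing the data uncompressed for us.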

Cheers,


Guilherme