Re: [PATCH 1/9] crypto: add zbufsize() interface

From: Kees Cook
Date: Wed Dec 01 2021 - 22:52:02 EST


On Thu, Dec 02, 2021 at 12:58:20PM +1100, Herbert Xu wrote:
> On Wed, Dec 01, 2021 at 03:39:06PM -0800, Kees Cook wrote:
> >
> > Okay, I'm looking at this again because of the need in the module loader
> > to know "worst case decompression size"[1]. I am at a loss for how (or
> > why) the acomp interface is the "primary interface".
>
> This is similar to how we transitioned from the old hash interface
> to shash/ahash.
>
> Basically the legacy interface is synchronous only and cannot support
> hardware devices without having the CPU spinning while waiting for the
> result to come back.
>
> If you only care about synchronous support and don't need to access
> these hardware devices, then you should use the new scomp interface.
> It's equivalent to the old compress interface, but built in a way
> that lets acomp users easily access sync algorithms. If you are
> processing large amounts of data and wish to access offload devices,
> then you should consider using the async acomp interface.

But the scomp API appears to be "internal only":

include/crypto/internal/scompress.h:static inline int crypto_scomp_decompress(struct crypto_scomp *tfm,

What's the correct API calling sequence to do a simple decompress?
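
To make the question concrete: as far as I can tell, a one-shot
synchronous decompress through acomp ends up looking something like
this (untested sketch, error handling trimmed, buffer names borrowed
from the pseudo-code quoted below):

        #include <crypto/acompress.h>
        #include <linux/scatterlist.h>

        struct crypto_acomp *tfm;
        struct acomp_req *req;
        struct scatterlist sg_in, sg_out;
        DECLARE_CRYPTO_WAIT(wait);
        int ret;

        /* Error checks omitted for brevity. */
        tfm = crypto_alloc_acomp("deflate", 0, 0);
        req = acomp_request_alloc(tfm);

        /* Both buffers are linear (kmalloc'd), so one SG entry each. */
        sg_init_one(&sg_in, compressed, compressed_size);
        sg_init_one(&sg_out, decompressed, decompressed_worst_size);
        acomp_request_set_params(req, &sg_in, &sg_out,
                                 compressed_size, decompressed_worst_size);
        acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                   crypto_req_done, &wait);

        /* Block until the (possibly async) request completes. */
        ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
        if (!ret)
                decompressed_size = req->dlen;  /* bytes actually produced */

        acomp_request_free(req);
        crypto_free_acomp(tfm);

That's a lot of machinery compared to a single crypto_comp_decompress()
call, which is why I'm asking what the intended sequence is.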

> > For modules, all that would be wanted is this, where the buffer can
> > be allocated on demand:
> >
> > u8 *decompressed = NULL;
> > size_t decompressed_size = 0;
> >
> > decompressed = decompress(decompressed, compressed, compressed_size, &decompressed_size);
> >
> > For pstore, the compressed_size is fixed and the decompression buffer
> > must be preallocated (for catching panic dumps), so the worst-case size
> > needs to be known in advance:
> >
> > u8 *decompressed = NULL;
> > size_t decompressed_worst_size = 0;
> > size_t decompressed_size = 0;
> >
> > worst_case(&decompressed_worst_size, compressed_size);
> >
> > decompressed = kmalloc(decompressed_worst_size, GFP_KERNEL);
> > ...
> > decompressed_size = decompressed_worst_size;
> > decompress(decompressed, compressed, compressed_size, &decompressed_size);
> >
> >
> > I don't see anything like this in the kernel for handling a simple
> > buffer-to-buffer decompression besides crypto_comp_decompress(). The
> > acomp interface is wildly over-complex for this. What's the right
> > way to do this? (I can't find any documentation that discusses
> > compress/decompress[2].)
>
> I think you're asking about a different issue, which is that we
> don't have an interface for on-the-fly allocation of decompressed
> results, so every user has to allocate the worst-case buffer up front.
>
> This is definitely an area that should be addressed but a lot of work
> needs to be done to get there. Essentially we'd need to convert
> the underlying algorithms to a model where they decompress into a
> list of pages and then they could simply allocate a new page if they
> need extra space.
>
> The result can then be returned to the original caller as an SG list.
>
> Would you be willing to work on something like this? This would benefit
> all existing users too. For example, IPsec would no longer need to
> allocate those 64K buffers for IPcomp.
>
> Unfortunately not many people care deeply about compression so
> volunteers are hard to find.

Dmitry has, I think, a bit of this already in [1] that could maybe be
generalized if needed?
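
(The shape of that code, as I read it, is a page-at-a-time inflate
loop that only allocates output as it goes; very roughly:)

        z_stream s = { };
        struct page *page;
        void *out;
        int rc;

        s.workspace = kvmalloc(zlib_inflate_workspacesize(), GFP_KERNEL);
        zlib_inflateInit2(&s, -MAX_WBITS);  /* raw deflate after the gzip header */
        s.next_in = compressed;
        s.avail_in = compressed_size;

        do {
                /*
                 * Grow the output one page at a time: no worst-case
                 * preallocation needed, and the pages could later be
                 * handed back as an SG list.
                 */
                page = alloc_page(GFP_KERNEL);
                /* ...append page to the output page array... */
                out = kmap_local_page(page);
                s.next_out = out;
                s.avail_out = PAGE_SIZE;
                rc = zlib_inflate(&s, Z_NO_FLUSH);
                kunmap_local(out);
        } while (rc == Z_OK);

        zlib_inflateEnd(&s);
        kvfree(s.workspace);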

pstore still needs the "worst case" API to do a preallocation, though.
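
Concretely, what I want on the pstore side is just the quoted
pseudo-code above with real crypto calls; everything except the
worst-case helper already exists (the helper name below is a
placeholder for whatever zbufsize() ends up being called):

        struct crypto_comp *tfm = crypto_alloc_comp("deflate", 0, 0);
        size_t decompressed_worst_size = 0;
        unsigned int dlen;

        /*
         * Init time, before any panic can happen: the compressed record
         * is bounded by psinfo->bufsize, so ask for the worst-case
         * decompressed size up front (placeholder helper name).
         */
        crypto_comp_decompressed_worst_size(tfm, psinfo->bufsize,
                                            &decompressed_worst_size);
        big_oops_buf = kmalloc(decompressed_worst_size, GFP_KERNEL);

        /* Read time: decompress the record into the preallocated buffer. */
        dlen = decompressed_worst_size;
        crypto_comp_decompress(tfm, compressed, compressed_size,
                               big_oops_buf, &dlen);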

Anyway, if I could have an example of how to use scomp in pstore, I
could better see where to wire up the proposed zbufsize API...

Thanks!

[1] https://lore.kernel.org/linux-modules/YaMYJv539OEBz5B%2F@xxxxxxxxxx/#Z31kernel:module_decompress.c

--
Kees Cook