Re: [syzbot] [crypto?] general protection fault in scatterwalk_copychunks (5)

From: Barry Song
Date: Wed Dec 27 2023 - 01:25:30 EST


On Wed, Dec 27, 2023 at 4:51 PM Chengming Zhou
<zhouchengming@xxxxxxxxxxxxx> wrote:
>
> On 2023/12/27 08:23, Nhat Pham wrote:
> > On Tue, Dec 26, 2023 at 3:30 PM Chris Li <chrisl@xxxxxxxxxx> wrote:
> >>
> >> Again, sorry I was looking at the decompression side rather than the
> >> compression side. The compression side does not even offer a safe
> >> version of the compression function.
> >> That seems to be dangerous. It seems for now we should make zswap
> >> roll back to the 2-page buffer until we have a safe way to do compression
> >> without overwriting the output buffers.
> >
> > Unfortunately, I think this is the way - at least until we rework the
> > crypto/compression API (if that's even possible?).
> > I still think the 2 page buffer is dumb, but it is what it is :(
>
> Hi,
>
> I think it's a bug in `scomp_acomp_comp_decomp()`, which doesn't use
> the caller-passed "src" and "dst" scatterlists. Instead, it uses its own
> per-cpu "scomp_scratch", which has 128KB src and dst buffers.
>
> When compression is done, it uses the output req->dlen to copy scomp_scratch->dst
> to our dstmem, which now has only one page, so this problem happened.
>
> I still don't know why alg->compress(src, slen, dst, &dlen) doesn't
> check dlen. It seems like an obvious bug, right?
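>
> To illustrate, the relevant path looks roughly like this (just a
> paraphrased sketch, locking and error handling elided):
>
> 	scratch = raw_cpu_ptr(&scomp_scratch);
> 	/* compress from the 128KB scratch->src into the 128KB scratch->dst;
> 	 * req->dlen is updated to the real output length, which can be
> 	 * well above PAGE_SIZE */
> 	ret = crypto_scomp_compress(scomp, scratch->src, req->slen,
> 				    scratch->dst, &req->dlen, *ctx);
> 	/* then req->dlen bytes are copied back into the caller's req->dst;
> 	 * with a one-page dstmem this overruns the buffer */
> 	scatterwalk_map_and_copy(scratch->dst, req->dst, 0, req->dlen, 1);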
>
> As for this problem in `scomp_acomp_comp_decomp()`, this patch below
> should fix it. I will set up a few tests to check later.
>
> Thanks!
>
> diff --git a/crypto/scompress.c b/crypto/scompress.c
> index 442a82c9de7d..e654a120ae5a 100644
> --- a/crypto/scompress.c
> +++ b/crypto/scompress.c
> @@ -117,6 +117,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
>  	struct crypto_scomp *scomp = *tfm_ctx;
>  	void **ctx = acomp_request_ctx(req);
>  	struct scomp_scratch *scratch;
> +	unsigned int dlen;
>  	int ret;
>  
>  	if (!req->src || !req->slen || req->slen > SCOMP_SCRATCH_SIZE)
> @@ -128,6 +129,8 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
>  	if (!req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
>  		req->dlen = SCOMP_SCRATCH_SIZE;
>  
> +	dlen = req->dlen;
> +
>  	scratch = raw_cpu_ptr(&scomp_scratch);
>  	spin_lock(&scratch->lock);
>  
> @@ -145,6 +148,9 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
>  				ret = -ENOMEM;
>  				goto out;
>  			}
> +		} else if (req->dlen > dlen) {
> +			ret = -ENOMEM;
> +			goto out;
>  		}

This can't fix the problem, as crypto_scomp_compress() has already written the
overflowing data.

BTW, in many cases, drivers/crypto hardware accelerators can do compression
and decompression by offloading the CPU; we won't have a chance to let the
hardware check the dst buffer size, so giving the hardware's DMA engine a
dst buffer of sufficient length is the right way. I mean, we shouldn't
change dst from 2 pages to 1 page.
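
For example, on the zswap side that roughly means keeping something like
the below (just a simplified sketch; the acomp_ctx field names are what I
remember from mm/zswap.c and may differ):

	struct scatterlist input, output;

	sg_init_table(&input, 1);
	sg_set_page(&input, page, PAGE_SIZE, 0);
	/* keep dstmem at 2 * PAGE_SIZE so a DMA engine that cannot
	 * bound-check its output can never scribble past the buffer */
	sg_init_one(&output, acomp_ctx->dstmem, 2 * PAGE_SIZE);
	acomp_request_set_params(acomp_ctx->req, &input, &output,
				 PAGE_SIZE, 2 * PAGE_SIZE);
	ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req),
			      &acomp_ctx->wait);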

>  		scatterwalk_map_and_copy(scratch->dst, req->dst, 0, req->dlen,
>  					 1);


Thanks
Barry