Re: [PATCH v5 2/5] lib/test_bitmap: add tests for bitmap_{read,write}()
From: Yury Norov
Date: Thu Sep 28 2023 - 16:02:12 EST
On Thu, Sep 28, 2023 at 05:14:55PM +0200, Alexander Potapenko wrote:
> On Thu, Sep 28, 2023 at 4:43 PM Yury Norov <yury.norov@xxxxxxxxx> wrote:
> >
> >
> >
> > On Thu, Sep 28, 2023, 10:20 AM Alexander Potapenko <glider@xxxxxxxxxx> wrote:
> >>
> >> On Wed, Sep 27, 2023 at 9:51 AM David Laight <David.Laight@xxxxxxxxxx> wrote:
> >> >
> >> > ...
> >> > > Overall, unless bitmaps are allocated and initialized with a size
> >> > > divisible by sizeof(long), most of bitmap.c is undefined behavior,
> >> > > so I don't think it makes much sense to specifically test this case
> >> > > here (given that we do not extend bitmap_equal() in the patch set).
> >> >
> >> > Bitmaps are arrays of unsigned long.
> >> > Using any of the APIs on anything else is a bug.
> >> > So it is always wrong to try to initialise 'a number of bytes'.
> >> > The size used in the definition need not be a multiple of 8 (on 64bit)
> >> > but the allocated data is always a multiple of 8.
> >> >
> >> > Any calls to the functions that have a cast of the bitmap
> >> > parameter are likely to be buggy.
> >> > And yes, there are loads of them, and many are buggy.
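> >> > For example, something like this (a minimal sketch):
> >> >
> >> >     /* OK: a bitmap is an array of unsigned long. */
> >> >     DECLARE_BITMAP(mask, 100);  /* rounds up to two longs on 64bit */
> >> >     bitmap_zero(mask, 100);
> >> >
> >> >     /* Buggy: a byte buffer has neither the size nor the alignment
> >> >      * guarantees of unsigned long.
> >> >      */
> >> >     u8 buf[10];
> >> >     bitmap_zero((unsigned long *)buf, 80);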
> >>
> >> I got rid of the casts in the bitmap test, but they remain in
> >> mtecomp.c, where 16-, 32-, 64-byte buffers allocated by
> >> kmem_cache_alloc() are treated as bitmaps:
> >> https://lore.kernel.org/linux-arm-kernel/20230922080848.1261487-6-glider@xxxxxxxxxx/T/#mdb0d636d2d357f8ffe6ac79cef1145df3440f659
> >>
> >> Having them allocated by bitmap_alloc() won't work, because on Android
> >> bitmap_alloc() will allocate the buffers from the kmalloc-64 cache,
> >> defeating the purpose of the compression.
> >>
> >> Would it be better to extend the bitmap.h API so that it is possible
> >> to allocate from a kmem cache (which would in turn require
> >> bitmap_kmem_cache_create() to ensure the alignment requirements)?
> >
> >
> > So all that is wrong then. My bad, I should have spent more time
> > looking into your driver code...
> >
> > We already have bitmap_{from,to}_arr{32,64}(), and you can use them.
> > For 16-bit you'll have to add a helper yourself, but it's not rocket
> > science.
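> > E.g., to move a 32-byte buffer in and out of a bitmap, roughly:
> >
> >     u32 buf[8];
> >     DECLARE_BITMAP(bm, 32 * 8);
> >
> >     bitmap_from_arr32(bm, buf, 32 * 8);  /* buffer -> bitmap */
> >     bitmap_to_arr32(buf, bm, 32 * 8);    /* bitmap -> buffer */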
> >
>
> So e.g. for compressing something into a 16-byte buffer using bitmaps
> I'd need to:
>
> 1) Allocate the buffer: buf = kmem_cache_alloc(...)
> 2) Allocate the bitmap: bitmap = bitmap_alloc(16*8, ...)
> 3) Fill the bitmap: mte_compress_to_buf(..., bitmap, 16)
> 4) Copy the bitmap contents to the buffer: bitmap_to_arr64(buf, bitmap, 16*8)
> 5) Deallocate the bitmap: bitmap_free(bitmap)
>
> instead of:
>
> buf = kmem_cache_alloc(...)
> mte_compress_to_buf(..., (unsigned long *)buf, 16)
>
> , correct?
>
> Given that the buffer contents are opaque and its size is aligned to 8
> bytes, could it be possible to somehow use the `buf` pointer directly
> instead?

I didn't find an explicit typecast where you're using
mte_compress_to_buf(), but now, after a hard second look, I see it...

Firstly, now that the documentation explicitly describes the return
value of mte_compress() as a 64-bit frame, the right way to go would be
to declare the function as: u64 mte_compress(u8 *tags).

And the general pattern should be like this:

u64 mte_compress(u8 *tags)
{
        DECLARE_BITMAP(tmp, MTECOMP_CACHES_MAXBITS);
        void *storage;
        ...
        if (alloc_size < MTE_PAGE_TAG_STORAGE) {
                storage = kmem_cache_alloc(cache, GFP_KERNEL);
                /* Compress into a properly typed bitmap first... */
                mte_compress_to_buf(r_len, r_tags, r_sizes, tmp, alloc_size);
                /* ...then copy it out in an endianness-safe way. */
                switch (alloc_size) {
                case 16:
                        bitmap_to_arr16(storage, tmp, 16 * 8);
                        break;
                case 32:
                        bitmap_to_arr32(storage, tmp, 32 * 8);
                        break;
                case 64:
                        bitmap_to_arr64(storage, tmp, 64 * 8);
                        break;
                default:
                        pr_err("error\n");
                }
                result = ((u64)storage | cache_id) & MTE_HANDLE_MASK;
                goto ret;
        }
        ...
}

Yeah, it looks cumbersome, but this is the right way to go if you need a
reliable BE-compatible driver. I think it will look less scary if you
wrap the switch in a helper, and/or move it inside mte_compress_to_buf(),
so that mte_compress() stays unchanged.
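
For example, the helper could look like this (the name is just for
illustration, and bitmap_to_arr16() is the 16-bit helper you'd have to
add yourself):

static void mte_bitmap_to_buf(void *buf, const unsigned long *bitmap,
                              size_t bytes)
{
        switch (bytes) {
        case 16:
                bitmap_to_arr16(buf, bitmap, 16 * 8);
                break;
        case 32:
                bitmap_to_arr32(buf, bitmap, 32 * 8);
                break;
        case 64:
                bitmap_to_arr64(buf, bitmap, 64 * 8);
                break;
        default:
                WARN_ON_ONCE(1);
        }
}

Then the switch in mte_compress() collapses to a single call:

        mte_bitmap_to_buf(storage, tmp, alloc_size);
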
Anyway, hope the above helps.
Thanks,
Yury