Re: The type of bitops

From: Andy Lutomirski
Date: Tue May 07 2013 - 19:17:36 EST


On 05/06/2013 05:51 PM, Linus Torvalds wrote:
> The only place where the actual size of the data matters is in the
> underlying bitop bitmap itself, and that has always been defined to be
> an array of "unsigned long", and we long since made sure people
> aren't confused about that. But the index? It's just a number, there's
> no structure associated with it. Somebody might use an "unsigned char"
> as the index into a bitmap, and the operations fundamentally wouldn't
> care - casting the bit index it to an "int" (or a "long") doesn't
> change the operation.

The one and only time I tried to use this, I thought this was odd. Long
has a different size on 32 vs 64 bit architectures, and bit ops seem
like they'd want to be the same size everywhere so you can allocate the
appropriate number of bits. (Also, if you only want 32 bits, you have
to do some evil cheating, and I don't trust casting int* to long* on
big-endian architectures.)

Would offering a u32* option make sense?

--Andy
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/