Re: Style question: comparison between signed and unsigned?

Teunis Peters (teunis@usa.net)
Wed, 24 Sep 1997 10:59:12 -0600 (MDT)


On Wed, 24 Sep 1997, Mike Jagdis wrote:

> On Tue, 23 Sep 1997, Theodore Y. Ts'o wrote:
>
> > The fact of the matter is, by having the compiler issue these warnings,
> > it makes folks much more likely to ignore *all* compiler warnings, since
> > so many of them will be false positives.
>
> In my experience it means that programmers start to figure C is
> so dumb they have to explicitly cast *everything* - which not
> only makes it impossible to read their code but ensures neither
> the compiler nor the human has the faintest idea whether the
> programmer *really* intended a type conversion or if there is
> a bug lurking. Not good.

Speaking as someone who does a _LOT_ of typecasts (for storing/reading
byte streams... :) IMHO the proper way to handle typecasts
_WHEN_YOU_NEED_THEM_ is via macros :)

E.g.
#define M_s32(data) (*((s32*) (data)))
#define M_s64(data) (*((s64*) (data)))
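
For instance (hypothetical names, not from the original macros), pulling a
host-endian 32-bit value out of a raw byte stream might look like:

/* sketch only: assumes the usual s32/u8 typedefs and that buf+off is
   suitably aligned - the LE_* flavours below are for when it isn't,
   or when the stream is explicitly little-endian */
static s32 get_s32(u8 *buf, int off)
{
	return M_s32(buf + off);
}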

or (more evil)
#define LE_u32(a,b,c,d) \
(((((d)&0xff)<<24)|(((c)&0xff)<<16)|(((b)&0xff)<<8)|((a)&0xff)))

#define MLE_s32(data) \
(s32)LE_u32((u8*)data[0],(u8*)data[1], \
(u8*)data[2],(u8*)data[3])

[can anyone see a problem with this? :]
(s[xx] = signed [xx]-bit integer; e.g. s32 = signed 32-bit int)
(u[xx] = unsigned [xx]-bit integer; but you knew that already :)
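(On ia32 those boil down to roughly the following - the kernel has them
in <asm/types.h>:)

typedef unsigned char      u8;
typedef signed int         s32;
typedef unsigned int       u32;
typedef signed long long   s64;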

Incidentally - the signed-vs-unsigned warning _IS_ handy for finding bugs...
Enable it, check the areas it warns about for bugs, then _DISABLE_ it!
.. If nothing else, leave it disabled by default unless you're working on
something that has to follow a spec rigorously (e.g. GNU code <ick> :)
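
To illustrate (a made-up example, not from this thread), the sort of bug
that warning catches:

#include <stdio.h>

int main(void)
{
	unsigned int len = 5;
	int delta = -10;

	/* delta is converted to unsigned for the comparison, so -10
	 * becomes a huge positive value and the test fails even
	 * though -10 < 5.  gcc flags this when the signed/unsigned
	 * warning is on (-W, or -Wsign-compare in later versions).
	 */
	if (delta < len)
		printf("smaller\n");
	else
		printf("not smaller?!\n");	/* this is what prints */

	return 0;
}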

And enough of this crazy thread BTW - it's distracting us from
kernel-hacking <g>...

G'day, eh? :)
- Teunis