Re: [PATCH] x86: Add optimized popcnt variants

From: Peter Zijlstra
Date: Tue Mar 09 2010 - 11:33:02 EST


On Tue, 2010-03-09 at 17:23 +0100, Borislav Petkov wrote:
> From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Date: Tue, Mar 09, 2010 at 04:50:40PM +0100
>
> > Should all be fine as it is; that patch
> > ( http://lkml.org/lkml/2010/2/4/119 ) is against a kernel with fce877e3
> > in. I've just checked, and it still applies to tip/master as of this
> > writing (although one hunk grew a 2-line offset).
>
> Well, this way, I'm getting
>
> ...
> In file included from include/linux/kernel.h:15,
> from /home/linux-2.6/arch/x86/include/asm/percpu.h:45,
> from /home/linux-2.6/arch/x86/include/asm/current.h:5,
> from /home/linux-2.6/arch/x86/include/asm/processor.h:15,
> from /home/linux-2.6/arch/x86/include/asm/atomic.h:6,
> from include/linux/crypto.h:20,
> from arch/x86/kernel/asm-offsets_64.c:8,
> from arch/x86/kernel/asm-offsets.c:4:
> include/linux/bitops.h:52:1: warning: "HWEIGHT8" redefined
> ...
>
> due to multiple definitions of HWEIGHT*:
>
> One batch is in <include/linux/bitops.h>, introduced by fce877e3.
>
> The other is in <include/asm-generic/bitops/const_hweight.h>, which
> is pulled into <include/linux/bitops.h> through "#include
> <asm/bitops.h>", which, in turn, includes <asm/arch_hweight.h> and
> <include/asm-generic/bitops/const_hweight.h>.
>
> The obvious resolution is to remove the HWEIGHT* batch from
> <include/asm-generic/bitops/const_hweight.h> since they're functionally
> identical to the ones in <include/linux/bitops.h>, no?

I thought the patch did that; see this hunk (straight from
http://lkml.org/lkml/2010/2/4/119 ):


---
Index: linux-2.6/include/linux/bitops.h
===================================================================
--- linux-2.6.orig/include/linux/bitops.h
+++ linux-2.6/include/linux/bitops.h
@@ -45,31 +45,6 @@ static inline unsigned long hweight_long
return sizeof(w) == 4 ? hweight32(w) : hweight64(w);
}

-/*
- * Clearly slow versions of the hweightN() functions, their benefit is
- * of course compile time evaluation of constant arguments.
- */
-#define HWEIGHT8(w) \
- ( BUILD_BUG_ON_ZERO(!__builtin_constant_p(w)) + \
- (!!((w) & (1ULL << 0))) + \
- (!!((w) & (1ULL << 1))) + \
- (!!((w) & (1ULL << 2))) + \
- (!!((w) & (1ULL << 3))) + \
- (!!((w) & (1ULL << 4))) + \
- (!!((w) & (1ULL << 5))) + \
- (!!((w) & (1ULL << 6))) + \
- (!!((w) & (1ULL << 7))) )
-
-#define HWEIGHT16(w) (HWEIGHT8(w) + HWEIGHT8((w) >> 8))
-#define HWEIGHT32(w) (HWEIGHT16(w) + HWEIGHT16((w) >> 16))
-#define HWEIGHT64(w) (HWEIGHT32(w) + HWEIGHT32((w) >> 32))
-
-/*
- * Type invariant version that simply casts things to the
- * largest type.
- */
-#define HWEIGHT(w) HWEIGHT64((u64)(w))
-
/**
* rol32 - rotate a 32-bit value left
* @word: value to rotate
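
For reference, the point of those macros is that with a constant argument
the whole expression folds at compile time, so the result can be used
anywhere an integer constant expression is required. A minimal userspace
sketch of the same trick (the CONST_HWEIGHT* names below are made up for
illustration, they are not the kernel's):

#include <stdio.h>

/*
 * Count the set bits of a compile-time constant by summing the
 * individual bit tests; with a constant argument the compiler folds
 * the whole thing into a single integer constant.
 */
#define CONST_HWEIGHT8(w)              \
        ((!!((w) & (1ULL << 0))) +     \
         (!!((w) & (1ULL << 1))) +     \
         (!!((w) & (1ULL << 2))) +     \
         (!!((w) & (1ULL << 3))) +     \
         (!!((w) & (1ULL << 4))) +     \
         (!!((w) & (1ULL << 5))) +     \
         (!!((w) & (1ULL << 6))) +     \
         (!!((w) & (1ULL << 7))))

#define CONST_HWEIGHT16(w) (CONST_HWEIGHT8(w)  + CONST_HWEIGHT8((w) >> 8))
#define CONST_HWEIGHT32(w) (CONST_HWEIGHT16(w) + CONST_HWEIGHT16((w) >> 16))

int main(void)
{
        /* Usable as an integer constant expression, e.g. an enum value. */
        enum { NR_SET_BITS = CONST_HWEIGHT32(0xdeadbeef) };

        printf("%d\n", NR_SET_BITS);   /* prints 24 */
        return 0;
}

The BUILD_BUG_ON_ZERO(!__builtin_constant_p(w)) term in the removed
versions simply turns any use with a non-constant argument into a build
error, since the runtime hweightN() helpers are the right tool for that
case.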
