Re: [PATCH] drivers: staging: lustre: Use 'force_die' instead of 'die' to avoid compiling issue

From: Chen Gang
Date: Sun Jul 13 2014 - 17:42:25 EST


On 07/14/2014 03:05 AM, Greg Kroah-Hartman wrote:
> On Sun, Jul 13, 2014 at 10:50:55PM +0800, Chen Gang wrote:
>> Some architectures already define 'die' as a macro, so this common name
>> cannot be used as a member declaration in other modules without causing a
>> build error. Use the more precise name 'force_die' (following the style of
>> 'wrap_bulk') instead.
>>
>> The related error (with allmodconfig under score):
>>
>> CC [M] drivers/staging/lustre/lustre/ptlrpc/sec.o
>> drivers/staging/lustre/lustre/ptlrpc/sec.c: In function 'sptlrpc_cli_ctx_expire':
>> drivers/staging/lustre/lustre/ptlrpc/sec.c:309:13: error: 'struct ptlrpc_ctx_ops' has no member named '__die'
>> ctx->cc_ops->die(ctx, 0);
>> ^
>> drivers/staging/lustre/lustre/ptlrpc/sec.c: In function 'ctx_refresh_timeout':
>> drivers/staging/lustre/lustre/ptlrpc/sec.c:594:26: error: 'struct ptlrpc_ctx_ops' has no member named '__die'
>> req->rq_cli_ctx->cc_ops->die(req->rq_cli_ctx, 0);
>> ^
>> make[5]: *** [drivers/staging/lustre/lustre/ptlrpc/sec.o] Error 1
>> make[4]: *** [drivers/staging/lustre/lustre/ptlrpc] Error 2
>> make[3]: *** [drivers/staging/lustre/lustre] Error 2
>> make[2]: *** [drivers/staging/lustre] Error 2
>> make[1]: *** [drivers/staging] Error 2
>> make: *** [drivers] Error 2
>>
>>
>> Signed-off-by: Chen Gang <gang.chen.5i5j@xxxxxxxxx>
>> ---
>> drivers/staging/lustre/lustre/include/lustre_sec.h | 2 +-
>> drivers/staging/lustre/lustre/ptlrpc/sec.c | 6 +++---
>> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> This doesn't apply to my tree, can you please refresh it against the
> staging-next branch of staging.git so that I can apply it?
>

OK, I will rebase against that tree and resend the patch (I should finish
within 2 days).
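
For reference, a minimal sketch of the clash, under the assumption that the
architecture header defines 'die' as a function-like macro wrapping '__die'
(the exact definition on score may differ):

  /* hypothetical stand-in for an arch header that defines 'die' as a
   * function-like macro; the real score definition may look different */
  #define die(arg1, arg2) __die(arg1, arg2, __FILE__, __LINE__)

  struct ctx_ops {
          /* the declaration itself is fine: 'die' is not followed by '(',
           * so the function-like macro is not expanded here */
          void (*die)(void *ctx, int grace);
          /* the renamed member can never collide with the macro */
          void (*force_die)(void *ctx, int grace);
  };

  static void expire(struct ctx_ops *ops, void *ctx)
  {
          /* 'die(ctx, 0)' matches the macro's argument list, so the
           * preprocessor rewrites it to 'ops->__die(ctx, 0, ...)' and the
           * compiler then reports "no member named '__die'":
           *
           *   ops->die(ctx, 0);
           */
          ops->force_die(ctx, 0);  /* the rename avoids the expansion */
  }

Since the call site is rewritten by the preprocessor before the compiler ever
sees the member name, renaming the member is the simplest fix on the lustre
side.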

Thanks.
--
Chen Gang

Open share and attitude like air water and life which God blessed