Re: [PATCH RFC] x86: uaccess s/might_sleep/might_fault/

From: Ingo Molnar
Date: Thu May 02 2013 - 04:52:52 EST



* Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:

> The only reason uaccess routines might sleep
> is if they fault. Make this explicit for
> __copy_from_user_nocache, and consistent with
> copy_from_user and friends.
>
> Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
> ---
>
> I've updated all other arches as well - still
> build-testing. Any objections to the x86 patch?
>
> arch/x86/include/asm/uaccess_64.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
> index 142810c..4f7923d 100644
> --- a/arch/x86/include/asm/uaccess_64.h
> +++ b/arch/x86/include/asm/uaccess_64.h
> @@ -235,7 +235,7 @@ extern long __copy_user_nocache(void *dst, const void __user *src,
>  static inline int
>  __copy_from_user_nocache(void *dst, const void __user *src, unsigned size)
>  {
> -	might_sleep();
> +	might_fault();
>  	return __copy_user_nocache(dst, src, size, 1);

Looks good to me:

Acked-by: Ingo Molnar <mingo@xxxxxxxxxx>


... but while reviewing the effects I noticed a bug in might_fault():

#ifdef CONFIG_PROVE_LOCKING
void might_fault(void)
{
	/*
	 * Some code (nfs/sunrpc) uses socket ops on kernel memory while
	 * holding the mmap_sem, this is safe because kernel memory doesn't
	 * get paged out, therefore we'll never actually fault, and the
	 * below annotations will generate false positives.
	 */
	if (segment_eq(get_fs(), KERNEL_DS))
		return;

	might_sleep();

the might_sleep() call should come first. With the current code,
might_fault() schedules differently depending on CONFIG_PROVE_LOCKING,
which is an undesired semantic side effect ...
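
(For context: IIRC the !CONFIG_PROVE_LOCKING fallback in
include/linux/kernel.h is just an unconditional might_sleep(), roughly:

static inline void might_fault(void)
{
	might_sleep();
}

so it's only the PROVE_LOCKING variant that skips the scheduling check
when running under KERNEL_DS.)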

So please fix that too while at it.
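
I.e. something like the following (untested sketch, keeping the rest of
the lockdep annotations as they are):

#ifdef CONFIG_PROVE_LOCKING
void might_fault(void)
{
	/*
	 * Do the scheduling check unconditionally, so that behaviour
	 * does not depend on CONFIG_PROVE_LOCKING:
	 */
	might_sleep();

	/*
	 * Some code (nfs/sunrpc) uses socket ops on kernel memory while
	 * holding the mmap_sem, this is safe because kernel memory doesn't
	 * get paged out, therefore we'll never actually fault, and the
	 * below annotations will generate false positives.
	 */
	if (segment_eq(get_fs(), KERNEL_DS))
		return;

	/* ... remaining lockdep annotations unchanged ... */
}
#endif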

Thanks,

Ingo