[PATCH x86_64]: Correct copy_user_generic return value when exception happens

From: Jin, Gordon
Date: Wed Oct 13 2004 - 04:50:40 EST



Fix a bug where arch/x86_64/lib/copy_user.S:copy_user_generic returns a wrong
value when an exception happens.
When the address is not 8-byte aligned (i.e. the code takes the .Lbad_alignment
path), an exception in .Ls11 leaves %rdx holding the wrong count of remaining
bytes, so copy_user_generic returns a wrong value.
The patch also fixes a bug where the wrong number of destination bytes is
zeroed in this situation (in .Lzero_rest).
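
For clarity, the patched alignment-fixup path then reads roughly as below
(reconstructed from the diff that follows; the inline comments are
annotations added here, not part of the patch):

.Lbad_alignment:
	movl $8,%r9d
	subl %ecx,%r9d		/* r9 = bytes needed to reach 8-byte alignment */
	movl %r9d,%ecx
	cmpq %r9,%rdx		/* compare only: %rdx still holds the full count */
	jz   .Lhandle_7		/* count <= r9: copy the whole rest as a byte tail */
	js   .Lhandle_7
.Lalign_1:
.Ls11:	movb (%rsi),%bl		/* a fault here now goes to .Lzero_rest, so the */
.Ld11:	movb %bl,(%rdi)		/* remaining %rdx bytes are zeroed and reported */
	incq %rsi
	incq %rdi
	decl %ecx
	jnz .Lalign_1
	subq %r9,%rdx		/* account for the alignment bytes only once they are all copied */
	jmp .Lafter_bad_alignment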

Signed-off-by: Yanmin Zhang <yanmin.zhang@xxxxxxxxx>
Signed-off-by: Nanhai Zou <nanhai.zou@xxxxxxxxx>
Signed-off-by: Gordon Jin <gordon.jin@xxxxxxxxx>
Signed-off-by: Suresh Siddha <suresh.b.siddha@xxxxxxxxx>

--- linux-2.6.9-rc3/arch/x86_64/lib/copy_user.S~ 2004-07-15 13:30:14.376131432 -0700
+++ linux-2.6.9-rc3/arch/x86_64/lib/copy_user.S 2004-07-15 23:41:40.572981784 -0700
@@ -73,7 +73,7 @@
* rdx count
*
* Output:
- * eax uncopied bytes or 0 if successfull.
+ * eax uncopied bytes or 0 if successful.
*/
.globl copy_user_generic
.p2align 4
@@ -179,9 +179,9 @@
movl $8,%r9d
subl %ecx,%r9d
movl %r9d,%ecx
- subq %r9,%rdx
- jz .Lsmall_align
- js .Lsmall_align
+ cmpq %r9,%rdx
+ jz .Lhandle_7
+ js .Lhandle_7
.Lalign_1:
.Ls11: movb (%rsi),%bl
.Ld11: movb %bl,(%rdi)
@@ -189,10 +189,8 @@
incq %rdi
decl %ecx
jnz .Lalign_1
+ subq %r9,%rdx
jmp .Lafter_bad_alignment
-.Lsmall_align:
- addq %r9,%rdx
- jmp .Lhandle_7
#endif

/* table sorted by exception address */
@@ -219,8 +217,8 @@
.quad .Ls10,.Le_byte
.quad .Ld10,.Le_byte
#ifdef FIX_ALIGNMENT
- .quad .Ls11,.Le_byte
- .quad .Ld11,.Le_byte
+ .quad .Ls11,.Lzero_rest
+ .quad .Ld11,.Lzero_rest
#endif
.quad .Le5,.Le_zero
.previous
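
For reference, each entry in this table pairs the address of an instruction
that may fault with the address of its fixup: on a fault, the page fault
handler looks up the faulting instruction here and resumes execution at the
paired label. A minimal sketch of such an entry, written in the style this
file already uses (label names taken from the patch):

	.section __ex_table,"a"
	.align 8
	.quad .Ls11,.Lzero_rest	/* a fault at .Ls11 resumes at .Lzero_rest */
	.previous

With the reordered subq above, %rdx still equals the uncopied byte count at
.Ls11/.Ld11, so pointing these entries at .Lzero_rest makes both the zeroing
length and the returned value come out right.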

Thanks,
Gordon