author     Matheus Castanho <msc@linux.ibm.com>    2021-03-17 10:14:15 -0300
committer  Matheus Castanho <msc@linux.ibm.com>    2021-04-16 08:40:37 -0300
commit     5d61fc2021922b4f572be218dad5b299e2939346 (patch)
tree       9a10e23ddd596d15dda4641e555ad48c535353cc /sysdeps
parent     5ad1a81c8e84eed232ed42a2bf50a160c1447600 (diff)
powerpc: Add missing registers to clobbers list for syscalls [BZ #27623]
Some registers that can be clobbered by the kernel during a syscall are not
listed on the clobbers list in sysdeps/unix/sysv/linux/powerpc/sysdep.h.
For syscalls using sc:
- XER is zeroed by the kernel on exit
For syscalls using scv:
- XER is zeroed by the kernel on exit
- Unlike the sc case, most CR fields can also be clobbered, according to
  the ELF ABI and the Linux kernel's syscall ABI for powerpc
  (linux/Documentation/powerpc/syscall64-abi.rst).
The same should apply to vsyscalls, which effectively perform a function call
but do not currently list these registers as clobbers either.
These omissions are likely not causing issues today, but the registers should
be added to the clobbers list just in case things change on the kernel side in
the future.
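For illustration, here is a minimal, hypothetical sketch (not glibc's actual
macro; the wrapper name scv_getpid is invented) of a getpid() call made
directly with the scv instruction, declaring the same clobber set described
above. It assumes a powerpc64le system whose kernel supports scv (Linux 5.9
or later):

/* Hypothetical sketch, not glibc code: a getpid() call via scv that tells
   the compiler about every register the kernel may clobber on exit
   (volatile GPRs, most CR fields, XER, LR, CTR).  */
#include <stdio.h>
#include <sys/syscall.h>        /* __NR_getpid */

static long
scv_getpid (void)
{
  register long r0 __asm__ ("r0") = __NR_getpid;  /* syscall number */
  register long r3 __asm__ ("r3");                /* return value */

  __asm__ __volatile__ ("scv 0"
                        : "+r" (r0), "=r" (r3)
                        : /* no other inputs */
                        : "r4", "r5", "r6", "r7", "r8", "r9", "r10",
                          "r11", "r12",
                          "cr0", "cr1", "cr5", "cr6", "cr7",
                          "xer", "lr", "ctr", "memory");
  return r3;   /* scv returns a negative errno directly on failure */
}

int
main (void)
{
  printf ("pid: %ld\n", scv_getpid ());
  return 0;
}

Without the cr*/xer/lr/ctr entries the compiler would be free to keep a live
value in one of those registers across the asm statement, which would break
silently if the kernel ever did clobber them.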
Reported-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
Diffstat (limited to 'sysdeps')
-rw-r--r--  sysdeps/unix/sysv/linux/powerpc/sysdep.h | 9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/sysdeps/unix/sysv/linux/powerpc/sysdep.h b/sysdeps/unix/sysv/linux/powerpc/sysdep.h
index 6b99464..2f31f91 100644
--- a/sysdeps/unix/sysv/linux/powerpc/sysdep.h
+++ b/sysdeps/unix/sysv/linux/powerpc/sysdep.h
@@ -56,7 +56,9 @@
 	  "0:" \
 	: "+r" (r0), "+r" (r3), "+r" (r4), "+r" (r5), "+r" (r6), \
 	  "+r" (r7), "+r" (r8) \
-	: : "r9", "r10", "r11", "r12", "cr0", "ctr", "lr", "memory"); \
+	: : "r9", "r10", "r11", "r12", \
+	    "cr0", "cr1", "cr5", "cr6", "cr7", \
+	    "xer", "lr", "ctr", "memory"); \
 	__asm__ __volatile__ ("" : "=r" (rval) : "r" (r3)); \
 	(long int) r0 & (1 << 28) ? -rval : rval; \
   })
@@ -86,7 +88,8 @@
 	  "=&r" (r6), "=&r" (r7), "=&r" (r8) \
 	: ASM_INPUT_##nr \
 	: "r9", "r10", "r11", "r12", \
-	  "lr", "ctr", "memory"); \
+	  "cr0", "cr1", "cr5", "cr6", "cr7", \
+	  "xer", "lr", "ctr", "memory"); \
 	r3; \
   })

@@ -101,7 +104,7 @@
 	  "=&r" (r6), "=&r" (r7), "=&r" (r8) \
 	: ASM_INPUT_##nr \
 	: "r9", "r10", "r11", "r12", \
-	  "cr0", "ctr", "memory"); \
+	  "xer", "cr0", "ctr", "memory"); \
 	r0 & (1 << 28) ? -r3 : r3; \
   })