author     Adhemerval Zanella <adhemerval.zanella@linaro.org>   2024-06-25 16:17:44 -0300
committer  Adhemerval Zanella <adhemerval.zanella@linaro.org>   2024-08-23 14:27:43 -0300
commit     89b53077d2a58f00e7debdfe58afabe953dac60d
tree       bde66cc5442036f7448d199444fb0fc675350f91 /sysdeps/unix/sysdep.h
parent     55cd51d971b84fbb2cc0fe8140cc8581f98582c7
nptl: Fix Race conditions in pthread cancellation [BZ#12683]
The current racy approach is to enable asynchronous cancellation
before making the syscall, restore the previous cancellation type once
the syscall returns, and check whether cancellation has happened at the
cancellation entrypoint, as sketched below.
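For illustration, the old pattern looked roughly like the following
standalone sketch (not glibc's internal code; it uses the public
pthread_setcanceltype interface to stand in for the internal
LIBC_CANCEL_ASYNC/LIBC_CANCEL_RESET macros used by the SYSCALL_CANCEL
macro this patch removes):

    #include <pthread.h>
    #include <unistd.h>

    /* Sketch of the old racy wrapper: switch to asynchronous cancellation
       around the blocking call, then restore the previous type.  In the
       real implementation the raw syscall is issued where read() is
       called here.  */
    static ssize_t
    racy_cancellable_read (int fd, void *buf, size_t len)
    {
      int oldtype;
      pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, &oldtype);
      ssize_t ret = read (fd, buf, len);  /* Cancellation may act here even
                                             after the kernel completed the
                                             read, losing its result.  */
      pthread_setcanceltype (oldtype, NULL);
      return ret;
    }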
As described in BZ#12683, this approach has two problems:
1. Cancellation can act after the syscall has returned from the
kernel, but before userspace saves the return value. This might
result in a resource leak if the syscall allocated a resource or had a
side effect (e.g. a partial read/write), and there is no way for the
program to handle it with cancellation handlers.
2. If a signal is handled while the thread is blocked at a cancellable
syscall, the entire signal handler runs with asynchronous
cancellation enabled. This can lead to issues if the signal
handler calls functions which are async-signal-safe but not
async-cancel-safe.
For the cancellation to work correctly, there are 5 points at which the
cancellation signal could arrive:
[ ... )[ ... )[ syscall ]( ...
1 2 3 4 5
1. Before initial testcancel, e.g. [*... testcancel)
2. Between testcancel and syscall start, e.g. [testcancel...syscall start)
3. While syscall is blocked and no side effects have yet taken
place, e.g. [ syscall ]
4. Same as 3 but with side-effects having occurred (e.g. a partial
read or write).
5. After syscall end e.g. (syscall end...*]
And libc wants to act on cancellation in cases 1, 2, and 3 but not
in cases 4 or 5. For cases 4 and 5, the cancellation will eventually
happen in the next cancellable entrypoint without any further external
event.
The proposed solution for each case is:
1. Do a conditional branch based on whether the thread has received
a cancellation request;
2. The signal handler can catch it by determining that the saved
program counter (from the ucontext_t) is in some address range
beginning just before the "testcancel" and ending with the
syscall instruction (see the handler sketch after this list).
3. SIGCANCEL can be caught by the signal handler and determine that
the saved program counter (from the ucontext_t) is in the address
range beginning just before "testcancel" and ending with the first
uninterruptible (via a signal) syscall instruction that enters the
kernel.
4. In this case, except for certain syscalls that ALWAYS fail with
EINTR even for non-interrupting signals, the kernel will reset
the program counter to point at the syscall instruction during
signal handling, so that the syscall is restarted when the signal
handler returns. So, from the signal handler's standpoint, this
looks the same as case 2, and thus it's taken care of.
5. For syscalls with side-effects, the kernel cannot restart the
syscall; when it's interrupted by a signal, the kernel must cause
the syscall to return with whatever partial result is obtained
(e.g. partial read or write).
6. The saved program counter points just after the syscall
instruction, so the signal handler won't act on cancellation.
This is similar to 4. since the program counter is past the syscall
instruction.
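For cases 2 and 3 the check boils down to comparing the interrupted
program counter against the global markers that delimit the cancellable
syscall wrapper. A simplified sketch follows (the marker symbols
__syscall_cancel_arch_start and __syscall_cancel_arch_end are the ones
introduced by this patch; ucontext_get_pc stands in for the arch-specific
way of reading the PC, and the real handler also checks the cancellation
state bits):

    #include <stdint.h>
    #include <ucontext.h>

    /* Global markers placed around the syscall instruction by the
       arch-specific __syscall_cancel_arch wrapper.  */
    extern const char __syscall_cancel_arch_start[];
    extern const char __syscall_cancel_arch_end[];

    /* Illustrative declaration: extract the interrupted program counter
       from the signal context (arch-specific in practice).  */
    extern uintptr_t ucontext_get_pc (const ucontext_t *uc);

    /* Return nonzero if the interrupted PC falls inside the marker range
       (cases 2 and 3), where the handler may act on the cancellation.
       Return zero otherwise: for cases 4 and 5 the cancellation is
       deferred to the next cancellation point, and case 1 is caught by
       the conditional branch itself.  */
    static int
    cancellation_may_act (const ucontext_t *uc)
    {
      uintptr_t pc = ucontext_get_pc (uc);
      return pc >= (uintptr_t) __syscall_cancel_arch_start
             && pc < (uintptr_t) __syscall_cancel_arch_end;
    }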
So the proposed fixes are:
1. Remove the enable_asynccancel/disable_asynccancel function usage in
the cancellable syscall definitions and instead make them call a common
symbol that checks whether cancellation is enabled (__syscall_cancel
at nptl/cancellation.c), calls the arch-specific cancellable
entry-point (__syscall_cancel_arch), and cancels the thread when
required (a sketch of this bridge follows the list).
2. Provide an arch-specific generic system call wrapper function
that contains global markers. These markers are used by the
SIGCANCEL signal handler to check whether the interruption happened
within a valid syscall and whether the syscall has side effects.
A reference implementation, sysdeps/unix/sysv/linux/syscall_cancel.c,
is provided. However, the markers may not end up at the expected
places depending on how INTERNAL_SYSCALL_NCS is implemented by the
architecture, so it is expected that all architectures add an
arch-specific implementation.
3. Rewrite the SIGCANCEL asynchronous handler to check both the
cancellation type and whether the current IP from the signal handler's
context falls between the global markers, and act accordingly.
4. Adjust libc code to replace LIBC_CANCEL_ASYNC/LIBC_CANCEL_RESET with
the appropriate cancellable syscalls.
5. Adjust 'lowlevellock-futex.h' arch-specific implementations to
provide cancelable futex calls.
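A standalone sketch of the bridge from fix 1 (illustrative only:
raw_syscall, cancellation_enabled, cancellation_pending,
act_on_cancellation, and thread_cancelhandling are hypothetical stand-ins
for glibc's internal cancelhandling machinery, and the
__syscall_cancel_arch declaration is simplified; the arch wrapper is the
one containing the global markers):

    #include <errno.h>

    /* Hypothetical raw syscall helpers; both return a negative errno
       value on failure, like glibc's INTERNAL_SYSCALL.  Declarations are
       simplified for illustration.  */
    extern long int raw_syscall (long int nr, long int a1, long int a2,
                                 long int a3, long int a4, long int a5,
                                 long int a6);
    extern long int __syscall_cancel_arch (volatile int *cancelhandling,
                                           long int nr, long int a1,
                                           long int a2, long int a3,
                                           long int a4, long int a5,
                                           long int a6);
    extern volatile int *thread_cancelhandling (void);  /* hypothetical */
    extern int cancellation_enabled (void);             /* hypothetical */
    extern int cancellation_pending (void);             /* hypothetical */
    extern void act_on_cancellation (void);             /* hypothetical */

    long int
    syscall_cancel_sketch (long int nr, long int a1, long int a2,
                           long int a3, long int a4, long int a5,
                           long int a6)
    {
      /* Fast path: cancellation cannot act, issue the syscall directly.  */
      if (!cancellation_enabled ())
        return raw_syscall (nr, a1, a2, a3, a4, a5, a6);

      /* Slow path: run the syscall between the global markers so the
         SIGCANCEL handler can tell whether side effects already happened.  */
      long int ret = __syscall_cancel_arch (thread_cancelhandling (), nr,
                                            a1, a2, a3, a4, a5, a6);

      /* If the syscall was interrupted before any side effect and the
         thread has a pending cancellation request, act on it now.  */
      if (ret == -EINTR && cancellation_pending ())
        act_on_cancellation ();

      return ret;
    }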
Some architectures require specific support on syscall handling:
* On i386 the syscall cancel bridge needs to use the old int80
instruction because, with the optimized vDSO symbol, the resulting PC
value for an interrupted syscall points to an address outside the
expected markers in __syscall_cancel_arch. It has been discussed on
LKML [1] how the kernel could help userland accomplish this, but as
far as I know the discussion has stalled.
Also, sysenter should not be used directly by libc since its calling
convention is set by the kernel depending on the underlying x86 chip
(check kernel commit 30bfa7b3488bfb1bb75c9f50a5fcac1832970c60).
* mips o32 is the only kABI that requires a 7-argument syscall, and to
avoid adding a requirement on all architectures to support it, mips
support is added with extra internal defines.
Checked on aarch64-linux-gnu, arm-linux-gnueabihf, powerpc-linux-gnu,
powerpc64-linux-gnu, powerpc64le-linux-gnu, i686-linux-gnu, and
x86_64-linux-gnu.
[1] https://lkml.org/lkml/2016/3/8/1105
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Diffstat (limited to 'sysdeps/unix/sysdep.h')
-rw-r--r--   sysdeps/unix/sysdep.h | 173
1 file changed, 141 insertions(+), 32 deletions(-)
diff --git a/sysdeps/unix/sysdep.h b/sysdeps/unix/sysdep.h
index a19e841..adc8d71 100644
--- a/sysdeps/unix/sysdep.h
+++ b/sysdeps/unix/sysdep.h
@@ -24,6 +24,9 @@
 #define SYSCALL__(name, args)  PSEUDO (__##name, name, args)
 #define SYSCALL(name, args)    PSEUDO (name, name, args)
 
+#ifndef __ASSEMBLER__
+# include <errno.h>
+
 #define __SYSCALL_CONCAT_X(a,b)     a##b
 #define __SYSCALL_CONCAT(a,b)       __SYSCALL_CONCAT_X (a, b)
 
@@ -108,42 +111,148 @@
 #define INLINE_SYSCALL_CALL(...) \
   __INLINE_SYSCALL_DISP (__INLINE_SYSCALL, __VA_ARGS__)
 
-#if IS_IN (rtld)
-/* All cancellation points are compiled out in the dynamic loader.  */
-# define NO_SYSCALL_CANCEL_CHECKING 1
+#define __INTERNAL_SYSCALL_NCS0(name) \
+  INTERNAL_SYSCALL_NCS (name, 0)
+#define __INTERNAL_SYSCALL_NCS1(name, a1) \
+  INTERNAL_SYSCALL_NCS (name, 1, a1)
+#define __INTERNAL_SYSCALL_NCS2(name, a1, a2) \
+  INTERNAL_SYSCALL_NCS (name, 2, a1, a2)
+#define __INTERNAL_SYSCALL_NCS3(name, a1, a2, a3) \
+  INTERNAL_SYSCALL_NCS (name, 3, a1, a2, a3)
+#define __INTERNAL_SYSCALL_NCS4(name, a1, a2, a3, a4) \
+  INTERNAL_SYSCALL_NCS (name, 4, a1, a2, a3, a4)
+#define __INTERNAL_SYSCALL_NCS5(name, a1, a2, a3, a4, a5) \
+  INTERNAL_SYSCALL_NCS (name, 5, a1, a2, a3, a4, a5)
+#define __INTERNAL_SYSCALL_NCS6(name, a1, a2, a3, a4, a5, a6) \
+  INTERNAL_SYSCALL_NCS (name, 6, a1, a2, a3, a4, a5, a6)
+#define __INTERNAL_SYSCALL_NCS7(name, a1, a2, a3, a4, a5, a6, a7) \
+  INTERNAL_SYSCALL_NCS (name, 7, a1, a2, a3, a4, a5, a6, a7)
+
+/* Issue a syscall defined by syscall number plus any other argument required.
+   It is similar to INTERNAL_SYSCALL_NCS macro, but without the need to pass
+   the expected argument number as third parameter.  */
+#define INTERNAL_SYSCALL_NCS_CALL(...) \
+  __INTERNAL_SYSCALL_DISP (__INTERNAL_SYSCALL_NCS, __VA_ARGS__)
+
+/* Cancellation macros.  */
+#include <syscall_types.h>
+
+/* Adjust both the __syscall_cancel and the SYSCALL_CANCEL macro to support
+   7 arguments instead of default 6 (curently only mip32).  It avoid add
+   the requirement to each architecture to support 7 argument macros
+   {INTERNAL,INLINE}_SYSCALL.  */
+#ifdef HAVE_CANCELABLE_SYSCALL_WITH_7_ARGS
+# define __SYSCALL_CANCEL7_ARG_DEF      __syscall_arg_t a7,
+# define __SYSCALL_CANCEL7_ARCH_ARG_DEF ,__syscall_arg_t a7
+# define __SYSCALL_CANCEL7_ARG          0,
+# define __SYSCALL_CANCEL7_ARG7         a7,
+# define __SYSCALL_CANCEL7_ARCH_ARG7    , a7
 #else
-# define NO_SYSCALL_CANCEL_CHECKING SINGLE_THREAD_P
+# define __SYSCALL_CANCEL7_ARG_DEF
+# define __SYSCALL_CANCEL7_ARCH_ARG_DEF
+# define __SYSCALL_CANCEL7_ARG
+# define __SYSCALL_CANCEL7_ARG7
+# define __SYSCALL_CANCEL7_ARCH_ARG7
 #endif
 
+long int __internal_syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2,
+                                    __syscall_arg_t a3, __syscall_arg_t a4,
+                                    __syscall_arg_t a5, __syscall_arg_t a6,
+                                    __SYSCALL_CANCEL7_ARG_DEF
+                                    __syscall_arg_t nr) attribute_hidden;
 
-#define SYSCALL_CANCEL(...) \
-  ({ \
-    long int sc_ret; \
-    if (NO_SYSCALL_CANCEL_CHECKING) \
-      sc_ret = INLINE_SYSCALL_CALL (__VA_ARGS__); \
-    else \
-      { \
-        int sc_cancel_oldtype = LIBC_CANCEL_ASYNC (); \
-        sc_ret = INLINE_SYSCALL_CALL (__VA_ARGS__); \
-        LIBC_CANCEL_RESET (sc_cancel_oldtype); \
-      } \
-    sc_ret; \
-  })
+long int __syscall_cancel (__syscall_arg_t arg1, __syscall_arg_t arg2,
+                           __syscall_arg_t arg3, __syscall_arg_t arg4,
+                           __syscall_arg_t arg5, __syscall_arg_t arg6,
+                           __SYSCALL_CANCEL7_ARG_DEF
+                           __syscall_arg_t nr) attribute_hidden;
 
-/* Issue a syscall defined by syscall number plus any other argument
-   required.  Any error will be returned unmodified (including errno).  */
-#define INTERNAL_SYSCALL_CANCEL(...) \
-  ({ \
-    long int sc_ret; \
-    if (NO_SYSCALL_CANCEL_CHECKING) \
-      sc_ret = INTERNAL_SYSCALL_CALL (__VA_ARGS__); \
-    else \
-      { \
-        int sc_cancel_oldtype = LIBC_CANCEL_ASYNC (); \
-        sc_ret = INTERNAL_SYSCALL_CALL (__VA_ARGS__); \
-        LIBC_CANCEL_RESET (sc_cancel_oldtype); \
-      } \
-    sc_ret; \
-  })
+#define __SYSCALL_CANCEL0(name) \
+  __syscall_cancel (0, 0, 0, 0, 0, 0, __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __SYSCALL_CANCEL1(name, a1) \
+  __syscall_cancel (__SSC (a1), 0, 0, 0, 0, 0, \
+                    __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __SYSCALL_CANCEL2(name, a1, a2) \
+  __syscall_cancel (__SSC (a1), __SSC (a2), 0, 0, 0, 0, \
+                    __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __SYSCALL_CANCEL3(name, a1, a2, a3) \
+  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), 0, 0, 0, \
+                    __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __SYSCALL_CANCEL4(name, a1, a2, a3, a4) \
+  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
+                    __SSC(a4), 0, 0, __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __SYSCALL_CANCEL5(name, a1, a2, a3, a4, a5) \
+  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC(a4), \
+                    __SSC (a5), 0, __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __SYSCALL_CANCEL6(name, a1, a2, a3, a4, a5, a6) \
+  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC (a4), \
+                    __SSC (a5), __SSC (a6), __SYSCALL_CANCEL7_ARG \
+                    __NR_##name)
+#define __SYSCALL_CANCEL7(name, a1, a2, a3, a4, a5, a6, a7) \
+  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC (a4), \
+                    __SSC (a5), __SSC (a6), __SSC (a7), __NR_##name)
+
+#define __SYSCALL_CANCEL_NARGS_X(a,b,c,d,e,f,g,h,n,...) n
+#define __SYSCALL_CANCEL_NARGS(...) \
+  __SYSCALL_CANCEL_NARGS_X (__VA_ARGS__,7,6,5,4,3,2,1,0,)
+#define __SYSCALL_CANCEL_CONCAT_X(a,b) a##b
+#define __SYSCALL_CANCEL_CONCAT(a,b) __SYSCALL_CANCEL_CONCAT_X (a, b)
+#define __SYSCALL_CANCEL_DISP(b,...) \
+  __SYSCALL_CANCEL_CONCAT (b,__SYSCALL_CANCEL_NARGS(__VA_ARGS__))(__VA_ARGS__)
+
+/* Issue a cancellable syscall defined first argument plus any other argument
+   required.  If and error occurs its value, the macro returns -1 and sets
+   errno accordingly.  */
+#define __SYSCALL_CANCEL_CALL(...) \
+  __SYSCALL_CANCEL_DISP (__SYSCALL_CANCEL, __VA_ARGS__)
+
+#define __INTERNAL_SYSCALL_CANCEL0(name) \
+  __internal_syscall_cancel (0, 0, 0, 0, 0, 0, __SYSCALL_CANCEL7_ARG \
+                             __NR_##name)
+#define __INTERNAL_SYSCALL_CANCEL1(name, a1) \
+  __internal_syscall_cancel (__SSC (a1), 0, 0, 0, 0, 0, \
+                             __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __INTERNAL_SYSCALL_CANCEL2(name, a1, a2) \
+  __internal_syscall_cancel (__SSC (a1), __SSC (a2), 0, 0, 0, 0, \
+                             __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __INTERNAL_SYSCALL_CANCEL3(name, a1, a2, a3) \
+  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), 0, \
+                             0, 0, __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __INTERNAL_SYSCALL_CANCEL4(name, a1, a2, a3, a4) \
+  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
+                             __SSC(a4), 0, 0, \
+                             __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __INTERNAL_SYSCALL_CANCEL5(name, a1, a2, a3, a4, a5) \
+  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
+                             __SSC(a4), __SSC (a5), 0, \
+                             __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __INTERNAL_SYSCALL_CANCEL6(name, a1, a2, a3, a4, a5, a6) \
+  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
+                             __SSC (a4), __SSC (a5), __SSC (a6), \
+                             __SYSCALL_CANCEL7_ARG __NR_##name)
+#define __INTERNAL_SYSCALL_CANCEL7(name, a1, a2, a3, a4, a5, a6, a7) \
+  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
+                             __SSC (a4), __SSC (a5), __SSC (a6), \
+                             __SSC (a7), __NR_##name)
+
+/* Issue a cancellable syscall defined by syscall number NAME plus any other
+   argument required.  If an error occurs its value is returned as an negative
+   number unmodified and errno is not set.  */
+#define __INTERNAL_SYSCALL_CANCEL_CALL(...) \
+  __SYSCALL_CANCEL_DISP (__INTERNAL_SYSCALL_CANCEL, __VA_ARGS__)
+
+#if IS_IN (rtld)
+/* The loader does not need to handle thread cancellation, use direct
+   syscall instead.  */
+# define INTERNAL_SYSCALL_CANCEL(...) INTERNAL_SYSCALL_CALL(__VA_ARGS__)
+# define SYSCALL_CANCEL(...) INLINE_SYSCALL_CALL (__VA_ARGS__)
+#else
+# define INTERNAL_SYSCALL_CANCEL(...) \
+  __INTERNAL_SYSCALL_CANCEL_CALL (__VA_ARGS__)
+# define SYSCALL_CANCEL(...) \
+  __SYSCALL_CANCEL_CALL (__VA_ARGS__)
+#endif
+
+#endif /* __ASSEMBLER__ */
 
 /* Machine-dependent sysdep.h files are expected to define the macro
    PSEUDO (function_name, syscall_name) to emit assembly code to define the
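As a usage note, with these macros a cancellation point inside libc
reduces to a single SYSCALL_CANCEL invocation. A sketch of how a read
wrapper is expected to look (modeled on sysdeps/unix/sysv/linux/read.c):

    /* Cancellable read: SYSCALL_CANCEL expands to a __syscall_cancel call
       (or to a plain inline syscall when built for the dynamic loader).  */
    ssize_t
    __libc_read (int fd, void *buf, size_t nbytes)
    {
      return SYSCALL_CANCEL (read, fd, buf, nbytes);
    }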