author | Logikable <seanluchen@google.com> | 2024-06-04 11:19:34 -0700
---|---|---
committer | GitHub <noreply@github.com> | 2024-06-04 11:19:34 -0700
commit | b62b7a42bbee4a3bbf9094808f460fdc9c119bd7 (patch) |
tree | ae7b8de7f273aa9c104993845ff80c95d1d0e160 /compiler-rt |
parent | c1654c38e8b82a075613fd60f19a179b1c7df2a2 (diff) |
[compiler-rt][builtins] Switch libatomic locks to pthread_mutex_t (#94374)
When an uninstrumented libatomic is used with a TSan instrumented
memcpy, TSan may report a data race in circumstances where writes are
arguably safe.
This occurs because __atomic_compare_exchange won't be instrumented in
an uninstrumented libatomic, so TSan doesn't know that the subsequent
memcpy is race-free.
On the other hand, pthread_mutex_(un)lock is intercepted by TSan, so an
uninstrumented libatomic will not report this false positive.
pthread_mutexes may also try a number of different strategies to acquire
the lock, which can bound the amount of time a thread has to wait for a
lock under contention.
While pthread_mutex_lock has a larger overhead (due to the function
call and some dispatching), a dispatch to libatomic already implies a
lack of performance guarantees.
Diffstat (limited to 'compiler-rt')
-rw-r--r-- | compiler-rt/lib/builtins/atomic.c | 17
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/compiler-rt/lib/builtins/atomic.c b/compiler-rt/lib/builtins/atomic.c
index 852bb20..159c364 100644
--- a/compiler-rt/lib/builtins/atomic.c
+++ b/compiler-rt/lib/builtins/atomic.c
@@ -94,19 +94,12 @@
 static Lock locks[SPINLOCK_COUNT]; // initialized to OS_SPINLOCK_INIT which is 0
 #else
 _Static_assert(__atomic_always_lock_free(sizeof(uintptr_t), 0),
                "Implementation assumes lock-free pointer-size cmpxchg");
-typedef _Atomic(uintptr_t) Lock;
+#include <pthread.h>
+typedef pthread_mutex_t Lock;
 /// Unlock a lock. This is a release operation.
-__inline static void unlock(Lock *l) {
-  __c11_atomic_store(l, 0, __ATOMIC_RELEASE);
-}
-/// Locks a lock. In the current implementation, this is potentially
-/// unbounded in the contended case.
-__inline static void lock(Lock *l) {
-  uintptr_t old = 0;
-  while (!__c11_atomic_compare_exchange_weak(l, &old, 1, __ATOMIC_ACQUIRE,
-                                             __ATOMIC_RELAXED))
-    old = 0;
-}
+__inline static void unlock(Lock *l) { pthread_mutex_unlock(l); }
+/// Locks a lock.
+__inline static void lock(Lock *l) { pthread_mutex_lock(l); }
 /// locks for atomic operations
 static Lock locks[SPINLOCK_COUNT];
 #endif