author     Uros Bizjak <ubizjak@gmail.com>     2025-09-08 14:38:21 +0200
committer  H.J. Lu <hjl.tools@gmail.com>       2025-09-09 07:44:41 -0700
commit     e6b5ad1b1d9f8dcb80b711747f3abffec29408e3 (patch)
tree       16adef09a29606b885e7f08e2a59b7729272e33b
parent     4eef002328ddf70f6d5f4af856f923e701ffe7e3 (diff)
x86: Define atomic_full_barrier using __sync_synchronize
For x86_64 targets, __sync_synchronize emits a full 64-bit-wide
'LOCK ORQ $0x0,(%rsp)' instead of the 32-bit 'LOCK ORL $0x0,(%rsp)'
used by the previous inline-assembly definition.
No functional changes.
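A quick way to observe this outside of glibc is to compile a one-line
user of the builtin with 'gcc -O2 -S' on x86-64 and inspect the
generated assembly.  This is only an illustrative sketch (the file and
function names are made up, not part of the patch); the exact fence
instruction depends on the GCC version and tuning, and some
configurations may emit MFENCE instead:

  /* barrier.c - minimal sketch, not glibc code.  Compile with
     "gcc -O2 -S barrier.c" and look at the body of full_barrier.  */
  void
  full_barrier (void)
  {
    __sync_synchronize ();
  }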
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Cc: Collin Funk <collin.funk1@gmail.com>
Cc: H.J. Lu <hjl.tools@gmail.com>
Cc: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
-rw-r--r--  sysdeps/x86/atomic-machine.h | 8
1 file changed, 2 insertions, 6 deletions
diff --git a/sysdeps/x86/atomic-machine.h b/sysdeps/x86/atomic-machine.h
index d5b2d49..c0c2c34 100644
--- a/sysdeps/x86/atomic-machine.h
+++ b/sysdeps/x86/atomic-machine.h
@@ -26,15 +26,14 @@
 
 #ifdef __x86_64__
 # define __HAVE_64B_ATOMICS 1
-# define SP_REG "rsp"
 #else
 /* Since the Pentium, i386 CPUs have supported 64-bit atomics, but the
    i386 psABI supplement provides only 4-byte alignment for uint64_t
    inside structs, so it is currently not possible to use 64-bit
    atomics on this platform.  */
 # define __HAVE_64B_ATOMICS 0
-# define SP_REG "esp"
 #endif
+
 #define ATOMIC_EXCHANGE_USES_CAS 0
 
 #define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \
@@ -74,10 +73,7 @@
 #define catomic_exchange_and_add(mem, value) \
   __atomic_fetch_add (mem, value, __ATOMIC_ACQUIRE)
 
-/* We don't use mfence because it is supposedly slower due to having to
-   provide stronger guarantees (e.g., regarding self-modifying code).  */
-#define atomic_full_barrier() \
-  __asm __volatile (LOCK_PREFIX "orl $0, (%%" SP_REG ")" ::: "memory")
+#define atomic_full_barrier() __sync_synchronize ()
 
 #define atomic_read_barrier() __asm ("" ::: "memory")
 #define atomic_write_barrier() __asm ("" ::: "memory")
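For completeness, a standalone usage sketch (not glibc source; the
program and variable names are hypothetical, only the macro body is
taken from the hunk above) showing the new definition in caller code:

  #include <stdio.h>

  /* Same body as the new glibc definition above.  */
  #define atomic_full_barrier() __sync_synchronize ()

  static int data;
  static int ready;

  int
  main (void)
  {
    data = 42;
    /* Full barrier: the store to data is globally visible before the
       store to ready, whichever fence instruction the builtin expands
       to.  */
    atomic_full_barrier ();
    ready = 1;
    printf ("data=%d ready=%d\n", data, ready);
    return 0;
  }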