author     Torvald Riegel <torvald@gcc.gnu.org>    2017-02-01 17:21:59 +0000
committer  Torvald Riegel <torvald@gcc.gnu.org>    2017-02-01 17:21:59 +0000
commit     969a32ce9354585f5f2b89df2e025f52eb0e1644 (patch)
tree       ba5dc4787f7d4f9d23224810508207f4fcc188dc /libatomic/glfree.c
parent     55e75c7c6bcfe386d0ecbf4611cff81040af00b3 (diff)
Fix __atomic to not implement atomic loads with CAS.
gcc/
* builtins.c (fold_builtin_atomic_always_lock_free): Make "lock-free"
conditional on existence of a fast atomic load.
* optabs-query.c (can_atomic_load_p): New function.
* optabs-query.h (can_atomic_load_p): Declare it.
* optabs.c (expand_atomic_exchange): Always delegate to libatomic if
no fast atomic load is available for the particular size of access.
(expand_atomic_compare_and_swap): Likewise.
(expand_atomic_load): Likewise.
(expand_atomic_store): Likewise.
(expand_atomic_fetch_op): Likewise.
* testsuite/lib/target-supports.exp
(check_effective_target_sync_int_128): Remove x86 because it provides
no fast 16-byte atomic load.
(check_effective_target_sync_int_128_runtime): Likewise.
libatomic/
* acinclude.m4: Add #define FAST_ATOMIC_LDST_*.
* auto-config.h.in: Regenerate.
* config/x86/host-config.h (FAST_ATOMIC_LDST_16): Define to 0.
(atomic_compare_exchange_n): New.
* glfree.c (EXACT, LARGER): Change condition and add comments.
From-SVN: r245098
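The substance of the fix: implementing an atomic load with a compare-and-swap is not really a load, because the CAS writes to the object even on the read path. That faults when the object lives on a read-only page and forces every concurrent reader to take exclusive ownership of the cache line. Below is a minimal sketch of the affected pattern, assuming a GCC toolchain with libatomic; the type and function names are illustrative only, not from the patch. On x86-64, the 16-byte case is the one this commit stops expanding inline as a cmpxchg16b loop.

#include <stdint.h>

/* A 16-byte type; illustrative only.  */
typedef struct { int64_t lo, hi; } pair;

pair
read_pair (const pair *p)
{
  pair tmp;
  /* After this commit, GCC emits a call into libatomic here (the
     runtime entry point is __atomic_load_16) when the target has no
     fast 16-byte atomic load instruction, instead of inlining a CAS
     loop.  A CAS-based "load" stores to *p: it faults if *p sits in
     read-only memory, and concurrent readers serialize on the line.  */
  __atomic_load (p, &tmp, __ATOMIC_SEQ_CST);
  return tmp;
}

Compile with something like gcc -std=c11 -c read_pair.c and link against -latomic to resolve the libatomic call.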
Diffstat (limited to 'libatomic/glfree.c')
-rw-r--r--   libatomic/glfree.c   21
1 file changed, 18 insertions(+), 3 deletions(-)
diff --git a/libatomic/glfree.c b/libatomic/glfree.c
index b68dec7..59fe533 100644
--- a/libatomic/glfree.c
+++ b/libatomic/glfree.c
@@ -24,26 +24,41 @@
 #include "libatomic_i.h"
 
-
+/* Accesses with a power-of-two size are not lock-free if we don't have an
+   integer type of this size or if they are not naturally aligned.  They
+   are lock-free if such a naturally aligned access is always lock-free
+   according to the compiler, which requires that both atomic loads and CAS
+   are available.
+   In all other cases, we fall through to LARGER (see below).  */
 #define EXACT(N)						\
   do {								\
     if (!C2(HAVE_INT,N)) break;					\
     if ((uintptr_t)ptr & (N - 1)) break;			\
     if (__atomic_always_lock_free(N, 0)) return true;		\
-    if (C2(MAYBE_HAVE_ATOMIC_CAS_,N)) return true;		\
+    if (!C2(MAYBE_HAVE_ATOMIC_CAS_,N)) break;			\
+    if (C2(FAST_ATOMIC_LDST_,N)) return true;			\
   } while (0)
 
+/* We next check to see if an access of a larger size is lock-free.  We use
+   a similar check as in EXACT, except that we also check that the alignment
+   of the access is so that the data to be accessed is completely covered
+   by the larger access.  */
 #define LARGER(N)						\
   do {								\
     uintptr_t r = (uintptr_t)ptr & (N - 1);			\
     if (!C2(HAVE_INT,N)) break;					\
-    if (!C2(HAVE_ATOMIC_LDST_,N)) break;			\
+    if (!C2(FAST_ATOMIC_LDST_,N)) break;			\
     if (!C2(MAYBE_HAVE_ATOMIC_CAS_,N)) break;			\
     if (r + n <= N) return true;				\
   } while (0)
 
+/* Note that this can return that a size/alignment is not lock-free even if
+   all the operations that we use to implement the respective accesses provide
+   lock-free forward progress as specified in C++14: Users likely expect
+   "lock-free" to also mean "fast", which is why we do not return true if, for
+   example, we implement loads with this size/alignment using a CAS.  */
 bool
 libat_is_lock_free (size_t n, void *ptr)
 {
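For illustration, here is a simplified, self-contained sketch of the decision that EXACT and LARGER implement after this patch. The configure-generated predicates (HAVE_INT_N, MAYBE_HAVE_ATOMIC_CAS_N, FAST_ATOMIC_LDST_N) are modeled as plain lookup tables with made-up values, and the __atomic_always_lock_free fast path is omitted; none of the names below come from libatomic itself, only the logic does.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for the configure macros, indexed by access size in bytes;
   the values are illustrative.  fast_ldst[16] = 0 models the new
   FAST_ATOMIC_LDST_16 setting in config/x86/host-config.h.  */
static const bool have_int[17]  = { [1]=1, [2]=1, [4]=1, [8]=1, [16]=1 };
static const bool have_cas[17]  = { [1]=1, [2]=1, [4]=1, [8]=1, [16]=1 };
static const bool fast_ldst[17] = { [1]=1, [2]=1, [4]=1, [8]=1, [16]=0 };

bool
sketch_is_lock_free (size_t n, void *ptr)
{
  uintptr_t a = (uintptr_t) ptr;

  /* EXACT: n is itself a supported power-of-two size, the pointer is
     naturally aligned, and both a CAS and a fast load exist for n.  */
  if (n >= 1 && n <= 16 && (n & (n - 1)) == 0
      && (a & (n - 1)) == 0
      && have_int[n] && have_cas[n] && fast_ldst[n])
    return true;

  /* LARGER: some wider naturally aligned access of size N covers
     [ptr, ptr + n) entirely, i.e. (a % N) + n <= N, and that size has
     both a CAS and a fast load.  */
  for (size_t N = 2; N <= 16; N *= 2)
    if (have_int[N] && have_cas[N] && fast_ldst[N]
        && (a & (N - 1)) + n <= N)
      return true;

  return false;
}

With FAST_ATOMIC_LDST_16 modeled as 0, a naturally aligned 16-byte object now reports not lock-free even though cmpxchg16b supplies a 16-byte CAS. That is the user-visible effect of the x86 host-config.h change, and it matches the closing comment in the diff: "lock-free" in the C++14 progress sense is deliberately narrowed to lock-free and fast.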