author    Uros Bizjak <ubizjak@gmail.com>  2020-07-20 20:34:46 +0200
committer Uros Bizjak <ubizjak@gmail.com>  2020-07-20 20:37:10 +0200
commit    3c5e83d5b32c31b11cf1684bf5d1ab3e7174685c (patch)
tree      cc7e1025a52c224d67d2c5e9721fabe1a242d6af /gcc/tree-vect-stmts.c
parent    d5803b9876b3d11c93d1a10fabb3fbb1c4a14bd6 (diff)
i386: Use lock prefixed insn instead of MFENCE [PR95750]
Currently, __atomic_thread_fence (seq_cst) on x86 and x86-64 generates the
mfence instruction.  A dummy atomic instruction (a lock-prefixed instruction
or an xchg with a memory operand) would provide the same sequential-consistency
guarantees while being more efficient on most current CPUs.  The mfence
instruction additionally orders non-temporal stores, but this is not relevant
for atomic operations: non-temporal stores are not ordered by seq_cst atomic
operations anyway.

2020-07-20  Uroš Bizjak  <ubizjak@gmail.com>

gcc/ChangeLog:

	PR target/95750
	* config/i386/i386.h (TARGET_AVOID_MFENCE): Rename from
	TARGET_USE_XCHG_FOR_ATOMIC_STORE.
	* config/i386/sync.md (mfence_sse2): Disable for TARGET_AVOID_MFENCE.
	(mfence_nosse): Enable also for TARGET_AVOID_MFENCE.  Emit
	stack-referred memory in word_mode.
	(mem_thread_fence): Do not generate mfence_sse2 pattern when
	TARGET_AVOID_MFENCE is true.
	(atomic_store<mode>): Update for rename.
	* config/i386/x86-tune.def (X86_TUNE_AVOID_MFENCE): Rename from
	X86_TUNE_USE_XCHG_FOR_ATOMIC_STORE.

gcc/testsuite/ChangeLog:

	PR target/95750
	* gcc.target/i386/pr95750.c: New test.
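As an illustration (a minimal sketch, not the actual gcc.target/i386/pr95750.c
test), the two C functions below exercise the affected code paths.  The exact
instructions emitted depend on the selected tuning; the "lock orl $0, (%esp)"
form mentioned in the comments is an assumption based on the mfence_nosse
pattern described above, not output verified against this commit.

    /* Sketch only; compile with e.g. "gcc -O2 -S fence.c" and inspect the
       generated assembly.  */

    void
    seq_cst_fence (void)
    {
      /* Previously this emitted "mfence" on x86/x86-64.  With a tuning
         that sets TARGET_AVOID_MFENCE it may instead emit a dummy
         lock-prefixed instruction on a stack location, e.g.
         "lock orl $0, (%esp)" (word_mode operand on 64-bit targets).  */
      __atomic_thread_fence (__ATOMIC_SEQ_CST);
    }

    void
    seq_cst_store (int *p, int x)
    {
      /* A seq_cst atomic store; on tunings with the renamed
         TARGET_AVOID_MFENCE flag (formerly
         TARGET_USE_XCHG_FOR_ATOMIC_STORE) this is typically implemented
         as an xchg with a memory operand rather than a plain store
         followed by mfence.  */
      __atomic_store_n (p, x, __ATOMIC_SEQ_CST);
    }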
Diffstat (limited to 'gcc/tree-vect-stmts.c')
0 files changed, 0 insertions, 0 deletions