author     Florian Hahn <flo@fhahn.com>  2020-09-09 19:36:41 +0100
committer  Florian Hahn <flo@fhahn.com>  2020-09-09 23:01:58 +0100
commit     9969c317ff0877ed6155043422c70e1d4c028a35 (patch)
tree       9b0262a224a7d2246168ff21fafdf32c80f81e5e /llvm/lib
parent     0a5dc7effb191eff740e0e7ae7bd8e1f6bdb3ad9 (diff)
[DSE,MemorySSA] Handle atomic stores explicitly in isReadClobber.
Atomic stores are modeled as MemoryDefs to reflect the fact that they may not be reordered, depending on their ordering constraints. Atomic stores that are monotonic or weaker do not limit re-ordering, so we do not have to treat them as potential read clobbers.

Note that llvm/test/Transforms/DeadStoreElimination/MSSA/atomic.ll already contains a set of negative test cases.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D87386
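As an illustration of the kind of pattern this affects (a hedged sketch, not taken from the patch or from atomic.ll; the function and value names are made up), the first store to %p below is overwritten by the later one, and the intervening monotonic atomic store to %q does not read memory. With this change it is no longer treated as a potential read clobber, so DSE can drop the first store, assuming the usual aliasing checks succeed:

  define void @overwrite_across_monotonic(i32* %p, i32* %q) {
    ; Dead: overwritten by the store of 2 below. The intervening store is
    ; monotonic, so it is no longer considered a potential read of %p.
    store i32 0, i32* %p
    store atomic i32 1, i32* %q monotonic, align 4
    store i32 2, i32* %p
    ret void
  }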
Diffstat (limited to 'llvm/lib')
-rw-r--r--  llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp | 5 +++++
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp b/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
index 1427bd4..12514be 100644
--- a/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
+++ b/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
@@ -1824,6 +1824,11 @@ struct DSEState {
   // Returns true if \p Use may read from \p DefLoc.
   bool isReadClobber(MemoryLocation DefLoc, Instruction *UseInst) {
+    // Monotonic or weaker atomic stores can be re-ordered and do not need to be
+    // treated as read clobber.
+    if (auto SI = dyn_cast<StoreInst>(UseInst))
+      return isStrongerThan(SI->getOrdering(), AtomicOrdering::Monotonic);
+
     if (!UseInst->mayReadFromMemory())
       return false;
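For orderings stronger than monotonic the check intentionally stays conservative: isStrongerThan(AtomicOrdering::Release, AtomicOrdering::Monotonic) is true, so release and seq_cst stores are still reported as read clobbers. A contrasting sketch under the same assumptions as above (hypothetical names, not from the test suite):

  define void @overwrite_across_release(i32* %p, i32* %q) {
    ; The release store to %q is stronger than monotonic, so isReadClobber
    ; still returns true and the first store to %p is kept.
    store i32 0, i32* %p
    store atomic i32 1, i32* %q release, align 4
    store i32 2, i32* %p
    ret void
  }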