path: root/llvm/lib/Analysis
Age    Commit message    (Author, files changed, lines -deleted/+added)
2016-09-12  [LVI] Complete the abstraction of the cache layer [NFCI]  (Philip Reames, 1 file, -72/+94)
Convert the previously introduced is-a relationship between the LVICache and LVIImpl classes into a has-a relationship and hide all the implementation details of the cache from the lazy query layer. The only slightly concerning change here is removing the addition of a queried block into the SeenBlock set in LVIImpl::getBlockValue. As far as I can tell, this was effectively dead code. I think it *used* to be the case that getCachedValueInfo wasn't const and might end up inserting elements in the cache during lookup. That's no longer true and hasn't been for a while. I did fix up the const usage to make that more obvious. llvm-svn: 281272
2016-09-12  [LVI] Sink a couple more cache manipulation routines into the cache itself [NFCI]  (Philip Reames, 1 file, -36/+45)
The only interesting bit here is the refactor of the handle callback, and even that's pretty straightforward. llvm-svn: 281267
2016-09-12  [LVI] Abstract out the actual cache logic [NFCI]  (Philip Reames, 1 file, -89/+97)
Separate the caching logic from the implementation of the lazy analysis. For the moment, the lazy analysis impl has an is-a relationship with the cache; this will change to a has-a relationship shortly. This was done as two steps merely to keep the changes simple and the diff understandable. llvm-svn: 281266
2016-09-11  Add handling of !invariant.load to PropagateMetadata.  (Justin Lebar, 1 file, -6/+6)
Summary: This will let e.g. the load/store vectorizer propagate this metadata appropriately. Reviewers: arsenm Subscribers: tra, jholewinski, hfinkel, mzolotukhin Differential Revision: https://reviews.llvm.org/D23479 llvm-svn: 281153
2016-09-09  Do not widen load for different variable in GVN.  (Dehao Chen, 1 file, -37/+1)
Summary: Widening load in GVN is too early because it will block other optimizations like PRE, LICM. https://llvm.org/bugs/show_bug.cgi?id=29110

The SPECCPU2006 benchmark impact of this patch (Reference: o2_nopatch, (1): o2_patched):

Benchmark                          Base:Reference    (1)
-------------------------------------------------------
spec/2006/fp/C++/444.namd               25.2        -0.08%
spec/2006/fp/C++/447.dealII             45.92       +1.05%
spec/2006/fp/C++/450.soplex             41.7        -0.26%
spec/2006/fp/C++/453.povray             35.65       +1.68%
spec/2006/fp/C/433.milc                 23.79       +0.42%
spec/2006/fp/C/470.lbm                  41.88       -1.12%
spec/2006/fp/C/482.sphinx3              47.94       +1.67%
spec/2006/int/C++/471.omnetpp           22.46       -0.36%
spec/2006/int/C++/473.astar             21.19       +0.24%
spec/2006/int/C++/483.xalancbmk         36.09       -0.11%
spec/2006/int/C/400.perlbench           33.28       +1.35%
spec/2006/int/C/401.bzip2               22.76       -0.04%
spec/2006/int/C/403.gcc                 32.36       +0.12%
spec/2006/int/C/429.mcf                 41.04       -0.41%
spec/2006/int/C/445.gobmk               26.94       +0.04%
spec/2006/int/C/456.hmmer               24.5        -0.20%
spec/2006/int/C/458.sjeng               28          -0.46%
spec/2006/int/C/462.libquantum          55.25       +0.27%
spec/2006/int/C/464.h264ref             45.87       +0.72%
geometric mean                                      +0.23%

For most benchmarks, it's a wash, but we do see stable improvements on some benchmarks, e.g. 447, 453, 482, 400.

Reviewers: davidxl, hfinkel, dberlin, sanjoy, reames
Subscribers: gberry, junbuml
Differential Revision: https://reviews.llvm.org/D24096
llvm-svn: 281074
2016-09-04  [LCG] Clean up and make NDEBUG verify calls more rigorous with make_scope_exit now that we have that utility.  (Chandler Carruth, 1 file, -32/+38)
This makes the code much clearer and more readable by isolating the check. It also makes it easy to go through and make sure all the interesting update routines have a start and end verify so we don't slowly let the graph drift into an invalid state. llvm-svn: 280619
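The pattern is small enough to show in isolation. A minimal sketch, assuming a hypothetical Graph type with a verify() method (the in-tree change applies this to the LazyCallGraph verification routines):

  #include "llvm/ADT/ScopeExit.h"

  // Stand-in type used only for this illustration.
  struct Graph {
    void verify() const {}   // assert internal invariants
    void applyUpdate() {}    // some mutation of the graph
  };

  void updateAndVerify(Graph &G) {
  #ifndef NDEBUG
    G.verify();                                                   // sane on entry
    auto VerifyOnExit = llvm::make_scope_exit([&] { G.verify(); }); // re-checked on every exit path
  #endif
    G.applyUpdate();
  }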
2016-09-04  [LCG] An NFC refactoring to extract the logic for doing a postorder-sequence based update after edge insertion into a generic helper function.  (Chandler Carruth, 1 file, -111/+184)
This separates the SCC-specific logic into two fairly simple lambdas and extracts the rest into a generic helper template function. I think this is a net win on its own merits because it disentangles different pieces of the algorithm. Now there is one place that does the two-step partition to identify a set of newly connected components and at the same time update the postorder sequence. However, I'm also hoping to re-use this in an upcoming patch to update a cached post-order sequence of RefSCCs when doing the analogous update to the RefSCC graph, and I don't want to have two copies. The diff is quite messy but this really is just moving things around and making types generic rather than specific. llvm-svn: 280618
2016-09-02  Simplify code a bit. No functional change intended.  (Andrea Di Biagio, 1 file, -15/+16)
As suggested by Sanjay in review D24142, we don't need to call `GetCompareTy(LHS)' every single time true or false is returned from SimplifyFCmpInst. llvm-svn: 280491
2016-09-02  [instsimplify] Fix incorrect folding of an ordered fcmp with a vector of all NaN.  (Andrea Di Biagio, 1 file, -1/+1)
This patch fixes a crash caused by an incorrect folding of an ordered comparison between a packed floating point vector and a splat vector of NaN. An ordered comparison between a vector and a constant vector of NaN should always be folded into a constant vector where each element is i1 false. Since revision 266175, SimplifyFCmpInst folds the ordered fcmp into a scalar 'false'. Later on, this would cause an assertion failure, since the value type of the folded value doesn't match the expected value type of the uses of the original instruction: "Assertion failed: New->getType() == getType() && "replaceAllUses of value with new value of different type!". This patch fixes the issue and adds a test case to the already existing test InstSimplify/floating-point-compares.ll. Differential Revision: https://reviews.llvm.org/D24143 llvm-svn: 280488
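The crux of the fix is the type of the folded constant: it must be the comparison's result type, which is a vector of i1 for vector operands. A hedged sketch of just that idea (the helper name is made up; this is not the SimplifyFCmpInst source):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/InstrTypes.h"
  using namespace llvm;

  // Fold an always-false comparison to a constant of the *comparison result*
  // type: i1 for scalar operands, <N x i1> for vector operands.
  static Constant *getAllFalseResult(Value *LHS) {
    Type *ResultTy = CmpInst::makeCmpResultType(LHS->getType());
    return Constant::getNullValue(ResultTy); // all-zero == all-false, right shape
  }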
2016-08-31  [LoopInfo] Add verification by recomputation.  (Michael Zolotukhin, 1 file, -3/+6)
Summary: The current implementation of the LI verifier isn't ideal and fails to detect some cases where LI is incorrect. For instance, it checks that all recorded loops are in a correct form, but it has no way to check whether the function contains other loops that are not recorded in LI. This patch adds a way to detect such bugs. Reviewers: chandlerc, sanjoy, hfinkel Subscribers: llvm-commits, silvas, mzolotukhin Differential Revision: https://reviews.llvm.org/D23437 llvm-svn: 280280
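A rough sketch of the verification-by-recomputation idea, assuming LoopInfo::analyze() rebuilding from a DominatorTree; this is an illustration of the approach, not the in-tree verifier:

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/Dominators.h"
  using namespace llvm;

  static void verifyByRecomputation(const LoopInfo &CachedLI,
                                    const DominatorTree &DT) {
    LoopInfo FreshLI;
    FreshLI.analyze(DT); // rebuild the loop structure from scratch
    // A correct cached LoopInfo must describe exactly the loops FreshLI found;
    // a loop present in FreshLI but unrecorded in CachedLI is the kind of bug
    // the old verifier could not detect.
    (void)CachedLI;
  }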
2016-08-31  Fix indent. NFC.  (Chad Rosier, 1 file, -2/+2)
llvm-svn: 280270
2016-08-31  s/static inline/static/ for headers I have changed in r279475. NFC.  (Tim Shen, 1 file, -1/+1)
llvm-svn: 280257
2016-08-31  [Loads] Properly populate the visited set in isDereferenceableAndAlignedPointer  (David Majnemer, 1 file, -2/+5)
There were paths where we wouldn't populate the visited set, causing us to recurse forever if an SSA variable was defined in terms of itself. This fixes PR30210. llvm-svn: 280191
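The fix amounts to marking a value as visited before recursing into the values it is defined from. A generic sketch of that guard (the helper name is hypothetical):

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/Value.h"
  using namespace llvm;

  static bool isDereferenceableImpl(const Value *V,
                                    SmallPtrSetImpl<const Value *> &Visited) {
    // Inserting *before* recursing breaks the cycle when an SSA value is
    // (transitively) defined in terms of itself.
    if (!Visited.insert(V).second)
      return false; // already being examined: answer conservatively
    // ... recurse into the operands of V, passing Visited along ...
    return true;
  }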
2016-08-30  Fixup r279618: instantiate *AnalysisManagerProxy<*AnalysisManager,LazyCallGraph::SCC>, instead of *AnalysisManagerProxy<*AnalysisManager,LazyCallGraph::SCC,LazyCallGraph&>, for PassID.  (NAKAMURA Takumi, 1 file, -2/+2)
Otherwise they were not instantiated as expected; llvm::InnerAnalysisManagerProxy<llvm::AnalysisManager<llvm::Function>, llvm::LazyCallGraph::SCC>::PassID llvm::InnerAnalysisManagerProxy<llvm::AnalysisManager<llvm::Function>, llvm::LazyCallGraph::SCC>::PassID llvm-svn: 280105
2016-08-30  NFC: add early exit in ModuleSummaryAnalysis  (Piotr Padlewski, 1 file, -29/+32)
Summary: Changed this code because it was not very readable. The one question I had after changing it: should we count calls to intrinsics? We don't add them to the caller summary, so maybe we shouldn't count them either? Reviewers: tejohnson, eraman, mehdi_amini Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D23949 llvm-svn: 280036
2016-08-29  Fix a thinko in r278189.  (Easwaran Raman, 1 file, -1/+1)
llvm-svn: 280008
2016-08-28  [Loop Vectorizer] Fixed memory conflict checks.  (Elena Demikhovsky, 1 file, -3/+29)
Fixed a bug in the run-time checks for possible memory conflicts inside a loop. The bug was in the Low <-> High boundary calculation: the High boundary should be calculated as "last memory access pointer + element size". Differential revision: https://reviews.llvm.org/D23176 llvm-svn: 279930
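In other words, each pointer group needs a half-open byte range whose High bound is one past the last byte accessed. A small sketch with hypothetical types (not the LoopAccessAnalysis code):

  #include <cstdint>

  struct AccessBounds { uint64_t Low, High; };

  // High = last accessed address + element size, i.e. one past the end.
  AccessBounds makeBounds(uint64_t FirstAddr, uint64_t LastAddr, uint64_t EltSize) {
    return {FirstAddr, LastAddr + EltSize};
  }

  // Two groups may conflict only if their [Low, High) ranges overlap.
  bool mayConflict(const AccessBounds &A, const AccessBounds &B) {
    return A.Low < B.High && B.Low < A.High;
  }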
2016-08-26  [Inliner] Report when inlining fails because callee's def is unavailable  (Adam Nemet, 1 file, -10/+13)
Summary: This is obviously an interesting case because it may motivate code restructuring or LTO. Reporting this requires instantiation of ORE in the loop where the call sites are first gathered. I've checked compile-time overhead *with* -Rpass-with-hotness, and the worst slow-down was 6% in mcf, quickly tailing off. As before, without -Rpass-with-hotness there is no overhead. Because this could be a pretty noisy diagnostic, it is currently qualified as 'verbose'. As of this patch, 'verbose' diagnostics are only emitted with -Rpass-with-hotness, i.e. when the output is expected to be filtered. Reviewers: eraman, chandlerc, davidxl, hfinkel Subscribers: tejohnson, Prazek, davide, llvm-commits Differential Revision: https://reviews.llvm.org/D23415 llvm-svn: 279860
2016-08-26  limit the number of instructions per block examined by dead store elimination  (Bob Haarman, 1 file, -6/+17)
Summary: Dead store elimination gets very expensive when large numbers of instructions need to be analyzed. This patch limits the number of instructions analyzed per store to the value of the memdep-block-scan-limit parameter (which defaults to 100). This resulted in no observed difference in performance of the generated code, and no change in the statistics for the dead store elimination pass, but improved compilation time on some files by more than an order of magnitude. Reviewers: dexonsmith, bruno, george.burgess.iv, dberlin, reames, davidxl Subscribers: davide, chandlerc, dberlin, davidxl, eraman, tejohnson, mbodart, llvm-commits Differential Revision: https://reviews.llvm.org/D15537 llvm-svn: 279833
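The mechanism is just a per-block scan budget. A hedged sketch of the shape of the check (names are hypothetical; the real knob is the -memdep-block-scan-limit option, default 100):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // Returns true if the scan finished, false if it gave up.
  static bool scanBlockWithBudget(BasicBlock &BB, unsigned BlockScanLimit = 100) {
    unsigned Budget = BlockScanLimit;
    for (Instruction &I : BB) {
      if (Budget-- == 0)
        return false;  // too expensive: treat the dependency as unknown
      (void)I;         // ... analyze I against the store being considered ...
    }
    return true;
  }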
2016-08-25  Update a comment.  (George Burgess IV, 1 file, -3/+2)
r279696, which changed `LLVM_CONSTEXPR AliasAttr` to `const AliasAttr`, made this comment make less sense. llvm-svn: 279699
2016-08-25  Make some LLVM_CONSTEXPR variables const. NFC.  (George Burgess IV, 4 files, -21/+18)
This patch changes LLVM_CONSTEXPR variable declarations to const variable declarations, since LLVM_CONSTEXPR expands to nothing if the current compiler doesn't support constexpr. In all of the changed cases, it looks like the code intended the variable to be const instead of sometimes-constexpr sometimes-not. llvm-svn: 279696
2016-08-25  Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes.  (Eugene Zelenko, 2 files, -11/+34)
Differential revision: https://reviews.llvm.org/D23861 llvm-svn: 279695
2016-08-24  The patch improves ValueTracking on left shift with nsw flag.  (Evgeny Stupachenko, 1 file, -5/+23)
Summary: The patch fixes PR28946. Reviewers: majnemer, sanjoy Differential Revision: http://reviews.llvm.org/D23296 From: Li Huang llvm-svn: 279684
2016-08-24  [PM] Introduce basic update capabilities to the new PM's CGSCC pass manager, including both plumbing and logic to handle function pass updates.  (Chandler Carruth, 2 files, -28/+361)
There are three fundamentally tied changes here:
1) Plumbing *some* mechanism for updating the CGSCC pass manager as the CG changes while passes are running.
2) Changing the CGSCC pass manager infrastructure to have support for the underlying graph to mutate mid-pass run.
3) Actually updating the CG after function passes run.
I can separate them if necessary, but I think it's really useful to have them together as the needs of #3 drove #2, and that in turn drove #1.

The plumbing technique is to extend the "run" method signature with extra arguments. We provide the call graph that intrinsically is available as it is the basis of the pass manager's IR units, and an output parameter that records the results of updating the call graph during an SCC pass's run. Note that "...UpdateResult" isn't a *great* name here... suggestions very welcome. I tried a pretty frustrating number of different data structures and such for the innards of the update result. Every other one failed for one reason or another. Sometimes I just couldn't keep the layers of complexity right in my head. The thing that really worked was to just directly provide access to the underlying structures used to walk the call graph so that their updates could be informed by the *particular* nature of the change to the graph.

The technique for how to make the pass management infrastructure cope with mutating graphs was also something that took a really, really large number of iterations to get to a place where I was happy. Here are some of the considerations that drove the design:

- We operate at three levels within the infrastructure: RefSCC, SCC, and Node. In each case, we are working bottom up and so we want to continue to iterate on the "lowest" node as the graph changes. Look at how we iterate over nodes in an SCC running function passes as those function passes mutate the CG. We continue to iterate on the "lowest" SCC, which is the one that continues to contain the function just processed.

- The call graph structure re-uses SCCs (and RefSCCs) during mutation events for the *highest* entry in the resulting new subgraph, not the lowest. This means that it is necessary to continually update the current SCC or RefSCC as it shifts. This is really surprising and subtle, and took a long time for me to work out. I actually tried changing the call graph to provide the opposite behavior, and it breaks *EVERYTHING*. The graph update algorithms are really deeply tied to this particular pattern.

- When SCCs or RefSCCs are split apart and refined and we continually re-pin our processing to the bottom one in the subgraph, we need to enqueue the newly formed SCCs and RefSCCs for subsequent processing. Queuing them presents a few challenges:
  1) SCCs and RefSCCs use wildly different iteration strategies at a high level. We end up needing to converge them on worklist approaches that can be extended in order to be able to handle the mutations.
  2) The order of the enqueuing needs to remain bottom-up post-order so that we don't get surprising order of visitation for things like the inliner.
  3) We need the worklists to have set semantics so we don't duplicate things endlessly. We don't need a *persistent* set though because we always keep processing the bottom node!!!! This is super, super surprising to me and took a long time to convince myself this is correct, but I'm pretty sure it is... Once we sink down to the bottom node, we can't re-split out the same node in any way, and the postorder of the current queue is fixed and unchanging.
  4) We need to make sure that the "current" SCC or RefSCC actually gets enqueued here such that we re-visit it, because we continue processing a *new*, *bottom* SCC/RefSCC.

- We also need the ability to *skip* SCCs and RefSCCs that get merged into a larger component. We even need the ability to skip *nodes* from an SCC that are no longer part of that SCC.

This led to the design you see in the patch which uses SetVector-based worklists. The RefSCC worklist is always empty until an update occurs and is just used to handle those RefSCCs created by updates, as the others don't even exist yet and are formed on-demand during the bottom-up walk. The SCC worklist is pre-populated from the RefSCC, and we push new SCCs onto it and blacklist existing SCCs on it to get the desired processing. We then *directly* update these when updating the call graph, as I was never able to find a satisfactory abstraction around the update strategy.

Finally, we need to compute the updates for function passes. This is mostly used as an initial customer of all the update mechanisms to drive their design to at least cover some real set of use cases. There are a bunch of interesting things that came out of doing this:

- It is really nice to do this a function at a time because that function is likely hot in the cache. This means we want even the function pass adaptor to support online updates to the call graph!

- To update the call graph after arbitrary function pass mutations is quite hard. We have to build a fairly comprehensive set of data structures and then process them. Fortunately, some of this code is related to the code for building the call graph in the first place. Unfortunately, very little of it makes any sense to share because the nature of what we're doing is so very different. I've factored out the one part that made sense at least.

- We need to transfer these updates into the various structures for the CGSCC pass manager. Once those were more sanely worked out, this became relatively easier. But some of those needs necessitated changes to the LazyCallGraph interface to make it significantly easier to extract the changed SCCs from an update operation.

- We also need to update the CGSCC analysis manager as the shape of the graph changes. When an SCC is merged away we need to clear analyses associated with it from the analysis manager, which we didn't have support for in the analysis manager infrastructure. New SCCs are easy! But then we have the case that the original SCC has its shape changed but remains in the call graph. There we need to *invalidate* the analyses associated with it.

- We also need to invalidate analyses after we *finish* processing an SCC. But the analyses we need to invalidate here are *only those for the newly updated SCC*!!! Because we only continue processing the bottom SCC, if we split SCCs apart the original one gets invalidated once when its shape changes and is not processed further, so its analyses will be correct. It is the bottom SCC which continues being processed and needs to have the "normal" invalidation done based on the preserved analyses set.

All of this is mostly background and context for the changes here. Many thanks to all the reviewers who helped here, especially Sanjoy, who caught several interesting bugs in the graph algorithms, and David, Sean, and others who all helped with feedback.
Differential Revision: http://reviews.llvm.org/D21464 llvm-svn: 279618
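For orientation, the extended "run" signature described above looks roughly like this. The sketch below is based on the description and the in-tree CGSCCPassManager.h; the pass name is made up, and details have shifted over time:

  #include "llvm/Analysis/CGSCCPassManager.h"
  #include "llvm/Analysis/LazyCallGraph.h"
  #include "llvm/IR/PassManager.h"
  using namespace llvm;

  struct ExampleSCCPass : PassInfoMixin<ExampleSCCPass> {
    // The extra LazyCallGraph& and CGSCCUpdateResult& arguments are the
    // "plumbing": the graph the pass runs over, plus an out-parameter that
    // records how that graph was mutated while the pass ran.
    PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
                          LazyCallGraph &CG, CGSCCUpdateResult &UR) {
      return PreservedAnalyses::all();
    }
  };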
2016-08-23  [ValueTracking] Use a function_ref to avoid multiple instantiations  (David Majnemer, 1 file, -5/+5)
No functional change intended, this should just be a code size improvement. llvm-svn: 279563
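The size win comes from replacing a template parameter (one instantiation per distinct lambda type) with llvm::function_ref (a single out-of-line body). A sketch with a made-up helper:

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/IR/User.h"
  using namespace llvm;

  // One definition serves every caller, no matter what callable they pass;
  // a template parameter would stamp out a copy per callable type.
  static bool anyOperandMatches(const User &U,
                                function_ref<bool(const Value *)> Pred) {
    for (const Value *Op : U.operands())
      if (Pred(Op))
        return true;
    return false;
  }

Callers can hand in a lambda, a function pointer, or a functor without triggering a new instantiation of the helper.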
2016-08-23  [InstSimplify] allow icmp with constant folds for splat vectors, part 2  (Sanjay Patel, 1 file, -83/+77)
Completes the m_APInt changes for simplifyICmpWithConstant(). Other commits in this series: https://reviews.llvm.org/rL279492 https://reviews.llvm.org/rL279530 https://reviews.llvm.org/rL279534 https://reviews.llvm.org/rL279538 llvm-svn: 279543
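The reason m_APInt is the right tool here: it binds the underlying APInt whether the operand is a scalar ConstantInt or a constant splat vector, so one code path covers both forms. A small hedged example (the helper is invented for illustration):

  #include "llvm/IR/InstrTypes.h"
  #include "llvm/IR/PatternMatch.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  // Matches "icmp sgt X, SIGNED_MAX" whether the constant is an i32 or a
  // <4 x i32> splat; such a comparison is always false.
  static bool isAlwaysFalseSGT(CmpInst::Predicate Pred, Value *RHS) {
    const APInt *C;
    return Pred == CmpInst::ICMP_SGT && match(RHS, m_APInt(C)) &&
           C->isMaxSignedValue();
  }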
2016-08-23  [InstSimplify] allow icmp with constant folds for splat vectors, part 1  (Sanjay Patel, 1 file, -6/+10)
llvm-svn: 279538
2016-08-22  [InstSimplify] add helper function for SimplifyICmpInst(); NFCI  (Sanjay Patel, 1 file, -133/+143)
And add a FIXME because the helper excludes folds for vectors. It's not clear yet how many of these are actually testable (and therefore necessary?) because later analysis uses computeKnownBits and other methods to catch many of these cases. llvm-svn: 279492
2016-08-22  [GraphTraits] Replace all NodeType usage with NodeRef  (Tim Shen, 2 files, -12/+6)
This should finish the GraphTraits migration. Differential Revision: http://reviews.llvm.org/D23730 llvm-svn: 279475
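After the migration, a GraphTraits specialization is written against a NodeRef typedef instead of NodeType*. An illustrative specialization for a toy node type (not from the tree):

  #include "llvm/ADT/GraphTraits.h"
  #include "llvm/ADT/SmallVector.h"

  struct ToyNode {
    llvm::SmallVector<ToyNode *, 4> Succs;
  };

  namespace llvm {
  template <> struct GraphTraits<ToyNode *> {
    typedef ToyNode *NodeRef; // the migration: NodeType* becomes NodeRef
    typedef SmallVectorImpl<ToyNode *>::iterator ChildIteratorType;

    static NodeRef getEntryNode(ToyNode *N) { return N; }
    static ChildIteratorType child_begin(NodeRef N) { return N->Succs.begin(); }
    static ChildIteratorType child_end(NodeRef N) { return N->Succs.end(); }
  };
  } // end namespace llvm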
2016-08-22  Revert -r278267 [ValueTracking] An improvement to IR ValueTracking on Non-negative Integers  (Artur Pilipenko, 1 file, -37/+1)
This change caused a performance regression on MultiSource/Benchmarks/TSVC/Symbolics-flt/Symbolics-flt from LNT and some other benchmarks. See https://reviews.llvm.org/D18777 for details. llvm-svn: 279433
2016-08-19  [GraphTraits] Make nodes_iterator dereference to NodeType*/NodeRef  (Tim Shen, 1 file, -3/+3)
Currently nodes_iterator may dereference to a NodeType* or a NodeType&. Make them all dereference to NodeType*, which is NodeRef later. Differential Revision: https://reviews.llvm.org/D23704 Differential Revision: https://reviews.llvm.org/D23705 llvm-svn: 279326
2016-08-19  [AliasSetTracker] Degrade AliasSetTracker when may-alias sets get too large.  (Michael Kuperstein, 1 file, -9/+116)
Repeated inserts into AliasSetTracker have quadratic behavior - inserting a pointer into AST is linear, since it requires walking over all "may" alias sets and running an alias check vs. every pointer in the set. We can avoid this by tracking the total number of pointers in "may" sets, and when that number exceeds a threshold, declare the tracker "saturated". This lumps all pointers into a single "may" set that aliases every other pointer. (This is a stop-gap solution until we migrate to MemorySSA) This fixes PR28832. Differential Revision: https://reviews.llvm.org/D23432 llvm-svn: 279274
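The saturation idea in isolation, with hypothetical names and an illustrative threshold (the real cutoff and the merging live inside AliasSetTracker):

  // Once the total number of pointers in "may" alias sets crosses a threshold,
  // stop doing per-pointer alias checks and fold everything into one
  // conservative set, so further inserts are O(1) instead of O(#pointers).
  struct SaturatingMayAliasTracker {
    static const unsigned SaturationThreshold = 250; // illustrative value only
    unsigned TotalMayAliasPointers = 0;
    bool Saturated = false;

    // Call this each time a pointer lands in a "may" alias set.
    void notePointerAdded() {
      if (!Saturated && ++TotalMayAliasPointers > SaturationThreshold) {
        Saturated = true;
        // ...merge every "may" set into a single set that aliases everything...
      }
    }
  };

Once saturated, every query conservatively answers "may alias", trading precision for the O(1) insert the message describes.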
2016-08-19  [PM] Rework the new PM support for building the ModuleSummaryIndex to directly produce the index as the value type result.  (Chandler Carruth, 1 file, -33/+30)
This requires making the index movable, which is straightforward. It greatly simplifies things by allowing us to completely avoid the builder API and the layers of abstraction inherent there. Instead, both pass managers can directly construct these when run by value. They still won't be constructed truly eagerly thanks to the optional in the legacy PM. The code that directly builds the index can also just share a direct function. A notable change here is that the result type of the analysis for the new PM is no longer a reference type. This was really problematic when making changes to how we handle result types to make our interface requirements *much* more strict and precise. But I think this is an overall improvement. Differential Revision: https://reviews.llvm.org/D23701 llvm-svn: 279216
2016-08-18  [Assumptions] Make collecting ephemeral values not quadratic in the number of assume intrinsics.  (Chandler Carruth, 1 file, -23/+38)
The classical way to have a cache-friendly vector style container when we need queue semantics for BFS instead of stack semantics for DFS is to use an ever-growing vector and an index. Erasing from the front requires O(size) work, and unless we expect the worklist to grow *very* large, it's probably cheaper to just grow and race down the list. But that makes it even worse that we're putting the assume intrinsics in this at all. We end up looking at the (by definition empty) use list to see if they're ephemeral (when we've already put them in that set), etc. Instead, directly populate the worklist with the operands when we mark the assume intrinsics as ephemeral. Also, test the visited set *before* putting things into the worklist so we don't accumulate the same value in the list 100s of times. It would be nice to use a set-vector for this, but I think it's useful to test the set earlier to avoid repeatedly querying whether the same instruction is safe to speculate. Hopefully with these changes the number of values pushed onto the worklist is smaller, and we avoid quadratic work by letting it grow as necessary. Differential Revision: https://reviews.llvm.org/D23396 llvm-svn: 279099
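A sketch of the worklist pattern described above: an ever-growing vector plus an index gives BFS order without erase-from-front cost, and testing the visited set before pushing keeps duplicates out. Names are hypothetical; this is not the ephemeral-values code itself:

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/User.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  // Collect everything reachable from the roots (e.g. the operands of the
  // assume intrinsics).
  static void collectReachable(ArrayRef<const Value *> Roots,
                               SmallPtrSetImpl<const Value *> &Visited) {
    SmallVector<const Value *, 16> Worklist(Roots.begin(), Roots.end());
    Visited.insert(Roots.begin(), Roots.end());
    for (unsigned Idx = 0; Idx != Worklist.size(); ++Idx) {
      const auto *U = dyn_cast<User>(Worklist[Idx]);
      if (!U)
        continue;
      for (const Value *Op : U->operands())
        if (Visited.insert(Op).second) // test the set *before* pushing
          Worklist.push_back(Op);
    }
  }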
2016-08-17  SCEV: Don't assert about non-SCEV-able value in isSCEVExprNeverPoison() (PR28932)  (Hans Wennborg, 1 file, -0/+4)
Differential Revision: https://reviews.llvm.org/D23594 llvm-svn: 278999
2016-08-17  Replace a few more "fall through" comments with LLVM_FALLTHROUGH  (Justin Bogner, 5 files, -9/+13)
Follow up to r278902. I had missed "fall through", with a space. llvm-svn: 278970
2016-08-17  [GraphWriter] Change GraphWriter to use NodeRef in GraphTraits  (Tim Shen, 1 file, -0/+1)
Summary: This is part of the "NodeType* -> NodeRef" migration. Notice that since GraphWriter prints object address as identity, I added a static_assert on NodeRef to be a pointer type. Reviewers: dblaikie Subscribers: llvm-commits, MatzeB Differential Revision: https://reviews.llvm.org/D23580 llvm-svn: 278966
2016-08-17  [LoopStrengthReduce] Refactoring and addition of a new target cost function.  (Jonas Paulsson, 1 file, -0/+5)
Refactored so that an LSRUse owns its fixups, as opposed to letting the LSRInstance own them. This makes it easier to rate formulas for LSRUses, since the fixups are available directly. The Offsets vector has been removed since it was no longer necessary. New target hook isFoldableMemAccessOffset(), which is used during formula rating. For SystemZ, this is useful to express that loads and stores with float or vector types with a big/negative offset should be avoided in loops. Without this, LSR will generate a lot of negative offsets that would require extra instructions for loading the address. Updated tests: test/CodeGen/SystemZ/loop-01.ll Reviewed by: Quentin Colombet and Ulrich Weigand. https://reviews.llvm.org/D19152 llvm-svn: 278927
2016-08-17Replace "fallthrough" comments with LLVM_FALLTHROUGHJustin Bogner2-9/+9
This is a mechanical change of comments in switches like fallthrough, fall-through, or fall-thru to use the LLVM_FALLTHROUGH macro instead. llvm-svn: 278902
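The shape of the mechanical change, on a made-up switch:

  #include "llvm/Support/Compiler.h"

  int classify(int Kind) {
    switch (Kind) {
    case 0:
      // was:  // fall-through
      LLVM_FALLTHROUGH;
    case 1:
      return 1;
    default:
      return 0;
    }
  }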
2016-08-17  ObjCARC: Don't increment or dereference end() when scanning args  (Duncan P. N. Exon Smith, 1 file, -33/+37)
When there's only one argument and it doesn't match one of the known functions, return ARCInstKind::CallOrUser rather than falling through to the two argument case. The old behaviour both incremented past and dereferenced end(). llvm-svn: 278881
2016-08-16Revert "Enhance SCEV to compute the trip count for some loops with unknown ↵Reid Kleckner1-77/+4
stride." This reverts commit r278731. It caused http://crbug.com/638314 llvm-svn: 278853
2016-08-16  [InstSimplify] Fold gep (gep V, C), (xor V, -1) to C-1  (David Majnemer, 1 file, -1/+7)
llvm-svn: 278779
2016-08-15Revert "[ValueTracking] Improve ValueTracking on left shift with nsw flag"Sanjoy Das1-13/+4
This reverts commit r278172. It causes PR28946. llvm-svn: 278740
2016-08-15  Enhance SCEV to compute the trip count for some loops with unknown stride.  (David L Kreitzer, 1 file, -4/+77)
Patch by Pankaj Chawla Differential Revision: https://reviews.llvm.org/D22377 llvm-svn: 278731
2016-08-15  [ScopedNoAliasAA] collectMDInDomain should be a free function  (David Majnemer, 1 file, -3/+2)
collectMDInDomain doesn't use any class members, so making it a free function is not a functional change. llvm-svn: 278651
2016-08-15  [ScopedNoAliasAA] Only collect noalias nodes if we have alias.scope nodes  (David Majnemer, 1 file, -2/+4)
No functional change is intended. llvm-svn: 278646
2016-08-15  [ScopedNoAliasAA] Replace !ScopeNodes.size() with ScopeNodes.empty()  (David Majnemer, 1 file, -1/+1)
No functional change is intended. llvm-svn: 278645
2016-08-15Revert "[ScopedNoAliasAA] Remove an unneccesary set"David Majnemer1-13/+20
This reverts commit r278641. I'm not sure why but this has upset the multistage builders... llvm-svn: 278644
2016-08-15  [ScopedNoAliasAA] Remove an unnecessary set  (David Majnemer, 1 file, -20/+13)
We are trying to prove that one group of operands is a subset of another. We did this by populating two Sets and determining that every element within one was inside the other. However, this is unnecessary. We can simply construct a single set and test if each operand is within it. llvm-svn: 278641
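The simplification in isolation (hypothetical helper, not the ScopedNoAliasAA source): build one set from the candidate superset and probe it, instead of materializing both operand lists as sets:

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  static bool isSubsetOf(ArrayRef<const MDNode *> Candidates,
                         ArrayRef<const MDNode *> Superset) {
    SmallPtrSet<const MDNode *, 16> Set(Superset.begin(), Superset.end());
    for (const MDNode *N : Candidates)
      if (!Set.count(N))
        return false; // found an element the other operand list doesn't cover
    return true;
  }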
2016-08-13  Constify ValueTracking. NFC.  (Pete Cooper, 1 file, -99/+125)
Almost all of the methods here only analyse Values, as opposed to mutating them. Mark all of the easy ones as const. llvm-svn: 278585