path: root/llvm/lib/Analysis/ValueTracking.cpp
Commit history (newest first). Each entry lists the commit date, subject, author, and the number of files and lines changed (-removed/+added).
2018-05-25  Recommit r333226 "[ValueTracking] Teach computeKnownBits that the result of an absolute value pattern that uses nsw flag is always positive."  (Craig Topper, 1 file, -0/+6)
Libfuzzer tests have been fixed to prevent being optimized. Original commit message: If the nsw flag is used in the absolute value then it is undefined for INT_MIN. For all other values it will produce a positive number, so we can assume the result is positive. This breaks some InstCombine abs/nabs combining tests because we simplify the second compare from known bits rather than as the whole pattern. Looks like we can probably fix it by adding a neg+abs/nabs combine to just swap the select operands. Need to check Alive to make sure there are no corner cases. Differential Revision: https://reviews.llvm.org/D47041 llvm-svn: 333300
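For illustration only (not part of the commit), a minimal IR sketch of the nsw absolute-value pattern whose result computeKnownBits can now treat as non-negative; the function name is made up:

  define i32 @abs_nsw(i32 %x) {
    ; abs(%x) built from an nsw subtract; nsw makes INT_MIN undefined,
    ; so every defined result is non-negative (sign bit known zero)
    %neg = sub nsw i32 0, %x
    %cmp = icmp sgt i32 %x, -1
    %abs = select i1 %cmp, i32 %x, i32 %neg
    ret i32 %abs
  }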
2018-05-25  Revert r333226 "[ValueTracking] Teach computeKnownBits that the result of an absolute value pattern that uses nsw flag is always positive."  (Craig Topper, 1 file, -6/+0)
This breaks some libFuzzer tests. http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fuzzer/builds/15589/steps/check-fuzzer/logs/stdio Reverting to investigate. llvm-svn: 333253
2018-05-24  [ValueTracking] Teach computeKnownBits that the result of an absolute value pattern that uses nsw flag is always positive.  (Craig Topper, 1 file, -0/+6)
If the nsw flag is used in the absolute value then it is undefined for INT_MIN. For all other values it will produce a positive number, so we can assume the result is positive. This breaks some InstCombine abs/nabs combining tests because we simplify the second compare from known bits rather than as the whole pattern. Looks like we can probably fix it by adding a neg+abs/nabs combine to just swap the select operands. Need to check Alive to make sure there are no corner cases. Differential Revision: https://reviews.llvm.org/D47041 llvm-svn: 333226
2018-05-23  Fix aliasing of launder.invariant.group  (Piotr Padlewski, 1 file, -5/+29)
Summary: The patch for capture tracking broke bootstrap of clang with -fstrict-vtable-pointers, which resulted in a debugging nightmare. It was fixed in https://reviews.llvm.org/D46900, but as it turned out, there were other parts, like the inliner (computing of noalias metadata), that I found after bootstrapping with assertions enabled. Reviewers: hfinkel, rsmith, chandlerc, amharc, kuhar Subscribers: JDevlieghere, eraman, llvm-commits, hiraditya Differential Revision: https://reviews.llvm.org/D47088 llvm-svn: 333070
2018-05-22  [InstCombine] Remove calloc transformations  (David Bolvansky, 1 file, -29/+1)
Summary: The previous patch did not account for a value being changed between the calloc and the strlen. This needs to be removed from InstCombine and maybe moved to DSE later, after some rework. Reviewers: efriedma Reviewed By: efriedma Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D47218 llvm-svn: 333022
2018-05-22  [InstCombine] Calloc-ed strings optimizations  (David Bolvansky, 1 file, -2/+31)
Summary: Example cases: strlen(calloc(...)) -> 0 Reviewers: efriedma, bkramer Reviewed By: bkramer Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D47059 llvm-svn: 332990
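As an illustrative sketch (assumed, not the commit's test case), the fold in IR form, on the assumption that nothing writes to the buffer between the two calls:

  declare i8* @calloc(i64, i64)
  declare i64 @strlen(i8*)

  define i64 @calloc_strlen() {
    %p = call i8* @calloc(i64 8, i64 1)   ; zero-initialized allocation
    %len = call i64 @strlen(i8* %p)       ; first byte is 0, so this folds to 0
    ret i64 %len
  }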
2018-05-21  [EarlyCSE] Improve EarlyCSE of some absolute value cases.  (Craig Topper, 1 file, -0/+2)
Change matchSelectPattern to return X and -X for ABS/NABS in a well-defined order. Adjust EarlyCSE to account for this. Ensure the SPF result is some kind of min/max and not abs/nabs in one place in InstCombine that made me nervous. Previously we returned the two operands of the compare part of the abs pattern. The RHS is always going to be a 0, 1, or -1 constant. This isn't a very meaningful thing to return for anyone. There's also some freedom in the abs pattern as to what happens when the value is equal to 0. This freedom led to EarlyCSE failing to match when different constants were used in otherwise equivalent operations. By returning the input and its negation in a defined order we can ensure an exact match. This also makes sure both patterns use the exact same subtract instruction for the negation. I believe CSE should eventually make this happen and properly merge the nsw/nuw flags. But I'm not familiar with CSE and what order it does things in, so it seemed like it might be good to really enforce that they were the same. Differential Revision: https://reviews.llvm.org/D47037 llvm-svn: 332865
2018-05-18  Propagate nonnull and dereferenceable through launder  (Piotr Padlewski, 1 file, -1/+4)
Summary: invariant.group.launder should not stop propagation of nonnull and dereferenceable, because e.g. we would not be able to hoist loads speculatively. Reviewers: rsmith, amharc, kuhar, xbolva00, hfinkel Subscribers: hiraditya, llvm-commits Differential Revision: https://reviews.llvm.org/D46972 llvm-svn: 332788
2018-05-10  [InstCombine] Moving overflow computation logic from InstCombine to ValueTracking; NFC  (Omer Paparo Bivas, 1 file, -0/+83)
Differential Revision: https://reviews.llvm.org/D46704 Change-Id: Ifabcbe431a2169743b3cc310f2a34fd706f13f02 llvm-svn: 332026
2018-05-09  [DebugInfo] Add DILabel metadata and intrinsic llvm.dbg.label.  (Shiva Chen, 1 file, -0/+1)
In order to set breakpoints on labels and list source code around labels, we need to collect debug information for labels, i.e., the label name, the function the label belongs to, the line number in the file, and the address where the label is located. In order to keep this information in LLVM IR and to allow the backend to generate debug information correctly, we create a new kind of metadata for labels, DILabel. The format of DILabel is !DILabel(scope: !1, name: "foo", file: !2, line: 3). We want to keep debug information as much as possible even when the code is optimized, so we create a new kind of intrinsic for label metadata to avoid the metadata being eliminated together with its basic block. The intrinsic survives as long as it is not optimized out. The format of the intrinsic is llvm.dbg.label(metadata !1). It has only one argument, the DILabel metadata, and the intrinsic immediately follows the label, so the backend can get the label metadata through the intrinsic's parameter. We also add DIBuilder API for labels to be used by frontends: a frontend can use createLabel() to allocate DILabel objects and insertLabel() to insert the llvm.dbg.label intrinsic into LLVM IR. Differential Revision: https://reviews.llvm.org/D45024 Patch by Hsiangkai Wang. llvm-svn: 331841
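Putting the two formats from the message together, a schematic fragment (not a complete, verifier-clean module; the scope node !1, file node !2, and the enclosing function are assumed to be defined elsewhere):

  bb:                                          ; the source-level label "foo"
    call void @llvm.dbg.label(metadata !10)    ; immediately follows the label
    ...

  declare void @llvm.dbg.label(metadata)

  !10 = !DILabel(scope: !1, name: "foo", file: !2, line: 3)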
2018-05-01  Remove \brief commands from doxygen comments.  (Adrian Prantl, 1 file, -3/+3)
We've been running doxygen with the autobrief option for a couple of years now. This makes the \brief markers in our comments redundant. Since they are a visual distraction and we don't want to encourage more \brief markers in new code either, this patch removes them all. Patch produced by: for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done Differential Revision: https://reviews.llvm.org/D46290 llvm-svn: 331272
2018-04-27  [PatternMatch] Stabilize the matching order of commutative matchers  (Roman Lebedev, 1 file, -5/+2)
Summary: Currently, we 1. match the `LHS` matcher to the `first` operand of the binary operator, 2. and then match the `RHS` matcher to the `second` operand of the binary operator. If that does not match, we swap the `LHS` and `RHS` matchers: 1. match the `RHS` matcher to the `first` operand of the binary operator, 2. and then match the `LHS` matcher to the `second` operand of the binary operator. This works OK, but it complicates writing of commutative matchers, where one would like to match (`m_Value()`) the value on one side and use (`m_Specific()`) it on the other side. This is additionally complicated by the fact that `m_Specific()` stores the `Value *`, not `Value **`, so it won't work at all out of the box. The last problem is trivially solved by adding a new `m_c_Specific()` that stores the `Value **`, not `Value *`. I'm choosing to add a new matcher, not change the existing one, because I guess all the current users are OK with the existing behavior, and this additional pointer indirection may have performance drawbacks. Also, I'm storing a pointer, not a reference, because for some mysterious-to-me reason it did not work with the reference. The first problem appears trivial, too. Currently, we 1. match the `LHS` matcher to the `first` operand of the binary operator, 2. and then match the `RHS` matcher to the `second` operand of the binary operator. If that does not match, we swap the ~~`LHS` and `RHS` matchers~~ **operands**: 1. match the ~~`RHS`~~ **`LHS`** matcher to the ~~`first`~~ **`second`** operand of the binary operator, 2. and then match the ~~`LHS`~~ **`RHS`** matcher to the ~~`second`~~ **`first`** operand of the binary operator. Surprisingly, `$ ninja check-llvm` still passes with this. But I expect the bots will disagree. The motivational unittest is included. I'd like to use this in D45664. Reviewers: spatel, craig.topper, arsenm, RKSimon Reviewed By: craig.topper Subscribers: xbolva00, wdng, llvm-commits Differential Revision: https://reviews.llvm.org/D45828 llvm-svn: 331085
2018-04-15  [InstCombine] Simplify 'add' to 'or' if no common bits are set.  (Roman Lebedev, 1 file, -0/+8)
Summary: In order to get the whole fold as specified in [[ https://bugs.llvm.org/show_bug.cgi?id=6773 | PR6773 ]], let's first handle the simple, straightforward things. Let's start with the `add` -> `or` simplification. The one obvious thing missing here: the constant mask is not handled. I have an idea how to handle it, but it will require some thinking, and is not strictly required here, so I've left that for later. https://rise4fun.com/Alive/Pkmg Reviewers: spatel, craig.topper, eli.friedman, jingyue Reviewed By: spatel Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D45631 llvm-svn: 330101
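A minimal IR sketch (illustrative, not from the commit's tests) where the operands provably share no set bits, so the add can be rewritten as an or:

  define i8 @add_to_or(i8 %x, i8 %y) {
    %hi = and i8 %x, -16       ; only bits 7..4 can be set
    %lo = and i8 %y, 15        ; only bits 3..0 can be set
    %r  = add i8 %hi, %lo      ; no carry is possible, so this is the same as 'or i8 %hi, %lo'
    ret i8 %r
  }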
2018-03-25  [PatternMatch] allow undef elements when matching vector FP +0.0  (Sanjay Patel, 1 file, -1/+1)
This continues the FP constant pattern matching improvements from: https://reviews.llvm.org/rL327627 https://reviews.llvm.org/rL327339 https://reviews.llvm.org/rL327307 Several integer constant matchers also have this ability. I'm separating matching of integer/pointer null from FP positive zero and renaming/commenting to make the functionality clearer. llvm-svn: 328461
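An assumed example (not the commit's test) of a vector +0.0 constant with an undef lane that the matcher can now accept:

  define <2 x i1> @cmp_pos_zero(<2 x float> %x) {
    ; the constant still matches as vector FP +0.0 despite the undef lane
    %c = fcmp ogt <2 x float> %x, <float 0.0, float undef>
    ret <2 x i1> %c
  }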
2018-03-08  [NFC] Factor out a helper function for checking if a block has a potential early implicit exit.  (Philip Reames, 1 file, -0/+9)
llvm-svn: 327065
2018-03-06  [ValueTracking] move helpers for SelectPatterns from InstCombine to ValueTracking  (Sanjay Patel, 1 file, -0/+24)
Most of the folds based on SelectPatternResult belong in InstSimplify rather than InstCombine, so the helper code should be available to other passes/analyses. llvm-svn: 326812
2018-03-02  Fix more spelling mistakes in comments of LLVM Analysis passes  (Vedant Kumar, 1 file, -1/+1)
Patch by Reshabh Sharma! Differential Revision: https://reviews.llvm.org/D43939 llvm-svn: 326601
2018-02-28  Fixed spelling mistake in comments of LLVM Analysis passes  (Vedant Kumar, 1 file, -8/+8)
Patch by Reshabh Sharma! Differential Revision: https://reviews.llvm.org/D43861 llvm-svn: 326352
2018-02-27  [ValueTracking] Teach cannotBeOrderedLessThanZeroImpl to look through ExtractElement.  (Craig Topper, 1 file, -0/+6)
This is similar to what's done in computeKnownBits and computeSignBits. Don't do anything fancy, just collect information valid for any element. Differential Revision: https://reviews.llvm.org/D43789 llvm-svn: 326237
2018-02-26  [ValueTracking] Teach cannotBeOrderedLessThanZeroImpl to handle vector constants.  (Craig Topper, 1 file, -0/+18)
Summary: This allows vector fabs to be removed in more cases. Reviewers: spatel, arsenm, RKSimon Reviewed By: spatel Subscribers: wdng, llvm-commits Differential Revision: https://reviews.llvm.org/D43739 llvm-svn: 326138
2018-02-14  Adding a width of the GEP index to the Data Layout.  (Elena Demikhovsky, 1 file, -7/+18)
Make the width of the GEP index, which is used for address calculation, one of the pointer properties in the Data Layout: p[address space]:size:memory_size:alignment:pref_alignment:index_size_in_bits. The index size parameter is optional; if not specified, it is equal to the pointer size. Until now, the InstCombiner normalized GEPs and extended the index operand to the pointer width. That works fine if you can convert a pointer to an integer for address calculation, and all registered targets do this. But some ISAs have a very restricted instruction set for pointer calculation. During discussions it was decided to retrieve the information for the GEP index from the Data Layout. http://lists.llvm.org/pipermail/llvm-dev/2018-January/120416.html I added an interface to the Data Layout and changed the InstCombiner and some other passes to take the index width into account. This change does not affect any in-tree target. I added tests to cover data layouts with an explicitly specified index size. Differential Revision: https://reviews.llvm.org/D42123 llvm-svn: 325102
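A hypothetical example (not from the commit) of a module whose data layout declares 64-bit pointers with a 32-bit GEP index width, assuming the short pointer spec form p:<size>:<abi>:<pref>:<idx>:

  target datalayout = "e-p:64:64:64:32"

  define i32* @at(i32* %base, i32 %i) {
    ; with a 32-bit index width, the index is no longer widened to i64 here
    %p = getelementptr i32, i32* %base, i32 %i
    ret i32* %p
  }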
2018-02-08  [ValueTracking] don't crash when assumptions conflict (PR36270)  (Sanjay Patel, 1 file, -0/+8)
The last assume in the test says that %B12 is 0. The first assume says that %and1 is less than %B12. Therefore, %and1 is unsigned less than 0...does not compute. That means this line: Known.Zero.setHighBits(RHSKnown.countMinLeadingZeros() + 1); ...tries to set more bits than exist. Differential Revision: https://reviews.llvm.org/D43052 llvm-svn: 324610
2018-02-06  [InstCombine][ValueTracking] Match non-uniform constant power-of-two vectors  (Simon Pilgrim, 1 file, -8/+5)
Generalize existing constant matching to work with non-uniform constant vectors as well. Differential Revision: https://reviews.llvm.org/D42818 llvm-svn: 324369
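An illustrative case (assumed, not the commit's test) of the kind of non-uniform power-of-two vector constant this matching enables folds on:

  define <2 x i32> @urem_pow2(<2 x i32> %x) {
    ; both lanes are (different) powers of two, so the urem can become an 'and' with <i32 3, i32 15>
    %r = urem <2 x i32> %x, <i32 4, i32 16>
    ret <2 x i32> %r
  }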
2018-01-24  [ValueTracking] add recursion depth param to matchSelectPattern  (Sanjay Patel, 1 file, -11/+18)
We're getting bug reports: https://bugs.llvm.org/show_bug.cgi?id=35807 https://bugs.llvm.org/show_bug.cgi?id=35840 https://bugs.llvm.org/show_bug.cgi?id=36045 ...where we blow up the stack in value tracking because other passes are sending in selects that have an operand that is itself the select. We don't currently have a reliable way to avoid analyzing dead code that may take non-standard forms, so bail out when things go too far. This mimics the recursion depth limitations in other parts of value tracking. Unfortunately, this pushes the underlying problems for other passes (jump-threading, simplifycfg, correlated-propagation) into hiding. If someone wants to uncover those again, the first draft of this patch on Phab would do that (it would assert rather than bail out). Differential Revision: https://reviews.llvm.org/D42442 llvm-svn: 323331
2018-01-11  [ValueTracking] recognize min/max-of-min/max with notted ops (PR35875)  (Sanjay Patel, 1 file, -12/+31)
This was originally planned as the fix for: https://bugs.llvm.org/show_bug.cgi?id=35834 ...but simpler transforms handled that case, so I implemented a lesser solution. It turns out we need to handle the case with 'not' ops too because the real code example that we are trying to solve: https://bugs.llvm.org/show_bug.cgi?id=35875 ...has extra uses of the intermediate values, so we can't rely on smaller canonicalizations to get us to the goal. As with rL321672, I've tried to show every possibility in the codegen tests because that's the simplest way to prove we're doing the right thing in the wide variety of permutations of this pattern. We can also show an InstCombine win because we added a fold for this case in: rL321998 / D41603 An Alive proof for one variant of the pattern to show that the InstCombine and codegen results are correct: https://rise4fun.com/Alive/vd1
Name: min3_nots
  %nx = xor i8 %x, -1
  %ny = xor i8 %y, -1
  %nz = xor i8 %z, -1
  %cmpxz = icmp slt i8 %nx, %nz
  %minxz = select i1 %cmpxz, i8 %nx, i8 %nz
  %cmpyz = icmp slt i8 %ny, %nz
  %minyz = select i1 %cmpyz, i8 %ny, i8 %nz
  %cmpyx = icmp slt i8 %y, %x
  %r = select i1 %cmpyx, i8 %minxz, i8 %minyz
=>
  %cmpxyz = icmp slt i8 %minxz, %ny
  %r = select i1 %cmpxyz, i8 %minxz, i8 %ny
Name: min3_nots_alt
  %nx = xor i8 %x, -1
  %ny = xor i8 %y, -1
  %nz = xor i8 %z, -1
  %cmpxz = icmp slt i8 %nx, %nz
  %minxz = select i1 %cmpxz, i8 %nx, i8 %nz
  %cmpyz = icmp slt i8 %ny, %nz
  %minyz = select i1 %cmpyz, i8 %ny, i8 %nz
  %cmpyx = icmp slt i8 %y, %x
  %r = select i1 %cmpyx, i8 %minxz, i8 %minyz
=>
  %xz = icmp sgt i8 %x, %z
  %maxxz = select i1 %xz, i8 %x, i8 %z
  %xyz = icmp sgt i8 %maxxz, %y
  %maxxyz = select i1 %xyz, i8 %maxxz, i8 %y
  %r = xor i8 %maxxyz, -1
llvm-svn: 322283
2018-01-08  [ValueTracking] remove overzealous assert  (Sanjay Patel, 1 file, -1/+1)
The test is derived from a failing fuzz test: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5008 Credit to @rksimon for pointing out the problem. llvm-svn: 322016
2018-01-02  [ValueTracking] recognize min/max of min/max patterns  (Sanjay Patel, 1 file, -0/+79)
This is part of solving PR35717: https://bugs.llvm.org/show_bug.cgi?id=35717 The larger IR optimization is proposed in D41603, but we can show the improvement in ValueTracking using codegen tests because SelectionDAG creates min/max nodes based on ValueTracking. Any target with min/max ops should show wins here. I chose AArch64 vector ops because they're clean and uniform. Some Alive proofs for the tests (can't put more than 2 tests in 1 page currently because the web app says it's too long): https://rise4fun.com/Alive/WRN https://rise4fun.com/Alive/iPm https://rise4fun.com/Alive/HmY https://rise4fun.com/Alive/CNm https://rise4fun.com/Alive/LYf llvm-svn: 321672
2018-01-01  [ValueTracking] Don't assume shift values are in range  (Simon Pilgrim, 1 file, -4/+4)
Reduced (as best I could...) from oss-fuzz #4857 test case llvm-svn: 321634
2017-12-26  [ValueTracking] ignore FP signed-zero when detecting a casted-to-integer fmin/fmax pattern  (Sanjay Patel, 1 file, -8/+18)
This is a preliminary step for the patch discussed in D41136 (and denoted here with the FIXME comment). When we match an FP min/max that is cast to integer, any intermediate difference between +0.0 and -0.0 is muted by the conversion (either fptosi or fptoui) of the result. Thus, we can enable 'nsz' for the purpose of matching fmin/fmax. Note that there's probably room to generalize this more, possibly by fixing the current calls to the weak version of isKnownNonZero() in matchSelectPattern() to the more powerful recursive version. Differential Revision: https://reviews.llvm.org/D41333 llvm-svn: 321456
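A minimal sketch (assumed, not the commit's test) of the shape being matched: a float min whose result is only observed through an integer conversion, so the sign of zero cannot matter:

  define i32 @fmin_as_int(float %a, float %b) {
    %cmp = fcmp olt float %a, %b
    %min = select i1 %cmp, float %a, float %b   ; fmin-like select
    %r   = fptosi float %min to i32             ; +0.0 and -0.0 both convert to 0
    ret i32 %r
  }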
2017-12-15  [InlineCost] Find repeated loads in the callee  (Haicheng Wu, 1 file, -1/+1)
SROA analysis of InlineCost can figure out that some stores can be removed after inlining, and then the repeated loads clobbered by these stores are also free. This patch finds these clobbered loads and adjusts the inline cost accordingly. Differential Revision: https://reviews.llvm.org/D33946 llvm-svn: 320814
2017-12-09  Infer lowest bits of an integer Multiply when the low bits of the operands are known  (Simon Dardis, 1 file, -9/+66)
When the lowest bits of the operands to an integer multiply are known, the low bits of the result are deducible. Code to deduce known-zero bottom bits already existed, but this change improves on that by deducing known-ones. Patch by: Pedro Ferreira Reviewers: craig.topper, sanjoy, efriedma Differential Revision: https://reviews.llvm.org/D34029 llvm-svn: 320269
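A small illustrative sketch (not from the commit): forcing the low bit of both operands lets computeKnownBits conclude that the product's low bit is set, since odd * odd is odd:

  define i8 @mul_known_low(i8 %x, i8 %y) {
    %a = or i8 %x, 1       ; low bit of %a is a known one
    %b = or i8 %y, 1       ; low bit of %b is a known one
    %m = mul i8 %a, %b     ; low bit of %m is now a known one (odd * odd = odd)
    ret i8 %m
  }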
2017-12-09  Hardware-assisted AddressSanitizer (llvm part).  (Evgeniy Stepanov, 1 file, -1/+2)
Summary: This is LLVM instrumentation for the new HWASan tool. It is basically a stripped-down copy of ASan at this point, w/o stack or global support. Instrumentation adds a global constructor + runtime callbacks for every load and store. HWASan comes with its own IR attribute. A brief design document can be found in clang/docs/HardwareAssistedAddressSanitizerDesign.rst (submitted earlier). Reviewers: kcc, pcc, alekseyshl Subscribers: srhines, mehdi_amini, mgorny, javed.absar, eraman, llvm-commits, hiraditya Differential Revision: https://reviews.llvm.org/D40932 llvm-svn: 320217
2017-12-05  [InstCombine] Don't crash on out of bounds shifts  (Igor Laevsky, 1 file, -13/+17)
Differential Revision: https://reviews.llvm.org/D40649 llvm-svn: 319761
2017-12-04  Revert "[ValueTracking] Pass only a single lambda to computeKnownBitsFromShiftOperator by using KnownBits struct instead of separate APInts. NFCI"  (Sam McCall, 1 file, -29/+37)
This reverts commit r319624, which seems to cause a miscompile (breaks the multistage PPC buildbots). llvm-svn: 319652
2017-12-02  [ValueTracking] Pass only a single lambda to computeKnownBitsFromShiftOperator by using KnownBits struct instead of separate APInts. NFCI  (Craig Topper, 1 file, -37/+29)
llvm-svn: 319624
2017-11-13  [ValueTracking] use 'auto' with 'dyn_cast'; NFC  (Sanjay Patel, 1 file, -11/+13)
llvm-svn: 318058
2017-11-13  [ValueTracking] simplify code in CannotBeNegativeZero() with match(); NFCI  (Sanjay Patel, 1 file, -5/+3)
llvm-svn: 318055
2017-11-08  Add an @llvm.sideeffect intrinsic  (Dan Gohman, 1 file, -1/+3)
This patch implements Chandler's idea [0] for supporting languages that require support for infinite loops with side effects, such as Rust, providing part of a solution to bug 965 [1]. Specifically, it adds an `llvm.sideeffect()` intrinsic, which has no actual effect, but which appears to optimization passes to have obscure side effects, such that they don't optimize away loops containing it. It also teaches several optimization passes to ignore this intrinsic, so that it doesn't significantly impact optimization in most cases. As discussed on llvm-dev [2], this patch is the first of two major parts. The second part, to change LLVM's semantics to have defined behavior on infinite loops by default, with a function attribute for opting into potential-undefined-behavior, will be implemented and posted for review in a separate patch. [0] http://lists.llvm.org/pipermail/llvm-dev/2015-July/088103.html [1] https://bugs.llvm.org/show_bug.cgi?id=965 [2] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118632.html Differential Revision: https://reviews.llvm.org/D38336 llvm-svn: 317729
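For illustration, a minimal sketch (assumed, not taken from the patch) of an infinite loop kept alive by the new intrinsic:

  declare void @llvm.sideeffect()

  define void @spin_forever() {
  entry:
    br label %loop
  loop:
    ; appears to have a side effect, so the loop is not deleted,
    ; while passes taught about the intrinsic otherwise ignore it
    call void @llvm.sideeffect()
    br label %loop
  }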
2017-11-08  [ValueTracking] Use APInt::isNullValue/isOneValue which are more efficient for large APInts.  (Craig Topper, 1 file, -3/+6)
llvm-svn: 317712
2017-11-06  [ValueTracking] readonly (const) is a requirement for converting sqrt to llvm.sqrt; nnan is not  (Sanjay Patel, 1 file, -3/+1)
As discussed in D39204, this is effectively a revert of rL265521 which required nnan to vectorize sqrt libcalls based on the old LangRef definition of llvm.sqrt. Now that the definition has been updated so the libcall and intrinsic have the same semantics apart from potentially setting errno, we can remove the nnan requirement. We have the right check to know that errno is not set: if (!ICS.onlyReadsMemory()) ...ahead of the switch. This will solve https://bugs.llvm.org/show_bug.cgi?id=27435 assuming that's being built for a target with -fno-math-errno. Differential Revision: https://reviews.llvm.org/D39642 llvm-svn: 317519
2017-10-27  Improve clamp recognition in ValueTracking.  (Artur Gainullin, 1 file, -12/+26)
Summary: ValueTracking was not recognizing all variations of clamp. Swapping of the true value and false value of the select was added to fix this problem. The first patch was reverted because it caused a miscompile in the NVPTX target. Added corresponding test cases. Reviewers: spatel, majnemer, efriedma, reames Subscribers: llvm-commits, jholewinski Differential Revision: https://reviews.llvm.org/D39240 llvm-svn: 316795
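For illustration (an assumed shape, not the commit's test), one basic form of the clamp pattern; the commit additionally handles variants where the select's true and false values are swapped:

  define i32 @clamp_0_255(i32 %x) {
    %lt = icmp slt i32 %x, 255
    %lo = select i1 %lt, i32 %x, i32 255   ; smin(%x, 255)
    %gt = icmp sgt i32 %lo, 0
    %r  = select i1 %gt, i32 %lo, i32 0    ; smax(smin(%x, 255), 0): clamp to [0, 255]
    ret i32 %r
  }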
2017-10-21  [ValueTracking] Remove unnecessary temporary APInt from computeNumSignBitsVectorConstant.  (Craig Topper, 1 file, -5/+1)
We can just use getNumSignBits instead of inverting negative numbers. llvm-svn: 316266
2017-10-21  [ValueTracking] Simplify the known bits code for constant vectors a little.  (Craig Topper, 1 file, -4/+2)
Neither of these cases really requires a temporary APInt outside the loop. For the ConstantDataSequential case the APInt will never be larger than 64 bits, so it's fine to just call getElementAsAPInt. For ConstantVector we can get the APInt by reference and only make a copy where the inversion is needed. llvm-svn: 316265
2017-10-20  [ValueTracking] Enabling ValueTracking patch by default  (Nikolai Bozhenov, 1 file, -9/+0)
(recommit #2 after checking for timeout issue). The original patch was an improvement to IR ValueTracking on non-negative integers. It had been checked in to trunk (D18777, r284022) but was disabled by default due to performance regressions. The perf impact has since improved, so the patch is now enabled by default. Reviewers: reames, hfinkel Differential Revision: https://reviews.llvm.org/D34101 Patch by: Olga Chupina <olga.chupina@intel.com> llvm-svn: 316208
2017-10-19  Revert r315992 because of a found miscompilation failure  (Nikolai Bozhenov, 1 file, -33/+12)
llvm-svn: 316164
2017-10-18  Fixup patch for revision rL316070.  (Nikolai Bozhenov, 1 file, -1/+2)
Added a check that the type of CmpConst and the source type of the trunc are equal, for correct matching of the case when we can set the widened C constant equal to CmpConst.
  %cond = cmp iN %x, CmpConst
  %tr = trunc iN %x to iK
  %narrowsel = select i1 %cond, iK %t, iK C
Patch by: Gainullin, Artur <artur.gainullin@intel.com> llvm-svn: 316082
2017-10-18  Improve lookThroughCast function.  (Nikolai Bozhenov, 1 file, -1/+41)
Summary: When we have the following case:
  %cond = cmp iN %x, CmpConst
  %tr = trunc iN %x to iK
  %narrowsel = select i1 %cond, iK %t, iK C
we can possibly match only a min/max pattern after looking through the cast, so it is more profitable if the widened C constant is equal to CmpConst. That is why we just set the widened C constant equal to CmpConst; there is a further check in this function that trunc CmpConst == C. Also, a description for the lookThroughCast function was added. Reviewers: spatel Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D38536 Patch by: Artur Gainullin <artur.gainullin@intel.com> llvm-svn: 316070
2017-10-17  Improve clamp recognition in ValueTracking.  (Nikolai Bozhenov, 1 file, -12/+33)
Summary: ValueTracking was not recognizing all variations of clamp. Swapping of the true value and false value of the select was added to fix this problem. This change breaks the canonical form of the cmp inside the matchMinMax function, which is why additional checks for the compare predicates are needed. Added corresponding test cases. Reviewers: spatel Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D38531 Patch by: Artur Gainullin <artur.gainullin@intel.com> llvm-svn: 315992
2017-10-16  [ValueTracking] fix typos, formatting; NFC  (Sanjay Patel, 1 file, -11/+10)
llvm-svn: 315909
2017-10-12  [ValueTracking] return zero when there's conflict in known bits of a shift (PR34838)  (Sanjay Patel, 1 file, -14/+12)
Poison allows us to return a better result than undef. llvm-svn: 315595