path: root/llvm/lib/Analysis/ValueTracking.cpp
Age | Commit message | Author | Files | Lines
2025-05-07ValueTracking: Handle minimumnum and maximumnum in computeKnownFPClass (#138737)Matt Arsenault1-9/+19
For now use the same treatment as minnum/maxnum, but these should diverge. alive2 seems happy with this, except for some preexisting bugs with weird denormal modes.
2025-05-06ValueTracking: Handle minimumnum/maximumnum in canCreateUndefOrPoison (#138729)Matt Arsenault1-0/+2
2025-04-30[LVI][ValueTracking] Take UB-implying attributes into account in `isSafeToSpeculativelyExecute` (#137604)Yingwei Zheng1-9/+12
Closes https://github.com/llvm/llvm-project/issues/137582. In the original case, LVI uses the edge information in `%entry -> %if.end` to get a more precise result. However, since the call to `smin` has a `noundef` return attribute, immediate UB is triggered after optimization. Currently, `isSafeToSpeculativelyExecuteWithOpcode(%min)` returns true because https://github.com/llvm/llvm-project/commit/6a288c1e32351d4be3b7630841af078fa1c3bb8b only checks whether the function is speculatable. However, that is not enough in this case. This patch takes UB-implying attributes into account if `IgnoreUBImplyingAttrs` is set to false. If it is set to true, the caller is responsible for correctly propagating UB-implying attributes.
2025-04-22Reapply [ValueTracking] Drop ucmp/scmp from getIntrinsicRange() (NFCI)Nikita Popov1-4/+0
Reapply after d51b2785abf77978d9218a7b6fb5b8ec6c770c31, which should fix optimization regressions. After #135642 we have a range attribute on the intrinsic declaration, so we should not need the special handling here.
2025-04-22Revert "[ValueTracking] Drop ucmp/scmp from getIntrinsicRange() (NFCI)"Hans Wennborg1-0/+4
This does seem to cause some functionality to change, see comment on https://github.com/llvm/llvm-project/commit/278c429d11e63bc709ea8c537b23c4e350ce2a07 This reverts commit 278c429d11e63bc709ea8c537b23c4e350ce2a07.
2025-04-22[ValueTracking] Drop ucmp/scmp from getIntrinsicRange() (NFCI)Nikita Popov1-4/+0
After #135642 we have a range attribute on the intrinsic declaration, so we should not need the special handling here.
2025-04-18[ValueTracking] Refactor `isKnownNonEqualFromContext` (#127388)Yingwei Zheng1-44/+61
This patch avoids adding RHS for comparisons with two variable operands (https://github.com/llvm/llvm-project/pull/118493#discussion_r1949397482). Instead, we iterate over related dominating conditions of both V1 and V2 in `isKnownNonEqualFromContext`, as suggested by goldsteinn (https://github.com/llvm/llvm-project/pull/117442#discussion_r1944058002). Compile-time improvement: https://llvm-compile-time-tracker.com/compare.php?from=c6d95c441a29a45782ff72d6cb82839b86fd0e4a&to=88464baedd7b1731281eaa0ce4438122b4d218a7&stat=instructions:u
2025-04-10[ValueTracking] Handle assume(trunc x to i1) in ComputeKnownBits (#118406)Andreas Jonson1-0/+10
proof: https://alive2.llvm.org/ce/z/zAspzb
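As an illustration of the inference this enables, here is a small standalone C++ sketch (not the LLVM implementation): `trunc x to i1` keeps only the lowest bit of x, so an assume of the truncated value means bit 0 of x is known to be one.
```
#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t x = 0; x < 256; ++x) {
    bool trunc_to_i1 = (x & 1u) != 0; // the value of `trunc x to i1`
    if (trunc_to_i1)                  // what the assume guarantees
      assert((x & 1u) == 1u);         // so bit 0 of x is known to be one
  }
  return 0;
}
```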
2025-04-09ValueTracking: Do not look at users of constants for ephemeral values (#134618)Matt Arsenault1-13/+16
2025-04-03Ensure KnownBits passed when calculating from range md has right size (#132985)LU-JOHN1-0/+4
KnownBits passed to computeKnownBitsFromRangeMetadata must have the same bit width as the range metadata bit width. Otherwise the calculated results will be incorrect. --------- Signed-off-by: John Lu <John.Lu@amd.com>
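For intuition, here is a standalone C++ sketch (not the LLVM code) of why the width matters: the known leading bits derived from a range are the common prefix of its bounds, and that prefix is only meaningful at the bounds' own bit width.
```
#include <cassert>
#include <cstdint>

// Known leading bits of all values in the inclusive range [lo, hi] at `width`
// bits: every bit above the highest bit on which the two bounds differ.
static uint64_t knownLeadingMask(uint64_t lo, uint64_t hi, unsigned width) {
  uint64_t diff = lo ^ hi;
  unsigned varying = 0;
  while (varying < 64 && (diff >> varying) != 0)
    ++varying;                                        // low bits that may vary
  uint64_t mask = varying >= 64 ? 0 : ~0ULL << varying;
  if (width < 64)
    mask &= (1ULL << width) - 1;
  return mask;
}

int main() {
  // Range [0x40, 0x4F] at 8 bits: the top four bits are known (0100).
  assert(knownLeadingMask(0x40, 0x4F, 8) == 0xF0);
  // The same bounds interpreted at 16 bits also "know" eight more zero bits,
  // so computing at the wrong width yields a wrong-sized, different answer.
  assert(knownLeadingMask(0x40, 0x4F, 16) == 0xFFF0);
  return 0;
}
```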
2025-03-28[Analysis][NFC] Extract KnownFPClass (#133457)Tim Gymnich1-95/+136
- extract KnownFPClass for future use inside of GISelKnownBits --------- Co-authored-by: Matt Arsenault <arsenm2@gmail.com>
2025-03-21[llvm:ir] Add support for constant data exceeding 4GiB (#126481)pzzp1-1/+1
The test file is over 4GiB, which is too big, so I didn’t submit it.
2025-03-09[ValueTracking] Bail out on x86_fp80 when computing fpclass with knownbits (#130477)Yingwei Zheng1-1/+2
In https://github.com/llvm/llvm-project/pull/97762, we assume that if the minimum possible value of X is NaN, then X is NaN. But this doesn't hold for the x86_fp80 format. If the knownbits of X are `?'011111111111110'????????????????????????????????????????????????????????????????`, the minimum possible value of X is NaN/unnormal. However, it can be a normal value. Closes https://github.com/llvm/llvm-project/issues/130408.
2025-03-07[ValueTracking] Skip incoming values that are the same as the phi in `isGuaranteedNotToBeUndefOrPoison` (#130111)DianQK1-0/+2
Fixes (keep it open) #130110. If the incoming value is the PHI itself, we can skip it. If we can guarantee that the other incoming values are neither undef nor poison, then we can also guarantee that the value isn't either. If we cannot guarantee that, there is no point in computing it.
2025-03-06[ValueTracking] ComputeNumSignBitsImpl - add basic handling of BITCAST nodes (#127218)Narayan1-0/+25
When a wider scalar/vector type containing all sign bits is bitcast to a narrower vector type, we can deduce that the resulting narrow elements will also be all sign bits. This matches existing behavior in SelectionDAG and helps optimize cases involving SSE intrinsics where sign-extended values are bitcast between different vector types. The current implementation fails to recognize that an arithmetic right shift is redundant when applied to elements that are already known to be all sign bits. This PR improves ComputeNumSignBitsImpl to track this information through bitcasts, enabling the optimization of such cases.
```
%ext = sext <1 x i1> %cmp to <1 x i8>
%sub = bitcast <1 x i8> %ext to <4 x i2>
%sra = ashr <4 x i2> %sub, <i2 1, i2 1, i2 1, i2 1>
; Can be simplified to just:
%sub = bitcast <1 x i8> %ext to <4 x i2>
```
Closes #87624
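The property relied on above can be checked with a small standalone C++ sketch (not the LLVM implementation): an i8 value made entirely of sign bits, reinterpreted as four i2 lanes, yields lanes that are also all sign bits, so an `ashr` on them is a no-op.
```
#include <cassert>
#include <cstdint>

// Extract 2-bit lane `i` of `v` as a sign-extended value in [-2, 1].
static int lane2(uint8_t v, unsigned i) {
  int bits = (v >> (2 * i)) & 0x3;
  return bits >= 2 ? bits - 4 : bits; // sign-extend the 2-bit lane
}

int main() {
  for (int wide : {0x00, 0xFF}) {       // the only i8 values that are all sign bits
    for (unsigned i = 0; i < 4; ++i) {
      int l = lane2(static_cast<uint8_t>(wide), i);
      assert(l == 0 || l == -1);        // each i2 lane is all sign bits too
      assert((l >> 1) == l);            // so an arithmetic shift right is a no-op
    }
  }
  return 0;
}
```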
2025-02-18[RISCV] Move the RISCVII namespaced enums into RISCVVType namespace in RISCVTargetParser.h. NFC (#127585)Craig Topper1-1/+1
The VLMUL and policy enums originally lived in RISCVBaseInfo.h in the backend which is where everything else in the RISCVII namespace is defined. RISCVTargetParser.h is used by much more of the compiler and it doesn't really make sense to have 2 different namespaces exposed. These enums are both associated with VTYPE so using the RISCVVType namespace seems like a good home for them.
2025-02-17[Analysis] Remove getGuaranteedNonPoisonOps (#127461)Kazu Hirata1-8/+0
The last use was removed in: commit 0517772b4ac20c5d3a0de0d4703354a179833248 Author: Philip Reames <preames@rivosinc.com> Date: Thu Dec 19 14:14:11 2024 -0800
2025-02-17[Analysis] Remove getGuaranteedWellDefinedOps (#127453)Kazu Hirata1-8/+0
The last use was removed in: commit ac9e67756e0157793d565c2cceaf82e4403f58ba Author: Yingwei Zheng <dtcxzyw2333@gmail.com> Date: Mon Feb 26 01:53:16 2024 +0800
2025-02-12[ValueTracking] Infer NonEqual from dominating conditions/assumptions (#117442)Yingwei Zheng1-2/+47
This patch adds context-sensitive analysis support for `isKnownNonEqual`. It is required for https://github.com/llvm/llvm-project/issues/117436.
2025-02-11[ValueTracking] Handle trunc to i1 as condition in dominating condition. (#126414)Andreas Jonson1-1/+23
proof: https://alive2.llvm.org/ce/z/gALGmv
2025-02-10[ValueTracking] Handle not in dominating condition. (#126423)Andreas Jonson1-0/+11
General handling of not in dominating condition. proof: https://alive2.llvm.org/ce/z/FjJN8q
2025-02-07ValueTracking: modernize isKnownInversion (NFC) (#126234)Ramkumar Ramachandra1-3/+2
2025-02-05[NFC][ValueTracking] Hoist the matching of RHS constant (#125818)Yingwei Zheng1-33/+33
2025-02-05[ValueTracking] Remove unused `V ^ Mask == C` from `computeKnownBitsFromCmp`. NFCI. (#125666)Yingwei Zheng1-8/+2
I believe it is unused since we always convert it into `V == Mask ^ C`. Code coverage: https://dtcxzyw.github.io/llvm-opt-benchmark/coverage/data/zyw/opt-ci/actions-runner/_work/llvm-opt-benchmark/llvm-opt-benchmark/llvm/llvm-project/llvm/lib/Analysis/ValueTracking.cpp.html#L706
2025-02-04[ValueTracking] Fix bit width handling in computeKnownBits() for GEPs (#125532)Nikita Popov1-30/+36
For GEPs, we have three bit widths involved: the pointer bit width, the index bit width, and the bit width of the GEP operands. The correct behavior here is:
* We need to sextOrTrunc the GEP operand to the index width *before* multiplying by the scale.
* If the index width and pointer width differ, GEP only ever modifies the low bits. Adds should not overflow into the high bits.
I'm testing this via unit tests because it's a bit tricky to test in IR with InstCombine canonicalization getting in the way.
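A minimal standalone C++ sketch of the arithmetic described above (not the LLVM code; the 64-bit pointer, 32-bit index and 16-bit operand widths are illustrative assumptions):
```
#include <cassert>
#include <cstdint>

// Compute a GEP-style offset: the operand is sext'd/trunc'd to the 32-bit
// index width *before* scaling, and scaling wraps at the index width.
static uint64_t gepOffset(int16_t operand, uint64_t scale) {
  int32_t idx = operand;                         // sextOrTrunc to index width
  uint32_t scaled = static_cast<uint32_t>(idx) * static_cast<uint32_t>(scale);
  return scaled;                                 // zero-extended into the pointer
}

int main() {
  uint64_t base = 0xAAAA0000'00001000ULL;
  uint64_t off = gepOffset(/*operand=*/-4, /*scale=*/8);
  // Only the low 32 bits of the address change; the add must not overflow
  // into the high bits of the pointer.
  uint64_t addr = (base & 0xFFFFFFFF'00000000ULL) |
                  ((static_cast<uint32_t>(base) + off) & 0xFFFFFFFFULL);
  assert((addr >> 32) == (base >> 32));
  return 0;
}
```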
2025-02-03[ValueTracking] Handle `trunc nuw` in `computeKnownBitsFromICmpCond` (#125414)Yingwei Zheng1-1/+4
This patch extends https://github.com/llvm/llvm-project/pull/82803 to further infer high bits when `nuw` is set. It will save some `and` instructions on induction variables. No real-world benefit is observed for `trunc nsw`. Alive2: https://alive2.llvm.org/ce/z/j-YFvt
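A standalone C++ sketch of the inference (not the LLVM code; the i8/i32 widths and the constant are illustrative): when the truncation is known not to drop set bits, an equality on the truncated value pins down the whole value.
```
#include <cassert>
#include <cstdint>

int main() {
  const uint8_t C = 0x2A;                 // the narrow constant being compared
  for (uint32_t x = 0; x < (1u << 16); ++x) {
    bool nuw = (x >> 8) == 0;             // truncation would drop no set bits
    bool eq = static_cast<uint8_t>(x) == C;
    if (nuw && eq)
      assert(x == C);                     // the full wide value is known
  }
  return 0;
}
```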
2025-01-31[Analysis] Fix a warningKazu Hirata1-18/+0
This patch fixes: llvm/lib/Analysis/ValueTracking.cpp:116:27: error: unused function 'safeCxtI' [-Werror,-Wunused-function]
2025-02-01[ValueTracking] Use `SimplifyQuery` in `isKnownNonEqual` (#124942)Yingwei Zheng1-6/+2
It is needed by https://github.com/llvm/llvm-project/pull/117442.
2025-01-29[ValueTracking] Handle nonnull attributes at callsite (#124908)Yingwei Zheng1-17/+19
Alive2: https://alive2.llvm.org/ce/z/yJfskv Closes https://github.com/llvm/llvm-project/issues/124540.
2025-01-28[ValueTracking] Fix bug of using wrong condition for deducing KnownBits (#124481)goldsteinn1-6/+13
- **[ValueTracking] Add test for issue 124275**
- **[ValueTracking] Fix bug of using wrong condition for deducing KnownBits**
Fixes https://github.com/llvm/llvm-project/issues/124275. The bug was introduced by https://github.com/llvm/llvm-project/pull/114689. Now that computeKnownBits supports breaking out of recursive Phi nodes, `IncValue` can be an operand of a different Phi than `P`. This breaks the previous assumption we made when using the condition at `CxtI` to constrain `IncValue`.
2025-01-24[ValueTracking] Pass changed predicate `SignedLPred` to `isImpliedByMatchingCmp` (#124271)DianQK1-2/+2
Fixes #124267. Since we are using the new predicate, we should also update the parameters of `isImpliedByMatchingCmp`.
2025-01-24[NFC][DebugInfo] Use iterator-flavour getFirstNonPHI at many call-sites (#123737)Jeremy Morse1-1/+1
As part of the "RemoveDIs" project, BasicBlock::iterator now carries a debug-info bit that's needed when getFirstNonPHI and similar feed into instruction insertion positions. Call-sites where that's necessary were updated a year ago; however, to ensure some type safety, we'd like to have all calls to getFirstNonPHI use the iterator-returning version. This patch changes a bunch of call-sites calling getFirstNonPHI to use getFirstNonPHIIt, which returns an iterator. All these call sites are where it's obviously safe to fetch the iterator, then dereference it. A follow-up patch will contain less-obviously-safe changes. We'll eventually deprecate and remove the instruction-pointer getFirstNonPHI, but not before adding concise documentation of what considerations are needed (very few). --------- Co-authored-by: Stephen Tozer <Melamoto@gmail.com>
2025-01-22[ValueTracking] Handle recursive select/PHI in ComputeKnownBits (#114689)goldsteinn1-33/+40
Finish porting #114008 to `KnownBits` (Follow up to #113707).
2025-01-16[ValueTracking] Return `poison` for zero-sized types (#122647)Pedro Lobo1-2/+2
Return `poison` for zero-sized types in `isBytewiseValue`.
2025-01-15[ValueTracking] Provide getUnderlyingObjectAggressive fallback (#123019)Heejin Ahn1-1/+1
This callsite assumes `getUnderlyingObjectAggressive` returns a non-null pointer: https://github.com/llvm/llvm-project/blob/273a94b3d5a78cd9122c7b3bbb5d5a87147735d2/llvm/lib/Transforms/IPO/FunctionAttrs.cpp#L124 But it can return null when there are cycles in the value chain, so there are no more `Worklist` items to explore, in which case it just returns `Object` at the end of the function without ever setting it: https://github.com/llvm/llvm-project/blob/9b5857a68381652dbea2a0c9efa734b6c4cf38c9/llvm/lib/Analysis/ValueTracking.cpp#L6866-L6867 https://github.com/llvm/llvm-project/blob/9b5857a68381652dbea2a0c9efa734b6c4cf38c9/llvm/lib/Analysis/ValueTracking.cpp#L6889 `getUnderlyingObject` does not seem to return null either, judging by its code and its callsites, so I think it is not likely to be the author's intention that `getUnderlyingObjectAggressive` returns null. So this checks whether `Object` is null at the end, and if so, falls back to the original first value. --- The test case here was reduced by bugpoint and further reduced manually, but I find it hard to reduce it further. To trigger this bug, the memory operation should not be reachable from the entry BB, because the `phi`s should form a cycle without introducing another value from the entry. I tried a minimal `phi` cycle with three BBs (entry BB + two BBs in a cycle), but it was skipped here: https://github.com/llvm/llvm-project/blob/273a94b3d5a78cd9122c7b3bbb5d5a87147735d2/llvm/lib/Transforms/IPO/FunctionAttrs.cpp#L121-L122 To get a result that's not `ModRefInfo::NoModRef`, the length of the `phi` chain needed to be greater than the `MaxLookup` value set in this function: https://github.com/llvm/llvm-project/blob/02403f4e450b86d93197dd34045ff40a34b21494/llvm/lib/Analysis/BasicAliasAnalysis.cpp#L744 But just lengthening the `phi` chain to 8 didn't trigger the same error in `getUnderlyingObjectAggressive` because `getUnderlyingObject` here passes through single-chain `phi`s, so not all `phi`s end up in `Visited`: https://github.com/llvm/llvm-project/blob/9b5857a68381652dbea2a0c9efa734b6c4cf38c9/llvm/lib/Analysis/ValueTracking.cpp#L6863 So I just submit here the smallest test case I managed to create. --- Fixes #117308 and fixes #122166.
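A toy standalone C++ sketch of the fallback pattern described above (not the actual `getUnderlyingObjectAggressive`; the `Value` graph here is a stand-in): a worklist walk over a phi cycle can finish without ever choosing an object, and the fix is to return the starting value instead of null.
```
#include <cassert>
#include <set>
#include <vector>

struct Value {
  std::vector<Value *> Incoming; // phi-like: empty means "this is an object"
};

// Walks through phi-like nodes looking for a non-phi underlying object.
// Returns the starting value if the walk only ever revisits phis (a cycle).
static Value *getUnderlyingAggressive(Value *V) {
  std::set<Value *> Visited;
  std::vector<Value *> Worklist{V};
  Value *Object = nullptr;
  while (!Worklist.empty()) {
    Value *Cur = Worklist.back();
    Worklist.pop_back();
    if (!Visited.insert(Cur).second)
      continue;
    if (Cur->Incoming.empty()) {  // found a real underlying object
      if (Object && Object != Cur)
        return V;                 // more than one candidate: give up
      Object = Cur;
      continue;
    }
    for (Value *In : Cur->Incoming)
      Worklist.push_back(In);
  }
  return Object ? Object : V;     // the null fallback this patch adds
}

int main() {
  Value A, B;                     // two phis forming a cycle
  A.Incoming = {&B};
  B.Incoming = {&A};
  assert(getUnderlyingAggressive(&A) == &A); // fallback instead of null
  return 0;
}
```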
2025-01-14[ValueTracking] Squash compile-time regression from 66badf2 (#122700)Ramkumar Ramachandra1-4/+5
66badf2 (VT: teach a special-case optz about samesign) introduced a compile-time regression due to the use of CmpPredicate::getMatching, which is unnecessarily inefficient. Introduce CmpPredicate::getPreferredSignedPredicate, which alleviates the inefficiency problem and squashes the compile-time regression.
2025-01-13IR: introduce ICmpInst::isImpliedByMatchingCmp (#122597)Ramkumar Ramachandra1-16/+3
Create an abstraction over isImplied{True,False}ByMatchingCmp to faithfully communicate the result of both functions, cleaning up code in callsites. While at it, fix a bug in the implied-false version of the function, which was inadvertently dropping samesign information.
2025-01-12VT: teach a special-case optz about samesign (#122590)Ramkumar Ramachandra1-2/+4
There is a narrow special-case in isImpliedCondICmps that can benefit from being taught about samesign. Since it costs us nothing to implement it, teach it about samesign, for completeness. This patch marks the completion of the effort to teach ValueTracking about samesign.
2025-01-11[ValueTracking] Take into account whether zero is poison when computing CR for `ct{t,l}z` (#122548)goldsteinn1-4/+11
2025-01-11VT: teach isImpliedCondMatchingOperands about samesign (#122474)Ramkumar Ramachandra1-5/+4
Move isImplied{True,False}ByMatchingCmp from CmpInst to ICmpInst, so that it can operate on CmpPredicate instead of CmpInst::Predicate, and teach it about samesign. There are two callers of this function, and we choose to migrate the one in ValueTracking, namely isImpliedCondMatchingOperands to CmpPredicate, hence teaching it about samesign, with visible test impact.
2025-01-10[ValueTracking] Add rotate idiom to haveNoCommonBitsSet special cases (#122165)Alex MacLean1-0/+13
An occasional idiom for rotation is "(A << B) + (A >> (BitWidth - B))". Currently this is not well handled on targets with native funnel-shift/rotate support. Add a special case to haveNoCommonBitsSet to ensure that the addition is converted to a disjoint `or` in InstCombine, so that during instruction selection the idiom can be converted to an efficient rotation implementation. Proof: https://alive2.llvm.org/ce/z/WdCZsN
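The disjointness fact behind the idiom is easy to check with a standalone C++ sketch (not the LLVM code): for shift amounts in [1, BitWidth-1] the two halves share no bits, so the add is a disjoint `or`, i.e. a rotate.
```
#include <cassert>
#include <cstdint>

int main() {
  uint32_t a = 0xDEADBEEFu;
  for (uint32_t b = 1; b < 32; ++b) {
    uint32_t lo = a << b;           // occupies bits [b, 31]
    uint32_t hi = a >> (32 - b);    // occupies bits [0, b-1]
    assert((lo & hi) == 0);         // no common bits set
    assert(lo + hi == (lo | hi));   // add == disjoint or == rotl(a, b)
  }
  return 0;
}
```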
2025-01-10VT: teach implied-cond-cr about samesign (#122447)Ramkumar Ramachandra1-10/+25
Teach isImpliedCondCommonOperandWithCR about samesign, noting that the only case we need to handle is when exactly one of the icmps have samesign.
2025-01-10VT: teach isImpliedCondOperands about samesign (#120263)Ramkumar Ramachandra1-13/+11
isImpliedCondICmps() and its callers in ValueTracking can greatly benefit from being taught about samesign. As a first step, teach one caller, namely isImpliedCondOperands(). Very minimal changes are required for this, as CmpPredicate::getMatching() does most of the work.
2025-01-08[ValueTracking] Move `getFlippedStrictnessPredicateAndConstant` into ValueTracking. NFC. (#122064)Yingwei Zheng1-0/+74
Needed by https://github.com/llvm/llvm-project/pull/121958.
2024-12-28[ValueTracking] Fix a bug for signed min-max clamping (#121206)adam-bzowski1-1/+2
Correctly handle the case where the clamp is over the full range. This fixes an issue introduced in #120576.
2024-12-25[ValueTracking] Improve KnownBits for signed min-max clamping (#120576)adam-bzowski1-49/+59
A signed min-max clamp is the sequence of smin and smax intrinsics, which constrain a signed value into the range: smin <= value <= smax. The patch improves the calculation of KnownBits for a value subjected to the signed clamping.
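A standalone C++ sketch of the property being exploited (not the LLVM code; the bounds are illustrative): after clamping, every leading bit on which the two bounds agree is known, regardless of the input.
```
#include <algorithm>
#include <cassert>
#include <cstdint>

int main() {
  const int32_t Lo = 0x50, Hi = 0x5F;   // bounds sharing their leading bits
  for (int64_t x = -1000; x <= 1000; ++x) {
    int32_t clamped = std::max(std::min(static_cast<int32_t>(x), Hi), Lo);
    // Bits above the lowest four are identical in Lo and Hi, hence known
    // in the clamped result no matter what x was.
    assert((clamped & ~0xF) == (Lo & ~0xF));
  }
  return 0;
}
```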
2024-12-18[InstCombine] Widen Sel width after Cmp to generate Max/Min intrinsics. (#118932)tianleliu1-33/+60
When Sel and Cmp are in different integer types, transform from (K and N denote widths, K < N; a and b are the source operands):
  bN = Ext(bK)
  cond = Cmp(aN, bN)
  aK = Trunc aN
  retK = Sel(cond, aK, bK)
to:
  bN = Ext(bK)
  cond = Cmp(aN, bN)
  retN = Sel(cond, aN, bN)
  retK = Trunc retN
Though Sel's operand width becomes larger, the benefit of making the type width in Sel the same as Cmp is combining to max/min intrinsics, and also better performance for SIMD instructions. References of correctness: https://alive2.llvm.org/ce/z/Y4Kegm https://alive2.llvm.org/ce/z/qFtjtR Reference of generated code comparison: https://gcc.godbolt.org/z/o97svGvYM https://gcc.godbolt.org/z/59Ynj91ov
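A standalone C++ sketch (not the InstCombine code; the i8/i32 widths are illustrative) checking the equivalence the transform relies on: selecting at the wide width and truncating afterwards gives the same narrow result as selecting at the narrow width.
```
#include <cassert>
#include <cstdint>

int main() {
  for (int a = -512; a < 512; ++a) {        // aN, the wide (32-bit) operand
    for (int bk = -128; bk < 128; ++bk) {   // bK, the narrow 8-bit operand
      int bn = bk;                          // bN = sext(bK)
      bool cond = a < bn;                   // cond = Cmp(aN, bN)
      int8_t narrowSel =                    // original: Sel at width K
          cond ? static_cast<int8_t>(a) : static_cast<int8_t>(bk);
      int8_t truncWideSel =                 // transformed: Trunc(Sel at N)
          static_cast<int8_t>(cond ? a : bn);
      assert(narrowSel == truncWideSel);
    }
  }
  return 0;
}
```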
2024-12-13PatternMatch: migrate to CmpPredicate (#118534)Ramkumar Ramachandra1-11/+11
With the introduction of CmpPredicate in 51a895a (IR: introduce struct with CmpInst::Predicate and samesign), PatternMatch is one of the first key pieces of infrastructure that must be updated to match a CmpInst respecting samesign information. Implement this change to Cmp-matchers. This is a preparatory step in migrating the codebase over to CmpPredicate. Since no functional changes are desired at this stage, we have chosen not to migrate CmpPredicate::operator==(CmpPredicate) calls to use CmpPredicate::getMatching(), as that would have visible impact on tests that are not yet written: instead, we call CmpPredicate::operator==(Predicate), preserving the old behavior, while also inserting a few FIXME comments for follow-ups.
2024-12-12[ValueTracking] Add missing operand checks in `computeKnownFPClassFromCond` (#119579)Yingwei Zheng1-2/+2
After https://github.com/llvm/llvm-project/pull/118257, we may call `computeKnownFPClassFromCond` with unrelated conditions. Then miscompilations may occur due to a lack of operand checks. This bug was introduced by https://github.com/llvm/llvm-project/commit/d2404ea6ced5fce9442260bde08a02d607fdd50d and https://github.com/llvm/llvm-project/pull/80740. However, the miscompilation couldn't have happened before https://github.com/llvm/llvm-project/pull/118257, because we only added related conditions to `DomConditionCache/AssumptionCache`. Fix the miscompilation reported in https://github.com/llvm/llvm-project/pull/118257#issuecomment-2536182166.
2024-12-03IR: introduce struct with CmpInst::Predicate and samesign (#116867)Ramkumar Ramachandra1-5/+5
Introduce llvm::CmpPredicate, an abstraction over a floating-point predicate, and a pack of an integer predicate with samesign information, in order to ease extending large portions of the codebase that take a CmpInst::Predicate to respect the samesign flag. We have chosen to demonstrate the utility of this new abstraction by migrating parts of ValueTracking, InstructionSimplify, and InstCombine from CmpInst::Predicate to llvm::CmpPredicate. There should be no functional changes, as we don't perform any extra optimizations with samesign in this patch, or use CmpPredicate::getMatching. The design approach taken by this patch allows for unaudited callers of APIs that take a llvm::CmpPredicate to silently drop the samesign information; it does not pose a correctness issue, and allows us to migrate the codebase piece-wise.
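A standalone C++ sketch of the design described above (hypothetical names, not the actual llvm::CmpPredicate): pairing the predicate enum with a samesign flag, with an implicit conversion back to the bare enum so unaudited callers silently drop the samesign information.
```
#include <cassert>

enum class Predicate { ICMP_ULT, ICMP_SLT /* ... */ };

// Hypothetical stand-in for the CmpPredicate idea described in the commit.
class CmpPredicateSketch {
  Predicate Pred;
  bool HasSameSign;

public:
  CmpPredicateSketch(Predicate P, bool SameSign = false)
      : Pred(P), HasSameSign(SameSign) {}
  operator Predicate() const { return Pred; } // unaudited callers drop samesign
  bool hasSameSign() const { return HasSameSign; }
};

static bool takesBarePredicate(Predicate P) { return P == Predicate::ICMP_ULT; }

int main() {
  CmpPredicateSketch P(Predicate::ICMP_ULT, /*SameSign=*/true);
  assert(takesBarePredicate(P)); // implicit conversion, samesign is ignored
  assert(P.hasSameSign());       // samesign-aware code can still query it
  return 0;
}
```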