path: root/llvm/lib/CodeGen/CodeGenPrepare.cpp
Age | Commit message | Author | Files | Lines
2023-03-08 | [CodeGenPrepare] Stop llvm.vscale() -> getelementptr(null, 1) transformation. | Paul Walker | 1 | -18/+0
I've pulled this change out of D145404 to land in isolation because I'm concerned the code might be more important than the test coverage suggests (NOTE: the code has no test coverage).
2023-03-08 | [NFC] Remove dead code in ExtAddrMode::print flagged by the Coverity tool | Xiang1 Zhang | 1 | -1/+1
2023-03-02 | [AArch64][SME2] Add CodeGen support for target("aarch64.svcount"). | Sander de Smalen | 1 | -1/+1
This patch adds AArch64 CodeGen support such that the type can be passed and returned to/from functions, and also adds support to use this type in load/store operations and PHI nodes. Reviewed By: paulwalker-arm Differential Revision: https://reviews.llvm.org/D136862
2023-02-19 | Use APInt::getSignificantBits instead of APInt::getMinSignedBits (NFC) | Kazu Hirata | 1 | -1/+1
Note that getMinSignedBits has been soft-deprecated in favor of getSignificantBits.
2023-02-14 | Revert "[CGP] Add generic TargetLowering::shouldAlignPointerArgs() implementation" | Jake Egan | 1 | -2/+2
These commits are causing a test-suite build failure on AIX. Reverting for now to allow time to investigate. https://lab.llvm.org/buildbot/#/builders/214/builds/5779/steps/9/logs/stdio This reverts commit bd87a2449da0c82e63cebdf9c131c54a5472e3a7 and 4c72266830ffa332ebb7cf1d3bbd6c56d001fa0f.
2023-02-09 | [CGP] Add generic TargetLowering::shouldAlignPointerArgs() implementation | Alex Richardson | 1 | -2/+2
This function was added for ARM targets, but aligning global/stack pointer arguments passed to memcpy/memmove/memset can improve code size and performance for all targets that don't have fast unaligned accesses. This adds a generic implementation that adjusts the alignment to pointer size if unaligned accesses are slow. Review D134168 suggests that this significantly improves performance on synthetic benchmarks such as Dhrystone on RV32 as it avoids memcpy() calls. Reviewed By: efriedma Differential Revision: https://reviews.llvm.org/D134282
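A minimal sketch of the policy described above, assuming the caller already knows whether the target's unaligned accesses are slow; the free-function name and parameters are illustrative assumptions, while the real hook is a TargetLowering virtual:

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/Support/Alignment.h"
using namespace llvm;

// Illustrative only: for memcpy/memmove/memset calls, prefer pointer-size
// alignment of the pointer arguments when unaligned accesses are slow.
static bool shouldAlignPointerArgsSketch(const CallInst *CI, const DataLayout &DL,
                                         bool UnalignedAccessesAreSlow,
                                         Align &PrefAlign) {
  if (!isa<MemIntrinsic>(CI) || !UnalignedAccessesAreSlow)
    return false;
  PrefAlign = Align(DL.getPointerSize());
  return true;
}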
2023-01-22 | Use llvm::popcount instead of llvm::countPopulation (NFC) | Kazu Hirata | 1 | -1/+1
2023-01-22 | [NFC] Fix "form/from" typos | Piotr Fusik | 1 | -1/+1
Reviewed By: #libc, ldionne Differential Revision: https://reviews.llvm.org/D142007
2023-01-21 | [Cost] Add CostKind to getVectorInstrCost and its related users | ShihPo Hung | 1 | -3/+3
LoopUnroll estimates the loop size via getInstructionCost(), but getInstructionCost() cannot pass CostKind to getVectorInstrCost(). And so does getShuffleCost() to getBroadcastShuffleOverhead(), getPermuteShuffleOverhead(), getExtractSubvectorOverhead(), and getInsertSubvectorOverhead(). To address this, this patch adds an argument CostKind to these functions. Reviewed By: RKSimon Differential Revision: https://reviews.llvm.org/D142116
2023-01-11 | [NFC] Use TypeSize::getFixedValue() instead of TypeSize::getFixedSize() | Guillaume Chatelet | 1 | -1/+1
This change is one of a series to implement the discussion from https://reviews.llvm.org/D141134.
2023-01-11 | [NFC] Use TypeSize::getKnownMinValue() instead of TypeSize::getKnownMinSize() | Guillaume Chatelet | 1 | -1/+1
This change is one of a series to implement the discussion from https://reviews.llvm.org/D141134.
2023-01-06 | [DebugInfo][NFC] Rename is/setUndef to is/setKillLocation | OCHyams | 1 | -1/+1
These names better reflect the semantics and also the implementation, since it's not just "undef" operands that are sentinels used to signal that the debug intrinsic terminates dominating location definitions. Related to https://discourse.llvm.org/t/auto-undef-debug-uses-of-a-deleted-value Reviewed By: StephenTozer Differential Revision: https://reviews.llvm.org/D140903
2023-01-05 | Move from llvm::makeArrayRef to ArrayRef deduction guides - llvm/ part | serge-sans-paille | 1 | -7/+5
Use deduction guides instead of helper functions. The only non-automatic changes have been:
1. ArrayRef(some_uint8_pointer, 0) needs to be changed into ArrayRef(some_uint8_pointer, (size_t)0) to avoid an ambiguous call with ArrayRef((uint8_t*), (uint8_t*)).
2. CVSymbol sym(makeArrayRef(symStorage)); needed to be rewritten as CVSymbol sym{ArrayRef(symStorage)}; otherwise the compiler is confused and thinks we have a (bad) function prototype. There were a few similar situations across the codebase.
3. ADL doesn't seem to work the same for deduction guides and functions, so at some point the llvm namespace must be explicitly stated.
4. The "reference mode" of makeArrayRef(ArrayRef<T> &) that acts as a no-op is not supported (a constructor cannot achieve that).
Per reviewers' comments, some useless makeArrayRef calls have been removed in the process. This is a follow-up to https://reviews.llvm.org/D140896, which introduced the deduction guides. Differential Revision: https://reviews.llvm.org/D140955
2022-12-16 | Correct typos (NFC) | Sprite | 1 | -2/+2
Just found some typos while reading the llvm/circt project.
compliment -> complement
emitsd -> emits
2022-12-04 | [llvm] Use std::nullopt instead of None in comments (NFC) | Kazu Hirata | 1 | -1/+1
This is part of an effort to migrate from llvm::Optional to std::optional: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-02 | [CodeGen] Use std::nullopt instead of None (NFC) | Kazu Hirata | 1 | -7/+7
This patch mechanically replaces None with std::nullopt where the compiler would warn if None were deprecated. The intent is to reduce the amount of manual work required in migrating from Optional to std::optional. This is part of an effort to migrate from llvm::Optional to std::optional: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-11-26 | [CodeGen] Use std::optional in CodeGenPrepare.cpp (NFC) | Kazu Hirata | 1 | -2/+3
This is part of an effort to migrate from llvm::Optional to std::optional: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-11-21 | [Assignment Tracking][25/*] Replace sunk address uses in dbg.assign intrinsics | OCHyams | 1 | -0/+1
The Assignment Tracking debug-info feature is outlined in this RFC: https://discourse.llvm.org/t/rfc-assignment-tracking-a-better-way-of-specifying-variable-locations-in-ir Reviewed By: StephenTozer Differential Revision: https://reviews.llvm.org/D136255
2022-11-17 | [CGP] Update MemIntrinsic alignment if possible | Alex Richardson | 1 | -13/+13
Previously it was only being done if shouldAlignPointerArgs() returned true, which right now is only true for ARM targets. Updating the argument alignment attributes of memcpy/memset intrinsics if the underlying object has larger alignment can be beneficial even when CGP didn't increase alignment (as can be seen from the test changes), so invert the loop and if condition. Differential Revision: https://reviews.llvm.org/D134281
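A minimal sketch of the alignment update described above, under the assumption that it boils down to raising the destination alignment attribute when more is known about the underlying object; the helper name is an illustrative assumption, not the actual CGP code:

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/Transforms/Utils/Local.h"
using namespace llvm;

// Illustrative only: raise the memcpy/memset destination alignment when the
// pointed-to object is known to be more aligned than the call claims.
static void updateMemIntrinsicDestAlignSketch(MemIntrinsic *MI,
                                              const DataLayout &DL) {
  Align Known = getKnownAlignment(MI->getDest(), DL, MI);
  if (Known > MI->getDestAlign().valueOrOne())
    MI->setDestAlignment(Known);
}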
2022-11-04 | [CodeGenPrep] Change ValueToSExts from DenseMap to MapVector | Haohai Wen | 1 | -1/+1
mergeSExts iterates through ValueToSExts. Using a DenseMap results in an unstable optimization order, so the output IR may vary even when the input IR is the same. Reviewed By: wxiao3 Differential Revision: https://reviews.llvm.org/D137234
2022-10-13 | [CodeGenPrep] Handle constants in ConvertPhiType | David Green | 1 | -3/+6
This is a simple addition to convertPhiTypes in CodeGenPrepare to consider and convert constants as it converts the phi type. Someone fixed the bug in the motivating example, so the undef is now a constant 0. This does mean converting between integer and floating point constants, which may have different materialization costs. Differential Revision: https://reviews.llvm.org/D135561
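A minimal sketch of what converting an incoming constant means here, assuming a same-width integer/float pair so a bitcast preserves the bit pattern; the helper is an illustrative assumption, not the actual convertPhiTypes code:

#include "llvm/IR/Constants.h"
using namespace llvm;

// Illustrative only: rewrite a phi's incoming integer constant as an FP
// constant with the same bits. The value is preserved, but materializing the
// FP constant may cost differently than the integer one did.
static Constant *convertIncomingConstantSketch(Constant *C, Type *NewTy) {
  return ConstantExpr::getBitCast(C, NewTy);
}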
2022-09-16 | [AArch64] Use tbl for truncating vector FPtoUI conversions. | Florian Hahn | 1 | -1/+1
On AArch64, a vector truncate done separately after the fptoui conversion can be lowered more efficiently using tbl.4, building on D133495. https://alive2.llvm.org/ce/z/T538CC Depends on D133495 Reviewed By: t.p.northover Differential Revision: https://reviews.llvm.org/D133496
2022-09-16 | [AArch64] Lower vector trunc using tbl. | Florian Hahn | 1 | -2/+3
Similar to using tbl to lower vector ZExts, tbl4 can be used to lower vector truncates. The initial version supports i32->i8 conversions. Depends on D120571 Reviewed By: t.p.northover Differential Revision: https://reviews.llvm.org/D133495
2022-09-16 | [AArch64] Lower extending uitofp using tbl. | Florian Hahn | 1 | -0/+4
On AArch64, a zero-extend done separately first can be lowered more efficiently using tbl, building on D120571. https://alive2.llvm.org/ce/z/8Je595 Depends on D120571 Reviewed By: t.p.northover Differential Revision: https://reviews.llvm.org/D133494
2022-09-15 | [CGP,AArch64] Replace zexts with shuffle that can be lowered using tbl. | Florian Hahn | 1 | -0/+4
This patch extends CodeGenPrepare to lower zext v16i8 -> v16i32 in loops using a wide shuffle creating a v64i8 vector, selecting groups of 3 zero elements and an element from the input. This is profitable on AArch64, where such shuffles can be lowered to tbl instructions, but only in loops, because it requires materializing 4 masks, which can be done in the loop preheader. This is the only reason the transform is part of CGP. If there's a better alternative I missed, please let me know. The same goes for the shouldReplaceZExtWithShuffle hook which guards this. I am not sure if this transform will be beneficial on other targets, but there seems to be no other convenient way.
This improves the generated code for loops like the one below in combination with D96522:
int foo(uint8_t *p, int N) {
  unsigned long long sum = 0;
  for (int i = 0; i < N ; i++, p++) {
    unsigned int v = *p;
    sum += (v < 127) ? v : 256 - v;
  }
  return sum;
}
https://clang.godbolt.org/z/Wco866MjY Reviewed By: t.p.northover Differential Revision: https://reviews.llvm.org/D120571
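A minimal sketch of the zext-to-shuffle rewrite described above, assuming a little-endian lane layout; the function name and the exact IR shape are illustrative assumptions, not the actual CGP patch:

#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Illustrative only: zext <16 x i8> -> <16 x i32> rewritten as a <64 x i8>
// shuffle with a zero vector, followed by a bitcast. On AArch64 the shuffle
// can be selected as tbl once the masks are materialized in the preheader.
static Value *zextViaShuffleSketch(IRBuilder<> &B, Value *Src /* <16 x i8> */) {
  auto *SrcTy = cast<FixedVectorType>(Src->getType());
  int NumElts = (int)SrcTy->getNumElements(); // 16
  Value *Zero = Constant::getNullValue(SrcTy);
  SmallVector<int, 64> Mask;
  for (int I = 0; I != NumElts; ++I) {
    Mask.push_back(I);       // low byte of each i32 lane comes from the input
    Mask.append(3, NumElts); // the other three bytes come from the zero vector
  }
  Value *Wide = B.CreateShuffleVector(Src, Zero, Mask);      // <64 x i8>
  return B.CreateBitCast(Wide, FixedVectorType::get(B.getInt32Ty(), NumElts));
}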
2022-09-07 | [CodeGen] Limit build time in CodeGenPrepare for huge functions | Xiang1 Zhang | 1 | -60/+195
Details: CodeGenPrepare is currently very time consuming on big functions.
Old algorithm: it iterates over each BB in the function and handles every instruction in the BB. Because some instruction optimizations may affect a BB's dominator tree, the old logic re-iterates and re-optimizes every BB. Suppose we have a big function with 20000 BBs: if handling the last BB fine-tunes the dominator tree, we completely re-iterate and re-optimize all 20000 BBs from the beginning, so the complexity is close to N!. We have really encountered big tests (> 20000 BBs) that cost more than 30 minutes in this pass (a debug-build compiler costs 2 hours here).
What this patch does for huge functions: it mainly changes the way the optimization iterates.
1. We do optimizeBlock for each BB (the same as the old way). In addition, if a BB is changed/updated during optimization, it is put into FreshBBs (to try optimizeBlock again). BBs newly created in the previous iteration are also put into FreshBBs.
2. BBs that were not updated in the previous iteration are skipped directly. Strictly speaking, this may miss some opportunities, but the probability is very small.
3. For instructions in a single BB, we do optimizeInst for each instruction. If optimizeInst changes the instruction dominator in this BB, rather than breaking out and going back to optimize the first BB (the old way), we directly iterate over the instructions of this updated BB again (the new way).
For small/normal (not huge) functions, the behavior is the same as the old algorithm. (NFC)
Reviewed By: LuoYuanke Differential Revision: https://reviews.llvm.org/D129352
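A minimal sketch of the worklist-style iteration described above; FreshBBs and OptimizeBlock mirror the names in the commit message, but the signatures and bookkeeping are illustrative assumptions, not the actual patch:

#include <vector>
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
using namespace llvm;

// Illustrative only: visit every block once, then keep re-optimizing just the
// blocks that were changed or newly created, instead of restarting the whole
// function as the old algorithm did.
static void runHugeFunctionModeSketch(
    Function &F,
    function_ref<bool(BasicBlock &, SmallPtrSetImpl<BasicBlock *> &)> OptimizeBlock) {
  SmallPtrSet<BasicBlock *, 32> FreshBBs;
  for (BasicBlock &BB : F)
    OptimizeBlock(BB, FreshBBs); // records changed/new blocks in FreshBBs
  while (!FreshBBs.empty()) {
    std::vector<BasicBlock *> Worklist(FreshBBs.begin(), FreshBBs.end());
    FreshBBs.clear();
    for (BasicBlock *BB : Worklist)
      OptimizeBlock(*BB, FreshBBs);
  }
}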
2022-09-03 | [TTI] Add isExpensiveToSpeculativelyExecute wrapper | Simon Pilgrim | 1 | -2/+1
CGP uses a raw `getInstructionCost(I, TargetTransformInfo::TCK_SizeAndLatency) >= TCC_Expensive` check to see if it's better to move an expensive instruction used in a select behind a branch instead. This is causing issues with upcoming improvements to TCK_SizeAndLatency costs on X86, as we need to use TCK_SizeAndLatency as a uop count (so it's compatible with various target-specific buffer sizes - see D132288), but we can have instructions with a low TCK_SizeAndLatency value that should still be treated as 'expensive' (FDIV, for example). By adding an isExpensiveToSpeculativelyExecute wrapper we can keep the current behaviour but still add an X86 override in a future patch when the cost tables are updated to compensate.
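A minimal sketch of what the wrapper folds together, assuming it simply packages CGP's existing raw check; the free-function form and name are illustrative, while the real hook lives on TargetTransformInfo:

#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Illustrative only: the same cost threshold CGP used inline, wrapped so that
// targets (e.g. X86 for FDIV) can later override the decision.
static bool isExpensiveToSpeculativelyExecuteSketch(const TargetTransformInfo &TTI,
                                                    const Instruction *I) {
  return TTI.getInstructionCost(I, TargetTransformInfo::TCK_SizeAndLatency) >=
         TargetTransformInfo::TCC_Expensive;
}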
2022-08-30 | [NFC] Clang-format for CodeGenPrepare.cpp | Xiang1 Zhang | 1 | -414/+420
2022-08-24 | [X86] Promote i8/i16 CTTZ (BSF) instructions and remove speculation branch | Simon Pilgrim | 1 | -3/+3
This patch adds a Type operand to the TLI isCheapToSpeculateCttz/isCheapToSpeculateCtlz callbacks, allowing targets to decide whether branches should occur on a type-by-type/legality basis. For X86, this patch proposes to allow CTTZ speculation for i8/i16 types that will lower to promoted i32 BSF instructions by masking the operand above the msb (we already do something similar for i8/i16 TZCNT). This required a minor tweak to CTTZ lowering - if the src operand is known never zero (i.e. due to the promotion masking) we can remove the CMOV zero src handling. Although BSF isn't very fast, most CPUs from the last 20 years don't do that bad a job with it, although there are some annoying passthrough EFLAGS dependencies. Additionally, now that we emit 'REP BSF' in most cases, we are tending towards assuming this will most likely be executed as a TZCNT instruction on any semi-modern CPU. Differential Revision: https://reviews.llvm.org/D132520
2022-08-22 | [TTI] Remove OperandValueKind/Properties from getArithmeticInstrCost interface [nfc] | Philip Reames | 1 | -9/+8
This completes the client-side transition to the OperandValueInfo version of this routine. Backend TTI implementations still use the prior versions for now.
2022-08-18 | [CostModel] Replace getUserCost with getInstructionCost | Simon Pilgrim | 1 | -2/+2
* Replace getUserCost with getInstructionCost, covering all cost kinds.
* Remove getInstructionLatency; it's not implemented by any backends, and we should fold the functionality into getUserCost (now getInstructionCost) to make it easier for targets to handle the cost kinds with their existing cost callbacks.
Original patch by @samparker (Sam Parker)
Differential Revision: https://reviews.llvm.org/D79483
2022-08-14 | Use llvm::none_of (NFC) | Kazu Hirata | 1 | -2/+2
2022-08-04 | [TTI] Change new getVectorInstrCost overload to use const reference after D131114 | Fangrui Song | 1 | -1/+1
A const reference is preferred over a non-null const pointer. `Type *` is kept as-is to match the other overload. Reviewed By: davidxl Differential Revision: https://reviews.llvm.org/D131197
2022-08-04 | [AArch64][TTI][NFC] Overload method 'getVectorInstrCost' to provide the vector instruction itself as context information for cost estimation | Mingming Liu | 1 | -1/+1
1) The overloaded (instruction-based) method is a wrapper around the current (opcode-based) method.
2) This patch also changes a few callsites (VectorCombine.cpp, SLPVectorizer.cpp, CodeGenPrepare.cpp) to call the overloaded method.
3) This is a split of D128302.
Differential Revision: https://reviews.llvm.org/D131114
2022-08-03 | [llvm][NFC] Refactor code to use ProfDataUtils | Paul Kirth | 1 | -3/+4
In this patch we replace common code patterns with the use of utility functions for dealing with profiling metadata. There should be no change in functionality, as the existing checks should be preserved in all cases. Reviewed By: bogner, davidxl Differential Revision: https://reviews.llvm.org/D128860
2022-08-02 | [SelectOpti] Auto-disable other cmov optis when the new select-opti pass is enabled | Sotiris Apostolakis | 1 | -0/+4
Reviewed By: davidxl Differential Revision: https://reviews.llvm.org/D129817
2022-07-27 | Revert "[llvm][NFC] Refactor code to use ProfDataUtils" | Paul Kirth | 1 | -4/+3
This reverts commit 300c9a78819b4608b96bb26f9320bea6b8a0c4d0. We will reland once these issues are ironed out.
2022-07-27 | [llvm][NFC] Refactor code to use ProfDataUtils | Paul Kirth | 1 | -3/+4
In this patch we replace common code patterns with the use of utility functions for dealing with profiling metadata. There should be no change in functionality, as the existing checks should be preserved in all cases. Reviewed By: bogner, davidxl Differential Revision: https://reviews.llvm.org/D128860
2022-07-27 | [CodeGen] Fixed ambiguous symbol ExtAddrMode in case of NDEBUG and LLVM_ENABLE_DUMP | Dmitry Vassiliev | 1 | -2/+2
This patch fixes the following error with MSVC 16.9.2 in case of NDEBUG and LLVM_ENABLE_DUMP:
llvm/lib/CodeGen/CodeGenPrepare.cpp(2581): error C2872: 'ExtAddrMode': ambiguous symbol
llvm/include/llvm/CodeGen/TargetInstrInfo.h(86): note: could be 'llvm::ExtAddrMode'
llvm/lib/CodeGen/CodeGenPrepare.cpp(2447): note: or '`anonymous-namespace'::ExtAddrMode'
llvm/lib/CodeGen/CodeGenPrepare.cpp(2581): error C2039: 'print': is not a member of 'llvm::ExtAddrMode'
Reviewed By: aaron.ballman Differential Revision: https://reviews.llvm.org/D130426
2022-07-20 | [llvm] Use llvm::any_of and llvm::none_of (NFC) | Kazu Hirata | 1 | -3/+5
2022-07-17 | [CodeGen] Qualify auto variables in for loops (NFC) | Kazu Hirata | 1 | -2/+2
2022-07-16 | Don't sink ptrtoint/inttoptr sequences into non-noop addrspacecasts. | Tim Besard | 1 | -5/+31
In https://reviews.llvm.org/D30114, support for mismatching address spaces was introduced to CodeGenPrepare's optimizeMemoryInst, using addrspacecast, as it was argued that only no-op addrspacecasts would be considered when constructing the address mode. However, by doing inttoptr/ptrtoint, it's possible to get CGP to emit an addrspacecast that's not actually a no-op, introducing a miscompilation:
define void @kernel(i8* %julia_ptr) {
  %intptr = ptrtoint i8* %julia_ptr to i64
  %ptr = inttoptr i64 %intptr to i32 addrspace(3)*
  br label %end
end:
  store atomic i32 1, i32 addrspace(3)* %ptr unordered, align 4
  ret void
}
gets compiled to:
define void @kernel(i8* %julia_ptr) {
end:
  %0 = addrspacecast i8* %julia_ptr to i32 addrspace(3)*
  store atomic i32 1, i32 addrspace(3)* %0 unordered, align 4
  ret void
}
In the case of NVPTX, this introduces a cvta.to.shared, whereas leaving out the %end block and branch doesn't trigger this optimization. This results in illegal memory accesses as seen in https://github.com/JuliaGPU/CUDA.jl/issues/558
In this change, I introduced a check before doing the pointer cast that verifies the address spaces are the same. If not, it emits a ptrtoint/inttoptr combination to get a no-op cast between address spaces. I decided against disallowing ptrtoint/inttoptr with a non-default AS in matchOperationAddr, because it's still possible to look through multiple sequences of them that ultimately do not result in an address space mismatch (i.e. the second lit test).
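A minimal sketch of the guard described above, assuming the decision point has an IRBuilder and the desired pointer type in hand; the helper name and surrounding plumbing are illustrative assumptions, not the actual optimizeMemoryInst code:

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Illustrative only: emit a cast that introduces no real address-space
// conversion. If the address spaces already match, a plain pointer cast is
// fine; otherwise go through an integer of pointer width instead of emitting
// an addrspacecast that may not be a no-op.
static Value *createNoopPtrCastSketch(IRBuilder<> &B, Value *Ptr,
                                      PointerType *DestTy, const DataLayout &DL) {
  auto *SrcTy = cast<PointerType>(Ptr->getType());
  if (SrcTy->getAddressSpace() == DestTy->getAddressSpace())
    return B.CreatePointerCast(Ptr, DestTy);
  Value *AsInt = B.CreatePtrToInt(Ptr, DL.getIntPtrType(Ptr->getType()));
  return B.CreateIntToPtr(AsInt, DestTy);
}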
2022-06-30 | [NFC] Switch a few uses of undef to poison as placeholders for unreachable code | Nuno Lopes | 1 | -3/+3
2022-06-26 | [CodeGenPrepare] Avoid double map lookup. NFCI | Craig Topper | 1 | -2/+1
2022-06-18 | [llvm] Call *set::insert without checking membership first (NFC) | Kazu Hirata | 1 | -10/+4
2022-06-14 | [NFC][Alignment] Use Align in shouldAlignPointerArgs | Guillaume Chatelet | 1 | -5/+6
2022-06-10 | [CGP] Also freeze ctlz/cttz operand when despeculating | Nikita Popov | 1 | -2/+2
D125887 changed the ctlz/cttz despeculation transform to insert a freeze for the introduced branch on zero. While this does fix the "branch on poison" issue, we may still get in trouble if we pick a different value for the branch and for the ctz argument (i.e. non-zero for the branch, but zero for the ctz). To avoid this, we should use the same frozen value in both positions.
This does cause a regression in RISCV codegen by introducing an additional sext. The DAG looks like this:
  t0: ch = EntryToken
  t2: i64,ch = CopyFromReg t0, Register:i64 %3
  t4: i64 = AssertSext t2, ValueType:ch:i32
  t23: i64 = freeze t4
  t9: ch = CopyToReg t0, Register:i64 %0, t23
  t16: ch = CopyToReg t0, Register:i64 %4, Constant:i64<32>
  t18: ch = TokenFactor t9, t16
  t25: i64 = sign_extend_inreg t23, ValueType:ch:i32
  t24: i64 = setcc t25, Constant:i64<0>, seteq:ch
  t28: i64 = and t24, Constant:i64<1>
  t19: ch = brcond t18, t28, BasicBlock:ch<cond.end 0x8311f68>
  t21: ch = br t19, BasicBlock:ch<cond.false 0x8311e80>
I don't see a really obvious way to improve this, as we can't push the freeze past the AssertSext (which may produce poison).
Differential Revision: https://reviews.llvm.org/D126638
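A minimal sketch of the shape described above; the helper name is an illustrative assumption, and the caller is assumed to create the branch and place the cttz/ctlz call in the non-zero block:

#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Illustrative only: freeze the operand once so the zero test and the later
// cttz/ctlz call both see the same value, even when the input is poison/undef.
static Value *createFrozenZeroCheckSketch(IRBuilder<> &B, Value *Op,
                                          Value *&FrozenOut) {
  FrozenOut = B.CreateFreeze(Op, Op->getName() + ".fr");
  // The caller branches on this and must pass FrozenOut (not the original Op)
  // to the cttz/ctlz intrinsic in the non-zero block.
  return B.CreateICmpEQ(FrozenOut, ConstantInt::get(Op->getType(), 0));
}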
2022-06-09 | [NFC] format InstructionSimplify & lowerCaseFunctionNames | Simon Moll | 1 | -2/+2
Clang-format InstructionSimplify and convert all "FunctionName"s to "functionName". This patch does touch a lot of files but gets done with the cleanup of InstructionSimplify in one commit. This is the alternative to the less invasive clang-format only patch: D126783 Reviewed By: spatel, rengolin Differential Revision: https://reviews.llvm.org/D126889
2022-06-08 | [NFC] Remove commented-out cerr debug logging | Chuanqi Xu | 1 | -1/+0
There are some unused, commented-out cerr debug logging statements in the code. It is odd to keep such commented debug helpers in the product.
2022-06-05 | Remove unneeded cl::ZeroOrMore for cl::opt/cl::list options | Fangrui Song | 1 | -2/+1