path: root/llvm/lib/CodeGen/CodeGenPrepare.cpp
author     David Sherwood <david.sherwood@arm.com>    2021-11-22 11:38:06 +0000
committer  David Sherwood <david.sherwood@arm.com>    2022-01-13 09:43:07 +0000
commit     31009f0b5afb504fc1f30769c038e1b7be6ea45b (patch)
tree       452570f6868b7ac33debfbbf2c618b3d7ebff6e3 /llvm/lib/CodeGen/CodeGenPrepare.cpp
parent     7ce48be0fd83fb4fe3d0104f324bbbcfcc82983c (diff)
[CodeGen][AArch64] Ensure isSExtCheaperThanZExt returns true for negative constants
When we know the value we're extending is a negative constant, it makes sense to use SIGN_EXTEND because this may improve code quality in some cases, particularly when doing a constant splat of an unpacked vector type. For example, for SVE, when splatting the value -1 into all elements of a vector of type <vscale x 2 x i32>, the element type will get promoted from i32 -> i64. In this case we want the splat value to sign-extend from (i32 -1) -> (i64 -1), whereas currently it zero-extends from (i32 -1) -> (i64 0xFFFFFFFF). Sign-extending the constant means we can use a single mov immediate instruction.

New tests added here:

  CodeGen/AArch64/sve-vector-splat.ll

I believe we see some code quality improvements in these existing tests too:

  CodeGen/AArch64/dag-numsignbits.ll
  CodeGen/AArch64/reduce-and.ll
  CodeGen/AArch64/unfold-masked-merge-vector-variablemask.ll

The apparent regressions in CodeGen/AArch64/fast-isel-cmp-vec.ll only occur because the test disables codegen prepare and branch folding.

Differential Revision: https://reviews.llvm.org/D114357
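As a rough illustration of the extension difference described above (not part of the patch; a minimal standalone sketch using APInt, with variable names of my own choosing):

    // Minimal sketch: extending the splat value i32 -1 to i64 both ways.
    #include "llvm/ADT/APInt.h"
    #include "llvm/Support/raw_ostream.h"

    int main() {
      llvm::APInt MinusOne(32, -1, /*isSigned=*/true); // i32 -1 == 0xFFFFFFFF
      llvm::APInt SExt = MinusOne.sext(64); // i64 -1: all-ones, a single mov immediate
      llvm::APInt ZExt = MinusOne.zext(64); // i64 0xFFFFFFFF: no longer a simple all-ones value
      llvm::outs() << "sext: " << SExt.getSExtValue()
                   << "  zext: " << ZExt.getZExtValue() << "\n";
      return 0;
    }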
Diffstat (limited to 'llvm/lib/CodeGen/CodeGenPrepare.cpp')
-rw-r--r--  llvm/lib/CodeGen/CodeGenPrepare.cpp  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/lib/CodeGen/CodeGenPrepare.cpp b/llvm/lib/CodeGen/CodeGenPrepare.cpp
index 747f4e4..8053f1d 100644
--- a/llvm/lib/CodeGen/CodeGenPrepare.cpp
+++ b/llvm/lib/CodeGen/CodeGenPrepare.cpp
@@ -7004,7 +7004,7 @@ bool CodeGenPrepare::optimizeSwitchInst(SwitchInst *SI) {
// matching the argument extension instead.
Instruction::CastOps ExtType = Instruction::ZExt;
// Some targets prefer SExt over ZExt.
- if (TLI->isSExtCheaperThanZExt(OldVT, RegType))
+ if (TLI->isSExtCheaperThanZExt(OldVT, RegType, SDValue()))
ExtType = Instruction::SExt;
if (auto *Arg = dyn_cast<Argument>(Cond)) {
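The new SDValue operand is only SDValue() at this IR-level call site, so the extra information is really aimed at DAG-level callers that do have a node in hand. A hedged sketch of how a hook taking the extra operand could use it (a hypothetical free function with assumed parameter names, not AArch64's actual override):

    #include "llvm/CodeGen/SelectionDAGNodes.h" // SDValue, ConstantSDNode
    #include "llvm/CodeGen/ValueTypes.h"        // EVT

    // Sketch only: prefer SExt when the value being extended is a known-negative
    // constant, since it sign-extends to a compact immediate (i32 -1 -> i64 -1).
    static bool isSExtCheaperThanZExtSketch(llvm::EVT SrcVT, llvm::EVT DstVT,
                                            llvm::SDValue V) {
      (void)SrcVT; (void)DstVT; // unused in this simplified sketch
      if (auto *C = llvm::dyn_cast_or_null<llvm::ConstantSDNode>(V.getNode()))
        return C->getAPIntValue().isNegative();
      // With no node attached (e.g. the SDValue() passed above), fall back to
      // the default answer; a real target would apply its usual policy here.
      return false;
    }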