author    | Amara Emerson <amara@apple.com> | 2020-09-27 01:45:09 -0700
committer | Amara Emerson <amara@apple.com> | 2020-12-18 11:57:38 -0800
commit    | 43ff75f2c3feef64f9d73328230d34dac8832a91 (patch)
tree      | 964b7c502b484c20049e76a60456a960d9b44213 /llvm/test/CodeGen/AArch64/GlobalISel/legalize-non-pow2-load-store.mir
parent    | 9caca7241d447266a23a99ea0536f30faaf19694 (diff)
[AArch64][GlobalISel] Promote scalar G_SHL constant shift amounts to s64. (tags: llvmorg-11.0.1-rc2, llvmorg-11.0.1)
This was supposed to be done in the first place, as is currently the case for
G_ASHR and G_LSHR, but was forgotten when the original shift legalization
overhaul was done last year.

This was exposed because we started falling back on shifts of the form
s32 = G_SHL s32, s64 due to a recent combiner change.

Gives a very minor (0.1%) code size improvement at -O0 on consumer-typeset.
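To see why the promotion above is safe, here is a minimal Python model (not LLVM code; the function name and values are purely illustrative): widening the shift-amount constant from s32 to s64 changes only the register type that carries the amount, never the amount's value, so the shifted result is unchanged.

```python
# Illustrative model of G_SHL on an s32 value: shift left, then
# truncate the result to the destination width (32 bits).
def g_shl(value: int, amount: int, width: int = 32) -> int:
    return (value << amount) & ((1 << width) - 1)

# The same shift amount, whether materialized as G_CONSTANT i32 16
# (before this patch) or G_CONSTANT i64 16 (after), has the same value.
amount_as_i32 = 16
amount_as_i64 = 16

# The legalizer rewrite therefore cannot change the computed result.
assert g_shl(0xABCD, amount_as_i32) == g_shl(0xABCD, amount_as_i64)
```

A side benefit, visible in the diff below, is that once both G_SHL and G_LSHR take an s64 amount, the two formerly distinct constants (i32 16 and i64 16) fold into a single reused G_CONSTANT.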
Diffstat (limited to 'llvm/test/CodeGen/AArch64/GlobalISel/legalize-non-pow2-load-store.mir')
-rw-r--r-- | llvm/test/CodeGen/AArch64/GlobalISel/legalize-non-pow2-load-store.mir | 7
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-non-pow2-load-store.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-non-pow2-load-store.mir
index 7d7b77a..6dc28e7 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-non-pow2-load-store.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-non-pow2-load-store.mir
@@ -28,12 +28,11 @@ body: |
     ; CHECK: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
     ; CHECK: [[PTR_ADD:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C1]](s64)
     ; CHECK: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[PTR_ADD]](p0) :: (load 1 from %ir.ptr + 2, align 4)
-    ; CHECK: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
-    ; CHECK: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[LOAD]], [[C2]](s32)
+    ; CHECK: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 16
+    ; CHECK: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[LOAD]], [[C2]](s64)
     ; CHECK: [[OR:%[0-9]+]]:_(s32) = G_OR [[SHL]], [[ZEXTLOAD]]
     ; CHECK: [[COPY2:%[0-9]+]]:_(s32) = COPY [[OR]](s32)
-    ; CHECK: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 16
-    ; CHECK: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[COPY2]], [[C3]](s64)
+    ; CHECK: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[COPY2]], [[C2]](s64)
     ; CHECK: [[PTR_ADD1:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY1]], [[C1]](s64)
     ; CHECK: G_STORE [[COPY2]](s32), [[COPY1]](p0) :: (store 2 into %ir.ptr2, align 4)
     ; CHECK: G_STORE [[LSHR]](s32), [[PTR_ADD1]](p0) :: (store 1 into %ir.ptr2 + 2, align 4)