author    Sanjay Patel <spatel@rotateright.com>    2021-02-10 14:57:31 -0500
committer Sanjay Patel <spatel@rotateright.com>    2021-02-10 15:02:31 -0500
commit    6e2053983e0d3f69b0d9219923d7ba1eae592e12
tree      5642066da22cc196d558f212aa0089e79a3d4036 /llvm/lib/CodeGen/StackProtector.cpp
parent    6bcc1fd461eeeb7946184cbfe886eead9291919c
[InstCombine] fold lshr(mul X, SplatC), C2
This is a special-case multiply that replicates bits of
the source operand. We need this fold to avoid a regression
if we make the canonicalization of shl+or patterns to `mul`
more aggressive.
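To illustrate the connection (this sketch is not part of the commit): multiplying by a constant of the form 2^k + 1 replicates the low bits of the operand, which is exactly the value an `(x << k) | x` pattern produces when the shifted and unshifted bits do not overlap. The constants below are chosen for illustration.

```python
# mul by (2^k + 1) replicates bits of x: x * (2^k + 1) == (x << k) + x,
# and when x's set bits all lie below bit k, the '+' is the same as '|'.
k = 7
x = 0b1011010                      # set bits fit below bit k, so no carry
assert x * ((1 << k) + 1) == (x << k) | x
print("mul by 2^k + 1 matches shl+or")
```

This is why a more aggressive shl+or -> mul canonicalization would create `mul` instructions that this lshr fold then needs to clean up.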
I did not see a way to make Alive generalize the bit-width
precondition to even bit widths only, but an example of
the proof is:
Name: i32
Pre: isPowerOf2(C1 - 1) && log2(C1) == C2 && (C2 * 2 == width(C2))
%m = mul nuw i32 %x, C1
%t = lshr i32 %m, C2
=>
%t = and i32 %x, C1 - 2
Name: i14
%m = mul nuw i14 %x, 129
%t = lshr i14 %m, 7
=>
%t = and i14 %x, 127
https://rise4fun.com/Alive/e52
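As an informal cross-check of the i14 instance above (not part of the commit or the Alive proof), the fold can be verified by brute force over every i14 value that satisfies the `nuw` requirement:

```python
# Exhaustive check of the i14 example: for every x where
# `mul nuw i14 %x, 129` does not wrap, (x * 129) >> 7 == x & 127.
WIDTH = 14
C1, C2 = 129, 7
MASK = (1 << WIDTH) - 1

for x in range(1 << WIDTH):
    if x * C1 > MASK:              # violates `nuw`: unsigned overflow
        continue
    m = (x * C1) & MASK            # %m = mul nuw i14 %x, 129
    t = m >> C2                    # %t = lshr i14 %m, 7
    assert t == (x & (C1 - 2))     # %t = and i14 %x, 127
print("fold verified for all nuw-safe i14 values")
```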
Diffstat (limited to 'llvm/lib/CodeGen/StackProtector.cpp')
0 files changed, 0 insertions, 0 deletions