author     Simon Pilgrim <llvm-dev@redking.me.uk>  2025-10-03 15:01:58 +0100
committer  GitHub <noreply@github.com>             2025-10-03 15:01:58 +0100
commit     ab9611e7353be46351b9ce52836e56431ac5a45c (patch)
tree       4c7f674b0d0586201ac56efe49a69e70b84489dc /llvm/unittests/Analysis/FunctionPropertiesAnalysisTest.cpp
parent     375f48942b9a3f3fbd82133390af25b6c96f1460 (diff)
[X86] Fold ADD(x,x) -> X86ISD::VSHLI(x,1) (#161843)
Now that #161007 will attempt to fold this back to ADD(x,x) in
X86FixupInstTunings, we can more aggressively create X86ISD::VSHLI nodes
to avoid missed optimisations due to oneuse limits. This avoids
unnecessary freezes and allows AVX512 to fold to the mi memory-folding
variants.
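
The fold relies on the per-lane identity x + x == x << 1, which holds for
every element even on overflow since both wrap modulo 2^32. A minimal
standalone check of that equivalence using SSE intrinsics (my own
illustration, not part of this commit):

```cpp
// Verify per lane that ADD(x,x) and a shift-left-by-1 produce identical
// results, including lanes that overflow.
#include <immintrin.h>
#include <cassert>
#include <cstdint>

int main() {
  __m128i x = _mm_set_epi32(0x7fffffff, -2, 3, 0);
  __m128i add = _mm_add_epi32(x, x);   // paddd x,x  -> ADD(x,x)
  __m128i shl = _mm_slli_epi32(x, 1);  // pslld $1,x -> X86ISD::VSHLI(x,1)
  int32_t a[4], s[4];
  _mm_storeu_si128((__m128i *)a, add);
  _mm_storeu_si128((__m128i *)s, shl);
  for (int i = 0; i != 4; ++i)
    assert(a[i] == s[i]);
  return 0;
}
```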
I've currently limited SSE targets to cases where the ADD is the only
user of x, to prevent extra moves. AVX shift patterns benefit from
breaking ADD+ADD+ADD chains into shifts, but it's not so beneficial on
SSE because of the extra register moves, as sketched below.
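
The likely reason SSE wants the one-use guard is that non-VEX shifts are
destructive (destination and source share a register), so keeping x alive
past the shift costs an extra movdqa, while VEX/EVEX encodings write a
separate destination. A minimal sketch of such a guard, using a
hypothetical helper name of my own (the real combine lives in the X86 DAG
combines, not here):

```cpp
// Hypothetical predicate illustrating the one-use guard described above;
// not the actual LLVM combine.
#include "llvm/CodeGen/SelectionDAGNodes.h"

using namespace llvm;

// On AVX targets the VEX-encoded shift writes a separate destination, so
// folding never costs a copy. On SSE the shift overwrites its source, so
// only fold when the ADD is the sole user of x.
static bool shouldFoldAddToVShift(SDValue X, bool HasAVX) {
  return HasAVX || X.hasOneUse();
}
```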
Diffstat (limited to 'llvm/unittests/Analysis/FunctionPropertiesAnalysisTest.cpp')
0 files changed, 0 insertions, 0 deletions