author    Philip Reames <preames@rivosinc.com>  2025-01-24 10:08:42 -0800
committer GitHub <noreply@github.com>  2025-01-24 10:08:42 -0800
commit    a9ad601f7c5486919d6fabc5dd3cb6e96f63ac61 (patch)
tree      370d7526680de1348d5ad979a5b4e79d8937d4f8 /llvm/lib/Target/RISCV/Disassembler/RISCVDisassembler.cpp
parent    7293455cf292cfaa263ea04fc1bc2aee4ceab6a6 (diff)
[RISCV] Use vrsub for select of add and sub of the same operands (#123400)
If we have a (vselect c, a+b, a-b), we can combine this to a + (vselect c,
b, -b). That by itself isn't hugely profitable, but if we reverse the
select, we get a form which matches a masked vrsub.vi with immediate zero.
The result is that we can use a masked vrsub *before* the add instead of a
masked add or sub. This doesn't change the critical path (since we
already had the pass-through on the masked second operand), but it does
reduce register pressure, since a, b, and (a+b) no longer all need to be
live at once.
In addition to the vselect form, we can also see the same pattern with a
vector_shuffle encoding the vselect. I explored canonicalizing these to
vselects instead, but that exposes several unrelated missing combines.
Diffstat (limited to 'llvm/lib/Target/RISCV/Disassembler/RISCVDisassembler.cpp')
0 files changed, 0 insertions, 0 deletions