path: root/lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunicationServer.cpp
author    Simon Pilgrim <llvm-dev@redking.me.uk> 2021-11-25 11:14:06 +0000
committer Simon Pilgrim <llvm-dev@redking.me.uk> 2021-11-25 11:14:15 +0000
commit    63b1e58f0738cc9977b47f947679ef5544808b73 (patch)
tree      bf39fa91311d1452a28e40306eaf4abdb401ec6b /lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunicationServer.cpp
parent    d44f2a6db2c71be04a588431a8ffb80d2d9e76f1 (diff)
[DAG] SimplifyDemandedBits - simplify rotl/rotr to shl/srl (REAPPLIED)
If we only demand bits from one half of a rotation pattern, see if we can simplify it to a logical shift. For the ARM/AArch64 rev16/32 patterns, I had to drop a fold to prevent srl(bswap()) -> rotr(bswap) -> srl(bswap) infinite loops. I've replaced this with an isel PatFrag which should do the same task.

Reapplied with a fix for the AArch64 rev patterns to match the ARM fix.

Alive2 proofs:
- https://alive2.llvm.org/ce/z/iroxki (rol -> shl by amt iff the demanded bits have at least as many trailing zeros as the shift amount)
- https://alive2.llvm.org/ce/z/4ez_U- (ror -> shl by revamt iff the demanded bits have at least as many trailing zeros as the reverse shift amount)
- https://alive2.llvm.org/ce/z/cD7dR- (ror -> lshr by amt iff the demanded bits have at least as many leading zeros as the shift amount)
- https://alive2.llvm.org/ce/z/_XGHtQ (rol -> lshr by revamt iff the demanded bits have at least as many leading zeros as the reverse shift amount)

Differential Revision: https://reviews.llvm.org/D114354
Diffstat (limited to 'lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunicationServer.cpp')
0 files changed, 0 insertions, 0 deletions