author:    Andrew Pinski <apinski@marvell.com>  2023-03-10 00:53:39 +0000
committer: Andrew Pinski <apinski@marvell.com>  2023-03-10 17:51:54 +0000
commit:    fcbc5c190c40b51c82f827830ce403c07d628960
tree:      89071f26d2ff6e06cbca307e38ad9ad383c26088
parent:    f8332e52a498df480f72303de32ad0751ad899fe
Fix PR 108874: aarch64 code regression with shift and ands
After r6-2044-g98e30e515f184b, code like "((x & 0xff00ff00U) >> 8)"
is optimized to "((x >> 8) & 0xff00ffU)", which is normally better,
except on aarch64, where the shift right could previously be combined
with another operation in some cases. So we need to add a few
define_splits to the aarch64 backend that match "((x >> shift) & CST0) OP Y"
and split it to:
TMP = X & CST1
(TMP >> shift) OP Y
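For illustration, here is a hypothetical example (the function and the
choice of OP are mine, not taken from the patch) of code that hits this
split, with the kind of assembly one would expect after the fix:

    unsigned int
    f (unsigned int x, unsigned int y)
    {
      /* The middle end canonicalizes this to ((x >> 8) & 0xff00ffU) ^ y,
         which hides the shift from aarch64's shifted-operand eor pattern.
         The new split rewrites it back to ((x & 0xff00ff00U) >> 8) ^ y,
         so combine can fold the shift into the eor, giving something like:
             and  w0, w0, 0xff00ff00
             eor  w0, w1, w0, lsr 8  */
      return ((x & 0xff00ff00U) >> 8) ^ y;
    }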
Note this also gets us back to matching rev16, so I added a
testcase to make sure we do not lose that matching again.
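A sketch of the rev16 idiom in question (an assumption on my part; the
actual testcase contents may differ):

    unsigned int
    swap_bytes_in_each_halfword (unsigned int x)
    {
      /* Byte-swap within each 16-bit halfword; on aarch64 this should
         be matched as a single "rev16 w0, w0" instruction.  */
      return ((x & 0xff00ff00U) >> 8) | ((x & 0x00ff00ffU) << 8);
    }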
Note that when the generic patch to recognize those as bswap ROT 16
lands, we might regress again and need to add a few more patterns to
the aarch64 backend, but we will deal with that once it happens.
Committed as approved after a bootstrap/test on aarch64-linux-gnu with no regressions.
gcc/ChangeLog:
* config/aarch64/aarch64.md: Add a new define_split
to help combine.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/rev16_2.c: New test.
* gcc.target/aarch64/shift_and_operator-1.c: New test.