author:    Kyrylo Tkachov <ktkachov@nvidia.com>  2024-11-05 05:10:22 -0800
committer: Kyrylo Tkachov <ktkachov@nvidia.com>  2024-11-05 17:58:00 +0100
commit:    161e246cf32f1298400aa3c1d86110490a3cd0ce
tree:      8f3bfe5a21604b500f6a914f9ff8c1eac391f870
parent:    f185a89fc4b6e6f5ae5475cd7c723b3acf39976b
PR target/117449: Restrict vector rotate match and split to pre-reload
The vector rotate splitter has some logic to deal with post-reload splitting, but not all cases in aarch64_emit_opt_vec_rotate are post-reload-safe. In particular, the ROTATE+XOR expansion for TARGET_SHA3 can create RTL that can later be simplified to a simple ROTATE post-reload, which would then match the insn again and try to split it. So do a clean split pre-reload and avoid going down this path post-reload by restricting the insn_and_split to can_create_pseudo_p ().

Bootstrapped and tested on aarch64-none-linux.

Signed-off-by: Kyrylo Tkachov <ktkachov@nvidia.com>

gcc/

	PR target/117449
	* config/aarch64/aarch64-simd.md (*aarch64_simd_rotate_imm<mode>):
	Match only when can_create_pseudo_p ().
	* config/aarch64/aarch64.cc (aarch64_emit_opt_vec_rotate): Assume
	can_create_pseudo_p ().

gcc/testsuite/

	PR target/117449
	* gcc.c-torture/compile/pr117449.c: New test.
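The shape of the fix can be illustrated with a hedged sketch of the pattern (this is not the actual definition from aarch64-simd.md: the operand predicates, mode iterator usage, and split body shown here are placeholders; only the gating on can_create_pseudo_p () reflects the change described above):

```
;; Sketch only, not the real pattern.  The key point is the insn
;; condition: requiring can_create_pseudo_p () means the pattern can
;; only match (and therefore only split) before register allocation,
;; so the splitter is free to create new pseudo registers and the
;; post-reload re-matching problem described in the log cannot occur.
(define_insn_and_split "*aarch64_simd_rotate_imm<mode>"
  [(set (match_operand:VDQ_I 0 "register_operand")
	(rotate:VDQ_I
	  (match_operand:VDQ_I 1 "register_operand")
	  (match_operand:VDQ_I 2 "immediate_operand")))]  ; placeholder predicate
  "TARGET_SIMD && can_create_pseudo_p ()"                 ; pre-reload only
  "#"
  "&& true"
  [(const_int 0)]
  {
    /* Placeholder split body: expand the rotate into the optimized
       sequence (e.g. the ROTATE+XOR form for TARGET_SHA3), which may
       need fresh pseudos and is therefore pre-reload only.  */
    DONE;
  }
)
```

Because can_create_pseudo_p () is false after reload, the insn simply never matches at that point, so any ROTATE that re-emerges from post-reload simplification is handled by the ordinary rotate patterns instead of re-entering this splitter.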