author     Richard Sandiford <richard.sandiford@linaro.org>  2018-02-01 11:02:52 +0000
committer  Richard Sandiford <rsandifo@gcc.gnu.org>          2018-02-01 11:02:52 +0000
commit     9a1b9cb4d6fcf88d68f55b97c7d9d09c5606fed7
tree       e9113a183b853716bdd60e811c85a208236d42c6
parent     31b6733b1628a861de4c545bff40acc97850dbbf
[AArch64] Tighten aarch64_secondary_reload condition (PR 83845)
aarch64_secondary_reload enforced a secondary reload via
aarch64_sve_reload_be for memory and pseudo registers, but failed
to do the same for subregs of pseudo registers. To avoid this and
any similar problems, the patch instead tests for things that the move
patterns handle directly; if the operand isn't one of those, we should
use the reload pattern instead.
The patch fixes an ICE in sve/mask_struct_store_3.c for aarch64_be,
where the bogus target description was (rightly) causing LRA to cycle.
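As a rough illustration of the condition change (this is not GCC code: the operand kinds and helper functions below are made up for this sketch, and only mirror the shape of the old and new tests), the key point is that the old test enumerated operands that need the reload, while the new test enumerates the operands the move patterns handle directly and reloads everything else:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the operand shapes discussed above;
   these are labels for this sketch, not GCC rtl classes.  */
enum operand_kind
{
  OP_MEM,              /* memory reference */
  OP_HARD_REG,         /* hard FP register */
  OP_PSEUDO_REG,       /* pseudo register */
  OP_SUBREG_OF_PSEUDO, /* subreg of a pseudo register */
  OP_VALID_IMMEDIATE   /* immediate the move patterns accept */
};

/* Old test: reload memory operands and bare pseudo registers.
   A subreg of a pseudo matches neither arm, so it slipped through.  */
static bool
old_needs_reload_be (enum operand_kind k)
{
  return k == OP_MEM || k == OP_PSEUDO_REG;
}

/* New test: reload everything the move patterns cannot handle
   directly, i.e. everything except hard registers and valid
   immediates.  */
static bool
new_needs_reload_be (enum operand_kind k)
{
  return !(k == OP_HARD_REG || k == OP_VALID_IMMEDIATE);
}

int
main (void)
{
  static const char *const names[] = {
    "MEM", "HARD_REG", "PSEUDO_REG", "SUBREG_OF_PSEUDO", "VALID_IMMEDIATE"
  };
  for (int k = OP_MEM; k <= OP_VALID_IMMEDIATE; k++)
    printf ("%-17s old:%d new:%d\n", names[k],
            old_needs_reload_be ((enum operand_kind) k),
            new_needs_reload_be ((enum operand_kind) k));
  return 0;
}
```

The subreg-of-a-pseudo row is the one that changes (old: no reload, new: reload); under the old test that case bypassed aarch64_sve_reload_be, which is what produced the bogus target description and the LRA cycling.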
2018-02-01 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
PR target/83845
* config/aarch64/aarch64.c (aarch64_secondary_reload): Tighten
check for operands that need to go through aarch64_sve_reload_be.
Reviewed-by: James Greenhalgh <james.greenhalgh@arm.com>
From-SVN: r257285
 gcc/ChangeLog                | 6 ++++++
 gcc/config/aarch64/aarch64.c | 7 ++++++-
 2 files changed, 12 insertions(+), 1 deletion(-)
```diff
diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index b1c5617..adaec48 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,9 @@
+2018-02-01  Richard Sandiford  <richard.sandiford@linaro.org>
+
+	PR target/83845
+	* config/aarch64/aarch64.c (aarch64_secondary_reload): Tighten
+	check for operands that need to go through aarch64_sve_reload_be.
+
 2018-02-01  Jakub Jelinek  <jakub@redhat.com>
 
 	PR tree-optimization/81661
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 174310c..656dd76 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -7249,9 +7249,14 @@ aarch64_secondary_reload (bool in_p ATTRIBUTE_UNUSED, rtx x,
 			  machine_mode mode,
 			  secondary_reload_info *sri)
 {
+  /* Use aarch64_sve_reload_be for SVE reloads that cannot be handled
+     directly by the *aarch64_sve_mov<mode>_be move pattern.  See the
+     comment at the head of aarch64-sve.md for more details about the
+     big-endian handling.  */
   if (BYTES_BIG_ENDIAN
       && reg_class_subset_p (rclass, FP_REGS)
-      && (MEM_P (x) || (REG_P (x) && !HARD_REGISTER_P (x)))
+      && !((REG_P (x) && HARD_REGISTER_P (x))
+	   || aarch64_simd_valid_immediate (x, NULL))
       && aarch64_sve_data_mode_p (mode))
     {
       sri->icode = CODE_FOR_aarch64_sve_reload_be;
```