author    | Tamar Christina <tamar.christina@arm.com> | 2019-02-13 14:04:41 +0000
committer | Tamar Christina <tnfchris@gcc.gnu.org> | 2019-02-13 14:04:41 +0000
commit    | 0c63a8ee9168d4d3cb0e7b97b78e324f65e1a22a
tree      | 59c4c9c332df1e868221e1341e8dbdaee4c6712a
parent    | dbcdd5612f98d84c4d37769944af28b8d89a1aa3
AArch64: Allow any offset for SVE addressing modes before reload.
On AArch64, aarch64_classify_address has a non-strict case that accepts any
byte offset from a register when validating an address against a given
addressing mode, on the assumption that reload will later make the address
valid. SVE, however, requires the address to be valid from the start, yet
the SVE move patterns currently accept any address in the MEM-plus-offset
form. The result is an ICE, because nothing later forces the address to be
legitimate.
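For context, the non-strict escape hatch has roughly this shape in
aarch64_classify_address (paraphrased from the GCC 9-era
gcc/config/aarch64/aarch64.c; the exact guards may differ):

    /* Non-strict: before reload, accept any constant offset from a
       virtual or eliminable base register, trusting that reload will
       rewrite the address into a valid form later.  */
    if (!strict_p
        && REG_P (op0)
        && virt_or_elim_regno_p (REGNO (op0))
        && poly_int_rtx_p (op1, &offset))
      {
        info->type = ADDRESS_REG_IMM;
        info->base = op0;
        info->offset = op1;
        info->const_offset = offset;
        return true;
      }

For SVE predicated loads/stores nothing performs that later rewrite, so an
offset outside the legal range survives all the way to output.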
The patch routes aarch64_emit_sve_pred_move through expand_insn, which
legitimizes the operands and thereby guarantees a valid addressing mode for
any loads/stores it creates, in line with how SVE handles address
classification.
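Concretely, the move is built with the expand_operand machinery instead of
emitting the raw pattern, since expand_insn checks each operand against its
predicate and legitimizes it first. A sketch of that shape, using the
@aarch64_pred_mov name from the ChangeLog below (details may differ from
the committed code):

    /* Emit dest = pred ? src : <undefined> as an SVE predicated move.
       expand_insn forces each operand to satisfy its predicate, which
       legitimizes an out-of-range memory address instead of ICEing.  */
    void
    aarch64_emit_sve_pred_move (rtx dest, rtx pred, rtx src)
    {
      expand_operand ops[3];
      machine_mode mode = GET_MODE (dest);
      create_output_operand (&ops[0], dest, mode);
      create_input_operand (&ops[1], pred, GET_MODE (pred));
      create_input_operand (&ops[2], src, mode);
      /* code_for_aarch64_pred_mov is the typed gen helper produced by
         exposing the .md pattern as "@aarch64_pred_mov<mode>".  */
      expand_insn (code_for_aarch64_pred_mov (mode), 3, ops);
    }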
gcc/ChangeLog:
PR target/88847
* config/aarch64/aarch64-sve.md (*pred_mov<mode>, pred_mov<mode>):
Expose as @aarch64_pred_mov.
* config/aarch64/aarch64.c (aarch64_classify_address):
Use expand_insn which legitimizes operands.
gcc/testsuite/ChangeLog:
PR target/88847
* gcc.target/aarch64/sve/pr88847.c: New test.
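The test itself is not reproduced here; as a hedged illustration (a sketch
in the spirit of the PR, not the actual pr88847.c), the ICE can be provoked
by a structure copy that big-endian fixed-length SVE expands through
predicated moves:

    /* { dg-do compile } */
    /* { dg-options "-O0 -mbig-endian -msve-vector-bits=256" } */

    /* Sketch only: copying a struct of fixed-size vectors is expanded
       as SVE predicated loads/stores, whose frame-based addresses must
       be legitimate at expand time.  */
    typedef struct
    {
      __attribute__ ((vector_size (32))) int v[2];
    } vecs;

    vecs *p;

    void
    foo (void)
    {
      vecs tmp;
      *p = tmp;
    }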
From-SVN: r268845