path: root/gcc/rtl.h
author	Kyrylo Tkachov <kyrylo.tkachov@arm.com>	2023-06-01 09:37:06 +0100
committer	Kyrylo Tkachov <kyrylo.tkachov@arm.com>	2023-06-01 09:37:06 +0100
commit	12e71b593ea0c64d919df525cd75ea10b7be8a4b (patch)
tree	d6ca160cad485316ad68de0d2401a9530a2efa03 /gcc/rtl.h
parent	2df7e45188f32e3c448e004af38d56eb9ab8d959 (diff)
aarch64: Add =r,m and =m,r alternatives to 64-bit vector move patterns
We can use the X registers to load and store 64-bit vector modes; we just need to add the alternatives to the mov patterns. This straightforward patch does that, and for the pair variants too. For the testcase in the patch we now generate the optimal assembly without any superfluous GP<->SIMD moves.

Bootstrapped and tested on aarch64-none-linux-gnu and aarch64_be-none-elf.

gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (*aarch64_simd_mov<VDMOV:mode>):
	Add =r,m and =m,r alternatives.
	(load_pair<DREG:mode><DREG2:mode>): Likewise.
	(vec_store_pair<DREG:mode><DREG2:mode>): Likewise.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/xreg-vec-modes_1.c: New test.
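
For illustration only (this is not the committed xreg-vec-modes_1.c testcase; the type and function names are made up), a minimal sketch of the kind of code that benefits: the empty inline asm forces the 64-bit vector value into a general-purpose register, and with the new =r,m and =m,r alternatives the load and store can use an X register directly instead of bouncing the value through the SIMD register file.

	typedef unsigned int v2si __attribute__ ((vector_size (8)));

	void
	copy_through_gp (v2si *dst, v2si *src)
	{
	  v2si tmp = *src;
	  /* Keep tmp in a general-purpose (X) register at this point.  */
	  __asm__ volatile ("" : "+r" (tmp));
	  *dst = tmp;
	}

With the patch applied one would expect a plain ldr/str pair on an X register here, with no fmov between the GP and SIMD register files; without it the value would be loaded into a D register and then moved across.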
Diffstat (limited to 'gcc/rtl.h')
0 files changed, 0 insertions, 0 deletions