author     Kyrylo Tkachov <kyrylo.tkachov@arm.com>   2021-01-20 18:11:20 +0000
committer  Kyrylo Tkachov <kyrylo.tkachov@arm.com>   2021-01-20 19:29:42 +0000
commit     e140f5fd3e235c5a37dc99b79f37a5ad4dc59064
tree       af6c82b56028d4c97ed7e3d44f0a1cf997f41d59 /gcc/fortran/resolve.c
parent     f8c677776617ab91826af1d950b00d853eaff622
aarch64: Split vec_selects of bottom elements into simple move
In certain intrinsics use cases GCC leaves SETs of a bottom-element
vec_select lying around:
(vec_select:DI (reg:V2DI 34 v2 [orig:128 __o ] [128])
    (parallel [
        (const_int 0 [0])
    ]))
This can be treated as a simple move in aarch64 when done between SIMD
registers for all normal widths.
These go through the aarch64_get_lane pattern.
This patch adds a splitter there to simplify these extracts to a move
that can, perhaps, be optimised away.
Another benefit is that if the destination is memory we can use a simpler
STR instruction rather than an ST1-lane.
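The splitter added to aarch64-simd.md has roughly the following shape. This is a sketch based on the description above, not the exact patch text: the mode iterator, constraints, and output alternatives shown are illustrative assumptions.

```
;; Sketch: convert aarch64_get_lane<mode> to a define_insn_and_split and,
;; after reload, rewrite an extract of the (endian-corrected) bottom
;; element as a plain register-to-register move.
(define_insn_and_split "aarch64_get_lane<mode>"
  [(set (match_operand:<VEL> 0 "aarch64_simd_nonimmediate_operand" "=?r, w, Utv")
        (vec_select:<VEL>
          (match_operand:VALL_F16 1 "register_operand" "w, w, w")
          (parallel [(match_operand:SI 2 "immediate_operand" "i, i, i")])))]
  "TARGET_SIMD"
  ;; ... existing output alternatives (umov/dup/st1) unchanged ...
  ;; Split only when the selected lane is the architectural bottom element.
  "&& reload_completed
   && ENDIAN_LANE_N (<nunits>, INTVAL (operands[2])) == 0"
  [(set (match_dup 0) (match_dup 1))]
  ""
)
```

Splitting after reload means register allocation has already chosen the alternative, so the new bare SET is matched by the existing move patterns, which pick fmov for SIMD-to-SIMD and STR for stores.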
gcc/
* config/aarch64/aarch64-simd.md (aarch64_get_lane<mode>):
Convert to define_insn_and_split. Split into simple move when moving
bottom element.
gcc/testsuite/
* gcc.target/aarch64/vdup_lane_2.c: Scan for fmov rather than
dup.
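For a lane-0 extract the intended codegen change (hypothetical little-endian output, register numbers are illustrative) is along these lines:

```
// before: lane-0 extract via DUP, lane-0 store via ST1
dup     d0, v1.d[0]
st1     {v1.d}[0], [x0]

// after: plain register move and plain store
fmov    d0, d1
str     d1, [x0]
```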
Diffstat (limited to 'gcc/fortran/resolve.c')
0 files changed, 0 insertions, 0 deletions