author     Jonathan Wright <jonathan.wright@arm.com>    2021-06-02 16:55:00 +0100
committer  Jonathan Wright <jonathan.wright@arm.com>    2021-07-13 21:02:58 +0100
commit     8695bf78dad1a42636775843ca832a2f4dba4da3 (patch)
tree       ef451d228433838626da5180c03eee39f86b1bfc /gcc/config
parent     60aee15bb7ed57d70face854834468b8b9a3ec39 (diff)
gcc: Add vec_select -> subreg RTL simplification
Add a new RTL simplification for the case of a VEC_SELECT selecting
the low part of a vector. The simplification returns a SUBREG.
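As a sketch in GCC-internal terms (hypothetical modes and pseudo numbers;
trueop0/trueop1/mode are the names used inside
simplify_context::simplify_binary_operation_1, and the committed code
differs in detail), the simplification turns a VEC_SELECT of the low
lanes into the equivalent low-part SUBREG:

  /* (vec_select:V2SI (reg:V4SI v)
                      (parallel [(const_int 0) (const_int 1)]))
     selects the low half of v, which is the same value as
     (subreg:V2SI (reg:V4SI v) 0), so return the low-part subreg.  */
  if (GET_CODE (trueop1) == PARALLEL
      && vec_series_lowpart_p (mode, GET_MODE (trueop0), trueop1))
    return lowpart_subreg (mode, trueop0, GET_MODE (trueop0));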
The primary goal of this patch is to enable better combinations of
Neon RTL patterns - specifically allowing generation of 'write-to-
high-half' narrowing instructions.
Adding this RTL simplification means that the expected results for a
number of tests need to be updated:
* aarch64 Neon: Update the scan-assembler regex for intrinsics tests
to expect a scalar register instead of lane 0 of a vector.
* aarch64 SVE: Likewise.
* arm MVE: Use lane 1 instead of lane 0 for lane-extraction
intrinsics tests, as the move instructions get optimized away for
lane 0 (see the sketch after this list).
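For the MVE case, a minimal sketch of the updated test shape
(illustrative, not a verbatim test from the patch):

  #include <arm_mve.h>

  float32_t
  get_lane (float32x4_t v)
  {
    /* Lane 0 is now just the low subreg of the vector, so its
       extraction leaves no move to scan for; lane 1 keeps one.  */
    return vgetq_lane_f32 (v, 1);
  }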
This patch also adds new code generation tests to
narrow_high_combine.c to verify the benefit of this RTL
simplification.
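For example, a kernel in the spirit of the narrow_high_combine.c tests
(hypothetical, not a verbatim copy): with the simplification, the two
narrows should combine into the write-to-high-half form (xtn + xtn2)
instead of a narrow plus a separate element move.

  #include <arm_neon.h>

  uint8x16_t
  narrow_high (uint16x8_t a, uint16x8_t b)
  {
    /* Expected: xtn v0.8b, v0.8h then xtn2 v0.16b, v1.8h.  */
    return vcombine_u8 (vmovn_u16 (a), vmovn_u16 (b));
  }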
gcc/ChangeLog:
2021-06-08 Jonathan Wright <jonathan.wright@arm.com>
* combine.c (combine_simplify_rtx): Add vec_select -> subreg
simplification.
* config/aarch64/aarch64.md (*zero_extend<SHORT:mode><GPI:mode>2_aarch64):
Add Neon to general purpose register case for zero-extend
pattern.
* config/arm/vfp.md (*arm_movsi_vfp): Remove "*" from *t -> r
case to prevent some cases opting to go through memory.
* cse.c (fold_rtx): Add vec_select -> subreg simplification.
* rtl.c (rtvec_series_p): Define predicate to determine
whether a vector contains a linear series of integers (see the
sketch after this ChangeLog).
* rtl.h (rtvec_series_p): Define.
* rtlanal.c (vec_series_lowpart_p): Define predicate to
determine if a vector selection is equivalent to the low part
of the vector.
* rtlanal.h (vec_series_lowpart_p): Define.
* simplify-rtx.c (simplify_context::simplify_binary_operation_1):
Add vec_select -> subreg simplification.
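A minimal sketch of rtvec_series_p consistent with the entry above (the
committed implementation may differ in detail):

  /* Return true if VEC contains the linear series of integers
     { START, START + 1, START + 2, ... }.  */
  bool
  rtvec_series_p (rtvec vec, int start)
  {
    for (int i = 0; i < GET_NUM_ELEM (vec); i++)
      {
        rtx x = RTVEC_ELT (vec, i);
        if (!CONST_INT_P (x) || INTVAL (x) != i + start)
          return false;
      }
    return true;
  }

vec_series_lowpart_p then builds on this: roughly, a selection is the
low part of the vector when the PARALLEL is such a series starting at
the first lane on little-endian targets (and, correspondingly, ending
at the last lane on big-endian targets).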
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/extract_zero_extend.c: Remove dump scan
for RTL pattern match.
* gcc.target/aarch64/narrow_high_combine.c: Add new tests.
* gcc.target/aarch64/simd/vmulx_laneq_f64_1.c: Update
scan-assembler regex to look for a scalar register instead of
lane 0 of a vector (see the sketch after this list).
* gcc.target/aarch64/simd/vmulxd_laneq_f64_1.c: Likewise.
* gcc.target/aarch64/simd/vmulxs_lane_f32_1.c: Likewise.
* gcc.target/aarch64/simd/vmulxs_laneq_f32_1.c: Likewise.
* gcc.target/aarch64/simd/vqdmlalh_lane_s16.c: Likewise.
* gcc.target/aarch64/simd/vqdmlals_lane_s32.c: Likewise.
* gcc.target/aarch64/simd/vqdmlslh_lane_s16.c: Likewise.
* gcc.target/aarch64/simd/vqdmlsls_lane_s32.c: Likewise.
* gcc.target/aarch64/simd/vqdmullh_lane_s16.c: Likewise.
* gcc.target/aarch64/simd/vqdmullh_laneq_s16.c: Likewise.
* gcc.target/aarch64/simd/vqdmulls_lane_s32.c: Likewise.
* gcc.target/aarch64/simd/vqdmulls_laneq_s32.c: Likewise.
* gcc.target/aarch64/sve/dup_lane_1.c: Likewise.
* gcc.target/aarch64/sve/extract_1.c: Likewise.
* gcc.target/aarch64/sve/extract_2.c: Likewise.
* gcc.target/aarch64/sve/extract_3.c: Likewise.
* gcc.target/aarch64/sve/extract_4.c: Likewise.
* gcc.target/aarch64/sve/live_1.c: Update scan-assembler regex
cases to look for 'b' and 'h' registers instead of 'w'.
* gcc.target/arm/crypto-vsha1cq_u32.c: Update scan-assembler
regex to reflect lane 0 vector extractions being simplified
to scalar register moves.
* gcc.target/arm/crypto-vsha1h_u32.c: Likewise.
* gcc.target/arm/crypto-vsha1mq_u32.c: Likewise.
* gcc.target/arm/crypto-vsha1pq_u32.c: Likewise.
* gcc.target/arm/mve/intrinsics/vgetq_lane_f16.c: Extract
lane 1 as the moves for lane 0 now get optimized away.
* gcc.target/arm/mve/intrinsics/vgetq_lane_f32.c: Likewise.
* gcc.target/arm/mve/intrinsics/vgetq_lane_s16.c: Likewise.
* gcc.target/arm/mve/intrinsics/vgetq_lane_s32.c: Likewise.
* gcc.target/arm/mve/intrinsics/vgetq_lane_s8.c: Likewise.
* gcc.target/arm/mve/intrinsics/vgetq_lane_u16.c: Likewise.
* gcc.target/arm/mve/intrinsics/vgetq_lane_u32.c: Likewise.
* gcc.target/arm/mve/intrinsics/vgetq_lane_u8.c: Likewise.
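For the scalar-intrinsic tests, a sketch of why the expected assembly
changes (illustrative; the instruction shown assumes the scalar form of
sqdmull):

  #include <arm_neon.h>

  int32_t
  mull_lane0 (int16_t a, int16x4_t b)
  {
    /* Lane 0 of b now reaches the instruction as a scalar register,
       so the test expects something like "sqdmull s0, h0, h1" rather
       than the by-element "sqdmull s0, h0, v1.h[0]".  */
    return vqdmullh_lane_s16 (a, b, 0);
  }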
Diffstat (limited to 'gcc/config')
-rw-r--r--  gcc/config/aarch64/aarch64.md | 11
-rw-r--r--  gcc/config/arm/vfp.md         |  2
2 files changed, 7 insertions, 6 deletions
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index aef6da9..f12a0be 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -1884,15 +1884,16 @@
 )
 
 (define_insn "*zero_extend<SHORT:mode><GPI:mode>2_aarch64"
-  [(set (match_operand:GPI 0 "register_operand" "=r,r,w")
-        (zero_extend:GPI (match_operand:SHORT 1 "nonimmediate_operand" "r,m,m")))]
+  [(set (match_operand:GPI 0 "register_operand" "=r,r,w,r")
+        (zero_extend:GPI (match_operand:SHORT 1 "nonimmediate_operand" "r,m,m,w")))]
   ""
   "@
    and\t%<GPI:w>0, %<GPI:w>1, <SHORT:short_mask>
    ldr<SHORT:size>\t%w0, %1
-   ldr\t%<SHORT:size>0, %1"
-  [(set_attr "type" "logic_imm,load_4,f_loads")
-   (set_attr "arch" "*,*,fp")]
+   ldr\t%<SHORT:size>0, %1
+   umov\t%w0, %1.<SHORT:size>[0]"
+  [(set_attr "type" "logic_imm,load_4,f_loads,neon_to_gp")
+   (set_attr "arch" "*,*,fp,fp")]
 )
 
 (define_expand "<optab>qihi2"
diff --git a/gcc/config/arm/vfp.md b/gcc/config/arm/vfp.md
index 55b6c1a..93e96369 100644
--- a/gcc/config/arm/vfp.md
+++ b/gcc/config/arm/vfp.md
@@ -224,7 +224,7 @@
 ;; problems because small constants get converted into adds.
 (define_insn "*arm_movsi_vfp"
   [(set (match_operand:SI 0 "nonimmediate_operand" "=rk,r,r,r,rk,m ,*t,r,*t,*t, *Uv")
-        (match_operand:SI 1 "general_operand"      "rk, I,K,j,mi,rk,r,*t,*t,*Uvi,*t"))]
+        (match_operand:SI 1 "general_operand"      "rk, I,K,j,mi,rk,r,t,*t,*Uvi,*t"))]
   "TARGET_ARM && TARGET_HARD_FLOAT
    && (   s_register_operand (operands[0], SImode)
        || s_register_operand (operands[1], SImode))"
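The new fourth alternative in the aarch64 zero-extend pattern handles the
Neon-to-general-purpose case that the simplification now exposes. A
hypothetical example of code expected to use it:

  #include <arm_neon.h>

  unsigned int
  low_lane (uint8x16_t v)
  {
    /* Lane 0 folds to a subreg, so the zero-extending extraction is
       expected to emit a single "umov w0, v0.b[0]".  */
    return vgetq_lane_u8 (v, 0);
  }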