author    | Juzhe-Zhong <juzhe.zhong@rivai.ai> | 2023-11-24 07:18:00 +0800
committer | Pan Li <pan2.li@intel.com>         | 2023-11-24 14:34:04 +0800
commit    | d83013b88b74d1f1f774d94ca950d3b6dba26e5d (patch)
tree      | 64f53146fd267f20894f8e3e7befaeef57f92774 /gcc/expr.cc
parent    | af7a422da457aa13df8eb1feb601dffafb76ed7c (diff)
RISC-V: Optimize a special case of VLA SLP
While working on fixing bugs for zvl1024b, I noticed a special VLA SLP case
that can be optimized better:
v = vec_perm (op1, op2, { nunits - 1, nunits, nunits + 1, ... })
Before this patch, we are using the generic approach (vrgather):
vid
vadd.vx
vrgather
vmsgeu
vrgather
With this patch, we use vec_extract + slide1up:
scalar = vec_extract (last element of op1)
v = slide1up (op2, scalar)
Tested on zvl128b/zvl256b/zvl512b/zvl1024b on both RV32 and RV64 with no regressions.
Ok for trunk?
PR target/112599
gcc/ChangeLog:
* config/riscv/riscv-v.cc (shuffle_extract_and_slide1up_patterns): New function.
(expand_vec_perm_const_1): Add new optimization.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr112599-2.c: New test.