author    Ju-Zhe Zhong <juzhe.zhong@rivai.ai>  2023-06-23 21:48:27 +0800
committer Pan Li <pan2.li@intel.com>           2023-06-25 13:58:55 +0800
commit    ef09afa4767c25e23d5d837ce68a5b7ebd9bad1d
tree      1dac02af0c878c2642c89b9fbb48fdeaf117c852
parent    c79476da46728e2ab17e0e546262d2f6377081aa
GIMPLE_FOLD: Apply LEN_MASK_{LOAD,STORE} into GIMPLE_FOLD
Hi, since we are going to emit LEN_MASK_{LOAD,STORE} from the loop vectorizer,
gimple folding should handle these internal functions as well.
Currently:
1. MASK_{LOAD,STORE} is folded into a plain MEM reference when the mask is all ones.
2. LEN_{LOAD,STORE} is folded into a plain MEM reference when (len - bias) equals the vectorization factor (VF).
It therefore makes sense to also fold LEN_MASK_{LOAD,STORE} into a plain MEM reference
when both conditions hold: the mask is all ones and (len - bias) equals the VF.
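
The fold condition can be illustrated with a small self-contained sketch. This is
not GCC's actual gimple-fold.cc code; the PartialAccess struct, foldable_to_plain_mem,
and the plain integer len/bias/mask values are simplifications introduced here purely
for illustration. The point it shows: a length- and mask-controlled access degenerates
to an ordinary full-width memory access exactly when the mask disables no lane and the
effective length (len - bias) covers the whole vector.

```cpp
#include <cstdint>
#include <vector>

// Simplified model of a partially-masked, length-controlled vector access.
// In real GIMPLE these correspond to arguments of .LEN_MASK_LOAD/.LEN_MASK_STORE;
// here they are plain values for illustration only.
struct PartialAccess {
  std::vector<bool> mask;  // per-lane mask
  int64_t len;             // length argument (includes the bias)
  int64_t bias;            // target-specific bias (typically 0 or -1)
};

// Returns true when the partial access touches every lane, i.e. it could be
// folded into an ordinary full-width memory reference (MEM).
static bool foldable_to_plain_mem (const PartialAccess &a, int64_t vf)
{
  // Condition 1: the mask must be all ones (no lane is disabled).
  for (bool lane : a.mask)
    if (!lane)
      return false;

  // Condition 2: the effective length (len - bias) must equal the
  // vectorization factor, so the access covers the full vector.
  return a.len - a.bias == vf;
}

int main ()
{
  // Example: 4 lanes, all-ones mask, len - bias == VF -> foldable.
  PartialAccess a{{true, true, true, true}, 4, 0};
  return foldable_to_plain_mem (a, 4) ? 0 : 1;
}
```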
gcc/ChangeLog:
* gimple-fold.cc (arith_overflowed_p): Apply LEN_MASK_{LOAD,STORE}.
(gimple_fold_partial_load_store_mem_ref): Ditto.
(gimple_fold_partial_store): Ditto.
(gimple_fold_call): Ditto.