author    Richard Biener <rguenther@suse.de>  2024-03-05 16:07:41 +0100
committer Richard Biener <rguenther@suse.de>  2024-05-08 14:30:39 +0200
commit    b6822bf3e3f3ff37d64be700f139c8fce3a9bf44
tree      92bafdeb9ee032372e48bc01cc81dbfbcac3735c
parent    73c8e24b692e691c665d0f1f5424432837bd8c06
Fix non-grouped SLP load/store accounting in alignment peeling
When the access is non-grouped we bogusly multiply by zero, since
DR_GROUP_SIZE is not valid for non-grouped accesses.  This shows up
most with single-lane SLP but also happens with the multi-lane splat
case.
* tree-vect-data-refs.cc (vect_enhance_data_refs_alignment):
Properly guard DR_GROUP_SIZE access with STMT_VINFO_GROUPED_ACCESS.
 gcc/tree-vect-data-refs.cc | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/gcc/tree-vect-data-refs.cc b/gcc/tree-vect-data-refs.cc
index c531079..ae23740 100644
--- a/gcc/tree-vect-data-refs.cc
+++ b/gcc/tree-vect-data-refs.cc
@@ -2290,8 +2290,11 @@ vect_enhance_data_refs_alignment (loop_vec_info loop_vinfo)
       if (unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
 	{
 	  poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
-	  nscalars = (STMT_SLP_TYPE (stmt_info)
-		      ? vf * DR_GROUP_SIZE (stmt_info) : vf);
+	  unsigned group_size = 1;
+	  if (STMT_SLP_TYPE (stmt_info)
+	      && STMT_VINFO_GROUPED_ACCESS (stmt_info))
+	    group_size = DR_GROUP_SIZE (stmt_info);
+	  nscalars = vf * group_size;
 	}

       /* Save info about DR in the hash table.  Also include peeling