author     Kewen Lin <linkw@linux.ibm.com>    2023-07-26 21:43:09 -0500
committer  Kewen Lin <linkw@linux.ibm.com>    2023-07-26 21:43:09 -0500
commit     9890d4e8bcda1f34b8eefb481935ef0e4cd8069e (patch)
tree       11fe19dc8533aa3ad1f803e258b7303b2253351b /gcc/tree-vect-stmts.cc
parent     6f709f79c915a1ea82220a44e9f4a144d5eedfd1 (diff)
vect: Treat VMAT_ELEMENTWISE as scalar load in costing [PR110776]
PR110776 exposes an issue where we query unaligned load
support for a vector type even though no unaligned vector
load is actually supported there.  The reason is that the
costed load has a single-lane vector type and its memory
access type is VMAT_ELEMENTWISE, yet we actually treat it as
a scalar load and set its alignment_support_scheme to
dr_unaligned_supported.  To avoid the exposed ICE, following
Richard's suggestion, this patch makes VMAT_ELEMENTWISE be
costed as a scalar load.
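The following is a minimal standalone sketch of the costing decision
this patch changes; the enum, function name and cost numbers are
illustrative stand-ins, not GCC's internal API.

  /* Sketch: cost an elementwise access as a scalar load, even when the
     per-group lane type is a (single-lane) vector type, so that no
     unaligned-vector-load support is ever queried for it.  */
  #include <cstdio>

  enum vect_memory_access_type { VMAT_CONTIGUOUS, VMAT_ELEMENTWISE };

  static unsigned
  load_cost (bool lane_type_is_vector, vect_memory_access_type access_type)
  {
    /* After the patch: only non-elementwise vector-typed lanes go through
       the vector load cost query (which consults the alignment support
       scheme); VMAT_ELEMENTWISE always takes the scalar_load path.  */
    if (lane_type_is_vector && access_type != VMAT_ELEMENTWISE)
      return 2;   /* stand-in for vect_get_load_cost ()  */
    return 1;     /* stand-in for a scalar_load cost  */
  }

  int
  main ()
  {
    printf ("elementwise single-lane vector load: cost %u (scalar)\n",
            load_cost (true, VMAT_ELEMENTWISE));
    printf ("contiguous vector load: cost %u (vector)\n",
            load_cost (true, VMAT_CONTIGUOUS));
    return 0;
  }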
Co-authored-by: Richard Biener <rguenther@suse.de>
PR tree-optimization/110776
gcc/ChangeLog:
* tree-vect-stmts.cc (vectorizable_load): Always cost VMAT_ELEMENTWISE
as scalar load.
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/pr110776.c: New test.
Diffstat (limited to 'gcc/tree-vect-stmts.cc')
 gcc/tree-vect-stmts.cc | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 5018bd2..6a4e8fc 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -9876,7 +9876,10 @@ vectorizable_load (vec_info *vinfo,
     {
       if (costing_p)
         {
-          if (VECTOR_TYPE_P (ltype))
+          /* For VMAT_ELEMENTWISE, just cost it as scalar_load to
+             avoid ICE, see PR110776.  */
+          if (VECTOR_TYPE_P (ltype)
+              && memory_access_type != VMAT_ELEMENTWISE)
             vect_get_load_cost (vinfo, stmt_info, 1,
                                 alignment_support_scheme, misalignment,
                                 false, &inside_cost,
                                 nullptr, cost_vec,