author     Richard Sandiford <richard.sandiford@linaro.org>  2018-04-10 10:28:33 +0000
committer  Richard Sandiford <rsandifo@gcc.gnu.org>          2018-04-10 10:28:33 +0000
commit     eb38d071636da1ea2d0f9a068c86c7ceee2634b2 (patch)
tree       76f515b13d6c8afd7d42d5ba5d416e64b0cf8cd1 /gcc/tree-vect-data-refs.c
parent     02149a789076495212f47452550971bc3c5a0b9a (diff)
Add missing cases to vect_get_smallest_scalar_type (PR 85286)
In this PR we used WIDEN_SUM_EXPR to vectorise:

  short i, y;
  int sum;
  [...]
  for (i = x; i > 0; i--)
    sum += y;

with 4 ints and 8 shorts per vector.  The problem was that we set the
VF based only on the ints, then calculated the number of vector copies
based on the shorts, giving 4/8.  Previously that led to ncopies==0,
but after r249897 we pick it up as an ICE.

In this particular case we could vectorise the reduction by setting
ncopies based on the output type rather than the input type, but it
doesn't seem worth adding a special "optimisation" for such a
pathological case.  I think it's really an instance of the more
general problem that we can't vectorise using combinations of (say)
64-bit and 128-bit vectors on targets that support both.

2018-04-10  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
	PR tree-optimization/85286
	* tree-vect-data-refs.c (vect_get_smallest_scalar_type): Handle
	DOT_PROD_EXPR and WIDEN_SUM_EXPR.

gcc/testsuite/
	PR tree-optimization/85286
	* gcc.dg/vect/pr85286.c: New test.

From-SVN: r259268
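For reference, here is a compilable sketch of the reproducer described
above.  It is a hedged reconstruction from the snippet in the commit
message, not the committed test: the function wrapper and the parameter
x are assumptions, and gcc.dg/vect/pr85286.c may differ.

  /* Hypothetical reconstruction of the ICE reproducer; the [...] in
     the commit message elides the surrounding code, which is assumed
     here.  Compile with something like -O3 on a target whose vectors
     hold 4 ints and 8 shorts (e.g. 128-bit vectors).  */
  short i, y;
  int sum;

  void
  f (short x)
  {
    for (i = x; i > 0; i--)
      sum += y;
  }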
Diffstat (limited to 'gcc/tree-vect-data-refs.c')
-rw-r--r--  gcc/tree-vect-data-refs.c | 2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/gcc/tree-vect-data-refs.c b/gcc/tree-vect-data-refs.c
index ce24387..161a886 100644
--- a/gcc/tree-vect-data-refs.c
+++ b/gcc/tree-vect-data-refs.c
@@ -132,6 +132,8 @@ vect_get_smallest_scalar_type (gimple *stmt, HOST_WIDE_INT *lhs_size_unit,
   if (is_gimple_assign (stmt)
       && (gimple_assign_cast_p (stmt)
+	  || gimple_assign_rhs_code (stmt) == DOT_PROD_EXPR
+	  || gimple_assign_rhs_code (stmt) == WIDEN_SUM_EXPR
 	  || gimple_assign_rhs_code (stmt) == WIDEN_MULT_EXPR
 	  || gimple_assign_rhs_code (stmt) == WIDEN_LSHIFT_EXPR
 	  || gimple_assign_rhs_code (stmt) == FLOAT_EXPR))
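For context, here is a minimal sketch (not taken from the patch or the
GCC sources; function and variable names are illustrative assumptions)
of the source-level loop shapes that the vectoriser typically
represents with the two rhs codes added above.

  /* Widening sum: a short input accumulated into an int result.  The
     vectoriser can represent the loop body as
     WIDEN_SUM_EXPR <b[i], sum>, so the smallest scalar type involved
     is short, not int.  */
  int
  widen_sum (short *b, int n)
  {
    int sum = 0;
    for (int i = 0; i < n; i++)
      sum += b[i];
    return sum;
  }

  /* Dot product: two short inputs multiplied and accumulated into an
     int, representable as DOT_PROD_EXPR <a[i], b[i], sum>.  */
  int
  dot_prod (short *a, short *b, int n)
  {
    int sum = 0;
    for (int i = 0; i < n; i++)
      sum += a[i] * b[i];
    return sum;
  }

With the patch, vect_get_smallest_scalar_type also considers the
narrower rhs type for such statements, so the VF and the number of
vector copies are derived from consistent type information.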