author    Richard Sandiford <richard.sandiford@linaro.org>   2018-01-13 18:01:24 +0000
committer Richard Sandiford <rsandifo@gcc.gnu.org>           2018-01-13 18:01:24 +0000
commit    b781a135a06fc1805c072778d7513df09a32171d
tree      43af641081da5b462f6d95a1d23ab6b0f16dd13a /gcc/fold-const-call.c
parent    b89fa419ca39b13b5ed0f7a23722b394b3af399e
Add support for in-order addition reduction using SVE FADDA
This patch adds support for in-order floating-point addition reductions,
which are suitable even in strict IEEE mode.
Previously vect_is_simple_reduction would reject any cases that forbid
reassociation. The idea is instead to tentatively accept them as
"FOLD_LEFT_REDUCTIONs" and only fail later if there is no support
for them. Although this patch only handles the particular case of plus
and minus on floating-point types, there's no reason in principle why
we couldn't handle other cases.
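To make the target concrete, here is the kind of loop this enables (an
illustrative example of mine, not code from the patch): a strict
floating-point sum that must be evaluated in source order, so any
reassociating vector reduction would be invalid without -ffast-math.

    double
    strict_sum (double *x, int n)
    {
      double res = 0.0;
      for (int i = 0; i < n; ++i)
	res += x[i];  /* Must accumulate left to right under IEEE rules.  */
      return res;
    }

With FOLD_LEFT_REDUCTION the loop can still be vectorized, because the
vector reduction preserves that left-to-right order.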
The reductions use a new fold_left_plus_optab if available; otherwise
they fall back to elementwise additions or subtractions.
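A rough sketch of that fallback shape, assuming a four-lane vector v and
scalar accumulator res (illustrative pseudo-output, not the vectorizer's
literal GIMPLE):

    /* Each lane is extracted and accumulated in order...  */
    res = res + v[0];
    res = res + v[1];
    res = res + v[2];
    res = res + v[3];
    /* ...unlike a tree reduction, which would compute something like
       (v[0] + v[2]) + (v[1] + v[3]) and only then combine with RES.  */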
The vect_force_simple_reduction change makes it easier for parloops
to read the type of reduction.
2018-01-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* optabs.def (fold_left_plus_optab): New optab.
* doc/md.texi (fold_left_plus_@var{m}): Document.
* internal-fn.def (IFN_FOLD_LEFT_PLUS): New internal function.
* internal-fn.c (fold_left_direct): Define.
(expand_fold_left_optab_fn): Likewise.
(direct_fold_left_optab_supported_p): Likewise.
* fold-const-call.c (fold_const_fold_left): New function.
(fold_const_call): Use it to fold CFN_FOLD_LEFT_PLUS.
* tree-parloops.c (valid_reduction_p): New function.
(gather_scalar_reductions): Use it.
* tree-vectorizer.h (FOLD_LEFT_REDUCTION): New vect_reduction_type.
(vect_finish_replace_stmt): Declare.
* tree-vect-loop.c (fold_left_reduction_fn): New function.
(needs_fold_left_reduction_p): New function, split out from...
(vect_is_simple_reduction): ...here. Accept reductions that
forbid reassociation, but give them type FOLD_LEFT_REDUCTION.
(vect_force_simple_reduction): Also store the reduction type in
the assignment's STMT_VINFO_REDUC_TYPE.
(vect_model_reduction_cost): Handle FOLD_LEFT_REDUCTION.
(merge_with_identity): New function.
(vect_expand_fold_left): Likewise.
(vectorize_fold_left_reduction): Likewise.
(vectorizable_reduction): Handle FOLD_LEFT_REDUCTION. Leave the
scalar phi in place for it. Check for target support and reject
cases that would reassociate the operation. Defer the transform
phase to vectorize_fold_left_reduction.
* config/aarch64/aarch64.md (UNSPEC_FADDA): New unspec.
* config/aarch64/aarch64-sve.md (fold_left_plus_<mode>): New expander.
(*fold_left_plus_<mode>, *pred_fold_left_plus_<mode>): New insns.
gcc/testsuite/
* gcc.dg/vect/no-fast-math-vect16.c: Expect the test to pass and
check for a message about using in-order reductions.
* gcc.dg/vect/pr79920.c: Expect both loops to be vectorized and
check for a message about using in-order reductions.
* gcc.dg/vect/trapv-vect-reduc-4.c: Expect all three loops to be
vectorized and check for a message about using in-order reductions.
Expect targets with variable-length vectors to fall back to the
fixed-length minimum.
* gcc.dg/vect/vect-reduc-6.c: Expect the loop to be vectorized and
check for a message about using in-order reductions.
* gcc.dg/vect/vect-reduc-in-order-1.c: New test.
* gcc.dg/vect/vect-reduc-in-order-2.c: Likewise.
* gcc.dg/vect/vect-reduc-in-order-3.c: Likewise.
* gcc.dg/vect/vect-reduc-in-order-4.c: Likewise.
* gcc.target/aarch64/sve/reduc_strict_1.c: New test.
* gcc.target/aarch64/sve/reduc_strict_1_run.c: Likewise.
* gcc.target/aarch64/sve/reduc_strict_2.c: Likewise.
* gcc.target/aarch64/sve/reduc_strict_2_run.c: Likewise.
* gcc.target/aarch64/sve/reduc_strict_3.c: Likewise.
* gcc.target/aarch64/sve/slp_13.c: Add floating-point types.
* gfortran.dg/vect/vect-8.f90: Expect 22 loops to be vectorized if
vect_fold_left_plus is supported.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256639
Diffstat (limited to 'gcc/fold-const-call.c')
-rw-r--r--  gcc/fold-const-call.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+), 0 deletions(-)
diff --git a/gcc/fold-const-call.c b/gcc/fold-const-call.c
index 60acf96..fcf4a14 100644
--- a/gcc/fold-const-call.c
+++ b/gcc/fold-const-call.c
@@ -1195,6 +1195,28 @@ fold_const_call (combined_fn fn, tree type, tree arg)
     }
 }
 
+/* Fold a call to IFN_FOLD_LEFT_<CODE> (ARG0, ARG1), returning a value
+   of type TYPE.  */
+
+static tree
+fold_const_fold_left (tree type, tree arg0, tree arg1, tree_code code)
+{
+  if (TREE_CODE (arg1) != VECTOR_CST)
+    return NULL_TREE;
+
+  unsigned HOST_WIDE_INT nelts;
+  if (!VECTOR_CST_NELTS (arg1).is_constant (&nelts))
+    return NULL_TREE;
+
+  for (unsigned HOST_WIDE_INT i = 0; i < nelts; i++)
+    {
+      arg0 = const_binop (code, type, arg0, VECTOR_CST_ELT (arg1, i));
+      if (arg0 == NULL_TREE || !CONSTANT_CLASS_P (arg0))
+	return NULL_TREE;
+    }
+  return arg0;
+}
+
 /* Try to evaluate:
 
       *RESULT = FN (*ARG0, *ARG1)
@@ -1500,6 +1522,9 @@ fold_const_call (combined_fn fn, tree type, tree arg0, tree arg1)
 	}
       return NULL_TREE;
 
+    case CFN_FOLD_LEFT_PLUS:
+      return fold_const_fold_left (type, arg0, arg1, PLUS_EXPR);
+
     default:
       return fold_const_call_1 (fn, type, arg0, arg1);
     }
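For reference, the new folding behaves like a left-to-right scalar loop
over the vector elements. A plain-C model of its semantics (a
hypothetical helper written for illustration; fold_const_fold_left
itself operates on GCC trees):

    #include <stddef.h>

    /* Models CFN_FOLD_LEFT_PLUS on doubles: strictly in-order
       accumulation, matching the const_binop loop above.  */
    static double
    fold_left_plus (double acc, const double *elts, size_t n)
    {
      for (size_t i = 0; i < n; ++i)
	acc = acc + elts[i];
      return acc;
    }

So fold_left_plus (1.0, (double[]) { 2.0, 3.0 }, 2) computes
(1.0 + 2.0) + 3.0, which is exactly what the constant folder now
produces for CFN_FOLD_LEFT_PLUS when all operands are constants.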