| author | Richard Sandiford <richard.sandiford@arm.com> | 2019-11-14 14:45:49 +0000 |
|---|---|---|
| committer | Richard Sandiford <rsandifo@gcc.gnu.org> | 2019-11-14 14:45:49 +0000 |
| commit | 0a0ef2387cc1561d537d8d949aef9479ef17ba35 (patch) | |
| tree | 9c5d882c792520fd488d9020641474564b6e62de /gcc/tree-vect-loop-manip.c | |
| parent | d083ee47a9828236016841356fc7207e7c90bbbd (diff) | |
Add build_truth_vector_type_for_mode
Callers of vect_halve_mask_nunits and vect_double_mask_nunits
already know what mode the resulting vector type should have,
so we might as well create the vector type directly with that mode,
just like build_vector_type_for_mode lets us build normal vectors
with a known mode. This avoids the current awkwardness of having
to recompute the mode starting from vec_info::vector_size, which
hard-codes the assumption that all vectors have to be the same size.
A later patch gets rid of build_truth_vector_type and
build_same_sized_truth_vector_type, so the net effect of the
series is to reduce the number of type functions by one.
2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree.h (build_truth_vector_type_for_mode): Declare.
	* tree.c (build_truth_vector_type_for_mode): New function,
	split out from...
	(build_truth_vector_type): ...here.
	(build_opaque_vector_type): Fix head comment.
	* tree-vectorizer.h (supportable_narrowing_operation): Remove
	vec_info parameter.
	(vect_halve_mask_nunits): Replace vec_info parameter with the
	mode of the new vector.
	(vect_double_mask_nunits): Likewise.
	* tree-vect-loop.c (vect_halve_mask_nunits): Likewise.
	(vect_double_mask_nunits): Likewise.
	* tree-vect-loop-manip.c: Include insn-config.h, rtl.h and recog.h.
	(vect_maybe_permute_loop_masks): Remove vinfo parameter.  Update call
	to vect_halve_mask_nunits, getting the required mode from the unpack
	patterns.
	(vect_set_loop_condition_masked): Update call accordingly.
	* tree-vect-stmts.c (supportable_narrowing_operation): Remove vec_info
	parameter and update call to vect_double_mask_nunits.
	(vectorizable_conversion): Update call accordingly.
	(simple_integer_narrowing): Likewise.  Remove vec_info parameter.
	(vectorizable_call): Update call accordingly.
	(supportable_widening_operation): Update call to
	vect_halve_mask_nunits.
	* config/aarch64/aarch64-sve-builtins.cc (register_builtin_types):
	Use build_truth_vector_type_for_mode instead of
	build_truth_vector_type.
From-SVN: r278231
Diffstat (limited to 'gcc/tree-vect-loop-manip.c')
-rw-r--r--  gcc/tree-vect-loop-manip.c  |  20
1 file changed, 13 insertions, 7 deletions
diff --git a/gcc/tree-vect-loop-manip.c b/gcc/tree-vect-loop-manip.c
index beee5fe..f49d980 100644
--- a/gcc/tree-vect-loop-manip.c
+++ b/gcc/tree-vect-loop-manip.c
@@ -47,6 +47,9 @@ along with GCC; see the file COPYING3.  If not see
 #include "stor-layout.h"
 #include "optabs-query.h"
 #include "vec-perm-indices.h"
+#include "insn-config.h"
+#include "rtl.h"
+#include "recog.h"
 
 /*************************************************************************
   Simple Loop Peeling Utilities
@@ -317,20 +320,24 @@ interleave_supported_p (vec_perm_indices *indices, tree vectype,
    latter.  Return true on success, adding any new statements to SEQ.  */
 
 static bool
-vect_maybe_permute_loop_masks (loop_vec_info loop_vinfo, gimple_seq *seq,
-			       rgroup_masks *dest_rgm,
+vect_maybe_permute_loop_masks (gimple_seq *seq, rgroup_masks *dest_rgm,
 			       rgroup_masks *src_rgm)
 {
   tree src_masktype = src_rgm->mask_type;
   tree dest_masktype = dest_rgm->mask_type;
   machine_mode src_mode = TYPE_MODE (src_masktype);
+  insn_code icode1, icode2;
   if (dest_rgm->max_nscalars_per_iter <= src_rgm->max_nscalars_per_iter
-      && optab_handler (vec_unpacku_hi_optab, src_mode) != CODE_FOR_nothing
-      && optab_handler (vec_unpacku_lo_optab, src_mode) != CODE_FOR_nothing)
+      && (icode1 = optab_handler (vec_unpacku_hi_optab,
+				  src_mode)) != CODE_FOR_nothing
+      && (icode2 = optab_handler (vec_unpacku_lo_optab,
+				  src_mode)) != CODE_FOR_nothing)
     {
       /* Unpacking the source masks gives at least as many mask bits as
	 we need.  We can then VIEW_CONVERT any excess bits away.  */
-      tree unpack_masktype = vect_halve_mask_nunits (loop_vinfo, src_masktype);
+      machine_mode dest_mode = insn_data[icode1].operand[0].mode;
+      gcc_assert (dest_mode == insn_data[icode2].operand[0].mode);
+      tree unpack_masktype = vect_halve_mask_nunits (src_masktype, dest_mode);
       for (unsigned int i = 0; i < dest_rgm->masks.length (); ++i)
	{
	  tree src = src_rgm->masks[i / 2];
@@ -690,8 +697,7 @@ vect_set_loop_condition_masked (class loop *loop, loop_vec_info loop_vinfo,
	{
	  rgroup_masks *half_rgm = &(*masks)[nmasks / 2 - 1];
	  if (!half_rgm->masks.is_empty ()
-	      && vect_maybe_permute_loop_masks (loop_vinfo, &header_seq,
-						rgm, half_rgm))
+	      && vect_maybe_permute_loop_masks (&header_seq, rgm, half_rgm))
	    continue;
	}