|
This fixes two PRs on early break vectorization by delaying the safety checks to
vectorizable_load, when the VF, VMAT and vectype are all known.
This patch does add two new restrictions:
1. On LOAD_LANES targets, where the buffer size is known, we reject non-power-of-two
group sizes, as they are unaligned every other iteration and so may cross a page
unwittingly. For those cases we require partial masking support (see the
hypothetical example after this list).
2. On LOAD_LANES targets, when the buffer size is unknown, we reject vectorization
if we cannot peel for alignment, as the alignment requirement is quite large at
GROUP_SIZE * vectype_size. This is unlikely to ever be beneficial so we don't
support it for now.
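As a hypothetical illustration of restriction 1 (the array name and bounds are made
up, not from the testsuite), a loop with a load group of size 3 would now require
partial masking support on a LOAD_LANES target:

/* Non-power-of-two load group (size 3) in an early-break loop.  Each vector
   iteration reads 3 * VF elements, so consecutive vector iterations start at
   different alignments and a full-vector read may stray across a page boundary
   unless partial masking keeps the tail lanes inactive.  */
int points[256][3];

int find_point (int key)
{
  for (int i = 0; i < 256; i++)
    if (points[i][0] == key && points[i][1] == key && points[i][2] == key)
      return i;
  return -1;
}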
There are other steps documented inside the code itself so that the reasoning
is next to the code.
As a fall-back, when the alignment cannot be established we require partial vector
support.
For VLA targets like SVE we return element alignment as the desired vector
alignment. This means that the loads are never misaligned and so, annoyingly, it
will never need to peel.
So what I think needs to happen in GCC 16 is:
1. during vect_compute_data_ref_alignment we need to take the max of
POLY_VALUE_MIN and vector_alignment.
2. In vect_do_peeling, define skip_vector when doing PFA for VLA, and in the guard
add a check that ncopies * vectype does not exceed POLY_VALUE_MAX, which we use as
a proxy for the page size.
3. Force LOOP_VINFO_USING_PARTIAL_VECTORS_P to be true in
vect_determine_partial_vectors_and_peeling since the first iteration has to
be partial. Require LOOP_VINFO_MUST_USE_PARTIAL_VECTORS_P otherwise we have
to fail to vectorize.
4. Create a default mask to be used, so that vect_use_loop_mask_for_alignment_p
becomes true and we generate the peeled check through loop control for
partial loops. From what I can tell this won't work for
LOOP_VINFO_FULLY_WITH_LENGTH_P since they don't have any peeling support at
all in the compiler. That would need to be done independently from the
above.
In any case, not GCC 15 material so I've kept the WIP patches I have downstream.
Bootstrapped Regtested on aarch64-none-linux-gnu,
arm-none-linux-gnueabihf, x86_64-pc-linux-gnu
-m32, -m64 and no issues.
gcc/ChangeLog:
PR tree-optimization/118464
PR tree-optimization/116855
* doc/invoke.texi (min-pagesize): Update docs with vectorizer use.
* tree-vect-data-refs.cc (vect_analyze_early_break_dependences): Delay
checks.
(vect_compute_data_ref_alignment): Remove alignment checks and move to
get_load_store_type, increase group access alignment.
(vect_enhance_data_refs_alignment): Add note to comment needing
investigating.
(vect_analyze_data_refs_alignment): Likewise.
(vect_supportable_dr_alignment): For group loads look at first DR.
* tree-vect-stmts.cc (get_load_store_type):
Perform safety checks for early break pfa.
* tree-vectorizer.h (dr_set_safe_speculative_read_required,
dr_safe_speculative_read_required, DR_SCALAR_KNOWN_BOUNDS): New.
(need_peeling_for_alignment): Renamed to...
(safe_speculative_read_required): ... This.
(class dr_vec_info): Add scalar_access_known_in_bounds.
gcc/testsuite/ChangeLog:
PR tree-optimization/118464
PR tree-optimization/116855
* gcc.dg/vect/bb-slp-pr65935.c: Update, it now vectorizes because the
load type is relaxed later.
* gcc.dg/vect/vect-early-break_121-pr114081.c: Update.
* gcc.dg/vect/vect-early-break_22.c: Require partial vectors.
* gcc.dg/vect/vect-early-break_128.c: Likewise.
* gcc.dg/vect/vect-early-break_26.c: Likewise.
* gcc.dg/vect/vect-early-break_43.c: Likewise.
* gcc.dg/vect/vect-early-break_44.c: Likewise.
* gcc.dg/vect/vect-early-break_2.c: Require load_lanes.
* gcc.dg/vect/vect-early-break_7.c: Likewise.
* gcc.dg/vect/vect-early-break_132-pr118464.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa1.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa11.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa10.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa2.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa3.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa4.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa5.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa6.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa7.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa8.c: New test.
* gcc.dg/vect/vect-early-break_133_pfa9.c: New test.
* gcc.dg/vect/vect-early-break_39.c: Update testcase for misalignment.
* gcc.dg/vect/vect-early-break_18.c: Likewise.
* gcc.dg/vect/vect-early-break_20.c: Likewise.
* gcc.dg/vect/vect-early-break_21.c: Likewise.
* gcc.dg/vect/vect-early-break_38.c: Likewise.
* gcc.dg/vect/vect-early-break_6.c: Likewise.
* gcc.dg/vect/vect-early-break_53.c: Likewise.
* gcc.dg/vect/vect-early-break_56.c: Likewise.
* gcc.dg/vect/vect-early-break_57.c: Likewise.
* gcc.dg/vect/vect-early-break_81.c: Likewise.
|
|
The following fixes a latent issue where we use ranges to verify
correctness of a vector conversion optimization. We rely on ranges
from 'op0' which for SLP is extracted from the representative stmt
which does not necessarily correspond to any actual scalar operation.
We also do not verify that the ranges of all scalar lanes in the SLP
operand match. The following rectifies this, restricting the support
to single-lane SLP nodes at this point - on branches we'd simply
not perform this optimization with SLP.
PR tree-optimization/115538
* tree-vectorizer.h (vect_get_slp_scalar_def): Declare.
* tree-vect-slp.cc (vect_get_slp_scalar_def): New helper.
* tree-vect-generic.cc (expand_vector_conversion): Adjust.
* tree-vect-stmts.cc (vectorizable_conversion): For SLP
correctly look at ranges of the scalar defs of the SLP operand.
(supportable_indirect_convert_operation): Likewise.
|
|
There are calls to dr_misalignment left that do not correct for the
offset (which is vector type dependent) when the stride is negative.
Notably vect_known_alignment_in_bytes doesn't allow to pass through
such offset which the following adds (computing the offset in
vect_known_alignment_in_bytes would be possible as well, but the
offset can be shared as seen). Eventually this function could go away.
This leads to peeling for gaps not being considered, nor shortening of the
access being applied, which is what fixes the testcase on x86_64.
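A hypothetical negative-stride loop of the kind affected (not the actual testcase
from the PR):

/* Reversed copy: the loads from b have a negative stride, so the misalignment
   of the first element actually accessed depends on a vector-type dependent
   offset from the data reference's base address.  */
void
reverse_copy (int *restrict a, const int *restrict b, int n)
{
  for (int i = 0; i < n; i++)
    a[i] = b[n - 1 - i];
}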
PR tree-optimization/118558
* tree-vectorizer.h (vect_known_alignment_in_bytes): Pass
through offset to dr_misalignment.
* tree-vect-stmts.cc (get_group_load_store_type): Compute
offset applied for negative stride and use it when querying
alignment of accesses.
(vectorizable_load): Likewise.
* gcc.dg/vect/pr118558.c: New testcase.
|
|
This allows us to vectorize more loops with early exits by forcing
peeling for alignment to make sure that we're guaranteed to be able to
safely read an entire vector iteration without crossing a page boundary.
To make this work for VLA architectures we have to allow compile-time
non-constant target alignments. We also have to override the result of
the target's preferred_vector_alignment hook if it isn't a power-of-two
multiple of the TYPE_SIZE of the chosen vector type.
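As a made-up illustration (not taken from the patch or testsuite), an early-exit
search loop of this kind reads speculatively under vectorization; by peeling scalar
iterations until the access reaches the chosen safe alignment, a whole vector load
is guaranteed not to cross a page boundary:

/* The load of p[i] feeds the exit condition, so a vector iteration may read
   past the last element the scalar loop would have touched.  Peeling for
   alignment (possibly by a compile-time non-constant amount for VLA) keeps
   each vector read within one page.  */
int
find_zero (const int *p, int n)
{
  for (int i = 0; i < n; i++)
    if (p[i] == 0)
      return i;
  return -1;
}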
gcc/ChangeLog:
PR tree-optimization/118211
PR tree-optimization/116126
* tree-vect-data-refs.cc (vect_analyze_early_break_dependences):
Set need_peeling_for_alignment flag on read DRs instead of
failing vectorization. Punt on gathers.
(dr_misalignment): Handle non-constant target alignments.
(vect_compute_data_ref_alignment): If need_peeling_for_alignment
flag is set on the DR, then override the target alignment chosen
by the preferred_vector_alignment hook to choose a safe
alignment.
(vect_supportable_dr_alignment): Override
support_vector_misalignment hook if need_peeling_for_alignment
is set on the DR: in this case we must return
dr_unaligned_unsupported in order to force peeling.
* tree-vect-loop-manip.cc (vect_do_peeling): Allow prolog
peeling by a compile-time non-constant amount.
* tree-vectorizer.h (dr_vec_info): Add new flag
need_peeling_for_alignment.
gcc/testsuite/ChangeLog:
PR tree-optimization/118211
PR tree-optimization/116126
* gcc.dg/tree-ssa/cunroll-13.c: Don't vectorize.
* gcc.dg/tree-ssa/cunroll-14.c: Likewise.
* gcc.dg/unroll-6.c: Likewise.
* gcc.dg/tree-ssa/gen-vect-28.c: Likewise.
* gcc.dg/vect/vect-104.c: Expect to vectorize.
* gcc.dg/vect/vect-early-break_108-pr113588.c: Likewise.
* gcc.dg/vect/vect-early-break_109-pr113588.c: Likewise.
* gcc.dg/vect/vect-early-break_110-pr113467.c: Likewise.
* gcc.dg/vect/vect-early-break_3.c: Likewise.
* gcc.dg/vect/vect-early-break_65.c: Likewise.
* gcc.dg/vect/vect-early-break_8.c: Likewise.
* gfortran.dg/vect/vect-5.f90: Likewise.
* gfortran.dg/vect/vect-8.f90: Likewise.
* gcc.dg/vect/vect-switch-search-line-fast.c:
Co-Authored-By: Tamar Christina <tamar.christina@arm.com>
|
|
The following fixes memory leaks found compiling SPEC CPU 2017 with
valgrind.
* df-core.cc (rest_of_handle_df_finish): Release dflow for
problems without free function (like LR).
* gimple-crc-optimization.cc (crc_optimization::loop_may_calculate_crc):
Release loop_bbs on all exits.
* tree-vectorizer.h (supportable_indirect_convert_operation): Change.
* tree-vect-generic.cc (expand_vector_conversion): Adjust.
* tree-vect-stmts.cc (vectorizable_conversion): Use auto_vec for
converts.
(supportable_indirect_convert_operation): Get a reference to
the output vector of converts.
|
|
|
|
While testing upcoming support for 64-bit location_t, I came across some
test failures on sparc (32-bit) that trigger when location_t is changed to
be 64-bit. The reason is that several call sites that make use of
loop_version() for performing loop optimizations assume that a gphi*
obtained prior to calling loop_version() will remain valid afterwards, but
this is not the case for a PHI that needs to be resized. It doesn't happen
usually, because PHI nodes usually have room for at least 4 arguments and
this is usually more than are needed, but this is not guaranteed.
Fix the affected callers by avoiding the assumption that a PHI node pointer
remains valid. For most cases, this is done by remembering instead the
gphi->result pointer, which contains a pointer back to the PHI node that is
kept up to date when the PHI is moved to a new address.
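A minimal sketch of the idiom (simplified, not the exact patch): instead of holding
on to a gphi * across loop_version (), keep the SSA result and re-derive the PHI
afterwards.

  /* The PHI result stays valid even if the PHI node itself is resized and
     therefore moved to a new address.  */
  tree res = gimple_phi_result (phi);

  /* ... loop_version () runs here and may resize PHIs ... */

  /* Recover the (possibly relocated) PHI from its defining statement.  */
  gphi *phi_now = as_a <gphi *> (SSA_NAME_DEF_STMT (res));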
gcc/ChangeLog:
* tree-parloops.cc (struct reduction_info): Store the result of the
reduction PHI rather than the PHI itself.
(reduction_info::reduc_phi): New member function.
(reduction_hasher::equal): Adapt to the change in struct reduction_info.
(reduction_phi): Likewise.
(initialize_reductions): Likewise.
(create_call_for_reduction_1): Likewise.
(transform_to_exit_first_loop_alt): Likewise.
(transform_to_exit_first_loop): Likewise.
(build_new_reduction): Likewise.
(set_reduc_phi_uids): Likewise.
(try_create_reduction_list): Likewise.
* tree-ssa-loop-split.cc (split_loop): Remember the PHI result
variable so that the PHI can be found in case it is resized and move
to a new address.
* tree-vect-loop-manip.cc (vect_loop_versioning): After calling
loop_version(), fix up stored PHI pointers in case they have
changed.
* tree-vectorizer.cc (vec_info::resync_stmt_addr): New function.
* tree-vectorizer.h (vec_info::resync_stmt_addr): Declare.
|
|
The following removes scalar stmt counting from loop vectorization
and using that as base to limit both the SLP tree final size and
discovery. Instead use the number of relevant stmts for that
which is conveniently the number of stmt_vec_infos we create which
in turn includes things like pattern stmts.
PR tree-optimization/117874
* tree-vectorizer.h (vec_info_shared::n_stmts): Remove.
(LOOP_VINFO_N_STMTS): Likewise.
* tree-vectorizer.cc (vec_info_shared::vec_info_shared): Adjust.
* tree-vect-loop.cc (vect_get_datarefs_in_loop): Do not
count stmts.
(vect_analyze_loop_2): Adjust. Pass stmt_vec_info.length ()
to vect_analyze_slp as SLP tree size limit.
|
|
With the move to SLP only we now pass the VMAT through the SLP node; however,
the majority of the costing calls inside vectorizable_load and
vectorizable_store do not pass the SLP node along. Due to this the backend costing
never sees the VMAT for these cases anymore.
Additionally, the record_stmt_cost helper that takes both an SLP node and a
stmt_vinfo would only pass the SLP node along. However the SLP node doesn't contain
all the info available in the stmt_vinfo and we'd have to go through
SLP_TREE_REPRESENTATIVE anyway. As such I changed the function to just always
pass both along. Unlike the VMAT changes, I don't believe there to be a
correctness issue here, but this minimizes the churn in the backend
costing until vectorizer costing as a whole is revisited in GCC 16.
These changes re-enable the cost model on AArch64 and also correctly find the
VMATs on loads and stores fixing testcases such as sve_iters_low_2.c.
gcc/ChangeLog:
* tree-vect-data-refs.cc (vect_get_data_access_cost): Pass NULL for SLP
node.
* tree-vect-stmts.cc (record_stmt_cost): Expose.
(vect_get_store_cost, vect_get_load_cost): Extend with SLP node.
(vectorizable_store, vectorizable_load): Pass SLP node to all costing.
* tree-vectorizer.h (record_stmt_cost): Always pass both SLP node and
stmt_vinfo to costing.
(vect_get_load_cost, vect_get_store_cost): Extend with SLP node.
|
|
This patch adds an else operand to vectorized masked load calls.
The current implementation adds else-value arguments to the respective
target-querying functions that are used to supply the vectorizer with the
proper else value.
We query the target for its supported else operand and use that for the
maskload call. If necessary, i.e. if the mode has padding bits and if
the else operand is nonzero, a VEC_COND enforcing a zero else value is
emitted.
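A hedged example (the function is made up) of a loop whose vectorization produces a
masked load that now carries an explicit else value:

/* Under masking, the conditional load of src[i] becomes an internal .MASK_LOAD
   whose inactive lanes take the else value the target reports as supported
   (for example zero or undefined); a VEC_COND forcing zero is only added when
   the mode has padding bits and the else value is nonzero.  */
void
copy_if (int *restrict dst, const int *restrict src,
         const int *restrict cond, int n)
{
  for (int i = 0; i < n; i++)
    if (cond[i])
      dst[i] = src[i];
}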
gcc/ChangeLog:
* optabs-query.cc (supports_vec_convert_optab_p): Return icode.
(get_supported_else_val): Return supported else value for
optab's operand at index.
(supports_vec_gather_load_p): Add else argument.
(supports_vec_scatter_store_p): Ditto.
* optabs-query.h (supports_vec_gather_load_p): Ditto.
(get_supported_else_val): Ditto.
* optabs-tree.cc (target_supports_mask_load_store_p): Ditto.
(can_vec_mask_load_store_p): Ditto.
(target_supports_len_load_store_p): Ditto.
(get_len_load_store_mode): Ditto.
* optabs-tree.h (target_supports_mask_load_store_p): Ditto.
(can_vec_mask_load_store_p): Ditto.
* tree-vect-data-refs.cc (vect_lanes_optab_supported_p): Ditto.
(vect_gather_scatter_fn_p): Ditto.
(vect_check_gather_scatter): Ditto.
(vect_load_lanes_supported): Ditto.
* tree-vect-patterns.cc (vect_recog_gather_scatter_pattern):
Ditto.
* tree-vect-slp.cc (vect_get_operand_map): Adjust indices for
else operand.
(vect_slp_analyze_node_operations): Skip undefined else operand.
* tree-vect-stmts.cc (exist_non_indexing_operands_for_use_p):
Add else operand handling.
(vect_get_vec_defs_for_operand): Handle undefined else operand.
(check_load_store_for_partial_vectors): Add else argument.
(vect_truncate_gather_scatter_offset): Ditto.
(vect_use_strided_gather_scatters_p): Ditto.
(get_group_load_store_type): Ditto.
(get_load_store_type): Ditto.
(vect_get_mask_load_else): Ditto.
(vect_get_else_val_from_tree): Ditto.
(vect_build_one_gather_load_call): Add zero else operand.
(vectorizable_load): Use else operand.
* tree-vectorizer.h (vect_gather_scatter_fn_p): Add else
argument.
(vect_load_lanes_supported): Ditto.
(vect_get_mask_load_else): Ditto.
(vect_get_else_val_from_tree): Ditto.
|
|
The following ensures that peeling a single iteration for gaps is
sufficient by enforcing niter masking (partial vector use) given
we cannot (always) statically decide when the vector size isn't known.
The condition guarding this, and thus statically giving a pass in
some cases for VL vectors, is questionable; the patch doesn't address
this.
This fixes a set of known fallout from enabling
--param vect-force-slp=1 by default.
PR tree-optimization/117558
* tree-vectorizer.h (_loop_vec_info::must_use_partial_vectors_p): New.
(LOOP_VINFO_MUST_USE_PARTIAL_VECTORS_P): Likewise.
* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialize
must_use_partial_vectors_p.
(vect_determine_partial_vectors_and_peeling): Enforce it.
(vect_analyze_loop_2): Reset before restarting.
* tree-vect-stmts.cc (get_group_load_store_type): When peeling
a single gap iteration cannot be determined safe statically
enforce the use of partial vectors.
|
|
The following enables targets to suggest the vector mode to be used
preferably for the epilogue of a vectorized loop. The patch also
enables more than one vectorized epilogue in case the target suggests
a vector mode for the epilogue of a vector epilogue.
* tree-vectorizer.h (vector_costs::suggested_epilogue_mode): New.
(vector_costs::m_suggested_epilogue_mode): Likewise.
(vector_costs::vector_costs): Initialize m_suggested_epilogue_mode.
* tree-vect-loop.cc (vect_analyze_loop): Honor the target
suggested preferred epilogue mode and support vector epilogues
of vector epilogues if requested.
|
|
The following introduces LOOP_VINFO_MAIN_LOOP_INFO alongside
LOOP_VINFO_ORIG_LOOP_INFO so one can have both access to the main
vectorized loop info and the preceding vectorized epilogue.
This is critical for correctness as we need to disallow never
executed epilogues by costing in vect_analyze_loop_costing as
we assume those do not exist when deciding to add a skip-vector
edge during peeling. The patch also changes how multiple vector
epilogues are handled - instead of the epilogue_vinfos array in
the main loop info we now record the single epilogue_vinfo there
and further epilogues in the epilogue_vinfo member of the
epilogue info. This simplifies code.
* tree-vectorizer.h (_loop_vec_info::main_loop_info): New.
(LOOP_VINFO_MAIN_LOOP_INFO): Likewise.
(_loop_vec_info::epilogue_vinfo): Change from epilogue_vinfos
from array to single element.
* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialize
main_loop_info and epilogue_vinfo. Remove epilogue_vinfos
allocation.
(_loop_vec_info::~_loop_vec_info): Do not release epilogue_vinfos.
(vect_create_loop_vinfo): Rename parameter, set
LOOP_VINFO_MAIN_LOOP_INFO.
(vect_analyze_loop_1): Rename parameter.
(vect_analyze_loop_costing): Properly distinguish between
the main vector loop and the preceding epilogue.
(vect_analyze_loop): Change for epilogue_vinfos no longer
being a vector.
* tree-vect-loop-manip.cc (vect_do_peeling): Simplify and
thereby handle a vector epilogue of a vector epilogue.
|
|
The following remembers how we advanced DRs when vectorizing an
epilogue. When we want to vectorize the epilogue of such epilogue
we have to retain that advancement and add the advancement for this
vectorized epilogue. Due to the way we copy and re-associate
stmt_vec_infos and DRs recording this advancement and re-applying
it for the next epilogue is simplest.
* tree-vectorizer.h (_loop_vec_info::drs_advanced_by): New.
(LOOP_VINFO_DRS_ADVANCED_BY): Likewise.
* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialize
drs_advanced_by.
(update_epilogue_loop_vinfo): Remember the DR advancement made.
(vect_transform_loop): Accumulate past advancements.
|
|
The inline function wraps the vector_cost class API and no longer is
a good representation of the query style of that class which makes it
also difficult to extend.
* tree-vectorizer.h (finish_cost): Inline everywhere and remove.
* tree-vect-loop.cc (vect_estimate_min_profitable_iters):
Inline finish_cost.
* tree-vect-slp.cc (vect_bb_vectorization_profitable_p): Likewise.
|
|
The following prepares us for SLP instances with a non-uniform number
of lanes. We already have this with load permutation lowering, but
we managed to keep that within the constraints of the per SLP instance
computed VF based on its max_nunits (with a vector type fixed for
each node) and the instance group size which is the number of lanes
in the SLP instance root. But in the case where arbitrary splitting
and merging SLP nodes at non-power-of-two lane boundaries is allowed
this simple calculation based on the outgoing group size falls apart.
The following, instead of computing a VF during SLP instance
discovery, computes it at vect_make_slp_decision time by walking
the SLP graph and looking at each SLP node in isolation. We do
track max_nunits per node which could be a VF per node instead, or we could
forgo both completely (though for BB vectorization we need
to communicate a VF > 1 requirement upward, or compute that after
the fact). In the end we'd like to delay vector type assignment
and only compute a minimum VF here, allowing vector types to
grow when the actual VF is bigger.
There's a slight complication with permutes of externs / constants
as those get their vector type (and thus max_nunits) assigned late.
While we force them to have the same vector type as the result, at
the moment their number of lanes can differ. So those get handled
explicitly there right now to up the VF as needed - the alternative
is to fail vectorization; I have an addition to
vect_maybe_update_slp_op_vectype that would FAIL if the set
vector type isn't within the constraints of the VF.
* tree-vectorizer.h (SLP_INSTANCE_UNROLLING_FACTOR): Remove.
(slp_instance::unrolling_factor): Likewise.
* tree-vect-slp.cc (vect_build_slp_instance): Do not set
SLP_INSTANCE_UNROLLING_FACTOR. Remove then dead code.
Compute and set max_nunits from the RHS nodes merged.
(vect_update_slp_vf_for_node): New function.
(vect_make_slp_decision): Use vect_update_slp_vf_for_node
to compute VF recursively.
(vect_build_slp_store_interleaving): Get max_nunits and
properly set that on the permute nodes built.
(vect_analyze_slp): Do not set SLP_INSTANCE_UNROLLING_FACTOR.
|
|
While chasing down a costing difference between SLP and non-SLP for memory
access costing I noticed that at some point the SLP and non-SLP costing have
diverged. It used to be that we only supported LOAD_LANES in SLP, and so the non-SLP
costing was working fine.
But with the change to SLP only we have now lost that costing.
It looks like the vectorizer for non-SLP stores the VMAT type in
STMT_VINFO_MEMORY_ACCESS_TYPE on the stmt_info, but for SLP it stores it in
SLP_TREE_MEMORY_ACCESS_TYPE which is on the SLP node itself.
While my first attempt of a patch was to just also store the VMAT in the
stmt_info https://gcc.gnu.org/pipermail/gcc-patches/2024-October/665295.html
Richi pointed out that this goes wrong when the same access is used in hybrid SLP.
And so we have to do a backend specific fix. To help out other backends this
also introduces a generic helper function suggested by Richi in that patch
(I hope that's ok.. I didn't want to split out just the helper.)
This successfully restores VMAT based costing in the new SLP only world.
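The generic helper boils down to something like the following sketch (the
authoritative definition is the one added to tree-vectorizer.h):

/* Return the memory access type of a load/store: in the SLP-only world it
   lives on the SLP node, otherwise fall back to the stmt_vec_info.  */
inline vect_memory_access_type
vect_mem_access_type (stmt_vec_info stmt_info, slp_tree node)
{
  if (node)
    return SLP_TREE_MEMORY_ACCESS_TYPE (node);
  else
    return STMT_VINFO_MEMORY_ACCESS_TYPE (stmt_info);
}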
gcc/ChangeLog:
* tree-vectorizer.h (vect_mem_access_type): New.
* config/aarch64/aarch64.cc (aarch64_ld234_st234_vectors): Use it.
(aarch64_detect_vector_stmt_subtype): Likewise.
(aarch64_adjust_stmt_cost): Likewise.
(aarch64_vector_costs::count_ops): Likewise.
(aarch64_vector_costs::add_stmt_cost): Make SLP node named.
|
|
This patch introduces feature parity for early break in the SLP-only
vectorizer.
The approach taken here is to treat the early exits as root statements for an
SLP tree. This means that we don't need any changes to build_slp to support
gconds.
Codegen for the gcond itself now has to be done out of line but the body of the
SLP blocks itself is simply driven by SLP scheduling. There is a slight
awkwardness in having re-used vectorizable_early_exit for both SLP and non-SLP,
but I've documented the differences, and when I did try to refactor it, it wasn't
really worth it given that this is a temporary state anyway.
This version is restricted to lane = 1, as such we can re-use the existing
move_early_break function instead of having to do safety update through
scheduling. I have a branch where I'm working on that but lane > 1 is out of
scope for GCC 15 anyway. The only reason I will try to get moving through
scheduling done as a stretch goal is so we get epilogue vectorization back for
early break.
The example:
unsigned test4(unsigned x)
{
  unsigned ret = 0;
  for (int i = 0; i < N; i++)
    {
      vect_b[i] = x + i;
      if (vect_a[i]*2 != x)
        break;
      vect_a[i] = x;
    }
  return ret;
}
builds the following SLP instance for early break:
note: Analyzing vectorizable control flow: if (patt_6 != 0)
note: Starting SLP discovery for
note: patt_6 = _4 != x_9(D);
note: starting SLP discovery for node 0x63abc80
note: Build SLP for patt_6 = _4 != x_9(D);
note: precomputed vectype: vector(4) <signed-boolean:32>
note: nunits = 4
note: vect_is_simple_use: operand x_9(D), type of def: external
note: vect_is_simple_use: operand # RANGE [irange] unsigned int [0, 0][2, +INF] MASK 0xffff
_3 * 2, type of def: internal
note: starting SLP discovery for node 0x63abdc0
note: Build SLP for _4 = _3 * 2;
note: precomputed vectype: vector(4) unsigned int
note: nunits = 4
note: vect_is_simple_use: operand #
vect_aD.4416[i_15], type of def: internal
note: vect_is_simple_use: operand 2, type of def: constant
note: starting SLP discovery for node 0x63abe60
note: Build SLP for _3 = vect_a[i_15];
note: precomputed vectype: vector(4) unsigned int
note: nunits = 4
note: SLP discovery for node 0x63abe60 succeeded
note: SLP discovery for node 0x63abdc0 succeeded
note: SLP discovery for node 0x63abc80 succeeded
note: SLP size 3 vs. limit 10.
note: Final SLP tree for instance 0x6474190:
note: node 0x63abc80 (max_nunits=4, refcnt=2) vector(4) <signed-boolean:32>
note: op template: patt_6 = _4 != x_9(D);
note: stmt 0 patt_6 = _4 != x_9(D);
note: children 0x63abd20 0x63abdc0
note: node (external) 0x63abd20 (max_nunits=1, refcnt=1)
note: { x_9(D) }
note: node 0x63abdc0 (max_nunits=4, refcnt=2) vector(4) unsigned int
note: op template: _4 = _3 * 2;
note: stmt 0 _4 = _3 * 2;
note: children 0x63abe60 0x63abf00
note: node 0x63abe60 (max_nunits=4, refcnt=2) vector(4) unsigned int
note: op template: _3 = vect_a[i_15];
note: stmt 0 _3 = vect_a[i_15];
note: load permutation { 0 }
note: node (constant) 0x63abf00 (max_nunits=1, refcnt=1)
note: { 2 }
and during codegen:
note: ------>vectorizing SLP node starting from: patt_6 = _4 != x_9(D);
note: vect_is_simple_use: operand # RANGE [irange] unsigned int [0, 0][2, +INF] MASK 0xffff
_3 * 2, type of def: internal
note: add new stmt: mask_patt_6.18_58 = _53 != vect__4.17_57;
note: === vectorizable_early_exit ===
note: transform early-exit.
note: vectorizing stmts using SLP.
note: Vectorizing SLP tree:
note: node 0x63abfa0 (max_nunits=4, refcnt=1) vector(4) int
note: op template: i_12 = i_15 + 1;
note: stmt 0 i_12 = i_15 + 1;
note: children 0x63aba00 0x63ac040
note: node 0x63aba00 (max_nunits=4, refcnt=2) vector(4) int
note: op template: i_15 = PHI <i_12(6), 0(14)>
note: [l] stmt 0 i_15 = PHI <i_12(6), 0(14)>
note: children (nil) (nil)
note: node (constant) 0x63ac040 (max_nunits=1, refcnt=1) vector(4) int
note: { 1 }
gcc/ChangeLog:
* tree-vect-loop.cc (vect_analyze_loop_2): Handle SLP trees with no
children.
* tree-vectorizer.h (enum slp_instance_kind): Add slp_inst_kind_gcond.
(LOOP_VINFO_EARLY_BREAKS_LIVE_IVS): New.
(vectorizable_early_exit): Expose.
(class _loop_vec_info): Add early_break_live_stmts.
* tree-vect-slp.cc (vect_build_slp_instance, vect_analyze_slp_instance):
Support gcond instances.
(vect_analyze_slp): Analyze gcond roots and early break live statements.
(maybe_push_to_hybrid_worklist): Don't sink gconds.
(vect_slp_analyze_operations): Support gconds.
(vect_slp_check_for_roots): Update comments.
(vectorize_slp_instance_root_stmt): Support gconds.
(vect_schedule_slp): Pass vinfo to vectorize_slp_instance_root_stmt.
* tree-vect-stmts.cc (vect_stmt_relevant_p): Record early break live
statements.
(vectorizable_early_exit): Support SLP.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-early-break_126.c: New test.
* gcc.dg/vect/vect-early-break_127.c: New test.
* gcc.dg/vect/vect-early-break_128.c: New test.
|
|
It turns out target costing code looks at STMT_VINFO_MEMORY_ACCESS_TYPE
to identify operations from (emulated) gathers for example. This
doesn't work for SLP loads since we do not set STMT_VINFO_MEMORY_ACCESS_TYPE
there as the vectorization strategy might differ between different
stmt uses. It seems we got away with setting it for stores though.
The following adds a memory_access_type field to slp_tree and sets it
from load and store vectorization code. All the costing doesn't record
the SLP node (that was only done selectively for some corner case). The
costing is really in need of a big overhaul, the following just massages
the two relevant ops to fix gcc.dg/target/pr88531-2[bc].c FAILs when
switching on SLP for non-grouped stores. In particular currently
we either have a SLP node or a stmt_info in the cost hook but not both.
So the following mitigates this, postponing a rewrite of costing to
next stage1. Other targets look possibly affected as well but are
left to respective maintainers to update.
PR tree-optimization/117080
* tree-vectorizer.h (_slp_tree::memory_access_type): Add.
(SLP_TREE_MEMORY_ACCESS_TYPE): New.
(record_stmt_cost): Add another overload.
* tree-vect-slp.cc (_slp_tree::_slp_tree): Initialize
memory_access_type.
* tree-vect-stmts.cc (vectorizable_store): Set
SLP_TREE_MEMORY_ACCESS_TYPE.
(vectorizable_load): Likewise. Also record the SLP node
when costing emulated gather offset decompose and vector
composition.
* config/i386/i386.cc (ix86_vector_costs::add_stmt_cost): Also
recognize SLP emulated gather/scatter.
|
|
The following makes sure to discover the scalar loop IV exit during
analysis as failure to do so (if DCE and friends are disabled this
can happen due to if-conversion doing DCE and FRE on the if-converted
loop) would ICE later.
I refrained from larger refactoring to be able to eventually backport.
PR tree-optimization/116982
* tree-vectorizer.h (vect_analyze_loop): Pass in .LOOP_VECTORIZED
call.
(vect_analyze_loop_form): Likewise.
* tree-vect-loop.cc (vect_analyze_loop_form): Reject loops where we
cannot determine an IV exit for the scalar loop.
(vect_analyze_loop): Adjust.
* tree-vectorizer.cc (try_vectorize_loop_1): Likewise.
* tree-parloops.cc (gather_scalar_reductions): Likewise.
|
|
Currently the vectorizer cheats when lowering COND_EXPR during bool recog.
In the cases where the conditional is loop invariant or non-boolean it instead
converts the operation back into GENERIC and hides much of the operation from
the analysis part of the vectorizer.
i.e.
a ? b : c
is transformed into:
a != 0 ? b : c
however by doing so we can't perform any optimization on the mask as it isn't
explicit until quite late during codegen.
To fix this, this patch lowers booleans earlier and so ensures that we are always
in GIMPLE.
For when the value is a loop invariant boolean we have to generate an additional
conversion from bool to the integer mask form.
This is done by creating a loop invariant a ? -1 : 0 with the target mask
precision and then doing a normal != 0 comparison on that.
To support this the patch also adds the ability, during pattern matching, to
create a loop invariant pattern that won't be seen by the vectorizer and will
instead be materialized inside the loop preheader in the case of loops; in
the case of BB vectorization it is materialized in the first BB of the region.
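A hedged example (made-up code, not one of the new testcases) of a loop-invariant
boolean condition that previously went through the GENERIC shortcut:

/* The COND_EXPR condition `c` is a loop-invariant scalar bool.  Instead of
   rewriting it to the GENERIC `c != 0 ? ... : ...`, the pattern now
   materializes an invariant `c ? -1 : 0` mask in the loop preheader and does
   a normal != 0 comparison on it in GIMPLE, so later mask optimizations can
   see it.  */
void
select (int *restrict a, const int *restrict b, _Bool c, int n)
{
  for (int i = 0; i < n; i++)
    a[i] = c ? b[i] : a[i];
}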
gcc/ChangeLog:
* tree-vect-patterns.cc (append_inv_pattern_def_seq): New.
(vect_recog_bool_pattern): Lower COND_EXPRs.
* tree-vect-slp.cc (vect_slp_region): Materialize loop invariant
statements.
* tree-vect-loop.cc (vect_transform_loop): Likewise.
* tree-vect-stmts.cc (vectorizable_comparison_1): Remove
VECT_SCALAR_BOOLEAN_TYPE_P handling for vectype.
* tree-vectorizer.cc (vec_info::vec_info): Initialize
inv_pattern_def_seq.
* tree-vectorizer.h (LOOP_VINFO_INV_PATTERN_DEF_SEQ): New.
(class vec_info): Add inv_pattern_def_seq.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/bb-slp-conditional_store_1.c: New test.
* gcc.dg/vect/vect-conditional_store_5.c: New test.
* gcc.dg/vect/vect-conditional_store_6.c: New test.
|
|
The following changes the fallback to disable SLP when any of the
discovered SLP instances failed to pass vectorization checking into
a fallback that emulates what no SLP would do with SLP - force
single-lane discovery for all instances.
The patch does not remove the final fallback to disable SLP but it
reduces the fallout from failing vectorization when any non-SLP
stmt survives analysis.
* tree-vectorizer.h (vect_analyze_slp): Add force_single_lane
parameter.
* tree-vect-slp.cc (vect_analyze_slp_instance): Remove
defaulting of force_single_lane.
(vect_build_slp_instance): Likewise. Pass down appropriate
force_single_lane.
(vect_analyze_slp): Add force_single_lane parameter and pass
it down appropriately.
(vect_slp_analyze_bb_1): Always do multi-lane SLP.
* tree-vect-loop.cc (vect_analyze_loop_2): Track two SLP
modes and adjust accordingly.
(vect_analyze_loop_1): Save the SLP mode when unrolling.
* gcc.dg/vect/vect-outer-slp-1.c: Adjust.
|
|
The following is a prototype for how to represent load/store-lanes
within SLP. I've for now settled on having a single load node
with multiple permute nodes acting as selection, one for each loaded lane,
and a single store node fed from all stored lanes. For
for (int i = 0; i < 1024; ++i)
  {
    a[2*i] = b[2*i] + 7;
    a[2*i+1] = b[2*i+1] * 3;
  }
you have the following SLP graph where I explain how things are set
up and code-generated:
t.c:23:21: note: SLP graph after lowering permutations:
t.c:23:21: note: node 0x50dc8b0 (max_nunits=1, refcnt=1) vector(4) int
t.c:23:21: note: op template: *_6 = _7;
t.c:23:21: note: stmt 0 *_6 = _7;
t.c:23:21: note: stmt 1 *_12 = _13;
t.c:23:21: note: children 0x50dc488 0x50dc6e8
This is the store node, it's marked with ldst_lanes = true during
SLP discovery. This node code-generates
vect_array.65[0] = vect__7.61_29;
vect_array.65[1] = vect__13.62_28;
MEM <int[8]> [(int *)vectp_a.63_27] = .STORE_LANES (vect_array.65);
...
t.c:23:21: note: node 0x50dc520 (max_nunits=4, refcnt=2) vector(4) int
t.c:23:21: note: op: VEC_PERM_EXPR
t.c:23:21: note: stmt 0 _5 = *_4;
t.c:23:21: note: lane permutation { 0[0] }
t.c:23:21: note: children 0x50dc948
t.c:23:21: note: node 0x50dc780 (max_nunits=4, refcnt=2) vector(4) int
t.c:23:21: note: op: VEC_PERM_EXPR
t.c:23:21: note: stmt 0 _11 = *_10;
t.c:23:21: note: lane permutation { 0[1] }
t.c:23:21: note: children 0x50dc948
These are the selection nodes, marked with ldst_lanes = true.
They code generate nothing.
t.c:23:21: note: node 0x50dc948 (max_nunits=4, refcnt=3) vector(4) int
t.c:23:21: note: op template: _5 = *_4;
t.c:23:21: note: stmt 0 _5 = *_4;
t.c:23:21: note: stmt 1 _11 = *_10;
t.c:23:21: note: load permutation { 0 1 }
This is the load node, marked with ldst_lanes = true (the load
permutation is only accurate when taking into account the lane permute
in the selection nodes). It code generates
vect_array.58 = .LOAD_LANES (MEM <int[8]> [(int *)vectp_b.56_33]);
vect__5.59_31 = vect_array.58[0];
vect__5.60_30 = vect_array.58[1];
This scheme allows us to leave code generation in vectorizable_load/store
mostly as-is.
While this should support both load-lanes and (masked) store-lanes
the decision to do either is done during SLP discovery time and
cannot be reversed without altering the SLP tree - as-is the SLP
tree is not usable for non-store-lanes on the store side, the
load side is OK representation-wise but will very likely fail
permute handling as the lowering to deal with the two input vector
restriction isn't done - but of course since the permute node is
marked as to be ignored that doesn't work out. So I've put
restrictions in place that fail vectorization if a load/store-lane
SLP tree is later classified differently by get_load_store_type.
I'll note that for example gcc.target/aarch64/sve/mask_struct_store_3.c
will not get SLP store-lanes used because the full store SLPs just
fine though we then fail to handle the "splat" load-permutation
t2.c:5:21: note: node 0x4db2630 (max_nunits=4, refcnt=2) vector([4,4]) int
t2.c:5:21: note: op template: _6 = *_5;
t2.c:5:21: note: stmt 0 _6 = *_5;
t2.c:5:21: note: stmt 1 _6 = *_5;
t2.c:5:21: note: stmt 2 _6 = *_5;
t2.c:5:21: note: stmt 3 _6 = *_5;
t2.c:5:21: note: load permutation { 0 0 0 0 }
the load permute lowering code currently doesn't consider it worth
lowering single loads from a group (or in this case not grouped loads).
The expectation is the target can handle this by two interleaves with
itself.
So what we see here is that while the explicit SLP representation is
helpful in some cases, in cases like this it would require changing
it when we make decisions how to vectorize. My idea is that this
all will change a lot when we re-do SLP discovery (for loops) and
when we get rid of non-SLP as I think vectorizable_* should be
allowed to alter the SLP graph during analysis.
The patch also removes the code cancelling SLP if we can use
load/store-lanes from the main loop vector analysis code and
re-implements it as re-discovering the SLP instance with
forced single-lane splits so SLP load/store-lanes scheme can be
used.
This is now done after SLP discovery and SLP pattern recog are
complete to not disturb the latter but per SLP instance instead
of being a global decision on the whole loop.
This is a behavioral change that for example shows in
gcc.dg/vect/slp-perm-6.c on ARM where we formerly used SLP permutes
but now a mix of SLP without permutes and load/store lanes. The
previous flaky heuristic is now flaky in a different way.
Testing on RISC-V and aarch64 reveal several testcases that require
adjustment as to now expect SLP even when load/store lanes are being
used. If in doubt I've adjusted them to the final expectation which
will lead to one or two new FAILs where we still do the SLP cancelling.
I have a followup that implements that while remaining in SLP that's
in final testing.
Note that gcc.dg/vect/slp-42.c and gcc.dg/vect/pr68445.c will FAIL
on aarch64 with SVE because for some odd reason vect_stridedN
is true for any N for check_effective_target_vect_fully_masked
targets but SVE cannot do ld8 while risc-v can.
I have not bothered to adjust target tests that now fail assembly-scan.
* tree-vectorizer.h (_slp_tree::ldst_lanes): New flag to mark
load, store and permute nodes.
* tree-vect-slp.cc (_slp_tree::_slp_tree): Initialize ldst_lanes.
(vect_build_slp_instance): For stores iff the target prefers
store-lanes discover single-lane sub-groups, do not perform
interleaving lowering but mark the node with ldst_lanes.
Also allow i == 0 - fatal failure - for splitting up a store group
when we're not doing single-lane discovery already.
(vect_lower_load_permutations): When the target supports
load lanes and the loads all fit the pattern split out
a single level of permutes only and mark the load and
permute nodes with ldst_lanes.
(vectorizable_slp_permutation_1): Handle the load-lane permute
forwarding of vector defs.
(vect_analyze_slp): After SLP pattern recog is finished see if
there are any SLP instances that would benefit from using
load/store-lanes and re-discover those with forced single lanes.
* tree-vect-stmts.cc (get_group_load_store_type): Support
load/store-lanes for SLP.
(vectorizable_store): Support SLP code generation for store-lanes.
(vectorizable_load): Support SLP code generation for load-lanes.
* tree-vect-loop.cc (vect_analyze_loop_2): Do not cancel SLP
when store-lanes can be used.
* gcc.dg/vect/slp-55.c: New testcase.
* gcc.dg/vect/slp-56.c: Likewise.
* gcc.dg/vect/slp-11c.c: Adjust.
* gcc.dg/vect/slp-53.c: Likewise.
* gcc.dg/vect/slp-cond-1.c: Likewise.
* gcc.dg/vect/vect-complex-5.c: Likewise.
* gcc.dg/vect/slp-1.c: Likewise.
* gcc.dg/vect/slp-54.c: Remove riscv XFAIL.
* gcc.dg/vect/slp-perm-5.c: Adjust.
* gcc.dg/vect/slp-perm-7.c: Likewise.
* gcc.dg/vect/slp-perm-8.c: Likewise.
* gcc.dg/vect/slp-multitypes-11.c: Likewise.
* gcc.dg/vect/slp-multitypes-11-big-array.c: Likewise.
* gcc.dg/vect/slp-perm-9.c: Remove expected SLP fail due to
three-vector permute.
* gcc.dg/vect/slp-perm-6.c: Remove XFAIL.
* gcc.dg/vect/slp-perm-1.c: Adjust.
* gcc.dg/vect/slp-perm-2.c: Likewise.
* gcc.dg/vect/slp-perm-3.c: Likewise.
* gcc.dg/vect/slp-perm-4.c: Likewise.
* gcc.dg/vect/pr68445.c: Likewise.
* gcc.dg/vect/slp-11b.c: Likewise.
* gcc.dg/vect/slp-2.c: Likewise.
* gcc.dg/vect/slp-23.c: Likewise.
* gcc.dg/vect/slp-33.c: Likewise.
* gcc.dg/vect/slp-42.c: Likewise.
* gcc.dg/vect/slp-46.c: Likewise.
* gcc.dg/vect/slp-perm-10.c: Likewise.
|
|
When transforming multiple lane-reducing operations in a loop reduction chain,
originally the corresponding vectorized statements are generated into def-use
cycles starting from 0. The def-use cycle with the smaller index would contain
more statements, which means more instruction dependency. For example:
int sum = 1;
for (i)
  {
    sum += d0[i] * d1[i];      // dot-prod <vector(16) char>
    sum += w[i];               // widen-sum <vector(16) char>
    sum += abs(s0[i] - s1[i]); // sad <vector(8) short>
    sum += n[i];               // normal <vector(4) int>
  }
Original transformation result:
for (i / 16)
{
sum_v0 = DOT_PROD (d0_v0[i: 0 ~ 15], d1_v0[i: 0 ~ 15], sum_v0);
sum_v1 = sum_v1; // copy
sum_v2 = sum_v2; // copy
sum_v3 = sum_v3; // copy
sum_v0 = WIDEN_SUM (w_v0[i: 0 ~ 15], sum_v0);
sum_v1 = sum_v1; // copy
sum_v2 = sum_v2; // copy
sum_v3 = sum_v3; // copy
sum_v0 = SAD (s0_v0[i: 0 ~ 7 ], s1_v0[i: 0 ~ 7 ], sum_v0);
sum_v1 = SAD (s0_v1[i: 8 ~ 15], s1_v1[i: 8 ~ 15], sum_v1);
sum_v2 = sum_v2; // copy
sum_v3 = sum_v3; // copy
...
}
For higher instruction parallelism in the final vectorized loop, an optimal
approach is to distribute those effective vector lane-reducing ops
evenly among all def-use cycles. Transformed as below, DOT_PROD,
WIDEN_SUM and the SADs are generated into disparate cycles, so instruction
dependency among them can be eliminated.
for (i / 16)
{
sum_v0 = DOT_PROD (d0_v0[i: 0 ~ 15], d1_v0[i: 0 ~ 15], sum_v0);
sum_v1 = sum_v1; // copy
sum_v2 = sum_v2; // copy
sum_v3 = sum_v3; // copy
sum_v0 = sum_v0; // copy
sum_v1 = WIDEN_SUM (w_v1[i: 0 ~ 15], sum_v1);
sum_v2 = sum_v2; // copy
sum_v3 = sum_v3; // copy
sum_v0 = sum_v0; // copy
sum_v1 = sum_v1; // copy
sum_v2 = SAD (s0_v2[i: 0 ~ 7 ], s1_v2[i: 0 ~ 7 ], sum_v2);
sum_v3 = SAD (s0_v3[i: 8 ~ 15], s1_v3[i: 8 ~ 15], sum_v3);
...
}
2024-03-22 Feng Xue <fxue@os.amperecomputing.com>
gcc/
PR tree-optimization/114440
* tree-vectorizer.h (struct _stmt_vec_info): Add a new field
reduc_result_pos.
* tree-vect-loop.cc (vect_transform_reduction): Generate lane-reducing
statements in an optimized order.
|
|
For lane-reducing operations (dot-prod/widen-sum/sad) in a loop reduction, the
current vectorizer could only handle the pattern if the reduction chain does not
contain any other operation, no matter whether the other is normal or lane-reducing.
This patch removes some constraints in reduction analysis to allow multiple
arbitrary lane-reducing operations with mixed input vectypes in a loop
reduction chain. For example:
int sum = 1;
for (i)
  {
    sum += d0[i] * d1[i];      // dot-prod <vector(16) char>
    sum += w[i];               // widen-sum <vector(16) char>
    sum += abs(s0[i] - s1[i]); // sad <vector(8) short>
  }
The vector size is 128 bits and the vectorization factor is 16. Reduction
statements would be transformed as:
vector<4> int sum_v0 = { 0, 0, 0, 1 };
vector<4> int sum_v1 = { 0, 0, 0, 0 };
vector<4> int sum_v2 = { 0, 0, 0, 0 };
vector<4> int sum_v3 = { 0, 0, 0, 0 };
for (i / 16)
{
sum_v0 = DOT_PROD (d0_v0[i: 0 ~ 15], d1_v0[i: 0 ~ 15], sum_v0);
sum_v1 = sum_v1; // copy
sum_v2 = sum_v2; // copy
sum_v3 = sum_v3; // copy
sum_v0 = WIDEN_SUM (w_v0[i: 0 ~ 15], sum_v0);
sum_v1 = sum_v1; // copy
sum_v2 = sum_v2; // copy
sum_v3 = sum_v3; // copy
sum_v0 = SAD (s0_v0[i: 0 ~ 7 ], s1_v0[i: 0 ~ 7 ], sum_v0);
sum_v1 = SAD (s0_v1[i: 8 ~ 15], s1_v1[i: 8 ~ 15], sum_v1);
sum_v2 = sum_v2; // copy
sum_v3 = sum_v3; // copy
}
sum_v = sum_v0 + sum_v1 + sum_v2 + sum_v3; // = sum_v0 + sum_v1
2024-03-22 Feng Xue <fxue@os.amperecomputing.com>
gcc/
PR tree-optimization/114440
* tree-vectorizer.h (vectorizable_lane_reducing): New function
declaration.
* tree-vect-stmts.cc (vect_analyze_stmt): Call new function
vectorizable_lane_reducing to analyze lane-reducing operation.
* tree-vect-loop.cc (vect_model_reduction_cost): Remove cost computation
code related to emulated_mixed_dot_prod.
(vectorizable_lane_reducing): New function.
(vectorizable_reduction): Allow multiple lane-reducing operations in
loop reduction. Move some original lane-reducing related code to
vectorizable_lane_reducing.
(vect_transform_reduction): Adjust comments with updated example.
gcc/testsuite/
PR tree-optimization/114440
* gcc.dg/vect/vect-reduc-chain-1.c
* gcc.dg/vect/vect-reduc-chain-2.c
* gcc.dg/vect/vect-reduc-chain-3.c
* gcc.dg/vect/vect-reduc-chain-dot-slp-1.c
* gcc.dg/vect/vect-reduc-chain-dot-slp-2.c
* gcc.dg/vect/vect-reduc-chain-dot-slp-3.c
* gcc.dg/vect/vect-reduc-chain-dot-slp-4.c
* gcc.dg/vect/vect-reduc-dot-slp-1.c
|
|
Extend the original vect_get_num_copies (purely loop-based) to calculate the number
of vector stmts for an SLP node with regard to a generic vect region.
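Roughly, the new overload amounts to the following sketch (the exact signature and
default arguments are assumptions; the committed definition in tree-vectorizer.h is
authoritative):

/* Number of vector stmts for an SLP node in a generic vect region: lanes
   times the loop VF (1 for a basic-block region), divided by the number of
   lanes one vector of the node's vectype holds.  */
inline unsigned int
vect_get_num_copies (vec_info *vinfo, slp_tree node)
{
  poly_uint64 vf = 1;
  if (loop_vec_info loop_vinfo = dyn_cast <loop_vec_info> (vinfo))
    vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
  return vect_get_num_vectors (vf * SLP_TREE_LANES (node),
                               SLP_TREE_VECTYPE (node));
}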
2024-07-12 Feng Xue <fxue@os.amperecomputing.com>
gcc/
* tree-vectorizer.h (vect_get_num_copies): New overload function.
* tree-vect-slp.cc (vect_slp_analyze_node_operations_1): Calculate
number of vector stmts for slp node with vect_get_num_copies.
(vect_slp_analyze_node_operations): Calculate number of vector elements
for constant/external slp node with vect_get_num_copies.
|
|
Generate suitable convert insns for int -> int, float -> float and int
<-> float.
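As a hedged illustration (made-up function, not one of the new tests), a float ->
int conversion whose element sizes differ, such as double -> short, has no direct
single-step vector form on many x86 subtargets and is instead expanded through an
intermediate integer type when the two-step ("indirect") conversion is supported:

void
narrow (short *restrict dst, const double *restrict src, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = (short) src[i];
}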
gcc/ChangeLog:
PR target/107432
* tree-vect-generic.cc
(expand_vector_conversion): Support convert for int -> int,
float -> float and int <-> float.
* tree-vect-stmts.cc (vectorizable_conversion): Wrap the
indirect convert part.
(supportable_indirect_convert_operation): New function.
* tree-vectorizer.h (supportable_indirect_convert_operation):
Define the new function.
gcc/testsuite/ChangeLog:
PR target/107432
* gcc.target/i386/pr107432-1.c: New test.
* gcc.target/i386/pr107432-2.c: Ditto.
* gcc.target/i386/pr107432-3.c: Ditto.
* gcc.target/i386/pr107432-4.c: Ditto.
* gcc.target/i386/pr107432-5.c: Ditto.
* gcc.target/i386/pr107432-6.c: Ditto.
* gcc.target/i386/pr107432-7.c: Ditto.
|
|
Add a utility function to check if a statement is a lane-reducing operation,
which could simplify some existing code.
2024-06-16 Feng Xue <fxue@os.amperecomputing.com>
gcc/
* tree-vectorizer.h (lane_reducing_stmt_p): New function.
* tree-vect-slp.cc (vect_analyze_slp): Use new function
lane_reducing_stmt_p to check statement.
|
|
Currently we discard the cond-op mask when the loop is fully masked
which causes wrong code in
gcc.dg/vect/vect-cond-reduc-in-order-2-signed-zero.c
when compiled with
-O3 -march=cascadelake --param vect-partial-vector-usage=2.
This patch ANDs both masks.
gcc/ChangeLog:
PR tree-optimization/115382
* tree-vect-loop.cc (vectorize_fold_left_reduction): Use
prepare_vec_mask.
* tree-vect-stmts.cc (check_load_store_for_partial_vectors):
Remove static of prepare_vec_mask.
* tree-vectorizer.h (prepare_vec_mask): Export.
|
|
Checking if an operation is lane-reducing requires comparing its code against
three kinds (DOT_PROD_EXPR/WIDEN_SUM_EXPR/SAD_EXPR). Add a utility
function to make the source code for the check handy and concise.
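The helper is essentially the following (sketched from the description; the
committed definition in tree-vectorizer.h is authoritative):

/* True if CODE is one of the three lane-reducing operation codes.  */
inline bool
lane_reducing_op_p (code_helper code)
{
  return code == DOT_PROD_EXPR || code == WIDEN_SUM_EXPR || code == SAD_EXPR;
}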
2024-05-29 Feng Xue <fxue@os.amperecomputing.com>
gcc/
* tree-vectorizer.h (lane_reducing_op_p): New function.
* tree-vect-slp.cc (vect_analyze_slp): Use new function
lane_reducing_op_p to check statement code.
* tree-vect-loop.cc (vect_transform_reduction): Likewise.
(vectorizable_reduction): Likewise, and change name of a local
variable that holds the result flag.
|
|
Both derived classes have their own "bbs" field, which has exactly the same
purpose of recording all basic blocks inside the corresponding vect region,
but the fields use different data types: one is a plain array,
the other an auto_vec. This difference causes some duplicated code even when
handling the same stuff, mostly in tree-vect-patterns. One refinement is
lifting this field into the base class "vec_info" and resetting its value to
the contiguous memory area pointed to by the two old "bbs" fields in each
constructor of the derived classes.
2024-05-16 Feng Xue <fxue@os.amperecomputing.com>
gcc/
* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Move
initialization of bbs to explicit construction code. Adjust the
definition of nbbs.
(update_epilogue_loop_vinfo): Update nbbs for epilog vinfo.
* tree-vect-patterns.cc (vect_determine_precisions): Make
loop_vec_info and bb_vec_info share same code.
(vect_pattern_recog): Remove duplicated vect_pattern_recog_1 loop.
* tree-vect-slp.cc (vect_get_and_check_slp_defs): Access to bbs[0]
via base vec_info class.
(_bb_vec_info::_bb_vec_info): Initialize bbs and nbbs using data
fields of input auto_vec<> bbs.
(vect_slp_region): Use access to nbbs to replace original
bbs.length().
(vect_schedule_slp_node): Access to bbs[0] via base vec_info class.
* tree-vectorizer.cc (vec_info::vec_info): Add initialization of
bbs and nbbs.
(vec_info::insert_seq_on_entry): Access to bbs[0] via base vec_info
class.
* tree-vectorizer.h (vec_info): Add new fields bbs and nbbs.
(LOOP_VINFO_NBBS): New macro.
(BB_VINFO_BBS): Rename BB_VINFO_BB to BB_VINFO_BBS.
(BB_VINFO_NBBS): New macro.
(_loop_vec_info): Remove field bbs.
(_bb_vec_info): Rename field bbs.
|
|
This patch adds early break auto-vectorization support for target which
use length on partial vectorization. Consider this following example:
unsigned vect_a[802];
unsigned vect_b[802];
void test (unsigned x, int n)
{
  for (int i = 0; i < n; i++)
    {
      vect_b[i] = x + i;
      if (vect_a[i] > x)
        break;
      vect_a[i] = x;
    }
}
We use VCOND_MASK_LEN to simulate generating (mask && i < len + bias).
And then the IR of RVV looks like below:
...
_87 = .SELECT_VL (ivtmp_85, POLY_INT_CST [32, 32]);
_55 = (int) _87;
...
mask_patt_6.13_69 = vect_cst__62 < vect__3.12_67;
vec_len_mask_72 = .VCOND_MASK_LEN (mask_patt_6.13_69, { -1, ... }, \
{0, ... }, _87, 0);
if (vec_len_mask_72 != { 0, ... })
goto <bb 6>; [5.50%]
else
goto <bb 7>; [94.50%]
The below tests are passed for this patch:
1. The riscv fully regression tests.
2. The x86 bootstrap tests.
3. The x86 fully regression tests.
gcc/ChangeLog:
* tree-vect-loop.cc (vect_gen_loop_len_mask): New func to gen
the loop len mask.
* tree-vect-stmts.cc (vectorizable_early_exit): Invoke the
vect_gen_loop_len_mask for 1 or more stmt(s).
* tree-vectorizer.h (vect_gen_loop_len_mask): New func decl
for vect_gen_loop_len_mask.
Signed-off-by: Pan Li <pan2.li@intel.com>
|
|
|
|
Hi All,
This patch adds initial support for early break vectorization in GCC. In other
words it implements support for vectorization of loops with multiple exits.
The support is added for any target that implements a vector cbranch optab,
this includes both fully masked and non-masked targets.
Depending on the operation, the vectorizer may also require support for boolean
mask reductions using Inclusive OR/Bitwise AND. This is however only checked
when the comparison would produce multiple statements.
This also fully decouples the vectorizer's notion of exit from the existing loop
infrastructure's exit. Before this patch the vectorizer always picked the
natural loop latch connected exit as the main exit.
After this patch the vectorizer is free to choose any exit it deems appropriate
as the main exit. This means that even if the main exit is not countable (i.e.
the termination condition could not be determined) we might still be able to
vectorize should one of the other exits be countable.
In such situations the loop is reflowed which enables vectorization of many
other loop forms.
Concretely the kind of loops supported are of the forms:
for (int i = 0; i < N; i++)
  {
    <statements1>
    if (<condition>)
      {
        ...
        <action>;
      }
    <statements2>
  }
where <action> can be:
- break
- return
- goto
Any number of statements can be used before the <action> occurs.
Since this is an initial version for GCC 14 it has the following limitations and
features:
- Only fixed sized iterations and buffers are supported. That is to say any
vectors loaded or stored must be to statically allocated arrays with known
sizes. N must also be known. This limitation is because our primary target
for this optimization is SVE. For VLA SVE we can't easily do cross page
iteration checks. The result is likely to also not be beneficial. For that
reason we punt support for variable buffers till we have First-Faulting
support in GCC 15.
- any stores in <statements1> should not be to the same objects as in
<condition>. Loads are fine as long as they don't have the possibility to
alias. More concretely, we block RAW dependencies when the intermediate value
can't be separated from the store, or the store itself can't be moved.
- Prologue peeling, alignment peeling and loop versioning are supported.
- Fully masked loops, unmasked loops and partially masked loops are supported
- Any number of loop early exits are supported.
- No support for epilogue vectorization. The only epilogue supported is the
scalar final one. Peeling code supports it but the code motion code cannot
find instructions to make the move in the epilog.
- Early breaks are only supported for inner loop vectorization.
With the help of IPA and LTO this still gets hit quite often. During bootstrap
it hit rather frequently. Additionally TSVC s332, s481 and s482 all pass now
since these are tests for support for early exit vectorization.
This implementation does not support completely handling the early break inside
the vector loop itself but instead supports adding checks such that if we know
that we have to exit in the current iteration then we branch to scalar code to
actually do the final VF iterations which handles all the code in <action>.
For the scalar loop we know that whatever exit you take you have to perform at
most VF iterations. For vector code we only care about the state of fully
performed iterations and reset the scalar code to the (partially) remaining loop.
That is to say, the first vector loop executes so long as the early exit isn't
needed. Once the exit is taken, the scalar code will perform at most VF extra
iterations. The exact number depends on peeling and iteration start and which
exit was taken (natural or early). For this scalar loop, all early exits are
treated the same.
When we vectorize we move any statement not related to the early break itself
and that would be incorrect to execute before the break (i.e. has side effects)
to after the break. If this is not possible we decline to vectorize. The
analysis and code motion also takes into account that it doesn't introduce a RAW
dependency after the move of the stores.
This means that we check at the start of iterations whether we are going to exit
or not. During the analysis phase we check whether we are allowed to do this
moving of statements. Also note that we only move the scalar statements, and
only do so after peeling, just before we start transforming statements.
With this the vector flow no longer necessarily needs to match that of the
scalar code. In addition most of the infrastructure is in place to support
general control flow safely, however we are punting this to GCC 15.
Codegen:
for e.g.
unsigned vect_a[N];
unsigned vect_b[N];
unsigned test4(unsigned x)
{
  unsigned ret = 0;
  for (int i = 0; i < N; i++)
    {
      vect_b[i] = x + i;
      if (vect_a[i] > x)
        break;
      vect_a[i] = x;
    }
  return ret;
}
We generate for Adv. SIMD:
test4:
adrp x2, .LC0
adrp x3, .LANCHOR0
dup v2.4s, w0
add x3, x3, :lo12:.LANCHOR0
movi v4.4s, 0x4
add x4, x3, 3216
ldr q1, [x2, #:lo12:.LC0]
mov x1, 0
mov w2, 0
.p2align 3,,7
.L3:
ldr q0, [x3, x1]
add v3.4s, v1.4s, v2.4s
add v1.4s, v1.4s, v4.4s
cmhi v0.4s, v0.4s, v2.4s
umaxp v0.4s, v0.4s, v0.4s
fmov x5, d0
cbnz x5, .L6
add w2, w2, 1
str q3, [x1, x4]
str q2, [x3, x1]
add x1, x1, 16
cmp w2, 200
bne .L3
mov w7, 3
.L2:
lsl w2, w2, 2
add x5, x3, 3216
add w6, w2, w0
sxtw x4, w2
ldr w1, [x3, x4, lsl 2]
str w6, [x5, x4, lsl 2]
cmp w0, w1
bcc .L4
add w1, w2, 1
str w0, [x3, x4, lsl 2]
add w6, w1, w0
sxtw x1, w1
ldr w4, [x3, x1, lsl 2]
str w6, [x5, x1, lsl 2]
cmp w0, w4
bcc .L4
add w4, w2, 2
str w0, [x3, x1, lsl 2]
sxtw x1, w4
add w6, w1, w0
ldr w4, [x3, x1, lsl 2]
str w6, [x5, x1, lsl 2]
cmp w0, w4
bcc .L4
str w0, [x3, x1, lsl 2]
add w2, w2, 3
cmp w7, 3
beq .L4
sxtw x1, w2
add w2, w2, w0
ldr w4, [x3, x1, lsl 2]
str w2, [x5, x1, lsl 2]
cmp w0, w4
bcc .L4
str w0, [x3, x1, lsl 2]
.L4:
mov w0, 0
ret
.p2align 2,,3
.L6:
mov w7, 4
b .L2
and for SVE:
test4:
adrp x2, .LANCHOR0
add x2, x2, :lo12:.LANCHOR0
add x5, x2, 3216
mov x3, 0
mov w1, 0
cntw x4
mov z1.s, w0
index z0.s, #0, #1
ptrue p1.b, all
ptrue p0.s, all
.p2align 3,,7
.L3:
ld1w z2.s, p1/z, [x2, x3, lsl 2]
add z3.s, z0.s, z1.s
cmplo p2.s, p0/z, z1.s, z2.s
b.any .L2
st1w z3.s, p1, [x5, x3, lsl 2]
add w1, w1, 1
st1w z1.s, p1, [x2, x3, lsl 2]
add x3, x3, x4
incw z0.s
cmp w3, 803
bls .L3
.L5:
mov w0, 0
ret
.p2align 2,,3
.L2:
cntw x5
mul w1, w1, w5
cbz w5, .L5
sxtw x1, w1
sub w5, w5, #1
add x5, x5, x1
add x6, x2, 3216
b .L6
.p2align 2,,3
.L14:
str w0, [x2, x1, lsl 2]
cmp x1, x5
beq .L5
mov x1, x4
.L6:
ldr w3, [x2, x1, lsl 2]
add w4, w0, w1
str w4, [x6, x1, lsl 2]
add x4, x1, 1
cmp w0, w3
bcs .L14
mov w0, 0
ret
On the workloads this work is based on we see a 2-3x performance uplift using
this patch.
Follow up plan:
- Boolean vectorization has several shortcomings. I've filed PR110223 with the
bigger ones that cause vectorization to fail with this patch.
- SLP support. This is planned for GCC 15 as for the majority of the cases
  building the SLP tree itself fails. This means I'll need to spend time
  making this more robust first. Additionally it requires:
* Adding support for vectorizing CFG (gconds)
* Support for CFG to differ between vector and scalar loops.
Both of which would be disruptive to the tree and I suspect I'll be handling
fallouts from this patch for a while. So I plan to work on the surrounding
building blocks first for the remainder of the year.
The patch additionally contains reduced cases from issues found when running
over various codebases.
Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.
Also regtested with:
-march=armv8.3-a+sve
-march=armv8.3-a+nosve
-march=armv9-a
-mcpu=neoverse-v1
-mcpu=neoverse-n2
Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.
Bootstrap and Regtest on arm-none-linux-gnueabihf and no issues.
gcc/ChangeLog:
* tree-if-conv.cc (idx_within_array_bound): Expose.
* tree-vect-data-refs.cc (vect_analyze_early_break_dependences): New.
(vect_analyze_data_ref_dependences): Use it.
* tree-vect-loop-manip.cc (vect_iv_increment_position): New.
(vect_set_loop_controls_directly,
vect_set_loop_condition_partial_vectors,
vect_set_loop_condition_partial_vectors_avx512,
vect_set_loop_condition_normal): Support multiple exits.
(slpeel_tree_duplicate_loop_to_edge_cfg): Support LCSSA peeling for
multiple exits.
(slpeel_can_duplicate_loop_p): Instead of looking at the BB count, look
at the loop shape.
(vect_update_ivs_after_vectorizer): Drop asserts.
(vect_gen_vector_loop_niters_mult_vf): Support peeled vector iterations.
(vect_do_peeling): Support multiple exits.
(vect_loop_versioning): Likewise.
* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialise
early_breaks.
(vect_analyze_loop_form): Support loop flows with more than single BB
loop body.
(vect_create_loop_vinfo): Support niters analysis for multiple exits.
(vect_analyze_loop): Likewise.
(vect_get_vect_def): New.
(vect_create_epilog_for_reduction): Support early exit reductions.
(vectorizable_live_operation_1): New.
(find_connected_edge): New.
(vectorizable_live_operation): Support early exit live operations.
(move_early_exit_stmts): New.
(vect_transform_loop): Use it.
* tree-vect-patterns.cc (vect_init_pattern_stmt): Support gcond.
(vect_recog_bitfield_ref_pattern): Support gconds and bools.
(vect_recog_gcond_pattern): New.
(possible_vector_mask_operation_p): Support gcond masks.
(vect_determine_mask_precision): Likewise.
(vect_mark_pattern_stmts): Set gcond def type.
(can_vectorize_live_stmts): Force early break inductions to be live.
* tree-vect-stmts.cc (vect_stmt_relevant_p): Add relevancy analysis for
early breaks.
(vect_mark_stmts_to_be_vectorized): Process gcond usage.
(perm_mask_for_reverse): Expose.
(vectorizable_comparison_1): New.
(vectorizable_early_exit): New.
(vect_analyze_stmt): Support early break and gcond.
(vect_transform_stmt): Likewise.
(vect_is_simple_use): Likewise.
(vect_get_vector_types_for_stmt): Likewise.
* tree-vectorizer.cc (pass_vectorize::execute): Update exits for value
numbering.
* tree-vectorizer.h (enum vect_def_type): Add vect_condition_def.
(LOOP_VINFO_EARLY_BREAKS, LOOP_VINFO_EARLY_BRK_STORES,
LOOP_VINFO_EARLY_BREAKS_VECT_PEELED, LOOP_VINFO_EARLY_BRK_DEST_BB,
LOOP_VINFO_EARLY_BRK_VUSES): New.
(is_loop_header_bb_p): Drop assert.
(class loop): Add early_breaks, early_break_stores, early_break_dest_bb,
early_break_vuses.
(vect_iv_increment_position, perm_mask_for_reverse,
ref_within_array_bound): New.
(slpeel_tree_duplicate_loop_to_edge_cfg): Update for early breaks.
|
|
When querying a single set of vector defs with the overloaded
vect_get_vec_defs API, using the overload with the vector type specified
makes the call ambiguous with the variant without the vector type. The
following fixes this by re-ordering the vector type argument to come
before the output def vector argument.
I've changed vectorizable_conversion as that triggered this
so it has coverage showing this works. The motivation is to
reduce the number of (redundant) get_vectype_for_scalar_type
calls.
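A minimal sketch of the ambiguity, with hypothetical simplified signatures
(the real declarations take vinfo and stmt_info arguments as well):

  struct tree_node;
  typedef tree_node *tree;
  template <typename T> struct vec;

  /* Multi-operand variant with defaulted trailing operands.  */
  void vect_get_vec_defs (tree op0, vec<tree> *defs0,
                          tree op1 = nullptr, vec<tree> *defs1 = nullptr);

  /* Old order, with a trailing vector type:
       void vect_get_vec_defs (tree op0, vec<tree> *defs0, tree vectype0);
     A call vect_get_vec_defs (op0, &defs0, vectype0) is then ambiguous,
     since vectype0 (a tree) binds equally well to op1 above.  */

  /* New order: the vector type precedes the output def vector, so the
     second argument's type (tree vs. vec<tree> *) resolves the call.  */
  void vect_get_vec_defs (tree op0, tree vectype0, vec<tree> *defs0);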
* tree-vectorizer.h (vect_get_vec_defs): Re-order arguments.
* tree-vect-stmts.cc (vect_get_vec_defs): Likewise.
(vectorizable_condition): Update caller.
(vectorizable_comparison_1): Likewise.
(vectorizable_conversion): Specify the vector type to be
used for invariant/external defs.
* tree-vect-loop.cc (vect_transform_reduction): Update caller.
|
|
This prepares us for the SLP of scatters. We have to tell
vect_slp_child_index_for_operand whether we are dealing with a
scatter/gather stmt, so this adds an argument similar to the one we have
for vect_get_operand_map. This also refactors vect_check_store_rhs to get
the actual rhs and the associated SLP node instead of leaving that to the
caller.
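For reference, the kind of scatter store this prepares for (an illustrative
example, not taken from the patch):

  void
  scatter (double *__restrict dst, int *__restrict idx,
           double *__restrict src, int n)
  {
    for (int i = 0; i < n; i++)
      dst[idx[i]] = src[i];
  }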
* tree-vectorizer.h (vect_slp_child_index_for_operand):
Add gatherscatter_p argument.
* tree-vect-slp.cc (vect_slp_child_index_for_operand): Likewise.
Pass it on.
* tree-vect-stmts.cc (vect_check_store_rhs): Turn the rhs
argument into an output, also output the SLP node associated
with it.
(vectorizable_simd_clone_call): Adjust.
(vectorizable_store): Likewise.
(vectorizable_load): Likewise.
|
|
The following fixes an oversight in vect_check_scalar_mask when
the mask is external or constant. When doing BB vectorization
we need to provide a group_size, best via an overload accepting
the SLP node as argument.
When fixed we then run into the issue that we have not analyzed
alignment of the .MASK_LOADs because they were not identified
as loads by vect_gather_slp_loads. Fixed by reworking the
detection.
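As a hedged sketch, the shape of the added overload (simplified; parameter
spelling assumed, see tree-vectorizer.h for the actual declaration):

  /* Overload taking the SLP node, from which BB vectorization derives
     the group size when picking the mask vector type.  */
  tree get_mask_type_for_scalar_type (vec_info *vinfo, tree scalar_type,
                                      slp_tree node);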
PR tree-optimization/112404
* tree-vectorizer.h (get_mask_type_for_scalar_type): Declare
overload with SLP node argument.
* tree-vect-stmts.cc (get_mask_type_for_scalar_type): Implement it.
(vect_check_scalar_mask): Use it.
* tree-vect-slp.cc (vect_gather_slp_loads): Properly identify
loads also for nodes with children, like .MASK_LOAD.
* tree-vect-loop.cc (vect_analyze_loop_2): Look at the
representative for load nodes and check whether it is a grouped
access before looking for load-lanes support.
* gfortran.dg/pr112404.f90: New testcase.
|
|
As described in PR111401 we currently emit a COND and a PLUS expression
for conditional reductions. This makes it difficult to combine both
into a masked reduction statement later.
This patch improves that by directly emitting a COND_ADD/COND_OP during
ifcvt and adjusting some vectorizer code to handle it.
It also makes neutral_op_for_reduction return -0 if HONOR_SIGNED_ZEROS
is true.
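As an illustrative example (not from the patch), a conditional reduction of
the kind affected; with this change ifcvt emits the guarded add directly as
something like s_1 = .COND_ADD (mask, s_0, a_i, s_0) instead of a COND_EXPR
followed by a PLUS:

  double
  f (double *__restrict a, int *__restrict c, int n)
  {
    double s = 0.0;
    for (int i = 0; i < n; i++)
      if (c[i])
        s += a[i];
    return s;
  }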
gcc/ChangeLog:
PR middle-end/111401
* internal-fn.cc (internal_fn_else_index): New function.
* internal-fn.h (internal_fn_else_index): Define.
* tree-if-conv.cc (convert_scalar_cond_reduction): Emit COND_OP
if supported.
(predicate_scalar_phi): Add whitespace.
* tree-vect-loop.cc (fold_left_reduction_fn): Add IFN_COND_OP.
(neutral_op_for_reduction): Return -0 for PLUS.
(check_reduction_path): Don't count else operand in COND_OP.
(vect_is_simple_reduction): Ditto.
(vect_create_epilog_for_reduction): Fix whitespace.
(vectorize_fold_left_reduction): Add COND_OP handling.
(vectorizable_reduction): Don't count else operand in COND_OP.
(vect_transform_reduction): Add COND_OP handling.
* tree-vectorizer.h (neutral_op_for_reduction): Add default
parameter.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-cond-reduc-in-order-2-signed-zero.c: New test.
* gcc.target/riscv/rvv/autovec/cond/pr111401.c: New test.
* gcc.target/riscv/rvv/autovec/reduc/reduc_call-2.c: Adjust.
* gcc.target/riscv/rvv/autovec/reduc/reduc_call-4.c: Ditto.
|
|
This final patch updates peeling to maintain LCSSA all the way through.
It's significantly easier to maintain it during peeling while we still know
where all new edges connect rather than touching it up later as is currently
being done.
This allows us to remove many of the helper functions that touch up the loops
at various points. The only complication is loop distribution, where we should
be able to use the same approach; however ldist, depending on whether
redirect_lc_phi_defs is true or not, will either try to maintain a limited
LCSSA form itself or remove all non-virtual PHIs.
The problem here is that if we maintain LCSSA then in some cases the blocks
connecting the two loops get PHIs to keep the loop IV up to date.
However there is no loop: the guard condition is rewritten as 0 != 0, so the
"loop" always exits. Due to the PHI nodes the probabilities then get
completely wrong; the pass seems to think that the impossible exit is the
likely edge. This causes incorrect warnings, and the presence of the PHIs
prevents the blocks from being simplified.
While it may be possible to make ldist work with LCSSA form, doing so seems more
work than not. For that reason the peeling code has an additional parameter,
used only by ldist, to not connect the two loops during peeling.
This preserves the current behaviour from ldist until I can dive into the
implementation more. Hopefully that's ok for now.
gcc/ChangeLog:
* tree-loop-distribution.cc (copy_loop_before): Request no LCSSA.
* tree-vect-loop-manip.cc (adjust_phi_and_debug_stmts): Add additional
asserts.
(slpeel_tree_duplicate_loop_to_edge_cfg): Keep LCSSA during peeling.
(find_guard_arg): Look value up through explicit edge and original defs.
(vect_do_peeling): Use it.
(slpeel_update_phi_nodes_for_guard2): Take explicit exit edge.
(slpeel_update_phi_nodes_for_lcssa, slpeel_update_phi_nodes_for_loops):
Remove.
* tree-vect-loop.cc (vect_create_epilog_for_reduction): Initialize phi.
* tree-vectorizer.h (slpeel_tree_duplicate_loop_to_edge_cfg): Add
optional param to turn off LCSSA mode.
|
|
This second part updates niters analysis to be able to analyze any number of
exits. If we have multiple exits we determine the main exit by finding the
first counting IV.
The change allows the vectorizer to pass analysis for loops with multiple
exits, but we later gracefully reject them. It does however allow us to test
whether the exit handling is using the right exit everywhere.
Additionally since we analyze all exits, we now return all conditions for them
and determine which condition belongs to the main exit.
The main condition is needed because the vectorizer needs to ignore the main IV
condition during vectorization as it will replace it during codegen.
To track versioned loops we extend the contract between ifcvt and the vectorizer
to store the exit number in aux so that we can match it up again during peeling.
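For illustration, the kind of loop shape the analysis now handles (the
counting-IV exit i < n is chosen as the main exit; the compare is the early
exit):

  unsigned
  find (unsigned *a, unsigned n, unsigned key)
  {
    for (unsigned i = 0; i < n; i++)
      if (a[i] == key)
        return i;
    return n;
  }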
gcc/ChangeLog:
* tree-if-conv.cc (tree_if_conversion): Record exits in aux.
* tree-vect-loop-manip.cc (slpeel_tree_duplicate_loop_to_edge_cfg): Use
it.
* tree-vect-loop.cc (vect_get_loop_niters): Determine main exit.
(vec_init_loop_exit_info): Extend analysis when multiple exits.
(vect_analyze_loop_form): Record conds and determine main cond.
(vect_create_loop_vinfo): Extend bookkeeping of conds.
(vect_analyze_loop): Release conds.
* tree-vectorizer.h (LOOP_VINFO_LOOP_CONDS,
LOOP_VINFO_LOOP_IV_COND): New.
(struct vect_loop_form_info): Add conds, alt_loop_conds.
(struct loop_vec_info): Add conds, loop_iv_cond.
|
|
This is extracted out of the patch series to support early break vectorization
in order to simplify the review of that patch series.
The goal of this one is to separate out the refactoring from the new
functionality.
This first patch separates out the vectorizer's definition of an exit into its
own values inside loop_vinfo. During vectorization we can have three separate
copies for each loop: scalar, vectorized, epilogue. The scalar loop can also be
the versioned loop before peeling.
Because of this we track 3 different exits inside loop_vinfo, corresponding to
each of these loops. Additionally, each function that uses an exit, when it is
not obviously clear which exit is needed, will now take the exit explicitly as
an argument.
This is because oftentimes the callers switch the loops being passed around.
While the caller knows which loop it is, the callee does not.
For now the loop exits are simply initialized to the same value as before,
determined by single_exit (..).
No change in functionality is expected throughout this patch series.
gcc/ChangeLog:
* tree-loop-distribution.cc (copy_loop_before): Pass exit explicitly.
(loop_distribution::distribute_loop): Bail out if not single exit.
* tree-scalar-evolution.cc (get_loop_exit_condition): New.
* tree-scalar-evolution.h (get_loop_exit_condition): New.
* tree-vect-data-refs.cc (vect_enhance_data_refs_alignment): Pass exit
explicitly.
* tree-vect-loop-manip.cc (vect_set_loop_condition_partial_vectors,
vect_set_loop_condition_partial_vectors_avx512,
vect_set_loop_condition_normal, vect_set_loop_condition): Explicitly
take exit.
(slpeel_tree_duplicate_loop_to_edge_cfg): Explicitly take exit and
return the corresponding peeled exit.
(slpeel_can_duplicate_loop_p): Explicitly take exit.
(find_loop_location): Handle not knowing an explicit exit.
(vect_update_ivs_after_vectorizer, vect_gen_vector_loop_niters_mult_vf,
find_guard_arg, slpeel_update_phi_nodes_for_loops,
slpeel_update_phi_nodes_for_guard2): Use new exits.
(vect_do_peeling): Update bookkeeping to keep track of exits.
* tree-vect-loop.cc (vect_get_loop_niters): Explicitly take exit to
analyze.
(vec_init_loop_exit_info): New.
(_loop_vec_info::_loop_vec_info): Initialize vec_loop_iv,
vec_epilogue_loop_iv, scalar_loop_iv.
(vect_analyze_loop_form): Initialize exits.
(vect_create_loop_vinfo): Set main exit.
(vect_create_epilog_for_reduction, vectorizable_live_operation,
vect_transform_loop): Use it.
(scale_profile_for_vect_loop): Explicitly take exit to scale.
* tree-vectorizer.cc (set_uid_loop_bbs): Initialize loop exit.
* tree-vectorizer.h (LOOP_VINFO_IV_EXIT, LOOP_VINFO_EPILOGUE_IV_EXIT,
LOOP_VINFO_SCALAR_IV_EXIT): New.
(struct loop_vec_info): Add vec_loop_iv, vec_epilogue_loop_iv,
scalar_loop_iv.
(vect_set_loop_condition, slpeel_can_duplicate_loop_p,
slpeel_tree_duplicate_loop_to_edge_cfg): Take explicit exits.
(vec_init_loop_exit_info): New.
(struct vect_loop_form_info): Add loop_exit.
|
|
The following avoids bogusly re-using the simd-clone-info we
currently hang off stmt_info from two different SLP contexts where
a different number of lanes should have chosen a different best
simdclone.
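As a hypothetical illustration of the problem shape: the same declare-simd
call reached from two SLP contexts with different lane counts, where each
context should be free to pick its own best clone:

  #pragma omp declare simd
  int f (int);

  void
  g (int *__restrict a, int *__restrict b)
  {
    a[0] = f (a[0]);              /* 2-lane SLP context.  */
    a[1] = f (a[1]);
    b[0] = f (b[0]);              /* 4-lane SLP context.  */
    b[1] = f (b[1]);
    b[2] = f (b[2]);
    b[3] = f (b[3]);
  }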
PR tree-optimization/111846
* tree-vectorizer.h (_slp_tree::simd_clone_info): Add.
(SLP_TREE_SIMD_CLONE_INFO): New.
* tree-vect-slp.cc (_slp_tree::_slp_tree): Initialize
SLP_TREE_SIMD_CLONE_INFO.
(_slp_tree::~_slp_tree): Release it.
* tree-vect-stmts.cc (vectorizable_simd_clone_call): Use
SLP_TREE_SIMD_CLONE_INFO or STMT_VINFO_SIMD_CLONE_INFO
dependent on if we're doing SLP.
* gcc.dg/vect/pr111846.c: New testcase.
|
|
The following adds the capability to do SLP on .MASK_STORE; I do not
plan to add interleaving support.
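As an illustrative sketch (hypothetical testcase), the kind of conditional
grouped store this enables SLP for, with the stores turned into .MASK_STOREs
by if-conversion:

  void
  f (double *__restrict x, double *__restrict y, int *__restrict c, int n)
  {
    for (int i = 0; i < n; i++)
      if (c[i])
        {
          x[2 * i] = y[2 * i] + 1.0;
          x[2 * i + 1] = y[2 * i + 1] + 2.0;
        }
  }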
PR tree-optimization/111115
gcc/
* tree-vectorizer.h (vect_slp_child_index_for_operand): New.
* tree-vect-data-refs.cc (can_group_stmts_p): Also group
.MASK_STORE.
* tree-vect-slp.cc (arg3_arg2_map): New.
(vect_get_operand_map): Handle IFN_MASK_STORE.
(vect_slp_child_index_for_operand): New function.
(vect_build_slp_tree_1): Handle statements with no LHS,
masked store ifns.
(vect_remove_slp_scalar_calls): Likewise.
* tree-vect-stmts.cc (vect_check_store_rhs): Lookup the
SLP child corresponding to the ifn value index.
(vectorizable_store): Likewise for the mask index. Support
masked stores.
(vectorizable_load): Lookup the SLP child corresponding to the
ifn mask index.
gcc/testsuite/
* lib/target-supports.exp (check_effective_target_vect_masked_store):
Supported with check_avx_available.
* gcc.dg/vect/slp-mask-store-1.c: New testcase.
|
|
Hi, Richard and Richi.
This patch adds MASK_LEN_{LOAD_LANES,STORE_LANES} support to the vectorizer.
Consider this simple case:
  void __attribute__ ((noinline, noclone))
  foo (int *__restrict a, int *__restrict b, int *__restrict c,
       int *__restrict d, int *__restrict e, int *__restrict f,
       int *__restrict g, int *__restrict h, int *__restrict j, int n)
  {
    for (int i = 0; i < n; ++i)
      {
        a[i] = j[i * 8];
        b[i] = j[i * 8 + 1];
        c[i] = j[i * 8 + 2];
        d[i] = j[i * 8 + 3];
        e[i] = j[i * 8 + 4];
        f[i] = j[i * 8 + 5];
        g[i] = j[i * 8 + 6];
        h[i] = j[i * 8 + 7];
      }
  }
RVV Gimple IR:
_79 = .SELECT_VL (ivtmp_81, POLY_INT_CST [4, 4]);
ivtmp_125 = _79 * 32;
vect_array.8 = .MASK_LEN_LOAD_LANES (vectp_j.6_124, 32B, { -1, ... }, _79, 0);
vect__8.9_122 = vect_array.8[0];
vect__8.10_121 = vect_array.8[1];
vect__8.11_120 = vect_array.8[2];
vect__8.12_119 = vect_array.8[3];
vect__8.13_118 = vect_array.8[4];
vect__8.14_117 = vect_array.8[5];
vect__8.15_116 = vect_array.8[6];
vect__8.16_115 = vect_array.8[7];
vect_array.8 ={v} {CLOBBER};
ivtmp_114 = _79 * 4;
.MASK_LEN_STORE (vectp_a.17_113, 32B, { -1, ... }, _79, 0, vect__8.9_122);
.MASK_LEN_STORE (vectp_b.19_109, 32B, { -1, ... }, _79, 0, vect__8.10_121);
.MASK_LEN_STORE (vectp_c.21_105, 32B, { -1, ... }, _79, 0, vect__8.11_120);
.MASK_LEN_STORE (vectp_d.23_101, 32B, { -1, ... }, _79, 0, vect__8.12_119);
.MASK_LEN_STORE (vectp_e.25_97, 32B, { -1, ... }, _79, 0, vect__8.13_118);
.MASK_LEN_STORE (vectp_f.27_93, 32B, { -1, ... }, _79, 0, vect__8.14_117);
.MASK_LEN_STORE (vectp_g.29_89, 32B, { -1, ... }, _79, 0, vect__8.15_116);
.MASK_LEN_STORE (vectp_h.31_85, 32B, { -1, ... }, _79, 0, vect__8.16_115);
ASM:
foo:
lw t4,8(sp)
ld t5,0(sp)
ble t4,zero,.L5
.L3:
vsetvli t1,t4,e8,mf4,ta,ma
vlseg8e32.v v8,(t5)
slli t3,t1,2
slli t6,t1,5
vse32.v v8,0(a0)
vse32.v v9,0(a1)
vse32.v v10,0(a2)
vse32.v v11,0(a3)
vse32.v v12,0(a4)
vse32.v v13,0(a5)
vse32.v v14,0(a6)
vse32.v v15,0(a7)
sub t4,t4,t1
add t5,t5,t6
add a0,a0,t3
add a1,a1,t3
add a2,a2,t3
add a3,a3,t3
add a4,a4,t3
add a5,a5,t3
add a6,a6,t3
add a7,a7,t3
bne t4,zero,.L3
.L5:
ret
The details of the approach:
Step 1 - Modify the LANES LOAD/STORE support functions (vect_load_lanes_supported/vect_store_lanes_supported):
+/* Return FN if vec_{masked_,mask_len_,}load_lanes is available for COUNT
+   vectors of type VECTYPE.  MASKED_P says whether the masked form is needed. */
-bool
+internal_fn
 vect_load_lanes_supported (tree vectype, unsigned HOST_WIDE_INT count,
                            bool masked_p)
 {
-  if (masked_p)
-    return vect_lanes_optab_supported_p ("vec_mask_load_lanes",
-                                         vec_mask_load_lanes_optab,
-                                         vectype, count);
+  if (vect_lanes_optab_supported_p ("vec_mask_len_load_lanes",
+                                    vec_mask_len_load_lanes_optab,
+                                    vectype, count))
+    return IFN_MASK_LEN_LOAD_LANES;
+  else if (masked_p)
+    {
+      if (vect_lanes_optab_supported_p ("vec_mask_load_lanes",
+                                        vec_mask_load_lanes_optab,
+                                        vectype, count))
+        return IFN_MASK_LOAD_LANES;
+    }
   else
-    return vect_lanes_optab_supported_p ("vec_load_lanes",
-                                         vec_load_lanes_optab,
-                                         vectype, count);
+    {
+      if (vect_lanes_optab_supported_p ("vec_load_lanes",
+                                        vec_load_lanes_optab,
+                                        vectype, count))
+        return IFN_LOAD_LANES;
+    }
+  return IFN_LAST;
 }
Instead of returning true or false depending on whether the target supports the
LANES LOAD/STORE, the functions now return the internal_fn of the LANES
LOAD/STORE that the target supports; if the target doesn't support any LANES
LOAD/STORE optab, they return IFN_LAST.
Step 2 - Compute IFN for LANES LOAD/STORE (Only compute once).
  if (!STMT_VINFO_STRIDED_P (first_stmt_info)
      && (can_overrun_p || !would_overrun_p)
      && compare_step_with_zero (vinfo, stmt_info) > 0)
    {
      /* First cope with the degenerate case of a single-element
         vector.  */
      if (known_eq (TYPE_VECTOR_SUBPARTS (vectype), 1U))
        ;
      else
        {
          /* Otherwise try using LOAD/STORE_LANES.  */
          *lanes_ifn
            = vls_type == VLS_LOAD
                ? vect_load_lanes_supported (vectype, group_size, masked_p)
                : vect_store_lanes_supported (vectype, group_size,
                                              masked_p);
          if (*lanes_ifn != IFN_LAST)
            {
              *memory_access_type = VMAT_LOAD_STORE_LANES;
              overrun_p = would_overrun_p;
            }
          /* If that fails, try using permuting loads.  */
          else if (vls_type == VLS_LOAD
                     ? vect_grouped_load_supported (vectype,
                                                    single_element_p,
                                                    group_size)
                     : vect_grouped_store_supported (vectype, group_size))
            {
              *memory_access_type = VMAT_CONTIGUOUS_PERMUTE;
              overrun_p = would_overrun_p;
            }
        }
    }
Step 3 - Build MASK_LEN_{LOAD_LANES,STORE_LANES} Gimple IR:
+      if (lanes_ifn == IFN_MASK_LEN_STORE_LANES)
+        {
+          if (loop_lens)
+            final_len = vect_get_loop_len (loop_vinfo, gsi, loop_lens,
+                                           ncopies, vectype, j, 1);
+          else
+            final_len = size_int (TYPE_VECTOR_SUBPARTS (vectype));
+          signed char biasval
+            = LOOP_VINFO_PARTIAL_LOAD_STORE_BIAS (loop_vinfo);
+          bias = build_int_cst (intQI_type_node, biasval);
+          if (!final_mask)
+            {
+              mask_vectype = truth_type_for (vectype);
+              final_mask = build_minus_one_cst (mask_vectype);
+            }
+        }
+
       gcall *call;
-      if (final_mask)
+      if (final_len && final_mask)
+        {
+          /* Emit:
+               MASK_LEN_STORE_LANES (DATAREF_PTR, ALIAS_PTR, VEC_MASK,
+                                     LEN, BIAS, VEC_ARRAY).  */
+          unsigned int align = TYPE_ALIGN (TREE_TYPE (vectype));
+          tree alias_ptr = build_int_cst (ref_type, align);
+          call = gimple_build_call_internal (IFN_MASK_LEN_STORE_LANES, 6,
+                                             dataref_ptr, alias_ptr,
+                                             final_mask, final_len, bias,
+                                             vec_array);
+        }
+      else if (final_mask)
The LEN and MASK flow is totally the same as other MASK_LEN_* load/store.
gcc/ChangeLog:
* internal-fn.cc (internal_load_fn_p): Apply
MASK_LEN_{LOAD_LANES,STORE_LANES} into vectorizer.
(internal_store_fn_p): Ditto.
(internal_fn_len_index): Ditto.
(internal_fn_mask_index): Ditto.
(internal_fn_stored_value_index): Ditto.
* tree-vect-data-refs.cc (vect_store_lanes_supported): Ditto.
(vect_load_lanes_supported): Ditto.
* tree-vect-loop.cc: Ditto.
* tree-vect-slp.cc (vect_slp_prefer_store_lanes_p): Ditto.
* tree-vect-stmts.cc (check_load_store_for_partial_vectors): Ditto.
(get_group_load_store_type): Ditto.
(vectorizable_store): Ditto.
(vectorizable_load): Ditto.
* tree-vectorizer.h (vect_store_lanes_supported): Ditto.
(vect_load_lanes_supported): Ditto.
|
|
The following supports vectorizing BB reductions involving a
constant or an invariant.
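For illustration (hypothetical testcase), a BB reduction with a constant
leaf; the constant is kept as a scalar "remain" def and folded in after the
vector reduction:

  int
  f (int *x)
  {
    return x[0] + x[1] + x[2] + x[3] + 42;
  }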
* tree-vectorizer.h (_slp_instance::remain_stmts): Change
to ...
(_slp_instance::remain_defs): ... this.
(SLP_INSTANCE_REMAIN_STMTS): Rename to ...
(SLP_INSTANCE_REMAIN_DEFS): ... this.
(slp_root::remain): New.
(slp_root::slp_root): Adjust.
* tree-vect-slp.cc (vect_free_slp_instance): Adjust.
(vect_build_slp_instance): Get extra remain parameter,
adjust former handling of a cut off stmt.
(vect_analyze_slp_instance): Adjust.
(vect_analyze_slp): Likewise.
(_bb_vec_info::~_bb_vec_info): Likewise.
(vectorizable_bb_reduc_epilogue): Dump something if we fail.
(vect_slp_check_for_constructors): Handle non-internal
defs as remain defs of a reduction.
(vectorize_slp_instance_root_stmt): Adjust.
* gcc.dg/vect/bb-slp-75.c: New testcase.
|
|
The insert location argument isn't actually used; we compute that ourselves.
There is a single spot where the passed argument is used, namely when asking
for the loop mask via vect_get_loop_mask, but that looks like an oversight.
The following fixes that and adjusts vectorizable_live_operation and
can_vectorize_live_stmts to no longer take a stmt iterator argument.
* tree-vectorizer.h (vectorizable_live_operation): Remove
gimple_stmt_iterator * argument.
* tree-vect-loop.cc (vectorizable_live_operation): Likewise.
Adjust plumbing around vect_get_loop_mask.
(vect_analyze_loop_operations): Adjust.
* tree-vect-slp.cc (vect_slp_analyze_node_operations_1): Likewise.
(vect_bb_slp_mark_live_stmts): Likewise.
(vect_schedule_slp_node): Likewise.
* tree-vect-stmts.cc (can_vectorize_live_stmts): Likewise.
Remove gimple_stmt_iterator * argument.
(vect_transform_stmt): Adjust.
|
|
The following enhances BB reduction vectorization to support
vectorizing only a subset of the lanes, keeping the rest as
scalar ops. For now we try to make the number of lanes even
by leaving the "last" lane alone. That's because SLP discovery
with all lanes will fail too soon to get us any hint on which
lane to strip, and likewise we don't know what vector modes the
target supports, so restricting ourselves to power-of-two or
other cases isn't easy.
This is enough to get at the vectorization opportunity for the
testcase in the PR, albeit with the chosen lanes not optimal
but at least vectorizable.
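For illustration (hypothetical testcase): five lanes, where stripping the
"last" lane leaves an even, vectorizable subset while x[4] remains a scalar
op:

  double
  f (double *x)
  {
    return x[0] + x[1] + x[2] + x[3] + x[4];
  }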
PR tree-optimization/49955
* tree-vectorizer.h (_slp_instance::remain_stmts): New.
(SLP_INSTANCE_REMAIN_STMTS): Likewise.
* tree-vect-slp.cc (vect_free_slp_instance): Release
SLP_INSTANCE_REMAIN_STMTS.
(vect_build_slp_instance): Make the number of lanes of
a BB reduction even.
(vectorize_slp_instance_root_stmt): Handle unvectorized
defs of a BB reduction.
* gfortran.dg/vect/pr49955.f: New testcase.
|
|
The following unifies SLP_TREE_VEC_STMTS into SLP_TREE_VEC_DEFS
which can handle all cases we need.
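A hedged sketch of the new entry point (simplified and assumed; the real
member also handles PHIs and is declared in tree-vectorizer.h):

  /* Record the vectorized statement DEF by pushing its scalar lhs onto
     the unified vec defs array, so no separate vec_stmts array is
     needed.  */
  void
  _slp_tree::push_vec_def (gimple *def)
  {
    vec_defs.safe_push (gimple_get_lhs (def));
  }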
* tree-vectorizer.h (_slp_tree::push_vec_def): Add.
(_slp_tree::vec_stmts): Remove.
(SLP_TREE_VEC_STMTS): Remove.
* tree-vect-slp.cc (_slp_tree::push_vec_def): Define.
(_slp_tree::_slp_tree): Adjust.
(_slp_tree::~_slp_tree): Likewise.
(vect_get_slp_vect_def): Simplify.
(vect_get_slp_defs): Likewise.
(vect_transform_slp_perm_load_1): Adjust.
(vect_add_slp_permutation): Likewise.
(vect_schedule_slp_node): Likewise.
(vectorize_slp_instance_root_stmt): Likewise.
(vect_schedule_scc): Likewise.
* tree-vect-stmts.cc (vectorizable_bswap): Use push_vec_def.
(vectorizable_call): Likewise.
(vectorizable_call): Likewise.
(vect_create_vectorized_demotion_stmts): Likewise.
(vectorizable_conversion): Likewise.
(vectorizable_assignment): Likewise.
(vectorizable_shift): Likewise.
(vectorizable_operation): Likewise.
(vectorizable_load): Likewise.
(vectorizable_condition): Likewise.
(vectorizable_comparison): Likewise.
* tree-vect-loop.cc (vect_create_epilog_for_reduction): Adjust.
(vectorize_fold_left_reduction): Use push_vec_def.
(vect_transform_reduction): Likewise.
(vect_transform_cycle_phi): Likewise.
(vectorizable_lc_phi): Likewise.
(vectorizable_phi): Likewise.
(vectorizable_recurr): Likewise.
(vectorizable_induction): Likewise.
(vectorizable_live_operation): Likewise.
|
|
The following consolidates an assert that now hits for ppc64le
with an earlier check we already do, simplifying
vect_determine_partial_vectors_and_peeling and getting rid of
its now redundant argument.
PR tree-optimization/110563
* tree-vectorizer.h (vect_determine_partial_vectors_and_peeling):
Remove second argument.
* tree-vect-loop.cc (vect_determine_partial_vectors_and_peeling):
Remove for_epilogue_p argument. Merge assert ...
(vect_analyze_loop_2): ... with check done before determining
partial vectors by moving it after.
* tree-vect-loop-manip.cc (vect_do_peeling): Adjust.
|
|
This implements fully masked vectorization or a masked epilog for
AVX512 style masks, which set themselves apart by representing
each lane with a single bit and by using integer modes for the mask
(both much like GCN).
AVX512 is also special in that it doesn't have any instruction
to compute the mask from a scalar IV like SVE has with while_ult.
Instead the masks are produced by vector compares and the loop
control retains the scalar IV (mainly to avoid dependences on
mask generation, a suitable mask test instruction is available).
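Roughly, the lane mask is the result of a vector compare of an IV vector
against the remaining trip count; a scalar model of that computation (a
conceptual sketch, not the emitted IR):

  /* One bit per lane in an integer-mode mask, set while the lane's
     index is still below the number of remaining iterations.  */
  static unsigned
  lane_mask (unsigned iv_base, unsigned niters, unsigned nlanes)
  {
    unsigned mask = 0;
    for (unsigned lane = 0; lane < nlanes; lane++)
      if (iv_base + lane < niters)
        mask |= 1u << lane;
    return mask;
  }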
Like RVV, code generation prefers a decrementing IV, though IVOPTs
messes things up in some cases, removing that IV and replacing it
with an incrementing one used for address generation.
One of the motivating testcases is from PR108410 which in turn
is extracted from x264 where large size vectorization shows
issues with small trip loops. Execution time there improves
compared to classic AVX512 with AVX2 epilogues for the cases
of less than 32 iterations.
  size  scalar    128    256    512   512e   512f
     1    9.42  11.32   9.35  11.17  15.13  16.89
     2    5.72   6.53   6.66   6.66   7.62   8.56
     3    4.49   5.10   5.10   5.74   5.08   5.73
     4    4.10   4.33   4.29   5.21   3.79   4.25
     6    3.78   3.85   3.86   4.76   2.54   2.85
     8    3.64   1.89   3.76   4.50   1.92   2.16
    12    3.56   2.21   3.75   4.26   1.26   1.42
    16    3.36   0.83   1.06   4.16   0.95   1.07
    20    3.39   1.42   1.33   4.07   0.75   0.85
    24    3.23   0.66   1.72   4.22   0.62   0.70
    28    3.18   1.09   2.04   4.20   0.54   0.61
    32    3.16   0.47   0.41   0.41   0.47   0.53
    34    3.16   0.67   0.61   0.56   0.44   0.50
    38    3.19   0.95   0.95   0.82   0.40   0.45
    42    3.09   0.58   1.21   1.13   0.36   0.40
'size' specifies the number of actual iterations; 512e is for
a masked epilog and 512f for the fully masked loop. From
4 scalar iterations on, the AVX512 masked epilog code is clearly
the winner; the fully masked variant is clearly worse and
its size benefit is also tiny.
This patch does not enable using fully masked loops or
masked epilogues by default. More work on cost modeling
and vectorization kind selection on x86_64 is necessary
for this.
Implementation wise this introduces LOOP_VINFO_PARTIAL_VECTORS_STYLE
which could be exploited further to unify some of the flags
we have right now but there didn't seem to be many easy things
to merge, so I'm leaving this for followups.
Mask requirements as registered by vect_record_loop_mask are kept in their
original form and recorded in a hash_set now instead of being
processed to a vector of rgroup_controls. Instead that's now
left to the final analysis phase which tries forming the rgroup_controls
vector using while_ult and if that fails now tries AVX512 style
which needs a different organization and instead fills a hash_map
with the relevant info. vect_get_loop_mask now has two implementations,
one for each of the two mask styles we then have.
I have decided against interweaving vect_set_loop_condition_partial_vectors
with conditions to do AVX512 style masking and instead opted to
"duplicate" this to vect_set_loop_condition_partial_vectors_avx512.
Likewise for vect_verify_full_masking vs vect_verify_full_masking_avx512.
The vect_prepare_for_masked_peels hunk might run into issues with
SVE; I didn't check yet, but using LOOP_VINFO_RGROUP_COMPARE_TYPE
looked odd.
Bootstrapped and tested on x86_64-unknown-linux-gnu. I've run
the testsuite with --param vect-partial-vector-usage=2 with and
without -fno-vect-cost-model and filed two bugs, one ICE (PR110221)
and one latent wrong-code (PR110237).
* tree-vectorizer.h (enum vect_partial_vector_style): New.
(_loop_vec_info::partial_vector_style): Likewise.
(LOOP_VINFO_PARTIAL_VECTORS_STYLE): Likewise.
(rgroup_controls::compare_type): Add.
(vec_loop_masks): Change from a typedef to auto_vec<>
to a structure.
* tree-vect-loop-manip.cc (vect_set_loop_condition_partial_vectors):
Adjust. Convert niters_skip to compare_type.
(vect_set_loop_condition_partial_vectors_avx512): New function
implementing the AVX512 partial vector codegen.
(vect_set_loop_condition): Dispatch to the correct
vect_set_loop_condition_partial_vectors_* function based on
LOOP_VINFO_PARTIAL_VECTORS_STYLE.
(vect_prepare_for_masked_peels): Compute LOOP_VINFO_MASK_SKIP_NITERS
in the original niter type.
* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialize
partial_vector_style.
(can_produce_all_loop_masks_p): Adjust.
(vect_verify_full_masking): Produce the rgroup_controls vector
here. Set LOOP_VINFO_PARTIAL_VECTORS_STYLE on success.
(vect_verify_full_masking_avx512): New function implementing
verification of AVX512 style masking.
(vect_verify_loop_lens): Set LOOP_VINFO_PARTIAL_VECTORS_STYLE.
(vect_analyze_loop_2): Also try AVX512 style masking.
Adjust condition.
(vect_estimate_min_profitable_iters): Implement AVX512 style
mask producing cost.
(vect_record_loop_mask): Do not build the rgroup_controls
vector here but record masks in a hash-set.
(vect_get_loop_mask): Implement AVX512 style mask query,
complementing the existing while_ult style.
|