path: root/gcc/tree-vectorizer.h
Age  Commit message  Author  Files  Lines
2024-10-16  Remove SLP_INSTANCE_UNROLLING_FACTOR, compute VF in vect_make_slp_decision  Richard Biener  1  -4/+0
The following prepares us for SLP instances with a non-uniform number of lanes.  We already have this with load permutation lowering, but we managed to keep that within the constraints of the per-SLP-instance computed VF based on its max_nunits (with a vector type fixed for each node) and the instance group size, which is the number of lanes in the SLP instance root.  But in the case where arbitrary splitting and merging of SLP nodes at non-power-of-two lane boundaries is allowed, this simple calculation based on the outgoing group size falls apart.

The following, instead of computing a VF during SLP instance discovery, computes it at vect_make_slp_decision time by walking the SLP graph and looking at each SLP node in isolation.  We do track max_nunits per node, which could be a VF per node instead, or we could forgo both completely (though for BB vectorization we need to communicate a VF > 1 requirement upward, or compute that after the fact).  In the end we'd like to delay vector type assignment and only compute a minimum VF here, allowing vector types to grow when the actual VF is bigger.

There's a slight complication with permutes of externs / constants as those get their vector type (and thus max_nunits) assigned late.  While we force them to have the same vector type as the result at the moment, their number of lanes can differ.  So those get handled explicitly there right now to up the VF as needed - the alternative is to fail vectorization; I have an addition to vect_maybe_update_slp_op_vectype that would FAIL if the set vector type isn't within the constraints of the VF.

	* tree-vectorizer.h (SLP_INSTANCE_UNROLLING_FACTOR): Remove.
	(slp_instance::unrolling_factor): Likewise.
	* tree-vect-slp.cc (vect_build_slp_instance): Do not set
	SLP_INSTANCE_UNROLLING_FACTOR.  Remove then dead code.  Compute
	and set max_nunits from the RHS nodes merged.
	(vect_update_slp_vf_for_node): New function.
	(vect_make_slp_decision): Use vect_update_slp_vf_for_node to
	compute VF recursively.
	(vect_build_slp_store_interleaving): Get max_nunits and properly
	set that on the permute nodes built.
	(vect_analyze_slp): Do not set SLP_INSTANCE_UNROLLING_FACTOR.
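A minimal sketch of what such a recursive VF walk can look like, assuming it recurses over SLP_TREE_CHILDREN and folds each node's nunits into the VF via the existing vect_update_max_nunits helper (illustrative only, not the committed implementation):

/* Sketch: fold the VF requirement of NODE and its children into VF.  */
static void
vect_update_slp_vf_for_node (slp_tree node, poly_uint64 &vf,
			     hash_set<slp_tree> &visited)
{
  if (!node || visited.add (node))
    return;
  for (slp_tree child : SLP_TREE_CHILDREN (node))
    vect_update_slp_vf_for_node (child, vf, visited);
  /* The instance VF is the least common multiple of the number of
     units of each node's vector type.  */
  if (tree vectype = SLP_TREE_VECTYPE (node))
    vect_update_max_nunits (&vf, vectype);
}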
2024-10-15  AArch64: re-enable memory access costing after SLP change.  Tamar Christina  1  -0/+12
While chasing down a costing difference between SLP and non-SLP for memory access costing I noticed that at some point the SLP and non-SLP costing have diverged.  It used to be we only supported LOAD_LANES in SLP and so the non-SLP costing was working fine.  But with the change to SLP only we now lost costing.

It looks like the vectorizer for non-SLP stores the VMAT type in STMT_VINFO_MEMORY_ACCESS_TYPE on the stmt_info, but for SLP it stores it in SLP_TREE_MEMORY_ACCESS_TYPE which is on the SLP node itself.  While my first attempt at a patch was to just also store the VMAT in the stmt_info

https://gcc.gnu.org/pipermail/gcc-patches/2024-October/665295.html

Richi pointed out that this goes wrong when the same access is used in hybrid SLP, and so we have to do a backend-specific fix.  To help out other backends this also introduces a generic helper function suggested by Richi in that patch (I hope that's ok; I didn't want to split out just the helper).

This successfully restores VMAT-based costing in the new SLP-only world.

gcc/ChangeLog:

	* tree-vectorizer.h (vect_mem_access_type): New.
	* config/aarch64/aarch64.cc (aarch64_ld234_st234_vectors): Use it.
	(aarch64_detect_vector_stmt_subtype): Likewise.
	(aarch64_adjust_stmt_cost): Likewise.
	(aarch64_vector_costs::count_ops): Likewise.
	(aarch64_vector_costs::add_stmt_cost): Make SLP node named.
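The generic helper presumably just picks the right holder for the VMAT (a sketch; see tree-vectorizer.h for the actual definition):

/* Return the memory access type of STMT_INFO, taking it from NODE when
   vectorizing with SLP.  */
inline vect_memory_access_type
vect_mem_access_type (stmt_vec_info stmt_info, slp_tree node)
{
  if (node)
    return SLP_TREE_MEMORY_ACCESS_TYPE (node);
  else
    return STMT_VINFO_MEMORY_ACCESS_TYPE (stmt_info);
}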
2024-10-14  middle-end: support SLP early break  Tamar Christina  1  -1/+11
This patch introduces feature parity for early break in the SLP-only vectorizer.

The approach taken here is to treat the early exits as root statements for an SLP tree.  This means that we don't need any changes to build_slp to support gconds.  Codegen for the gcond itself now has to be done out of line but the body of the SLP blocks itself is simply driven by SLP scheduling.

There is a slight awkwardness in having re-used vectorizable_early_exit for both SLP and non-SLP, but I've documented the differences and when I did try to refactor it, it wasn't really worth it given that this is a temporary state anyway.

This version is restricted to lane = 1; as such we can re-use the existing move_early_break function instead of having to do safety updates through scheduling.  I have a branch where I'm working on that, but lane > 1 is out of scope for GCC 15 anyway.  The only reason I will try to get moving through scheduling done as a stretch goal is so we get epilogue vectorization back for early break.

The example:

unsigned test4(unsigned x)
{
 unsigned ret = 0;
 for (int i = 0; i < N; i++)
 {
   vect_b[i] = x + i;
   if (vect_a[i]*2 != x)
     break;
   vect_a[i] = x;
 }
 return ret;
}

builds the following SLP instance for early break:

note:  Analyzing vectorizable control flow: if (patt_6 != 0)
note:  Starting SLP discovery for
note:    patt_6 = _4 != x_9(D);
note:  starting SLP discovery for node 0x63abc80
note:  Build SLP for patt_6 = _4 != x_9(D);
note:  precomputed vectype: vector(4) <signed-boolean:32>
note:  nunits = 4
note:  vect_is_simple_use: operand x_9(D), type of def: external
note:  vect_is_simple_use: operand # RANGE [irange] unsigned int [0, 0][2, +INF] MASK 0xffff _3 * 2, type of def: internal
note:  starting SLP discovery for node 0x63abdc0
note:  Build SLP for _4 = _3 * 2;
note:  precomputed vectype: vector(4) unsigned int
note:  nunits = 4
note:  vect_is_simple_use: operand # vect_aD.4416[i_15], type of def: internal
note:  vect_is_simple_use: operand 2, type of def: constant
note:  starting SLP discovery for node 0x63abe60
note:  Build SLP for _3 = vect_a[i_15];
note:  precomputed vectype: vector(4) unsigned int
note:  nunits = 4
note:  SLP discovery for node 0x63abe60 succeeded
note:  SLP discovery for node 0x63abdc0 succeeded
note:  SLP discovery for node 0x63abc80 succeeded
note:  SLP size 3 vs. limit 10.
note:  Final SLP tree for instance 0x6474190:
note:  node 0x63abc80 (max_nunits=4, refcnt=2) vector(4) <signed-boolean:32>
note:  op template: patt_6 = _4 != x_9(D);
note:    stmt 0 patt_6 = _4 != x_9(D);
note:    children 0x63abd20 0x63abdc0
note:  node (external) 0x63abd20 (max_nunits=1, refcnt=1)
note:    { x_9(D) }
note:  node 0x63abdc0 (max_nunits=4, refcnt=2) vector(4) unsigned int
note:  op template: _4 = _3 * 2;
note:    stmt 0 _4 = _3 * 2;
note:    children 0x63abe60 0x63abf00
note:  node 0x63abe60 (max_nunits=4, refcnt=2) vector(4) unsigned int
note:  op template: _3 = vect_a[i_15];
note:    stmt 0 _3 = vect_a[i_15];
note:    load permutation { 0 }
note:  node (constant) 0x63abf00 (max_nunits=1, refcnt=1)
note:    { 2 }

and during codegen:

note:  ------>vectorizing SLP node starting from: patt_6 = _4 != x_9(D);
note:  vect_is_simple_use: operand # RANGE [irange] unsigned int [0, 0][2, +INF] MASK 0xffff _3 * 2, type of def: internal
note:  add new stmt: mask_patt_6.18_58 = _53 != vect__4.17_57;
note:  === vectorizable_early_exit ===
note:  transform early-exit.
note:  vectorizing stmts using SLP.
note:  Vectorizing SLP tree:
note:  node 0x63abfa0 (max_nunits=4, refcnt=1) vector(4) int
note:  op template: i_12 = i_15 + 1;
note:    stmt 0 i_12 = i_15 + 1;
note:    children 0x63aba00 0x63ac040
note:  node 0x63aba00 (max_nunits=4, refcnt=2) vector(4) int
note:  op template: i_15 = PHI <i_12(6), 0(14)>
note:    [l] stmt 0 i_15 = PHI <i_12(6), 0(14)>
note:    children (nil) (nil)
note:  node (constant) 0x63ac040 (max_nunits=1, refcnt=1) vector(4) int
note:    { 1 }

gcc/ChangeLog:

	* tree-vect-loop.cc (vect_analyze_loop_2): Handle SLP trees with
	no children.
	* tree-vectorizer.h (enum slp_instance_kind): Add
	slp_inst_kind_gcond.
	(LOOP_VINFO_EARLY_BREAKS_LIVE_IVS): New.
	(vectorizable_early_exit): Expose.
	(class _loop_vec_info): Add early_break_live_stmts.
	* tree-vect-slp.cc (vect_build_slp_instance,
	vect_analyze_slp_instance): Support gcond instances.
	(vect_analyze_slp): Analyze gcond roots and early break live
	statements.
	(maybe_push_to_hybrid_worklist): Don't sink gconds.
	(vect_slp_analyze_operations): Support gconds.
	(vect_slp_check_for_roots): Update comments.
	(vectorize_slp_instance_root_stmt): Support gconds.
	(vect_schedule_slp): Pass vinfo to
	vectorize_slp_instance_root_stmt.
	* tree-vect-stmts.cc (vect_stmt_relevant_p): Record early break
	live statements.
	(vectorizable_early_exit): Support SLP.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/vect-early-break_126.c: New test.
	* gcc.dg/vect/vect-early-break_127.c: New test.
	* gcc.dg/vect/vect-early-break_128.c: New test.
2024-10-11  tree-optimization/117080 - Add SLP_TREE_MEMORY_ACCESS_TYPE  Richard Biener  1  -40/+62
It turns out target costing code looks at STMT_VINFO_MEMORY_ACCESS_TYPE to identify operations from (emulated) gathers for example.  This doesn't work for SLP loads since we do not set STMT_VINFO_MEMORY_ACCESS_TYPE there as the vectorization strategy might differ between different stmt uses.  It seems we got away with setting it for stores though.

The following adds a memory_access_type field to slp_tree and sets it from load and store vectorization code.  All the costing doesn't record the SLP node (that was only done selectively for some corner case).  The costing is really in need of a big overhaul; the following just massages the two relevant ops to fix gcc.dg/target/pr88531-2[bc].c FAILs when switching on SLP for non-grouped stores.  In particular currently we either have a SLP node or a stmt_info in the cost hook but not both.  So the following mitigates this, postponing a rewrite of costing to next stage1.

Other targets look possibly affected as well but are left to respective maintainers to update.

	PR tree-optimization/117080
	* tree-vectorizer.h (_slp_tree::memory_access_type): Add.
	(SLP_TREE_MEMORY_ACCESS_TYPE): New.
	(record_stmt_cost): Add another overload.
	* tree-vect-slp.cc (_slp_tree::_slp_tree): Initialize
	memory_access_type.
	* tree-vect-stmts.cc (vectorizable_store): Set
	SLP_TREE_MEMORY_ACCESS_TYPE.
	(vectorizable_load): Likewise.  Also record the SLP node when
	costing emulated gather offset decompose and vector composition.
	* config/i386/i386.cc (ix86_vector_costs::add_stmt_cost): Also
	recognize SLP emulated gather/scatter.
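From the ChangeLog above, the new field and accessor presumably follow the usual slp_tree pattern, roughly:

/* In struct _slp_tree (sketch): the memory access type (VMAT_*) chosen
   for this node's loads or stores.  */
vect_memory_access_type memory_access_type;

#define SLP_TREE_MEMORY_ACCESS_TYPE(NODE) (NODE)->memory_access_type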
2024-10-07  tree-optimization/116982 - analyze scalar loop exit early  Richard Biener  1  -2/+4
The following makes sure to discover the scalar loop IV exit during analysis as failure to do so (if DCE and friends are disabled this can happen due to if-conversion doing DCE and FRE on the if-converted loop) would ICE later.  I refrained from larger refactoring to be able to eventually backport.

	PR tree-optimization/116982
	* tree-vectorizer.h (vect_analyze_loop): Pass in .LOOP_VECTORIZED
	call.
	(vect_analyze_loop_form): Likewise.
	* tree-vect-loop.cc (vect_analyze_loop_form): Reject loops where
	we cannot determine an IV exit for the scalar loop.
	(vect_analyze_loop): Adjust.
	* tree-vectorizer.cc (try_vectorize_loop_1): Likewise.
	* tree-parloops.cc (gather_scalar_reductions): Likewise.
2024-09-22  middle-end: lower COND_EXPR into gimple form in vect_recog_bool_pattern  Tamar Christina  1  -0/+7
Currently the vectorizer cheats when lowering COND_EXPR during bool recog.  In the cases where the conditional is loop invariant or non-boolean it instead converts the operation back into GENERIC and hides much of the operation from the analysis part of the vectorizer.  i.e.

  a ? b : c

is transformed into:

  a != 0 ? b : c

however by doing so we can't perform any optimization on the mask as it isn't explicit until quite late during codegen.

To fix this, this patch lowers booleans earlier and so ensures that we are always in GIMPLE.  For when the value is a loop-invariant boolean we have to generate an additional conversion from bool to the integer mask form.  This is done by creating a loop-invariant a ? -1 : 0 with the target mask precision and then doing a normal != 0 comparison on that.

To support this the patch also adds the ability, during pattern matching, to create a loop-invariant pattern that won't be seen by the vectorizer and will instead be materialized inside the loop preheader in the case of loops, or in the case of BB vectorization in the first BB in the region.

gcc/ChangeLog:

	* tree-vect-patterns.cc (append_inv_pattern_def_seq): New.
	(vect_recog_bool_pattern): Lower COND_EXPRs.
	* tree-vect-slp.cc (vect_slp_region): Materialize loop invariant
	statements.
	* tree-vect-loop.cc (vect_transform_loop): Likewise.
	* tree-vect-stmts.cc (vectorizable_comparison_1): Remove
	VECT_SCALAR_BOOLEAN_TYPE_P handling for vectype.
	* tree-vectorizer.cc (vec_info::vec_info): Initialize
	inv_pattern_def_seq.
	* tree-vectorizer.h (LOOP_VINFO_INV_PATTERN_DEF_SEQ): New.
	(class vec_info): Add inv_pattern_def_seq.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/bb-slp-conditional_store_1.c: New test.
	* gcc.dg/vect/vect-conditional_store_5.c: New test.
	* gcc.dg/vect/vect-conditional_store_6.c: New test.
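As a hand-written illustration (not from the patch) of the loop-invariant case in C terms:

/* 'a' is a loop-invariant bool feeding a COND_EXPR.  */
void
f (int *out, int *b, int *c, _Bool a, int n)
{
  /* Materialized once in the loop preheader, with the target's mask
     precision.  */
  int inv = a ? -1 : 0;
  for (int i = 0; i < n; i++)
    /* The mask is now an explicit != 0 comparison, visible in GIMPLE.  */
    out[i] = (inv != 0) ? b[i] : c[i];
}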
2024-09-19  Fall back to single-lane SLP before falling back to no SLP  Richard Biener  1  -1/+1
The following changes the fallback to disable SLP when any of the discovered SLP instances failed to pass vectorization checking into a fallback that emulates what no SLP would do with SLP - force single-lane discovery for all instances.  The patch does not remove the final fallback to disable SLP but it reduces the fallout from failing vectorization when any non-SLP stmt survives analysis.

	* tree-vectorizer.h (vect_analyze_slp): Add force_single_lane
	parameter.
	* tree-vect-slp.cc (vect_analyze_slp_instance): Remove defaulting
	of force_single_lane.
	(vect_build_slp_instance): Likewise.  Pass down appropriate
	force_single_lane.
	(vect_analyze_slp): Add force_single_lane parameter and pass it
	down appropriately.
	(vect_slp_analyze_bb_1): Always do multi-lane SLP.
	* tree-vect-loop.cc (vect_analyze_loop_2): Track two SLP modes
	and adjust accordingly.
	(vect_analyze_loop_1): Save the SLP mode when unrolling.
	* gcc.dg/vect/vect-outer-slp-1.c: Adjust.
2024-09-02  load and store-lanes with SLP  Richard Biener  1  -0/+4
The following is a prototype for how to represent load/store-lanes within SLP.  I've for now settled with having a single load node with multiple permute nodes acting as selection, one for each loaded lane, and a single store node fed from all stored lanes.  For

  for (int i = 0; i < 1024; ++i)
    {
      a[2*i] = b[2*i] + 7;
      a[2*i+1] = b[2*i+1] * 3;
    }

you have the following SLP graph, where I explain how things are set up and code-generated:

t.c:23:21: note:  SLP graph after lowering permutations:
t.c:23:21: note:  node 0x50dc8b0 (max_nunits=1, refcnt=1) vector(4) int
t.c:23:21: note:  op template: *_6 = _7;
t.c:23:21: note:    stmt 0 *_6 = _7;
t.c:23:21: note:    stmt 1 *_12 = _13;
t.c:23:21: note:    children 0x50dc488 0x50dc6e8

This is the store node, it's marked with ldst_lanes = true during SLP discovery.  This node code-generates

  vect_array.65[0] = vect__7.61_29;
  vect_array.65[1] = vect__13.62_28;
  MEM <int[8]> [(int *)vectp_a.63_27] = .STORE_LANES (vect_array.65);

...

t.c:23:21: note:  node 0x50dc520 (max_nunits=4, refcnt=2) vector(4) int
t.c:23:21: note:  op: VEC_PERM_EXPR
t.c:23:21: note:    stmt 0 _5 = *_4;
t.c:23:21: note:    lane permutation { 0[0] }
t.c:23:21: note:    children 0x50dc948
t.c:23:21: note:  node 0x50dc780 (max_nunits=4, refcnt=2) vector(4) int
t.c:23:21: note:  op: VEC_PERM_EXPR
t.c:23:21: note:    stmt 0 _11 = *_10;
t.c:23:21: note:    lane permutation { 0[1] }
t.c:23:21: note:    children 0x50dc948

These are the selection nodes, marked with ldst_lanes = true.  They code-generate nothing.

t.c:23:21: note:  node 0x50dc948 (max_nunits=4, refcnt=3) vector(4) int
t.c:23:21: note:  op template: _5 = *_4;
t.c:23:21: note:    stmt 0 _5 = *_4;
t.c:23:21: note:    stmt 1 _11 = *_10;
t.c:23:21: note:    load permutation { 0 1 }

This is the load node, marked with ldst_lanes = true (the load permutation is only accurate when taking into account the lane permute in the selection nodes).  It code-generates

  vect_array.58 = .LOAD_LANES (MEM <int[8]> [(int *)vectp_b.56_33]);
  vect__5.59_31 = vect_array.58[0];
  vect__5.60_30 = vect_array.58[1];

This scheme allows us to leave code generation in vectorizable_load/store mostly as-is.

While this should support both load-lanes and (masked) store-lanes, the decision to do either is done during SLP discovery time and cannot be reversed without altering the SLP tree - as-is the SLP tree is not usable for non-store-lanes on the store side; the load side is OK representation-wise but will very likely fail permute handling as the lowering to deal with the two-input-vector restriction isn't done - but of course since the permute node is marked as to be ignored that doesn't work out.  So I've put restrictions in place that fail vectorization if a load/store-lane SLP tree is later classified differently by get_load_store_type.

I'll note that for example gcc.target/aarch64/sve/mask_struct_store_3.c will not get SLP store-lanes used because the full store SLPs just fine, though we then fail to handle the "splat" load-permutation

t2.c:5:21: note:  node 0x4db2630 (max_nunits=4, refcnt=2) vector([4,4]) int
t2.c:5:21: note:  op template: _6 = *_5;
t2.c:5:21: note:    stmt 0 _6 = *_5;
t2.c:5:21: note:    stmt 1 _6 = *_5;
t2.c:5:21: note:    stmt 2 _6 = *_5;
t2.c:5:21: note:    stmt 3 _6 = *_5;
t2.c:5:21: note:    load permutation { 0 0 0 0 }

because the load permute lowering code currently doesn't consider it worth lowering single loads from a group (or in this case not grouped loads).  The expectation is the target can handle this by two interleaves with itself.
So what we see here is that while the explicit SLP representation is helpful in some cases, in cases like this it would require changing it when we make decisions how to vectorize.  My idea is that this all will change a lot when we re-do SLP discovery (for loops) and when we get rid of non-SLP, as I think vectorizable_* should be allowed to alter the SLP graph during analysis.

The patch also removes the code cancelling SLP if we can use load/store-lanes from the main loop vector analysis code and re-implements it as re-discovering the SLP instance with forced single-lane splits so the SLP load/store-lanes scheme can be used.  This is now done after SLP discovery and SLP pattern recog are complete, to not disturb the latter, but per SLP instance instead of being a global decision on the whole loop.

This is a behavioral change that for example shows in gcc.dg/vect/slp-perm-6.c on ARM where we formerly used SLP permutes but now a mix of SLP without permutes and load/store-lanes.  The previous flaky heuristic is now flaky in a different way.

Testing on RISC-V and aarch64 reveals several testcases that require adjustment so as to now expect SLP even when load/store-lanes are being used.  If in doubt I've adjusted them to the final expectation, which will lead to one or two new FAILs where we still do the SLP cancelling.  I have a followup that implements that while remaining in SLP that's in final testing.

Note that gcc.dg/vect/slp-42.c and gcc.dg/vect/pr68445.c will FAIL on aarch64 with SVE because for some odd reason vect_stridedN is true for any N for check_effective_target_vect_fully_masked targets, but SVE cannot do ld8 while risc-v can.

I have not bothered to adjust target tests that now fail assembly-scan.

	* tree-vectorizer.h (_slp_tree::ldst_lanes): New flag to mark
	load, store and permute nodes.
	* tree-vect-slp.cc (_slp_tree::_slp_tree): Initialize ldst_lanes.
	(vect_build_slp_instance): For stores iff the target prefers
	store-lanes discover single-lane sub-groups, do not perform
	interleaving lowering but mark the node with ldst_lanes.
	Also allow i == 0 - fatal failure - for splitting up a store group
	when we're not doing single-lane discovery already.
	(vect_lower_load_permutations): When the target supports load
	lanes and the loads all fit the pattern split out a single level
	of permutes only and mark the load and permute nodes with
	ldst_lanes.
	(vectorizable_slp_permutation_1): Handle the load-lane permute
	forwarding of vector defs.
	(vect_analyze_slp): After SLP pattern recog is finished see if
	there are any SLP instances that would benefit from using
	load/store-lanes and re-discover those with forced single lanes.
	* tree-vect-stmts.cc (get_group_load_store_type): Support
	load/store-lanes for SLP.
	(vectorizable_store): Support SLP code generation for store-lanes.
	(vectorizable_load): Support SLP code generation for load-lanes.
	* tree-vect-loop.cc (vect_analyze_loop_2): Do not cancel SLP when
	store-lanes can be used.
	* gcc.dg/vect/slp-55.c: New testcase.
	* gcc.dg/vect/slp-56.c: Likewise.
	* gcc.dg/vect/slp-11c.c: Adjust.
	* gcc.dg/vect/slp-53.c: Likewise.
	* gcc.dg/vect/slp-cond-1.c: Likewise.
	* gcc.dg/vect/vect-complex-5.c: Likewise.
	* gcc.dg/vect/slp-1.c: Likewise.
	* gcc.dg/vect/slp-54.c: Remove riscv XFAIL.
	* gcc.dg/vect/slp-perm-5.c: Adjust.
	* gcc.dg/vect/slp-perm-7.c: Likewise.
	* gcc.dg/vect/slp-perm-8.c: Likewise.
	* gcc.dg/vect/slp-multitypes-11.c: Likewise.
	* gcc.dg/vect/slp-multitypes-11-big-array.c: Likewise.
	* gcc.dg/vect/slp-perm-9.c: Remove expected SLP fail due to
	three-vector permute.
	* gcc.dg/vect/slp-perm-6.c: Remove XFAIL.
	* gcc.dg/vect/slp-perm-1.c: Adjust.
	* gcc.dg/vect/slp-perm-2.c: Likewise.
	* gcc.dg/vect/slp-perm-3.c: Likewise.
	* gcc.dg/vect/slp-perm-4.c: Likewise.
	* gcc.dg/vect/pr68445.c: Likewise.
	* gcc.dg/vect/slp-11b.c: Likewise.
	* gcc.dg/vect/slp-2.c: Likewise.
	* gcc.dg/vect/slp-23.c: Likewise.
	* gcc.dg/vect/slp-33.c: Likewise.
	* gcc.dg/vect/slp-42.c: Likewise.
	* gcc.dg/vect/slp-46.c: Likewise.
	* gcc.dg/vect/slp-perm-10.c: Likewise.
2024-07-17  vect: Optimize order of lane-reducing operations in loop def-use cycles  Feng Xue  1  -0/+6
When transforming multiple lane-reducing operations in a loop reduction chain, originally, the corresponding vectorized statements are generated into def-use cycles starting from 0.  The def-use cycle with smaller index would contain more statements, which means more instruction dependency.  For example:

  int sum = 1;
  for (i)
    {
      sum += d0[i] * d1[i];      // dot-prod <vector(16) char>
      sum += w[i];               // widen-sum <vector(16) char>
      sum += abs(s0[i] - s1[i]); // sad <vector(8) short>
      sum += n[i];               // normal <vector(4) int>
    }

Original transformation result:

  for (i / 16)
    {
      sum_v0 = DOT_PROD (d0_v0[i: 0 ~ 15], d1_v0[i: 0 ~ 15], sum_v0);
      sum_v1 = sum_v1;  // copy
      sum_v2 = sum_v2;  // copy
      sum_v3 = sum_v3;  // copy

      sum_v0 = WIDEN_SUM (w_v0[i: 0 ~ 15], sum_v0);
      sum_v1 = sum_v1;  // copy
      sum_v2 = sum_v2;  // copy
      sum_v3 = sum_v3;  // copy

      sum_v0 = SAD (s0_v0[i: 0 ~ 7 ], s1_v0[i: 0 ~ 7 ], sum_v0);
      sum_v1 = SAD (s0_v1[i: 8 ~ 15], s1_v1[i: 8 ~ 15], sum_v1);
      sum_v2 = sum_v2;  // copy
      sum_v3 = sum_v3;  // copy

      ...
    }

For higher instruction parallelism in the final vectorized loop, an optimal means is to distribute those effective vector lane-reducing ops evenly among all def-use cycles.  Transformed as below, DOT_PROD, WIDEN_SUM and SADs are generated into disparate cycles, and the instruction dependency among them can be eliminated.

  for (i / 16)
    {
      sum_v0 = DOT_PROD (d0_v0[i: 0 ~ 15], d1_v0[i: 0 ~ 15], sum_v0);
      sum_v1 = sum_v1;  // copy
      sum_v2 = sum_v2;  // copy
      sum_v3 = sum_v3;  // copy

      sum_v0 = sum_v0;  // copy
      sum_v1 = WIDEN_SUM (w_v1[i: 0 ~ 15], sum_v1);
      sum_v2 = sum_v2;  // copy
      sum_v3 = sum_v3;  // copy

      sum_v0 = sum_v0;  // copy
      sum_v1 = sum_v1;  // copy
      sum_v2 = SAD (s0_v2[i: 0 ~ 7 ], s1_v2[i: 0 ~ 7 ], sum_v2);
      sum_v3 = SAD (s0_v3[i: 8 ~ 15], s1_v3[i: 8 ~ 15], sum_v3);

      ...
    }

2024-03-22 Feng Xue <fxue@os.amperecomputing.com>

gcc/
	PR tree-optimization/114440
	* tree-vectorizer.h (struct _stmt_vec_info): Add a new field
	reduc_result_pos.
	* tree-vect-loop.cc (vect_transform_reduction): Generate
	lane-reducing statements in an optimized order.
2024-07-17  vect: Support multiple lane-reducing operations for loop reduction [PR114440]  Feng Xue  1  -0/+2
For a lane-reducing operation (dot-prod/widen-sum/sad) in loop reduction, the current vectorizer could only handle the pattern if the reduction chain does not contain other operations, no matter whether the other is normal or lane-reducing.

This patch removes some constraints in reduction analysis to allow multiple arbitrary lane-reducing operations with mixed input vectypes in a loop reduction chain.  For example:

  int sum = 1;
  for (i)
    {
      sum += d0[i] * d1[i];      // dot-prod <vector(16) char>
      sum += w[i];               // widen-sum <vector(16) char>
      sum += abs(s0[i] - s1[i]); // sad <vector(8) short>
    }

The vector size is 128-bit, and the vectorization factor is 16.  Reduction statements would be transformed as:

  vector<4> int sum_v0 = { 0, 0, 0, 1 };
  vector<4> int sum_v1 = { 0, 0, 0, 0 };
  vector<4> int sum_v2 = { 0, 0, 0, 0 };
  vector<4> int sum_v3 = { 0, 0, 0, 0 };

  for (i / 16)
    {
      sum_v0 = DOT_PROD (d0_v0[i: 0 ~ 15], d1_v0[i: 0 ~ 15], sum_v0);
      sum_v1 = sum_v1;  // copy
      sum_v2 = sum_v2;  // copy
      sum_v3 = sum_v3;  // copy

      sum_v0 = WIDEN_SUM (w_v0[i: 0 ~ 15], sum_v0);
      sum_v1 = sum_v1;  // copy
      sum_v2 = sum_v2;  // copy
      sum_v3 = sum_v3;  // copy

      sum_v0 = SAD (s0_v0[i: 0 ~ 7 ], s1_v0[i: 0 ~ 7 ], sum_v0);
      sum_v1 = SAD (s0_v1[i: 8 ~ 15], s1_v1[i: 8 ~ 15], sum_v1);
      sum_v2 = sum_v2;  // copy
      sum_v3 = sum_v3;  // copy
    }

  sum_v = sum_v0 + sum_v1 + sum_v2 + sum_v3;  // = sum_v0 + sum_v1

2024-03-22 Feng Xue <fxue@os.amperecomputing.com>

gcc/
	PR tree-optimization/114440
	* tree-vectorizer.h (vectorizable_lane_reducing): New function
	declaration.
	* tree-vect-stmts.cc (vect_analyze_stmt): Call new function
	vectorizable_lane_reducing to analyze lane-reducing operation.
	* tree-vect-loop.cc (vect_model_reduction_cost): Remove cost
	computation code related to emulated_mixed_dot_prod.
	(vectorizable_lane_reducing): New function.
	(vectorizable_reduction): Allow multiple lane-reducing operations
	in loop reduction.  Move some original lane-reducing related code
	to vectorizable_lane_reducing.
	(vect_transform_reduction): Adjust comments with updated example.

gcc/testsuite/
	PR tree-optimization/114440
	* gcc.dg/vect/vect-reduc-chain-1.c
	* gcc.dg/vect/vect-reduc-chain-2.c
	* gcc.dg/vect/vect-reduc-chain-3.c
	* gcc.dg/vect/vect-reduc-chain-dot-slp-1.c
	* gcc.dg/vect/vect-reduc-chain-dot-slp-2.c
	* gcc.dg/vect/vect-reduc-chain-dot-slp-3.c
	* gcc.dg/vect/vect-reduc-chain-dot-slp-4.c
	* gcc.dg/vect/vect-reduc-dot-slp-1.c
2024-07-17  vect: Add a unified vect_get_num_copies for slp and non-slp  Feng Xue  1  -1/+27
Extend the original vect_get_num_copies (pure loop-based) to calculate the number of vector stmts for an SLP node with regard to a generic vect region.

2024-07-12 Feng Xue <fxue@os.amperecomputing.com>

gcc/
	* tree-vectorizer.h (vect_get_num_copies): New overload function.
	* tree-vect-slp.cc (vect_slp_analyze_node_operations_1): Calculate
	number of vector stmts for slp node with vect_get_num_copies.
	(vect_slp_analyze_node_operations): Calculate number of vector
	elements for constant/external slp node with vect_get_num_copies.
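A sketch of the unified overload (close to, but not necessarily verbatim, the committed version): for a loop region the base count is the vectorization factor, and for an SLP node it is scaled by the node's lane count before dividing by the vector type's units.

/* Return the number of vector statements needed for NODE (or a
   non-SLP stmt when NODE is null) in the region described by VINFO.  */
inline unsigned int
vect_get_num_copies (vec_info *vinfo, slp_tree node, tree vectype = NULL)
{
  poly_uint64 vf;
  if (loop_vec_info loop_vinfo = dyn_cast <loop_vec_info> (vinfo))
    vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
  else
    vf = 1;
  if (node)
    {
      vf *= SLP_TREE_LANES (node);
      if (!vectype)
	vectype = SLP_TREE_VECTYPE (node);
    }
  return vect_get_num_vectors (vf, vectype);
}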
2024-06-27  vect: generate suitable convert insn for int -> int, float -> float and int <-> float  Hu, Lin1  1  -0/+4
gcc/ChangeLog:

	PR target/107432
	* tree-vect-generic.cc (expand_vector_conversion): Support
	convert for int -> int, float -> float and int <-> float.
	* tree-vect-stmts.cc (vectorizable_conversion): Wrap the
	indirect convert part.
	(supportable_indirect_convert_operation): New function.
	* tree-vectorizer.h (supportable_indirect_convert_operation):
	Define the new function.

gcc/testsuite/ChangeLog:

	PR target/107432
	* gcc.target/i386/pr107432-1.c: New test.
	* gcc.target/i386/pr107432-2.c: Ditto.
	* gcc.target/i386/pr107432-3.c: Ditto.
	* gcc.target/i386/pr107432-4.c: Ditto.
	* gcc.target/i386/pr107432-5.c: Ditto.
	* gcc.target/i386/pr107432-6.c: Ditto.
	* gcc.target/i386/pr107432-7.c: Ditto.
2024-06-20  vect: Add a function to check lane-reducing stmt  Feng Xue  1  -0/+12
Add a utility function to check if a statement is a lane-reducing operation, which could simplify some existing code.

2024-06-16 Feng Xue <fxue@os.amperecomputing.com>

gcc/
	* tree-vectorizer.h (lane_reducing_stmt_p): New function.
	* tree-vect-slp.cc (vect_analyze_slp): Use new function
	lane_reducing_stmt_p to check statement.
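Its shape follows directly from the description (a sketch; the actual definition is in tree-vectorizer.h):

/* Return true if STMT is an assignment whose code is one of the three
   lane-reducing operations.  */
inline bool
lane_reducing_stmt_p (gimple *stmt)
{
  if (gassign *assign = dyn_cast <gassign *> (stmt))
    return lane_reducing_op_p (gimple_assign_rhs_code (assign));
  return false;
}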
2024-06-11  vect: Merge loop mask and cond_op mask in fold-left reduction [PR115382].  Robin Dapp  1  -0/+3
Currently we discard the cond-op mask when the loop is fully masked, which causes wrong code in gcc.dg/vect/vect-cond-reduc-in-order-2-signed-zero.c when compiled with -O3 -march=cascadelake --param vect-partial-vector-usage=2.  This patch ANDs both masks.

gcc/ChangeLog:

	PR tree-optimization/115382
	* tree-vect-loop.cc (vectorize_fold_left_reduction): Use
	prepare_vec_mask.
	* tree-vect-stmts.cc (check_load_store_for_partial_vectors):
	Remove static of prepare_vec_mask.
	* tree-vectorizer.h (prepare_vec_mask): Export.
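A sketch of the use site in vectorize_fold_left_reduction (the surrounding variable names here are hypothetical; prepare_vec_mask ANDs a loop mask with a statement's own mask):

/* Hypothetical locals: loop_mask is the fully-masked loop's control
   mask, vec_opmask the mask operand of the COND_OP.  Instead of
   discarding vec_opmask, AND it into the loop mask.  */
tree mask = prepare_vec_mask (loop_vinfo, TREE_TYPE (loop_mask),
			      loop_mask, vec_opmask, gsi);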
2024-06-01  vect: Add a function to check lane-reducing code  Feng Xue  1  -0/+6
Checking whether an operation is lane-reducing requires comparing its code against three kinds (DOT_PROD_EXPR/WIDEN_SUM_EXPR/SAD_EXPR).  Add a utility function to make source coding for the check handy and concise.

2024-05-29 Feng Xue <fxue@os.amperecomputing.com>

gcc/
	* tree-vectorizer.h (lane_reducing_op_p): New function.
	* tree-vect-slp.cc (vect_analyze_slp): Use new function
	lane_reducing_op_p to check statement code.
	* tree-vect-loop.cc (vect_transform_reduction): Likewise.
	(vectorizable_reduction): Likewise, and change name of a local
	variable that holds the result flag.
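The helper is essentially a three-way code comparison (a sketch; the actual definition is in tree-vectorizer.h):

/* Return true if CODE is a lane-reducing operation.  */
inline bool
lane_reducing_op_p (code_helper code)
{
  return code == DOT_PROD_EXPR || code == WIDEN_SUM_EXPR
	 || code == SAD_EXPR;
}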
2024-05-29  vect: Unify bbs in loop_vec_info and bb_vec_info  Feng Xue  1  -10/+13
Both derived classes have their own "bbs" field, which has exactly the same purpose of recording all basic blocks inside the corresponding vect region, while the fields are composed of different data types: one is a normal array, the other an auto_vec.  This difference causes some duplicated code even when handling the same stuff, mostly in tree-vect-patterns.

One refinement is lifting this field into the base class "vec_info", and resetting its value to the continuous memory area pointed to by the two old "bbs" in each constructor of the derived classes.

2024-05-16 Feng Xue <fxue@os.amperecomputing.com>

gcc/
	* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Move
	initialization of bbs to explicit construction code.  Adjust the
	definition of nbbs.
	(update_epilogue_loop_vinfo): Update nbbs for epilog vinfo.
	* tree-vect-patterns.cc (vect_determine_precisions): Make
	loop_vec_info and bb_vec_info share same code.
	(vect_pattern_recog): Remove duplicated vect_pattern_recog_1 loop.
	* tree-vect-slp.cc (vect_get_and_check_slp_defs): Access to
	bbs[0] via base vec_info class.
	(_bb_vec_info::_bb_vec_info): Initialize bbs and nbbs using data
	fields of input auto_vec<> bbs.
	(vect_slp_region): Use access to nbbs to replace original
	bbs.length().
	(vect_schedule_slp_node): Access to bbs[0] via base vec_info
	class.
	* tree-vectorizer.cc (vec_info::vec_info): Add initialization of
	bbs and nbbs.
	(vec_info::insert_seq_on_entry): Access to bbs[0] via base
	vec_info class.
	* tree-vectorizer.h (vec_info): Add new fields bbs and nbbs.
	(LOOP_VINFO_NBBS): New macro.
	(BB_VINFO_BBS): Rename BB_VINFO_BB to BB_VINFO_BBS.
	(BB_VINFO_NBBS): New macro.
	(_loop_vec_info): Remove field bbs.
	(_bb_vec_info): Rename field bbs.
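Sketched from the ChangeLog names, the lifted fields and accessors look roughly like:

/* In the base class vec_info (sketch): all basic blocks of the
   vectorization region, shared by loop and BB vectorization.  */
basic_block *bbs;
unsigned int nbbs;

#define LOOP_VINFO_NBBS(L) (L)->nbbs
#define BB_VINFO_BBS(B)    (B)->bbs
#define BB_VINFO_NBBS(B)   (B)->nbbs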
2024-05-16  Vect: Support loop len in vectorizable early exit  Pan Li  1  -0/+4
This patch adds early break auto-vectorization support for targets which use length on partial vectorization.  Consider the following example:

unsigned vect_a[802];
unsigned vect_b[802];

void test (unsigned x, int n)
{
  for (int i = 0; i < n; i++)
  {
    vect_b[i] = x + i;
    if (vect_a[i] > x)
      break;
    vect_a[i] = x;
  }
}

We use VCOND_MASK_LEN to simulate generating (mask && i < len + bias).  The IR of RVV then looks like below:

  ...
  _87 = .SELECT_VL (ivtmp_85, POLY_INT_CST [32, 32]);
  _55 = (int) _87;
  ...
  mask_patt_6.13_69 = vect_cst__62 < vect__3.12_67;
  vec_len_mask_72 = .VCOND_MASK_LEN (mask_patt_6.13_69, { -1, ... },
				     { 0, ... }, _87, 0);
  if (vec_len_mask_72 != { 0, ... })
    goto <bb 6>; [5.50%]
  else
    goto <bb 7>; [94.50%]

The below tests are passed for this patch:
1. The riscv fully regression tests.
2. The x86 bootstrap tests.
3. The x86 fully regression tests.

gcc/ChangeLog:

	* tree-vect-loop.cc (vect_gen_loop_len_mask): New func to gen
	the loop len mask.
	* tree-vect-stmts.cc (vectorizable_early_exit): Invoke the
	vect_gen_loop_len_mask for 1 or more stmt(s).
	* tree-vectorizer.h (vect_gen_loop_len_mask): New func decl for
	vect_gen_loop_len_mask.

Signed-off-by: Pan Li <pan2.li@intel.com>
2024-01-03  Update copyright years.  Jakub Jelinek  1  -1/+1
2023-12-24  middle-end: Support vectorization of loops with multiple exits.  Tamar Christina  1  -2/+33
Hi All,

This patch adds initial support for early break vectorization in GCC.  In other words it implements support for vectorization of loops with multiple exits.  The support is added for any target that implements a vector cbranch optab; this includes both fully masked and non-masked targets.

Depending on the operation, the vectorizer may also require support for boolean mask reductions using Inclusive OR/Bitwise AND.  This is however only checked when the comparison would produce multiple statements.

This also fully decouples the vectorizer's notion of exit from the existing loop infrastructure's exit.  Before this patch the vectorizer always picked the natural loop latch-connected exit as the main exit.  After this patch the vectorizer is free to choose any exit it deems appropriate as the main exit.  This means that even if the main exit is not countable (i.e. the termination condition could not be determined) we might still be able to vectorize should one of the other exits be countable.  In such situations the loop is reflowed, which enables vectorization of many other loop forms.

Concretely the kind of loops supported are of the forms:

 for (int i = 0; i < N; i++)
 {
   <statements1>
   if (<condition>)
     {
       ...
       <action>;
     }
   <statements2>
 }

where <action> can be:
 - break
 - return
 - goto

Any number of statements can be used before the <action> occurs.

Since this is an initial version for GCC 14 it has the following limitations and features:

- Only fixed sized iterations and buffers are supported.  That is to say any
  vectors loaded or stored must be to statically allocated arrays with known
  sizes.  N must also be known.  This limitation is because our primary target
  for this optimization is SVE.  For VLA SVE we can't easily do cross page
  iteration checks.  The result is likely to also not be beneficial.  For that
  reason we punt support for variable buffers till we have First-Faulting
  support in GCC 15.
- Any stores in <statements1> should not be to the same objects as in
  <condition>.  Loads are fine as long as they don't have the possibility to
  alias.  More concretely, we block RAW dependencies when the intermediate
  value can't be separated from the store, or the store itself can't be moved.
- Prologue peeling, alignment peeling and loop versioning are supported.
- Fully masked loops, unmasked loops and partially masked loops are supported.
- Any number of loop early exits are supported.
- No support for epilogue vectorization.  The only epilogue supported is the
  scalar final one.  Peeling code supports it but the code motion code cannot
  find instructions to make the move in the epilog.
- Early breaks are only supported for inner loop vectorization.

With the help of IPA and LTO this still gets hit quite often.  During bootstrap it hit rather frequently.  Additionally TSVC s332, s481 and s482 all pass now since these are tests for support for early exit vectorization.

This implementation does not support completely handling the early break inside the vector loop itself but instead supports adding checks such that if we know that we have to exit in the current iteration then we branch to scalar code to actually do the final VF iterations, which handles all the code in <action>.

For the scalar loop we know that whatever exit you take you have to perform at most VF iterations.  For vector code we only care about the state of fully performed iterations and reset the scalar code to the (partially) remaining loop.  That is to say, the first vector loop executes so long as the early exit isn't needed.
Once the exit is taken, the scalar code will perform at most VF extra iterations.  The exact number depends on peeling and iteration start and which exit was taken (natural or early).  For this scalar loop, all early exits are treated the same.

When we vectorize we move any statement not related to the early break itself and that would be incorrect to execute before the break (i.e. has side effects) to after the break.  If this is not possible we decline to vectorize.  The analysis and code motion also take into account that they don't introduce a RAW dependency after the move of the stores.

This means that we check at the start of iterations whether we are going to exit or not.  During the analysis phase we check whether we are allowed to do this moving of statements.  Also note that we only move the scalar statements, but only do so after peeling but just before we start transforming statements.

With this the vector flow no longer necessarily needs to match that of the scalar code.  In addition most of the infrastructure is in place to support general control flow safely, however we are punting this to GCC 15.

Codegen:

for e.g.

unsigned vect_a[N];
unsigned vect_b[N];

unsigned test4(unsigned x)
{
 unsigned ret = 0;
 for (int i = 0; i < N; i++)
 {
   vect_b[i] = x + i;
   if (vect_a[i] > x)
     break;
   vect_a[i] = x;
 }
 return ret;
}

We generate for Adv. SIMD:

test4:
        adrp    x2, .LC0
        adrp    x3, .LANCHOR0
        dup     v2.4s, w0
        add     x3, x3, :lo12:.LANCHOR0
        movi    v4.4s, 0x4
        add     x4, x3, 3216
        ldr     q1, [x2, #:lo12:.LC0]
        mov     x1, 0
        mov     w2, 0
        .p2align 3,,7
.L3:
        ldr     q0, [x3, x1]
        add     v3.4s, v1.4s, v2.4s
        add     v1.4s, v1.4s, v4.4s
        cmhi    v0.4s, v0.4s, v2.4s
        umaxp   v0.4s, v0.4s, v0.4s
        fmov    x5, d0
        cbnz    x5, .L6
        add     w2, w2, 1
        str     q3, [x1, x4]
        str     q2, [x3, x1]
        add     x1, x1, 16
        cmp     w2, 200
        bne     .L3
        mov     w7, 3
.L2:
        lsl     w2, w2, 2
        add     x5, x3, 3216
        add     w6, w2, w0
        sxtw    x4, w2
        ldr     w1, [x3, x4, lsl 2]
        str     w6, [x5, x4, lsl 2]
        cmp     w0, w1
        bcc     .L4
        add     w1, w2, 1
        str     w0, [x3, x4, lsl 2]
        add     w6, w1, w0
        sxtw    x1, w1
        ldr     w4, [x3, x1, lsl 2]
        str     w6, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        add     w4, w2, 2
        str     w0, [x3, x1, lsl 2]
        sxtw    x1, w4
        add     w6, w1, w0
        ldr     w4, [x3, x1, lsl 2]
        str     w6, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        str     w0, [x3, x1, lsl 2]
        add     w2, w2, 3
        cmp     w7, 3
        beq     .L4
        sxtw    x1, w2
        add     w2, w2, w0
        ldr     w4, [x3, x1, lsl 2]
        str     w2, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        str     w0, [x3, x1, lsl 2]
.L4:
        mov     w0, 0
        ret
        .p2align 2,,3
.L6:
        mov     w7, 4
        b       .L2

and for SVE:

test4:
        adrp    x2, .LANCHOR0
        add     x2, x2, :lo12:.LANCHOR0
        add     x5, x2, 3216
        mov     x3, 0
        mov     w1, 0
        cntw    x4
        mov     z1.s, w0
        index   z0.s, #0, #1
        ptrue   p1.b, all
        ptrue   p0.s, all
        .p2align 3,,7
.L3:
        ld1w    z2.s, p1/z, [x2, x3, lsl 2]
        add     z3.s, z0.s, z1.s
        cmplo   p2.s, p0/z, z1.s, z2.s
        b.any   .L2
        st1w    z3.s, p1, [x5, x3, lsl 2]
        add     w1, w1, 1
        st1w    z1.s, p1, [x2, x3, lsl 2]
        add     x3, x3, x4
        incw    z0.s
        cmp     w3, 803
        bls     .L3
.L5:
        mov     w0, 0
        ret
        .p2align 2,,3
.L2:
        cntw    x5
        mul     w1, w1, w5
        cbz     w5, .L5
        sxtw    x1, w1
        sub     w5, w5, #1
        add     x5, x5, x1
        add     x6, x2, 3216
        b       .L6
        .p2align 2,,3
.L14:
        str     w0, [x2, x1, lsl 2]
        cmp     x1, x5
        beq     .L5
        mov     x1, x4
.L6:
        ldr     w3, [x2, x1, lsl 2]
        add     w4, w0, w1
        str     w4, [x6, x1, lsl 2]
        add     x4, x1, 1
        cmp     w0, w3
        bcs     .L14
        mov     w0, 0
        ret

On the workloads this work is based on we see between 2-3x performance uplift using this patch.

Follow up plan:
 - Boolean vectorization has several shortcomings.  I've filed PR110223 with
   the bigger ones that cause vectorization to fail with this patch.
 - SLP support.  This is planned for GCC 15 as for the majority of the cases
   build SLP itself fails.  This means I'll need to spend time in making this
   more robust first.
   Additionally it requires:
     * Adding support for vectorizing CFG (gconds)
     * Support for CFG to differ between vector and scalar loops.
   Both of which would be disruptive to the tree and I suspect I'll be
   handling fallouts from this patch for a while.  So I plan to work on the
   surrounding building blocks first for the remainder of the year.

Additionally it also contains reduced cases from issues found running over various codebases.

Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.
Also regtested with:
 -march=armv8.3-a+sve
 -march=armv8.3-a+nosve
 -march=armv9-a
 -mcpu=neoverse-v1
 -mcpu=neoverse-n2

Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.
Bootstrap and Regtest on arm-none-linux-gnueabihf and no issues.

gcc/ChangeLog:

	* tree-if-conv.cc (idx_within_array_bound): Expose.
	* tree-vect-data-refs.cc (vect_analyze_early_break_dependences):
	New.
	(vect_analyze_data_ref_dependences): Use it.
	* tree-vect-loop-manip.cc (vect_iv_increment_position): New.
	(vect_set_loop_controls_directly,
	vect_set_loop_condition_partial_vectors,
	vect_set_loop_condition_partial_vectors_avx512,
	vect_set_loop_condition_normal): Support multiple exits.
	(slpeel_tree_duplicate_loop_to_edge_cfg): Support LCSSA peeling
	for multiple exits.
	(slpeel_can_duplicate_loop_p): Change vectorizer from looking at
	BB count and instead look at loop shape.
	(vect_update_ivs_after_vectorizer): Drop asserts.
	(vect_gen_vector_loop_niters_mult_vf): Support peeled vector
	iterations.
	(vect_do_peeling): Support multiple exits.
	(vect_loop_versioning): Likewise.
	* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialise
	early_breaks.
	(vect_analyze_loop_form): Support loop flows with more than single
	BB loop body.
	(vect_create_loop_vinfo): Support niters analysis for multiple
	exits.
	(vect_analyze_loop): Likewise.
	(vect_get_vect_def): New.
	(vect_create_epilog_for_reduction): Support early exit reductions.
	(vectorizable_live_operation_1): New.
	(find_connected_edge): New.
	(vectorizable_live_operation): Support early exit live operations.
	(move_early_exit_stmts): New.
	(vect_transform_loop): Use it.
	* tree-vect-patterns.cc (vect_init_pattern_stmt): Support gcond.
	(vect_recog_bitfield_ref_pattern): Support gconds and bools.
	(vect_recog_gcond_pattern): New.
	(possible_vector_mask_operation_p): Support gcond masks.
	(vect_determine_mask_precision): Likewise.
	(vect_mark_pattern_stmts): Set gcond def type.
	(can_vectorize_live_stmts): Force early break inductions to be
	live.
	* tree-vect-stmts.cc (vect_stmt_relevant_p): Add relevancy
	analysis for early breaks.
	(vect_mark_stmts_to_be_vectorized): Process gcond usage.
	(perm_mask_for_reverse): Expose.
	(vectorizable_comparison_1): New.
	(vectorizable_early_exit): New.
	(vect_analyze_stmt): Support early break and gcond.
	(vect_transform_stmt): Likewise.
	(vect_is_simple_use): Likewise.
	(vect_get_vector_types_for_stmt): Likewise.
	* tree-vectorizer.cc (pass_vectorize::execute): Update exits for
	value numbering.
	* tree-vectorizer.h (enum vect_def_type): Add vect_condition_def.
	(LOOP_VINFO_EARLY_BREAKS, LOOP_VINFO_EARLY_BRK_STORES,
	LOOP_VINFO_EARLY_BREAKS_VECT_PEELED, LOOP_VINFO_EARLY_BRK_DEST_BB,
	LOOP_VINFO_EARLY_BRK_VUSES): New.
	(is_loop_header_bb_p): Drop assert.
	(class loop): Add early_breaks, early_break_stores,
	early_break_dest_bb, early_break_vuses.
	(vect_iv_increment_position, perm_mask_for_reverse,
	ref_within_array_bound): New.
	(slpeel_tree_duplicate_loop_to_edge_cfg): Update for early breaks.
2023-12-01  Fix ambiguity between vect_get_vec_defs with/without vectype  Richard Biener  1  -4/+4
When querying a single set of vector defs with the overloaded vect_get_vec_defs API, a call that specifies the vector type is ambiguous with the variant without the vector type.  The following fixes this by re-ordering the vector type argument to come before the output def vector argument.  I've changed vectorizable_conversion as that triggered this, so it has coverage showing this works.

The motivation is to reduce the number of (redundant) get_vectype_for_scalar_type calls.

	* tree-vectorizer.h (vect_get_vec_defs): Re-order arguments.
	* tree-vect-stmts.cc (vect_get_vec_defs): Likewise.
	(vectorizable_condition): Update caller.
	(vectorizable_comparison_1): Likewise.
	(vectorizable_conversion): Specify the vector type to be used for
	invariant/external defs.
	* tree-vect-loop.cc (vect_transform_reduction): Update caller.
2023-11-08  TLC to vect_check_store_rhs and vect_slp_child_index_for_operand  Richard Biener  1  -1/+1
This prepares us for the SLP of scatters.  We have to tell vect_slp_child_index_for_operand whether we are dealing with a scatter/gather stmt, so this adds an argument similar to the one we have for vect_get_operand_map.  This also refactors vect_check_store_rhs to get the actual rhs and the associated SLP node instead of leaving that to the caller.

	* tree-vectorizer.h (vect_slp_child_index_for_operand): Add
	gatherscatter_p argument.
	* tree-vect-slp.cc (vect_slp_child_index_for_operand): Likewise.
	Pass it on.
	* tree-vect-stmts.cc (vect_check_store_rhs): Turn the rhs argument
	into an output, also output the SLP node associated with it.
	(vectorizable_simd_clone_call): Adjust.
	(vectorizable_store): Likewise.
	(vectorizable_load): Likewise.
2023-11-06  tree-optimization/112404 - two issues with SLP of .MASK_LOAD  Richard Biener  1  -0/+1
The following fixes an oversight in vect_check_scalar_mask when the mask is external or constant.  When doing BB vectorization we need to provide a group_size, best via an overload accepting the SLP node as argument.  When fixed we then run into the issue that we have not analyzed alignment of the .MASK_LOADs because they were not identified as loads by vect_gather_slp_loads.  Fixed by reworking the detection.

	PR tree-optimization/112404
	* tree-vectorizer.h (get_mask_type_for_scalar_type): Declare
	overload with SLP node argument.
	* tree-vect-stmts.cc (get_mask_type_for_scalar_type): Implement
	it.
	(vect_check_scalar_mask): Use it.
	* tree-vect-slp.cc (vect_gather_slp_loads): Properly identify
	loads also for nodes with children, like .MASK_LOAD.
	* tree-vect-loop.cc (vect_analyze_loop_2): Look at the
	representative for load nodes and check whether it is a grouped
	access before looking for load-lanes support.
	* gfortran.dg/pr112404.f90: New testcase.
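The overload presumably derives the group size from the SLP node via the existing get_vectype_for_scalar_type overload; a sketch:

/* Sketch of the SLP-node overload: compute the vector type using the
   node (and thus its group size), then the corresponding mask type.  */
tree
get_mask_type_for_scalar_type (vec_info *vinfo, tree scalar_type,
			       slp_tree node)
{
  tree vectype = get_vectype_for_scalar_type (vinfo, scalar_type, node);
  if (!vectype)
    return NULL;
  return truth_type_for (vectype);
}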
2023-11-02  ifcvt/vect: Emit COND_OP for conditional scalar reduction.  Robin Dapp  1  -1/+1
As described in PR111401 we currently emit a COND and a PLUS expression for conditional reductions.  This makes it difficult to combine both into a masked reduction statement later.  This patch improves that by directly emitting a COND_ADD/COND_OP during ifcvt and adjusting some vectorizer code to handle it.  It also makes neutral_op_for_reduction return -0 if HONOR_SIGNED_ZEROS is true.

gcc/ChangeLog:

	PR middle-end/111401
	* internal-fn.cc (internal_fn_else_index): New function.
	* internal-fn.h (internal_fn_else_index): Define.
	* tree-if-conv.cc (convert_scalar_cond_reduction): Emit COND_OP
	if supported.
	(predicate_scalar_phi): Add whitespace.
	* tree-vect-loop.cc (fold_left_reduction_fn): Add IFN_COND_OP.
	(neutral_op_for_reduction): Return -0 for PLUS.
	(check_reduction_path): Don't count else operand in COND_OP.
	(vect_is_simple_reduction): Ditto.
	(vect_create_epilog_for_reduction): Fix whitespace.
	(vectorize_fold_left_reduction): Add COND_OP handling.
	(vectorizable_reduction): Don't count else operand in COND_OP.
	(vect_transform_reduction): Add COND_OP handling.
	* tree-vectorizer.h (neutral_op_for_reduction): Add default
	parameter.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/vect-cond-reduc-in-order-2-signed-zero.c: New test.
	* gcc.target/riscv/rvv/autovec/cond/pr111401.c: New test.
	* gcc.target/riscv/rvv/autovec/reduc/reduc_call-2.c: Adjust.
	* gcc.target/riscv/rvv/autovec/reduc/reduc_call-4.c: Ditto.
2023-10-18  middle-end: maintain LCSSA throughout loop peeling  Tamar Christina  1  -1/+1
This final patch updates peeling to maintain LCSSA all the way through.

It's significantly easier to maintain it during peeling while we still know where all new edges connect rather than touching it up later as is currently being done.  This allows us to remove many of the helper functions that touch up the loops at various parts.  The only complication is for loop distribution where we should be able to use the same; however ldist, depending on whether redirect_lc_phi_defs is true or not, will either try to maintain a limited LCSSA form itself or remove all non-virtual PHIs.

The problem here is that if we maintain LCSSA then in some cases the blocks connecting the two loops get PHIs to keep the loop IV up to date.  However there is no loop; the guard condition is rewritten as 0 != 0, so the "loop" always exits.  However due to the PHI nodes the probabilities get completely wrong.  It seems to think that the impossible exit is the likely edge.  This causes incorrect warnings and the presence of the PHIs prevents the blocks from being simplified.

While it may be possible to make ldist work with LCSSA form, doing so seems more work than not.  For that reason the peeling code has an additional parameter used by only ldist to not connect the two loops during peeling.

This preserves the current behaviour from ldist until I can dive into the implementation more.  Hopefully that's ok for now.

gcc/ChangeLog:

	* tree-loop-distribution.cc (copy_loop_before): Request no LCSSA.
	* tree-vect-loop-manip.cc (adjust_phi_and_debug_stmts): Add
	additional asserts.
	(slpeel_tree_duplicate_loop_to_edge_cfg): Keep LCSSA during
	peeling.
	(find_guard_arg): Look value up through explicit edge and original
	defs.
	(vect_do_peeling): Use it.
	(slpeel_update_phi_nodes_for_guard2): Take explicit exit edge.
	(slpeel_update_phi_nodes_for_lcssa,
	slpeel_update_phi_nodes_for_loops): Remove.
	* tree-vect-loop.cc (vect_create_epilog_for_reduction): Initialize
	phi.
	* tree-vectorizer.h (slpeel_tree_duplicate_loop_to_edge_cfg): Add
	optional param to turn off LCSSA mode.
2023-10-18  middle-end: updated niters analysis to handle multiple exits.  Tamar Christina  1  -1/+9
This second part updates niters analysis to be able to analyze any number of exits.  If we have multiple exits we determine the main exit by finding the first counting IV.

The change allows the vectorizer to pass analysis for multiple loops, but we later gracefully reject them.  It does however allow us to test if the exit handling is using the right exit everywhere.

Additionally since we analyze all exits, we now return all conditions for them and determine which condition belongs to the main exit.  The main condition is needed because the vectorizer needs to ignore the main IV condition during vectorization as it will replace it during codegen.

To track versioned loops we extend the contract between ifcvt and the vectorizer to store the exit number in aux so that we can match it up again during peeling.

gcc/ChangeLog:

	* tree-if-conv.cc (tree_if_conversion): Record exits in aux.
	* tree-vect-loop-manip.cc
	(slpeel_tree_duplicate_loop_to_edge_cfg): Use it.
	* tree-vect-loop.cc (vect_get_loop_niters): Determine main exit.
	(vec_init_loop_exit_info): Extend analysis when multiple exits.
	(vect_analyze_loop_form): Record conds and determine main cond.
	(vect_create_loop_vinfo): Extend bookkeeping of conds.
	(vect_analyze_loop): Release conds.
	* tree-vectorizer.h (LOOP_VINFO_LOOP_CONDS,
	LOOP_VINFO_LOOP_IV_COND): New.
	(struct vect_loop_form_info): Add conds, alt_loop_conds.
	(struct loop_vec_info): Add conds, loop_iv_cond.
2023-10-18  middle-end: Refactor vectorizer loop conditionals and separate out IV to new variables  Tamar Christina  1  -4/+22
This is extracted out of the patch series to support early break vectorization in order to simplify the review of that patch series.  The goal of this one is to separate out the refactoring from the new functionality.

This first patch separates out the vectorizer's definition of an exit to their own values inside loop_vinfo.  During vectorization we can have three separate copies for each loop: scalar, vectorized, epilogue.  The scalar loop can also be the versioned loop before peeling.  Because of this we track 3 different exits inside loop_vinfo corresponding to each of these loops.  Additionally each function that uses an exit, when it is not obviously clear which exit is needed, will now take the exit explicitly as an argument.

This is because often times the callers switch the loops being passed around.  While the caller knows which loop it is, the callee does not.

For now the loop exits are simply initialized to the same value as before, determined by single_exit (..).  No change in functionality is expected throughout this patch series.

gcc/ChangeLog:

	* tree-loop-distribution.cc (copy_loop_before): Pass exit
	explicitly.
	(loop_distribution::distribute_loop): Bail out of not single exit.
	* tree-scalar-evolution.cc (get_loop_exit_condition): New.
	* tree-scalar-evolution.h (get_loop_exit_condition): New.
	* tree-vect-data-refs.cc (vect_enhance_data_refs_alignment): Pass
	exit explicitly.
	* tree-vect-loop-manip.cc
	(vect_set_loop_condition_partial_vectors,
	vect_set_loop_condition_partial_vectors_avx512,
	vect_set_loop_condition_normal, vect_set_loop_condition):
	Explicitly take exit.
	(slpeel_tree_duplicate_loop_to_edge_cfg): Explicitly take exit and
	return the corresponding new peeled exit.
	(slpeel_can_duplicate_loop_p): Explicitly take exit.
	(find_loop_location): Handle not knowing an explicit exit.
	(vect_update_ivs_after_vectorizer,
	vect_gen_vector_loop_niters_mult_vf, find_guard_arg,
	slpeel_update_phi_nodes_for_loops,
	slpeel_update_phi_nodes_for_guard2): Use new exits.
	(vect_do_peeling): Update bookkeeping to keep track of exits.
	* tree-vect-loop.cc (vect_get_loop_niters): Explicitly take exit
	to analyze.
	(vec_init_loop_exit_info): New.
	(_loop_vec_info::_loop_vec_info): Initialize vec_loop_iv,
	vec_epilogue_loop_iv, scalar_loop_iv.
	(vect_analyze_loop_form): Initialize exits.
	(vect_create_loop_vinfo): Set main exit.
	(vect_create_epilog_for_reduction, vectorizable_live_operation,
	vect_transform_loop): Use it.
	(scale_profile_for_vect_loop): Explicitly take exit to scale.
	* tree-vectorizer.cc (set_uid_loop_bbs): Initialize loop exit.
	* tree-vectorizer.h (LOOP_VINFO_IV_EXIT,
	LOOP_VINFO_EPILOGUE_IV_EXIT, LOOP_VINFO_SCALAR_IV_EXIT): New.
	(struct loop_vec_info): Add vec_loop_iv, vec_epilogue_loop_iv,
	scalar_loop_iv.
	(vect_set_loop_condition, slpeel_can_duplicate_loop_p,
	slpeel_tree_duplicate_loop_to_edge_cfg): Take explicit exits.
	(vec_init_loop_exit_info): New.
	(struct vect_loop_form_info): Add loop_exit.
2023-10-17  tree-optimization/111846 - put simd-clone-info into SLP tree  Richard Biener  1  -0/+6
The following avoids bogusly re-using the simd-clone-info we currently hang off stmt_info from two different SLP contexts where a different number of lanes should have chosen a different best simdclone.

	PR tree-optimization/111846
	* tree-vectorizer.h (_slp_tree::simd_clone_info): Add.
	(SLP_TREE_SIMD_CLONE_INFO): New.
	* tree-vect-slp.cc (_slp_tree::_slp_tree): Initialize
	SLP_TREE_SIMD_CLONE_INFO.
	(_slp_tree::~_slp_tree): Release it.
	* tree-vect-stmts.cc (vectorizable_simd_clone_call): Use
	SLP_TREE_SIMD_CLONE_INFO or STMT_VINFO_SIMD_CLONE_INFO dependent
	on if we're doing SLP.
	* gcc.dg/vect/pr111846.c: New testcase.
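The SLP-aware selection in vectorizable_simd_clone_call plausibly reduces to picking the container keyed off the SLP node when present (a sketch, not the verbatim change):

/* Sketch: with SLP the chosen simdclone info lives on the SLP node so
   two contexts with different lane counts don't share it.  */
vec<tree> &simd_clone_info
  = (slp_node ? SLP_TREE_SIMD_CLONE_INFO (slp_node)
	      : STMT_VINFO_SIMD_CLONE_INFO (stmt_info));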
2023-08-24  tree-optimization/111115 - SLP of masked stores  Richard Biener  1  -0/+1
The following adds the capability to do SLP on .MASK_STORE; I do not plan to add interleaving support.

	PR tree-optimization/111115

gcc/
	* tree-vectorizer.h (vect_slp_child_index_for_operand): New.
	* tree-vect-data-refs.cc (can_group_stmts_p): Also group
	.MASK_STORE.
	* tree-vect-slp.cc (arg3_arg2_map): New.
	(vect_get_operand_map): Handle IFN_MASK_STORE.
	(vect_slp_child_index_for_operand): New function.
	(vect_build_slp_tree_1): Handle statements with no LHS, masked
	store ifns.
	(vect_remove_slp_scalar_calls): Likewise.
	* tree-vect-stmts.cc (vect_check_store_rhs): Lookup the SLP child
	corresponding to the ifn value index.
	(vectorizable_store): Likewise for the mask index.  Support masked
	stores.
	(vectorizable_load): Lookup the SLP child corresponding to the ifn
	mask index.

gcc/testsuite/
	* lib/target-supports.exp
	(check_effective_target_vect_masked_store): Supported with
	check_avx_available.
	* gcc.dg/vect/slp-mask-store-1.c: New testcase.
2023-08-16  VECT: Apply MASK_LEN_{LOAD_LANES, STORE_LANES} into vectorizer  Juzhe-Zhong  1  -2/+2
Hi, Richard and Richi.

This patch adds MASK_LEN_{LOAD_LANES,STORE_LANES} support into the vectorizer.

Consider this simple case:

void __attribute__ ((noinline, noclone))
foo (int *__restrict a, int *__restrict b, int *__restrict c,
     int *__restrict d, int *__restrict e, int *__restrict f,
     int *__restrict g, int *__restrict h, int *__restrict j, int n)
{
  for (int i = 0; i < n; ++i)
    {
      a[i] = j[i * 8];
      b[i] = j[i * 8 + 1];
      c[i] = j[i * 8 + 2];
      d[i] = j[i * 8 + 3];
      e[i] = j[i * 8 + 4];
      f[i] = j[i * 8 + 5];
      g[i] = j[i * 8 + 6];
      h[i] = j[i * 8 + 7];
    }
}

RVV Gimple IR:

  _79 = .SELECT_VL (ivtmp_81, POLY_INT_CST [4, 4]);
  ivtmp_125 = _79 * 32;
  vect_array.8 = .MASK_LEN_LOAD_LANES (vectp_j.6_124, 32B, { -1, ... }, _79, 0);
  vect__8.9_122 = vect_array.8[0];
  vect__8.10_121 = vect_array.8[1];
  vect__8.11_120 = vect_array.8[2];
  vect__8.12_119 = vect_array.8[3];
  vect__8.13_118 = vect_array.8[4];
  vect__8.14_117 = vect_array.8[5];
  vect__8.15_116 = vect_array.8[6];
  vect__8.16_115 = vect_array.8[7];
  vect_array.8 ={v} {CLOBBER};
  ivtmp_114 = _79 * 4;
  .MASK_LEN_STORE (vectp_a.17_113, 32B, { -1, ... }, _79, 0, vect__8.9_122);
  .MASK_LEN_STORE (vectp_b.19_109, 32B, { -1, ... }, _79, 0, vect__8.10_121);
  .MASK_LEN_STORE (vectp_c.21_105, 32B, { -1, ... }, _79, 0, vect__8.11_120);
  .MASK_LEN_STORE (vectp_d.23_101, 32B, { -1, ... }, _79, 0, vect__8.12_119);
  .MASK_LEN_STORE (vectp_e.25_97, 32B, { -1, ... }, _79, 0, vect__8.13_118);
  .MASK_LEN_STORE (vectp_f.27_93, 32B, { -1, ... }, _79, 0, vect__8.14_117);
  .MASK_LEN_STORE (vectp_g.29_89, 32B, { -1, ... }, _79, 0, vect__8.15_116);
  .MASK_LEN_STORE (vectp_h.31_85, 32B, { -1, ... }, _79, 0, vect__8.16_115);

ASM:

foo:
        lw      t4,8(sp)
        ld      t5,0(sp)
        ble     t4,zero,.L5
.L3:
        vsetvli t1,t4,e8,mf4,ta,ma
        vlseg8e32.v     v8,(t5)
        slli    t3,t1,2
        slli    t6,t1,5
        vse32.v v8,0(a0)
        vse32.v v9,0(a1)
        vse32.v v10,0(a2)
        vse32.v v11,0(a3)
        vse32.v v12,0(a4)
        vse32.v v13,0(a5)
        vse32.v v14,0(a6)
        vse32.v v15,0(a7)
        sub     t4,t4,t1
        add     t5,t5,t6
        add     a0,a0,t3
        add     a1,a1,t3
        add     a2,a2,t3
        add     a3,a3,t3
        add     a4,a4,t3
        add     a5,a5,t3
        add     a6,a6,t3
        add     a7,a7,t3
        bne     t4,zero,.L3
.L5:
        ret

The details of the approach:

Step 1 - Modify the LANES LOAD/STORE support functions (vect_load_lanes_supported/vect_store_lanes_supported):

+/* Return FN if vec_{masked_,mask_len,}load_lanes is available for COUNT
+   vectors of type VECTYPE.  MASKED_P says whether the masked form is
+   needed.  */

-bool
+internal_fn
 vect_load_lanes_supported (tree vectype, unsigned HOST_WIDE_INT count,
 			   bool masked_p)
 {
-  if (masked_p)
-    return vect_lanes_optab_supported_p ("vec_mask_load_lanes",
-					 vec_mask_load_lanes_optab,
-					 vectype, count);
+  if (vect_lanes_optab_supported_p ("vec_mask_len_load_lanes",
+				    vec_mask_len_load_lanes_optab,
+				    vectype, count))
+    return IFN_MASK_LEN_LOAD_LANES;
+  else if (masked_p)
+    {
+      if (vect_lanes_optab_supported_p ("vec_mask_load_lanes",
+					vec_mask_load_lanes_optab,
+					vectype, count))
+	return IFN_MASK_LOAD_LANES;
+    }
   else
-    return vect_lanes_optab_supported_p ("vec_load_lanes",
-					 vec_load_lanes_optab,
-					 vectype, count);
+    {
+      if (vect_lanes_optab_supported_p ("vec_load_lanes",
+					vec_load_lanes_optab,
+					vectype, count))
+	return IFN_LOAD_LANES;
+    }
+  return IFN_LAST;
 }

Instead of returning TRUE or FALSE as to whether the target supports LANES LOAD/STORE, I change it to return the internal_fn of the LANES LOAD/STORE that the target supports; if the target doesn't support any LANES LOAD/STORE optabs, return IFN_LAST.

Step 2 - Compute IFN for LANES LOAD/STORE (only computed once).
  if (!STMT_VINFO_STRIDED_P (first_stmt_info)
      && (can_overrun_p || !would_overrun_p)
      && compare_step_with_zero (vinfo, stmt_info) > 0)
    {
      /* First cope with the degenerate case of a single-element
         vector.  */
      if (known_eq (TYPE_VECTOR_SUBPARTS (vectype), 1U))
        ;
      else
        {
          /* Otherwise try using LOAD/STORE_LANES.  */
          *lanes_ifn
            = vls_type == VLS_LOAD
                ? vect_load_lanes_supported (vectype, group_size, masked_p)
                : vect_store_lanes_supported (vectype, group_size, masked_p);
          if (*lanes_ifn != IFN_LAST)
            {
              *memory_access_type = VMAT_LOAD_STORE_LANES;
              overrun_p = would_overrun_p;
            }
          /* If that fails, try using permuting loads.  */
          else if (vls_type == VLS_LOAD
                     ? vect_grouped_load_supported (vectype, single_element_p,
                                                    group_size)
                     : vect_grouped_store_supported (vectype, group_size))
            {
              *memory_access_type = VMAT_CONTIGUOUS_PERMUTE;
              overrun_p = would_overrun_p;
            }
        }
    }

Step 3 - Build the MASK_LEN_{LOAD_LANES,STORE_LANES} Gimple IR:

+         if (lanes_ifn == IFN_MASK_LEN_STORE_LANES)
+           {
+             if (loop_lens)
+               final_len = vect_get_loop_len (loop_vinfo, gsi, loop_lens,
+                                              ncopies, vectype, j, 1);
+             else
+               final_len = size_int (TYPE_VECTOR_SUBPARTS (vectype));
+             signed char biasval
+               = LOOP_VINFO_PARTIAL_LOAD_STORE_BIAS (loop_vinfo);
+             bias = build_int_cst (intQI_type_node, biasval);
+             if (!final_mask)
+               {
+                 mask_vectype = truth_type_for (vectype);
+                 final_mask = build_minus_one_cst (mask_vectype);
+               }
+           }
+
          gcall *call;
-         if (final_mask)
+         if (final_len && final_mask)
+           {
+             /* Emit:
+                  MASK_LEN_STORE_LANES (DATAREF_PTR, ALIAS_PTR, VEC_MASK,
+                                        LEN, BIAS, VEC_ARRAY).  */
+             unsigned int align = TYPE_ALIGN (TREE_TYPE (vectype));
+             tree alias_ptr = build_int_cst (ref_type, align);
+             call = gimple_build_call_internal (IFN_MASK_LEN_STORE_LANES, 6,
+                                                dataref_ptr, alias_ptr,
+                                                final_mask, final_len, bias,
+                                                vec_array);
+           }
+         else if (final_mask)

The LEN and MASK flow is exactly the same as for the other MASK_LEN_* loads/stores.

gcc/ChangeLog:

	* internal-fn.cc (internal_load_fn_p): Apply
	MASK_LEN_{LOAD_LANES,STORE_LANES} into vectorizer.
	(internal_store_fn_p): Ditto.
	(internal_fn_len_index): Ditto.
	(internal_fn_mask_index): Ditto.
	(internal_fn_stored_value_index): Ditto.
	* tree-vect-data-refs.cc (vect_store_lanes_supported): Ditto.
	(vect_load_lanes_supported): Ditto.
	* tree-vect-loop.cc: Ditto.
	* tree-vect-slp.cc (vect_slp_prefer_store_lanes_p): Ditto.
	* tree-vect-stmts.cc (check_load_store_for_partial_vectors): Ditto.
	(get_group_load_store_type): Ditto.
	(vectorizable_store): Ditto.
	(vectorizable_load): Ditto.
	* tree-vectorizer.h (vect_store_lanes_supported): Ditto.
	(vect_load_lanes_supported): Ditto.
2023-08-15Support constants and externals in BB reduction vectorizationRichard Biener1-4/+5
The following supports vectorizing BB reductions involving a constant or an invariant. * tree-vectorizer.h (_slp_instance::remain_stmts): Change to ... (_slp_instance::remain_defs): ... this. (SLP_INSTANCE_REMAIN_STMTS): Rename to ... (SLP_INSTANCE_REMAIN_DEFS): ... this. (slp_root::remain): New. (slp_root::slp_root): Adjust. * tree-vect-slp.cc (vect_free_slp_instance): Adjust. (vect_build_slp_instance): Get extra remain parameter, adjust former handling of a cut off stmt. (vect_analyze_slp_instance): Adjust. (vect_analyze_slp): Likewise. (_bb_vec_info::~_bb_vec_info): Likewise. (vectorizable_bb_reduc_epilogue): Dump something if we fail. (vect_slp_check_for_constructors): Handle non-internal defs as remain defs of a reduction. (vectorize_slp_instance_root_stmt): Adjust. * gcc.dg/vect/bb-slp-75.c: New testcase.
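A minimal sketch of what now vectorizes (a hypothetical example, not the committed testcase): a straight-line reduction in which some lanes are constants or loop-invariant scalars, which end up as the instance's "remain" defs:

int
sum (int *a, int k)
{
  /* a[0..3] can be reduced with a vector operation; the constant 5
     and the invariant k are handled as remaining scalar defs.  */
  return a[0] + a[1] + a[2] + a[3] + 5 + k;
}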
2023-08-10Remove insert location argument from vectorizable_live_operationRichard Biener1-2/+1
The insert location argument isn't actually used; we compute that ourselves. There's a single spot where the passed argument is used, namely when asking for the loop mask via vect_get_loop_mask, but that looks like an oversight. The following fixes that and adjusts vectorizable_live_operation and can_vectorize_live_stmts to no longer take a stmt iterator argument.

	* tree-vectorizer.h (vectorizable_live_operation): Remove
	gimple_stmt_iterator * argument.
	* tree-vect-loop.cc (vectorizable_live_operation): Likewise.
	Adjust plumbing around vect_get_loop_mask.
	(vect_analyze_loop_operations): Adjust.
	* tree-vect-slp.cc (vect_slp_analyze_node_operations_1): Likewise.
	(vect_bb_slp_mark_live_stmts): Likewise.
	(vect_schedule_slp_node): Likewise.
	* tree-vect-stmts.cc (can_vectorize_live_stmts): Likewise.
	Remove gimple_stmt_iterator * argument.
	(vect_transform_stmt): Adjust.
2023-08-08tree-optimization/49955 - BB reduction with odd number of lanesRichard Biener1-0/+5
The following enhances BB reduction vectorization to support vectorizing only a subset of the lanes, keeping the rest as scalar ops. For now we try to make the number of lanes even by leaving alone the "last" lane. That's because SLP discovery with all lanes will fail too soon to get us any hint on which lane to strip and likewise we don't know what vector modes the target supports so restricting ourselves to power-of-two or other cases isn't easy. This is enough to get at the vectorization opportunity for the testcase in the PR - albeit with the chosen lanes not optimal but at least vectorizable. PR tree-optimization/49955 * tree-vectorizer.h (_slp_instance::remain_stmts): New. (SLP_INSTANCE_REMAIN_STMTS): Likewise. * tree-vect-slp.cc (vect_free_slp_instance): Release SLP_INSTANCE_REMAIN_STMTS. (vect_build_slp_instance): Make the number of lanes of a BB reduction even. (vectorize_slp_instance_root_stmt): Handle unvectorized defs of a BB reduction. * gfortran.dg/vect/pr49955.f: New testcase.
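For illustration, a hedged C analogue of the situation (the committed testcase is Fortran): with five lanes, the first four can form a four-lane vector reduction while the fifth is kept as a scalar "remain" statement:

double
sum5 (const double *a)
{
  /* Lanes 0-3 vectorize; lane 4 stays scalar to make the lane count even.  */
  return a[0] + a[1] + a[2] + a[3] + a[4];
}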
2023-07-24Remove SLP_TREE_VEC_STMTS in favor of SLP_TREE_VEC_DEFSRichard Biener1-3/+4
The following unifies SLP_TREE_VEC_STMTS into SLP_TREE_VEC_DEFS which can handle all cases we need. * tree-vectorizer.h (_slp_tree::push_vec_def): Add. (_slp_tree::vec_stmts): Remove. (SLP_TREE_VEC_STMTS): Remove. * tree-vect-slp.cc (_slp_tree::push_vec_def): Define. (_slp_tree::_slp_tree): Adjust. (_slp_tree::~_slp_tree): Likewise. (vect_get_slp_vect_def): Simplify. (vect_get_slp_defs): Likewise. (vect_transform_slp_perm_load_1): Adjust. (vect_add_slp_permutation): Likewise. (vect_schedule_slp_node): Likewise. (vectorize_slp_instance_root_stmt): Likewise. (vect_schedule_scc): Likewise. * tree-vect-stmts.cc (vectorizable_bswap): Use push_vec_def. (vectorizable_call): Likewise. (vectorizable_call): Likewise. (vect_create_vectorized_demotion_stmts): Likewise. (vectorizable_conversion): Likewise. (vectorizable_assignment): Likewise. (vectorizable_shift): Likewise. (vectorizable_operation): Likewise. (vectorizable_load): Likewise. (vectorizable_condition): Likewise. (vectorizable_comparison): Likewise. * tree-vect-loop.cc (vect_create_epilog_for_reduction): Adjust. (vectorize_fold_left_reduction): Use push_vec_def. (vect_transform_reduction): Likewise. (vect_transform_cycle_phi): Likewise. (vectorizable_lc_phi): Likewise. (vectorizable_phi): Likewise. (vectorizable_recurr): Likewise. (vectorizable_induction): Likewise. (vectorizable_live_operation): Likewise.
2023-07-06tree-optimization/110563 - simplify epilogue VF checksRichard Biener1-2/+1
The following consolidates an assert that now hits for ppc64le with an earlier check we already do, simplifying vect_determine_partial_vectors_and_peeling and getting rid of its now redundant argument. PR tree-optimization/110563 * tree-vectorizer.h (vect_determine_partial_vectors_and_peeling): Remove second argument. * tree-vect-loop.cc (vect_determine_partial_vectors_and_peeling): Remove for_epilogue_p argument. Merge assert ... (vect_analyze_loop_2): ... with check done before determining partial vectors by moving it after. * tree-vect-loop-manip.cc (vect_do_peeling): Adjust.
2023-06-19AVX512 fully masked vectorizationRichard Biener1-3/+32
This implements fully masked vectorization or a masked epilogue for AVX512 style masks, which single themselves out by representing each lane with a single bit and by using integer modes for the mask (both much like GCN). AVX512 is also special in that it doesn't have any instruction to compute the mask from a scalar IV like SVE has with while_ult. Instead the masks are produced by vector compares and the loop control retains the scalar IV (mainly to avoid dependences on mask generation; a suitable mask test instruction is available).

Like RVV, code generation prefers a decrementing IV, though IVOPTs messes things up in some cases, removing that IV to eliminate it with an incrementing one used for address generation.

One of the motivating testcases is from PR108410, which in turn is extracted from x264, where large size vectorization shows issues with small trip loops. Execution time there improves compared to classic AVX512 with AVX2 epilogues for the cases of less than 32 iterations.

size  scalar    128    256    512   512e   512f
   1    9.42  11.32   9.35  11.17  15.13  16.89
   2    5.72   6.53   6.66   6.66   7.62   8.56
   3    4.49   5.10   5.10   5.74   5.08   5.73
   4    4.10   4.33   4.29   5.21   3.79   4.25
   6    3.78   3.85   3.86   4.76   2.54   2.85
   8    3.64   1.89   3.76   4.50   1.92   2.16
  12    3.56   2.21   3.75   4.26   1.26   1.42
  16    3.36   0.83   1.06   4.16   0.95   1.07
  20    3.39   1.42   1.33   4.07   0.75   0.85
  24    3.23   0.66   1.72   4.22   0.62   0.70
  28    3.18   1.09   2.04   4.20   0.54   0.61
  32    3.16   0.47   0.41   0.41   0.47   0.53
  34    3.16   0.67   0.61   0.56   0.44   0.50
  38    3.19   0.95   0.95   0.82   0.40   0.45
  42    3.09   0.58   1.21   1.13   0.36   0.40

'size' specifies the number of actual iterations, 512e is for a masked epilogue and 512f for the fully masked loop. From 4 scalar iterations on, the AVX512 masked epilogue code is clearly the winner; the fully masked variant is clearly worse and its size benefit is also tiny.

This patch does not enable using fully masked loops or masked epilogues by default. More work on cost modeling and vectorization kind selection on x86_64 is necessary for this.

Implementation-wise this introduces LOOP_VINFO_PARTIAL_VECTORS_STYLE, which could be exploited further to unify some of the flags we have right now, but there didn't seem to be many easy things to merge, so I'm leaving this for followups.

Mask requirements as registered by vect_record_loop_mask are kept in their original form and recorded in a hash_set now instead of being processed to a vector of rgroup_controls. Instead that's now left to the final analysis phase, which tries forming the rgroup_controls vector using while_ult and if that fails now tries AVX512 style, which needs a different organization and instead fills a hash_map with the relevant info. vect_get_loop_mask now has two implementations, one for each of the two mask styles we then have.

I have decided against interweaving vect_set_loop_condition_partial_vectors with conditions to do AVX512 style masking and instead opted to "duplicate" this to vect_set_loop_condition_partial_vectors_avx512. Likewise for vect_verify_full_masking vs. vect_verify_full_masking_avx512.

The vect_prepare_for_masked_peels hunk might run into issues with SVE; I didn't check yet, but using LOOP_VINFO_RGROUP_COMPARE_TYPE looked odd.

Bootstrapped and tested on x86_64-unknown-linux-gnu. I've run the testsuite with --param vect-partial-vector-usage=2 with and without -fno-vect-cost-model and filed two bugs, one ICE (PR110221) and one latent wrong-code (PR110237).

	* tree-vectorizer.h (enum vect_partial_vector_style): New.
	(_loop_vec_info::partial_vector_style): Likewise.
	(LOOP_VINFO_PARTIAL_VECTORS_STYLE): Likewise.
	(rgroup_controls::compare_type): Add.
	(vec_loop_masks): Change from a typedef to auto_vec<> to
	a structure.
	* tree-vect-loop-manip.cc (vect_set_loop_condition_partial_vectors):
	Adjust.  Convert niters_skip to compare_type.
	(vect_set_loop_condition_partial_vectors_avx512): New function
	implementing the AVX512 partial vector codegen.
	(vect_set_loop_condition): Dispatch to the correct
	vect_set_loop_condition_partial_vectors_* function based on
	LOOP_VINFO_PARTIAL_VECTORS_STYLE.
	(vect_prepare_for_masked_peels): Compute LOOP_VINFO_MASK_SKIP_NITERS
	in the original niter type.
	* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialize
	partial_vector_style.
	(can_produce_all_loop_masks_p): Adjust.
	(vect_verify_full_masking): Produce the rgroup_controls vector
	here.  Set LOOP_VINFO_PARTIAL_VECTORS_STYLE on success.
	(vect_verify_full_masking_avx512): New function implementing
	verification of AVX512 style masking.
	(vect_verify_loop_lens): Set LOOP_VINFO_PARTIAL_VECTORS_STYLE.
	(vect_analyze_loop_2): Also try AVX512 style masking.
	Adjust condition.
	(vect_estimate_min_profitable_iters): Implement AVX512 style
	mask producing cost.
	(vect_record_loop_mask): Do not build the rgroup_controls vector
	here but record masks in a hash-set.
	(vect_get_loop_mask): Implement AVX512 style mask query,
	complementing the existing while_ult style.
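As a usage sketch (the feature is off by default, as noted above), a masked epilogue or fully masked loop can be requested via --param vect-partial-vector-usage, e.g. "gcc -O3 -march=skylake-avx512 --param vect-partial-vector-usage=2"; the flags here are illustrative, not a committed default:

void
saxpy (float *__restrict y, const float *__restrict x, float a, int n)
{
  /* For a short trip count the masked epilogue (or fully masked loop)
     avoids a scalar tail entirely.  */
  for (int i = 0; i < n; i++)
    y[i] += a * x[i];
}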
2023-06-19Add loop_vinfo argument to vect_get_loop_maskRichard Biener1-1/+2
This adds a loop_vinfo argument for future use, making the next patch smaller. * tree-vectorizer.h (vect_get_loop_mask): Add loop_vec_info argument. * tree-vect-loop.cc (vect_get_loop_mask): Likewise. (vectorize_fold_left_reduction): Adjust. (vect_transform_reduction): Likewise. (vectorizable_live_operation): Likewise. * tree-vect-stmts.cc (vectorizable_call): Likewise. (vectorizable_operation): Likewise. (vectorizable_store): Likewise. (vectorizable_load): Likewise. (vectorizable_condition): Likewise.
2023-06-10VECT: Add SELECT_VL supportJu-Zhe Zhong1-0/+6
This patch addresses comments from Richard && Richi and rebases to trunk.

This patch adds SELECT_VL middle-end support, allowing targets to apply target-dependent optimizations to the length calculation. It is inspired by the RVV ISA and LLVM: https://reviews.llvm.org/D99750

SELECT_VL has the same behavior as LLVM's "get_vector_length" with the following properties:
1. Only applies to a single rgroup.
2. Non-SLP only.
3. Adjusts the loop control IV.
4. Adjusts the data reference IV.
5. Allows processing fewer than vf elements in a non-final iteration.

Code:

# void vvaddint32(size_t n, const int*x, const int*y, int*z)
# { for (size_t i=0; i<n; i++) { z[i]=x[i]+y[i]; } }

Take RVV codegen for example. Before this patch:

vvaddint32:
        ble a0,zero,.L6
        csrr a4,vlenb
        srli a6,a4,2
.L4:
        mv a5,a0
        bleu a0,a6,.L3
        mv a5,a6
.L3:
        vsetvli zero,a5,e32,m1,ta,ma
        vle32.v v2,0(a1)
        vle32.v v1,0(a2)
        vsetvli a7,zero,e32,m1,ta,ma
        sub a0,a0,a5
        vadd.vv v1,v1,v2
        vsetvli zero,a5,e32,m1,ta,ma
        vse32.v v1,0(a3)
        add a2,a2,a4
        add a3,a3,a4
        add a1,a1,a4
        bne a0,zero,.L4
.L6:
        ret

After this patch:

vvaddint32:
        vsetvli t0, a0, e32, ta, ma  # Set vector length based on 32-bit vectors
        vle32.v v0, (a1)             # Get first vector
        sub a0, a0, t0               # Decrement number done
        slli t0, t0, 2               # Multiply number done by 4 bytes
        add a1, a1, t0               # Bump pointer
        vle32.v v1, (a2)             # Get second vector
        add a2, a2, t0               # Bump pointer
        vadd.vv v2, v0, v1           # Sum vectors
        vse32.v v2, (a3)             # Store result
        add a3, a3, t0               # Bump pointer
        bnez a0, vvaddint32          # Loop back
        ret                          # Finished

Co-authored-by: Richard Sandiford <richard.sandiford@arm.com>
Co-authored-by: Richard Biener <rguenther@suse.de>

gcc/ChangeLog:

	* doc/md.texi: Add SELECT_VL support.
	* internal-fn.def (SELECT_VL): Ditto.
	* optabs.def (OPTAB_D): Ditto.
	* tree-vect-loop-manip.cc (vect_set_loop_controls_directly): Ditto.
	* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Ditto.
	* tree-vect-stmts.cc (get_select_vl_data_ref_ptr): Ditto.
	(vectorizable_store): Ditto.
	(vectorizable_load): Ditto.
	* tree-vectorizer.h (LOOP_VINFO_USING_SELECT_VL_P): Ditto.
2023-06-05vect: Refactor to allow internal_fn'sAndre Vieira1-7/+9
Refactor vect-patterns to allow patterns to be internal_fns, starting with the widening_plus/minus patterns.

2023-06-05  Andre Vieira  <andre.simoesdiasvieira@arm.com>
	    Joel Hutton  <joel.hutton@arm.com>

gcc/ChangeLog:

	* tree-vect-patterns.cc: Add include for gimple-iterator.
	(vect_recog_widen_op_pattern): Refactor to use code_helper.
	(vect_gimple_build): New function.
	* tree-vect-stmts.cc (simple_integer_narrowing): Refactor to use
	code_helper.
	(vectorizable_call): Likewise.
	(vect_gen_widened_results_half): Likewise.
	(vect_create_vectorized_demotion_stmts): Likewise.
	(vect_create_vectorized_promotion_stmts): Likewise.
	(vect_create_half_widening_stmts): Likewise.
	(vectorizable_conversion): Likewise.
	(supportable_widening_operation): Likewise.
	(supportable_narrowing_operation): Likewise.
	* tree-vectorizer.h (supportable_widening_operation): Change
	prototype to use code_helper.
	(supportable_narrowing_operation): Likewise.
	(vect_gimple_build): New function prototype.
	* tree.h (code_helper::safe_as_tree_code): New function.
	(code_helper::safe_as_fn_code): New function.
2023-05-31Enhance NARROW FLOAT_EXPR vectorization by truncating integer to lower ↵liuhongt1-0/+1
precision.

Similar to WIDEN FLOAT_EXPR, when a direct optab doesn't exist, try an intermediate integer type whenever the gimple ranger can tell it's safe. I.e. when there's no direct optab for vector long long -> vector float, but the value range of the integer can be represented as int, try vector int -> vector float if available.

gcc/ChangeLog:

	PR tree-optimization/108804
	* tree-vect-patterns.cc (vect_get_range_info): Remove static.
	* tree-vect-stmts.cc (vect_create_vectorized_demotion_stmts):
	Add new parameter narrow_src_p.
	(vectorizable_conversion): Enhance NARROW FLOAT_EXPR
	vectorization by truncating to lower precision.
	* tree-vectorizer.h (vect_get_range_info): New declare.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr108804.c: New test.
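A hedged sketch of the shape this enables (the actual pr108804.c testcase may differ): there is no direct optab for vector long long -> vector float on x86, but when the ranger proves the values fit in int, the conversion can go through vector int:

void
cvt (float *__restrict dst, const long long *__restrict src, int n)
{
  for (int i = 0; i < n; i++)
    /* The mask makes the range provably representable as int, so the
       vectorizer may truncate to vector int before converting.  */
    dst[i] = (float) (src[i] & 0xffff);
}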
2023-05-25VECT: Add decrement IV iteration loop control by variable amount supportJu-Zhe Zhong1-0/+8
This patch supports the decrement IV, following the flow designed by Richard:

(1) In vect_set_loop_condition_partial_vectors, for the first iteration of: call vect_set_loop_controls_directly.
(2) vect_set_loop_controls_directly calculates "step" as in your patch. If rgc has 1 control, this step is the SSA name created for that control. Otherwise the step is a fresh SSA name, as in your patch.
(3) vect_set_loop_controls_directly stores this step somewhere for later use, probably in LOOP_VINFO. Let's use "S" to refer to this stored step.
(4) After the vect_set_loop_controls_directly call above, and outside the "if" statement that now contains vect_set_loop_controls_directly, check whether rgc->controls.length () > 1. If so, use vect_adjust_loop_lens_control to set the controls based on S.

Then the only caller of vect_adjust_loop_lens_control is vect_set_loop_condition_partial_vectors, and the starting step for vect_adjust_loop_lens_control is always S.

This patch has been well tested for single-rgroup and multiple-rgroup (SLP) and passes all testcases in the RISC-V port.

Signed-off-by: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>
Co-Authored-By: Richard Sandiford <richard.sandiford@arm.com>

gcc/ChangeLog:

	* tree-vect-loop-manip.cc (vect_adjust_loop_lens_control): New
	function.
	(vect_set_loop_controls_directly): Add decrement IV support.
	(vect_set_loop_condition_partial_vectors): Ditto.
	* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): New variable.
	* tree-vectorizer.h (LOOP_VINFO_USING_DECREMENTING_IV_P): New macro.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup-3.c: New test.
	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup-4.c: New test.
	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup_run-3.c: New test.
	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup_run-4.c: New test.
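A scalar sketch of the decrementing-IV loop control described above, for a single rgroup; VF here is a stand-in for the (possibly runtime) vector length, and the inner loop stands in for the length-controlled vector operations:

#define VF 4   /* stand-in for the vector length  */

void
copy (int *__restrict a, const int *__restrict b, long n)
{
  long remain = n;
  while (remain > 0)
    {
      long len = remain < VF ? remain : VF;  /* MIN (remain, VF)  */
      for (long i = 0; i < len; i++)         /* length-controlled ops  */
        a[n - remain + i] = b[n - remain + i];
      remain -= len;   /* the remaining element count is the control IV  */
    }
}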
2023-05-22VECT: Fix bug of multiple-rgroup for length is counting elementsJu-Zhe Zhong1-2/+3
This splits out, per comments from Richard, the fix for multiple-rgroup handling when the length counts elements. The patch fixes the handling of multiple rgroups when the length is counting elements.

Before this patch, the multiple-rgroup runs failed:

FAIL: gcc.target/riscv/rvv/autovec/partial/multiple_rgroup_run-1.c execution test
FAIL: gcc.target/riscv/rvv/autovec/partial/multiple_rgroup_run-2.c execution test

After this patch, these tests all pass.

gcc/ChangeLog:

	* tree-vect-loop.cc (vect_get_loop_len): Fix issue for
	multiple-rgroup of length.
	* tree-vect-stmts.cc (vectorizable_store): Ditto.
	(vectorizable_load): Ditto.
	* tree-vectorizer.h (vect_get_loop_len): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup-1.c: New test.
	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup-1.h: New test.
	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup-2.c: New test.
	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup-2.h: New test.
	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup_run-1.c: New test.
	* gcc.target/riscv/rvv/autovec/partial/multiple_rgroup_run-2.c: New test.
2023-02-16don't declare header-defined functions both static and inlinePatrick Palka1-32/+32
Many functions defined in our headers are declared 'static inline' which is a C idiom whose use predates our move to C++ as the implementation language. But in C++ the inline keyword is more than just a compiler hint, and is sufficient to give the function the intended semantics. In fact declaring a function both static and inline is a pessimization since static effectively disables the desired definition merging behavior enabled by inline, and is also a source of (harmless) ODR violations when a static inline function gets called from a non-static inline one (such as tree_operand_check calling tree_operand_length). This patch mechanically fixes the vast majority of occurrences of this anti-pattern throughout the compiler's headers via the command line sed -i 's/^static inline/inline/g' gcc/*.h gcc/*/*.h There's also a manual change to remove the redundant declarations of is_ivar and lookup_category in gcc/objc/objc-act.cc which would otherwise conflict with their modified definitions in objc-act.h (due to the difference in staticness). Besides fixing some ODR violations, this speeds up stage1 cc1plus by about 2% and reduces the size of its text segment by 1.5MB. gcc/ChangeLog: * addresses.h: Mechanically drop 'static' from 'static inline' functions via s/^static inline/inline/g. * asan.h: Likewise. * attribs.h: Likewise. * basic-block.h: Likewise. * bitmap.h: Likewise. * cfghooks.h: Likewise. * cfgloop.h: Likewise. * cgraph.h: Likewise. * cselib.h: Likewise. * data-streamer.h: Likewise. * debug.h: Likewise. * df.h: Likewise. * diagnostic.h: Likewise. * dominance.h: Likewise. * dumpfile.h: Likewise. * emit-rtl.h: Likewise. * except.h: Likewise. * expmed.h: Likewise. * expr.h: Likewise. * fixed-value.h: Likewise. * gengtype.h: Likewise. * gimple-expr.h: Likewise. * gimple-iterator.h: Likewise. * gimple-predict.h: Likewise. * gimple-range-fold.h: Likewise. * gimple-ssa.h: Likewise. * gimple.h: Likewise. * graphite.h: Likewise. * hard-reg-set.h: Likewise. * hash-map.h: Likewise. * hash-set.h: Likewise. * hash-table.h: Likewise. * hwint.h: Likewise. * input.h: Likewise. * insn-addr.h: Likewise. * internal-fn.h: Likewise. * ipa-fnsummary.h: Likewise. * ipa-icf-gimple.h: Likewise. * ipa-inline.h: Likewise. * ipa-modref.h: Likewise. * ipa-prop.h: Likewise. * ira-int.h: Likewise. * ira.h: Likewise. * lra-int.h: Likewise. * lra.h: Likewise. * lto-streamer.h: Likewise. * memmodel.h: Likewise. * omp-general.h: Likewise. * optabs-query.h: Likewise. * optabs.h: Likewise. * plugin.h: Likewise. * pretty-print.h: Likewise. * range.h: Likewise. * read-md.h: Likewise. * recog.h: Likewise. * regs.h: Likewise. * rtl-iter.h: Likewise. * rtl.h: Likewise. * sbitmap.h: Likewise. * sched-int.h: Likewise. * sel-sched-ir.h: Likewise. * sese.h: Likewise. * sparseset.h: Likewise. * ssa-iterators.h: Likewise. * system.h: Likewise. * target-globals.h: Likewise. * target.h: Likewise. * timevar.h: Likewise. * tree-chrec.h: Likewise. * tree-data-ref.h: Likewise. * tree-iterator.h: Likewise. * tree-outof-ssa.h: Likewise. * tree-phinodes.h: Likewise. * tree-scalar-evolution.h: Likewise. * tree-sra.h: Likewise. * tree-ssa-alias.h: Likewise. * tree-ssa-live.h: Likewise. * tree-ssa-loop-manip.h: Likewise. * tree-ssa-loop.h: Likewise. * tree-ssa-operands.h: Likewise. * tree-ssa-propagate.h: Likewise. * tree-ssa-sccvn.h: Likewise. * tree-ssa.h: Likewise. * tree-ssanames.h: Likewise. * tree-streamer.h: Likewise. * tree-switch-conversion.h: Likewise. * tree-vectorizer.h: Likewise. * tree.h: Likewise. * wide-int.h: Likewise. 
gcc/c-family/ChangeLog: * c-common.h: Mechanically drop static from static inline functions via s/^static inline/inline/g. gcc/c/ChangeLog: * c-parser.h: Mechanically drop static from static inline functions via s/^static inline/inline/g. gcc/cp/ChangeLog: * cp-tree.h: Mechanically drop static from static inline functions via s/^static inline/inline/g. gcc/fortran/ChangeLog: * gfortran.h: Mechanically drop static from static inline functions via s/^static inline/inline/g. gcc/jit/ChangeLog: * jit-dejagnu.h: Mechanically drop static from static inline functions via s/^static inline/inline/g. * jit-recording.h: Likewise. gcc/objc/ChangeLog: * objc-act.h: Mechanically drop static from static inline functions via s/^static inline/inline/g. * objc-map.h: Likewise. * objc-act.cc: Remove the redundant redeclarations of is_ivar and lookup_category.
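A minimal C++ illustration of why the change is safe and desirable (a sketch, not code from the patch). In a header included by several translation units:

// Before: 'static' gives internal linkage; each TU gets its own copy,
// so the function's address differs per TU and definitions aren't merged.
static inline int twice_old (int x) { return 2 * x; }

// After: plain 'inline' keeps external linkage with the semantics C++
// intends; all TUs share one merged definition and one address.
inline int twice (int x) { return 2 * x; }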
2023-02-02Don't peel nonlinear iv(mult or shift) for epilog when vf is not constant.liuhongt1-3/+0
Normally when the vf is not constant this is prevented by vectorizable_nonlinear_induction, but for this case it fails to enter

  if (STMT_VINFO_RELEVANT_P (stmt_info))
    {
      need_to_vectorize = true;
      if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_induction_def
          && ! PURE_SLP_STMT (stmt_info))
        ok = vectorizable_induction (loop_vinfo, stmt_info, NULL, NULL,
                                     &cost_vec);

since the iv is never used outside of the loop and will be dce'd later, so the vectorizer doesn't bother checking whether it's vectorizable. That's true, but it hits a gcc_assert in vect_can_peel_nonlinear_iv_p when the vf is not constant. One solution is to ignore nonlinear iv peeling when !STMT_VINFO_RELEVANT_P (stmt_info), just like the code above; the other is to return false earlier in vect_can_peel_nonlinear_iv_p when the vf is not constant. The patch chooses the second, in case there are other callers of vect_can_advance_ivs_p, which calls vect_can_peel_nonlinear_iv_p. Also remove vect_can_peel_nonlinear_iv_p from vectorizable_nonlinear_induction.

gcc/ChangeLog:

	PR tree-optimization/108601
	* tree-vectorizer.h (vect_can_peel_nonlinear_iv_p): Removed.
	* tree-vect-loop.cc (vectorizable_nonlinear_induction):
	Remove vect_can_peel_nonlinear_iv_p.
	(vect_can_peel_nonlinear_iv_p): Don't peel nonlinear iv
	(mult or shift) for epilog when vf is not constant and moved
	the definition to ...
	* tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): ... Here.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr107055.c: New test.
2023-01-02Update copyright years.Jakub Jelinek1-1/+1
2022-10-17Vectorization of first-order recurrencesRichard Biener1-0/+4
The following picks up the prototype by Ju-Zhe Zhong for vectorizing first order recurrences. That solves two TSVC missed optimization PRs.

There's a new scalar cycle def kind, vect_first_order_recurrence, and its handling of the backedge value vectorization is complicated by the fact that the vectorized value isn't the PHI but instead a (series of) permute(s) shifting in the recurring value from the previous iteration. I've implemented this by creating both the single vectorized PHI and the series of permutes when vectorizing the scalar PHI but leaving the backedge values in both unassigned. The backedge values are (for the testcases) computed by a load, which is also the place after which the permutes are inserted. That placement also restricts the cases we can handle (without resorting to code motion).

I added both costing and SLP handling, though SLP handling is restricted to the case where a single vectorized PHI is enough.

Missing is epilogue handling - while prologue peeling would be handled transparently by adjusting iv_phi_p, the epilogue case doesn't work with just inserting a scalar LC PHI since that a) keeps the scalar load live and b) that load is the wrong one, it has to be the last, much like when we'd vectorize the LC PHI as a live operation. Unfortunately LIVE compute/analysis happens too early, before we decide on peeling. When using fully masked loop vectorization the vect-recurr-6.c case works as expected though.

I have tested this on x86_64 for now, but since epilogue handling is missing there are probably no practical cases. My prototype WHILE_ULT AVX512 patch can handle vect-recurr-6.c just fine, but I didn't feel like running SPEC within SDE, nor is the WHILE_ULT patch complete enough.

	PR tree-optimization/99409
	PR tree-optimization/99394
	* tree-vectorizer.h (vect_def_type::vect_first_order_recurrence): Add.
	(stmt_vec_info_type::recurr_info_type): Likewise.
	(vectorizable_recurr): New function.
	* tree-vect-loop.cc (vect_phi_first_order_recurrence_p): New
	function.
	(vect_analyze_scalar_cycles_1): Look for first order recurrences.
	(vect_analyze_loop_operations): Handle them.
	(vect_transform_loop): Likewise.
	(vectorizable_recurr): New function.
	(maybe_set_vectorized_backedge_value): Handle the backedge value
	setting in the first order recurrence PHI and the permutes.
	* tree-vect-stmts.cc (vect_analyze_stmt): Handle first order
	recurrences.
	(vect_transform_stmt): Likewise.
	(vect_is_simple_use): Likewise.
	(vect_is_simple_use): Likewise.
	* tree-vect-slp.cc (vect_get_and_check_slp_defs): Likewise.
	(vect_build_slp_tree_2): Likewise.
	(vect_schedule_scc): Handle the backedge value setting in the
	first order recurrence PHI and the permutes.

	* gcc.dg/vect/vect-recurr-1.c: New testcase.
	* gcc.dg/vect/vect-recurr-2.c: Likewise.
	* gcc.dg/vect/vect-recurr-3.c: Likewise.
	* gcc.dg/vect/vect-recurr-4.c: Likewise.
	* gcc.dg/vect/vect-recurr-5.c: Likewise.
	* gcc.dg/vect/vect-recurr-6.c: Likewise.
	* gcc.dg/vect/tsvc/vect-tsvc-s252.c: Un-XFAIL.
	* gcc.dg/vect/tsvc/vect-tsvc-s254.c: Likewise.
	* gcc.dg/vect/tsvc/vect-tsvc-s291.c: Likewise.

Co-authored-by: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>
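A minimal first-order recurrence in the spirit of the TSVC cases this un-XFAILs (a hypothetical sketch, not one of the committed testcases): the scalar t carries the previous iteration's load, which vectorized becomes a permute shifting the last lane of the previous vector into the current one:

void
recurr (float *__restrict b, const float *__restrict a, int n)
{
  float t = 0.0f;
  for (int i = 0; i < n; i++)
    {
      b[i] = a[i] + t;  /* uses the value loaded one iteration ago  */
      t = a[i];         /* backedge value: a load, as described above  */
    }
}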
2022-09-29Check nonlinear iv in vect_can_advance_ivs_p.liuhongt1-0/+3
vectorizable_nonlinear_induction doesn't always guard vect_peel_nonlinear_iv_init when it's called by vect_update_ivs_after_vectorizer. It's supposed to be guarded by vect_can_advance_ivs_p. gcc/ChangeLog: PR tree-optimization/107055 * tree-vect-loop-manip.cc (vect_can_advance_ivs_p): Check for nonlinear induction variables. * tree-vect-loop.cc (vect_can_peel_nonlinear_iv_p): New functions. (vectorizable_nonlinear_induction): Put part codes into vect_can_peel_nonlinear_iv_p. * tree-vectorizer.h (vect_can_peel_nonlinear_iv_p): Declare. gcc/testsuite/ChangeLog: * gcc.target/i386/pr107055.c: New test.
2022-09-07Extend vectorizer to handle nonlinear induction for neg, mul/lshift/rshift ↵liuhongt1-0/+15
with a constant.

For neg, the patch creates a vec_init as [ a, -a, a, -a, ... ] and no vec_step is needed to update the vectorized iv since the vf is always a multiple of 2 (negative * negative is positive).

For shift, the patch creates a vec_init as [ a, a >> c, a >> 2*c, ... ] and a vec_step as [ c * nunits, c * nunits, c * nunits, ... ]; the vectorized iv is updated as vec_def = vec_init >>/<< vec_step.

For mul, the patch creates a vec_init as [ a, a * c, a * pow(c, 2), ... ] and a vec_step as [ pow(c, nunits), pow(c, nunits), ... ]; the iv is updated as vec_def = vec_init * vec_step.

The patch handles nonlinear ivs for:
1. Integer types only; floating point is not handled.
2. No slp_node.
3. iv_loop should be the same as the vector loop, not a nested loop.
4. No UD is created; for mul, unsigned mult is used to avoid UD, and for shift, the shift count should be less than the type precision.

gcc/ChangeLog:

	PR tree-optimization/103144
	* tree-vect-loop.cc (vect_is_nonlinear_iv_evolution): New function.
	(vect_analyze_scalar_cycles_1): Detect nonlinear iv by upper
	function.
	(vect_create_nonlinear_iv_init): New function.
	(vect_peel_nonlinear_iv_init): Ditto.
	(vect_create_nonlinear_iv_step): Ditto.
	(vect_create_nonlinear_iv_vec_step): Ditto.
	(vect_update_nonlinear_iv): Ditto.
	(vectorizable_nonlinear_induction): Ditto.
	(vectorizable_induction): Call vectorizable_nonlinear_induction
	when induction_type is not vect_step_op_add.
	* tree-vect-loop-manip.cc (vect_update_ivs_after_vectorizer):
	Update nonlinear iv for epilogue loop.
	* tree-vectorizer.h (enum vect_induction_op_type): New enum.
	(STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE): New Macro.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr103144-mul-1.c: New test.
	* gcc.target/i386/pr103144-mul-2.c: New test.
	* gcc.target/i386/pr103144-neg-1.c: New test.
	* gcc.target/i386/pr103144-neg-2.c: New test.
	* gcc.target/i386/pr103144-shift-1.c: New test.
	* gcc.target/i386/pr103144-shift-2.c: New test.
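Hedged sketches of the three nonlinear induction kinds now handled (the committed pr103144-*.c testcases may differ in detail):

void
neg_iv (int *__restrict a, int init, int n)
{
  int x = init;
  for (int i = 0; i < n; i++)
    {
      a[i] = x;
      x = -x;    /* vec_init = { x, -x, ... }, no vec_step needed  */
    }
}

void
mul_iv (unsigned *__restrict a, unsigned init, int n)
{
  unsigned x = init;
  for (int i = 0; i < n; i++)
    {
      a[i] = x;
      x *= 3;    /* vec_def = vec_init * vec_step; unsigned avoids UD  */
    }
}

void
shift_iv (unsigned *__restrict a, unsigned init, int n)
{
  unsigned x = init;
  for (int i = 0; i < n; i++)
    {
      a[i] = x;
      x >>= 1;   /* vec_def = vec_init >> vec_step  */
    }
}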
2022-08-30Extend SLP permutation optimisationsRichard Sandiford1-0/+2
Currently SLP tries to force permute operations "down" the graph from loads in the hope of reducing the total number of permutations needed or (in the best case) removing the need for the permutations entirely. This patch tries to extend it as follows:

- Allow loads to take a different permutation from the one they started with, rather than choosing between "original permutation" and "no permutation".
- Allow changes in both directions, if the target supports the reverse permutation.
- Treat the placement of permutations as a two-way dataflow problem: after propagating information from leaves to roots (as now), propagate information back up the graph.
- Take execution frequency into account when optimising for speed, so that (for example) permutations inside loops have a higher cost than permutations outside loops.
- Try to reduce the total number of permutations when optimising for size, even if that increases the number of permutations on a given execution path.

See the big block comment above vect_optimize_slp_pass for a detailed description.

The original motivation for doing this was to add a framework that would allow other layout differences in future. The two main ones are:

- Make it easier to represent predicated operations, including predicated operations with gaps. E.g.:

    a[0] += 1;
    a[1] += 1;
    a[3] += 1;

  could be a single load/add/store for SVE. We could handle this by representing a layout such as { 0, 1, _, 2 } or { 0, 1, _, 3 } (depending on what's being counted). We might need to move elements between lanes at various points, like with permutes. (This would first mean adding support for stores with gaps.)

- Make it easier to switch between an even/odd and unpermuted layout when switching between wide and narrow elements. E.g. if a widening operation produces an even vector and an odd vector, we should try to keep operations on the wide elements in that order rather than force them to be permuted back "in order".

To give some examples of what the patch does:

int f1(int *__restrict a, int *__restrict b,
       int *__restrict c, int *__restrict d)
{
  a[0] = (b[1] << c[3]) - d[1];
  a[1] = (b[0] << c[2]) - d[0];
  a[2] = (b[3] << c[1]) - d[3];
  a[3] = (b[2] << c[0]) - d[2];
}

continues to produce the same code as before when optimising for speed: b, c and d are permuted at load time. But when optimising for size we instead permute c into the same order as b+d and then permute the result of the arithmetic into the same order as a:

        ldr q1, [x2]
        ldr q0, [x1]
        ext v1.16b, v1.16b, v1.16b, #8   // <------
        sshl v0.4s, v0.4s, v1.4s
        ldr q1, [x3]
        sub v0.4s, v0.4s, v1.4s
        rev64 v0.4s, v0.4s               // <------
        str q0, [x0]
        ret

The following function:

int f2(int *__restrict a, int *__restrict b,
       int *__restrict c, int *__restrict d)
{
  a[0] = (b[3] << c[3]) - d[3];
  a[1] = (b[2] << c[2]) - d[2];
  a[2] = (b[1] << c[1]) - d[1];
  a[3] = (b[0] << c[0]) - d[0];
}

continues to push the reverse down to just before the store, like the previous code did.

In:

int f3(int *__restrict a, int *__restrict b,
       int *__restrict c, int *__restrict d)
{
  for (int i = 0; i < 100; ++i)
    {
      a[0] = (a[0] + c[3]);
      a[1] = (a[1] + c[2]);
      a[2] = (a[2] + c[1]);
      a[3] = (a[3] + c[0]);
      c += 4;
    }
}

the loads of a are hoisted and the stores of a are sunk, so that only the load from c happens in the loop.
When optimising for speed, we prefer to have the loop operate on the reversed layout, changing on entry and exit from the loop:

        mov x3, x0
        adrp x0, .LC0
        add x1, x2, 1600
        ldr q2, [x0, #:lo12:.LC0]
        ldr q0, [x3]
        mov v1.16b, v0.16b
        tbl v0.16b, {v0.16b - v1.16b}, v2.16b   // <--------
        .p2align 3,,7
.L6:
        ldr q1, [x2], 16
        add v0.4s, v0.4s, v1.4s
        cmp x2, x1
        bne .L6
        mov v1.16b, v0.16b
        adrp x0, .LC0
        ldr q2, [x0, #:lo12:.LC0]
        tbl v0.16b, {v0.16b - v1.16b}, v2.16b   // <--------
        str q0, [x3]
        ret

Similarly, for the very artificial testcase:

int f4(int *__restrict a, int *__restrict b,
       int *__restrict c, int *__restrict d)
{
  int a0 = a[0];
  int a1 = a[1];
  int a2 = a[2];
  int a3 = a[3];
  for (int i = 0; i < 100; ++i)
    {
      a0 ^= c[0];
      a1 ^= c[1];
      a2 ^= c[2];
      a3 ^= c[3];
      c += 4;
      for (int j = 0; j < 100; ++j)
        {
          a0 += d[1];
          a1 += d[0];
          a2 += d[3];
          a3 += d[2];
          d += 4;
        }
      b[0] = a0;
      b[1] = a1;
      b[2] = a2;
      b[3] = a3;
      b += 4;
    }
  a[0] = a0;
  a[1] = a1;
  a[2] = a2;
  a[3] = a3;
}

the a vector in the inner loop maintains the order { 1, 0, 3, 2 }, even though it's part of an SCC that includes the outer loop. In other words, this is a motivating case for not assigning permutes at SCC granularity. The code we get is:

        ldr q0, [x0]
        mov x4, x1
        mov x5, x0
        add x1, x3, 1600
        add x3, x4, 1600
        .p2align 3,,7
.L11:
        ldr q1, [x2], 16
        sub x0, x1, #1600
        eor v0.16b, v1.16b, v0.16b
        rev64 v0.4s, v0.4s   // <---
        .p2align 3,,7
.L10:
        ldr q1, [x0], 16
        add v0.4s, v0.4s, v1.4s
        cmp x0, x1
        bne .L10
        rev64 v0.4s, v0.4s   // <---
        add x1, x0, 1600
        str q0, [x4], 16
        cmp x3, x4
        bne .L11
        str q0, [x5]
        ret

bb-slp-layout-17.c is a collection of compile tests for problems I hit with earlier versions of the patch. The same problems might show up elsewhere, but it seemed worth having the test anyway.

In slp-11b.c we previously pushed the permutation of the in[i*4] group down from the load to just before the store. That didn't reduce the number or frequency of the permutations (or increase them either). But separating the permute from the load meant that we could no longer use load/store lanes. Whether load/store lanes are a good idea here is another question. If there were two sets of loads, and if we could use a single permutation instead of one per load, then avoiding load/store lanes should be a good thing even under the current abstract cost model. But I think under the current model we should try to avoid splitting up potential load/store lanes groups if there is no specific benefit to the split. Preferring load/store lanes is still a source of missed optimisations that we should fix one day...

gcc/
	* params.opt (-param=vect-max-layout-candidates=): New parameter.
	* doc/invoke.texi (vect-max-layout-candidates): Document it.
	* tree-vectorizer.h (auto_lane_permutation_t): New typedef.
	(auto_load_permutation_t): Likewise.
	* tree-vect-slp.cc (vect_slp_node_weight): New function.
	(slpg_layout_cost): New class.
	(slpg_vertex): Replace perm_in and perm_out with partition,
	out_degree, weight and out_weight.
	(slpg_partition_info, slpg_partition_layout_costs): New classes.
	(vect_optimize_slp_pass): Likewise, cannibalizing some part of
	the previous vect_optimize_slp.
	(vect_optimize_slp): Use it.

gcc/testsuite/
	* lib/target-supports.exp (check_effective_target_vect_var_shift):
	Return true for aarch64.
	* gcc.dg/vect/bb-slp-layout-1.c: New test.
	* gcc.dg/vect/bb-slp-layout-2.c: New test.
	* gcc.dg/vect/bb-slp-layout-3.c: New test.
	* gcc.dg/vect/bb-slp-layout-4.c: New test.
	* gcc.dg/vect/bb-slp-layout-5.c: New test.
	* gcc.dg/vect/bb-slp-layout-6.c: New test.
	* gcc.dg/vect/bb-slp-layout-7.c: New test.
	* gcc.dg/vect/bb-slp-layout-8.c: New test.
	* gcc.dg/vect/bb-slp-layout-9.c: New test.
	* gcc.dg/vect/bb-slp-layout-10.c: New test.
	* gcc.dg/vect/bb-slp-layout-11.c: New test.
	* gcc.dg/vect/bb-slp-layout-13.c: New test.
	* gcc.dg/vect/bb-slp-layout-14.c: New test.
	* gcc.dg/vect/bb-slp-layout-15.c: New test.
	* gcc.dg/vect/bb-slp-layout-16.c: New test.
	* gcc.dg/vect/bb-slp-layout-17.c: New test.
	* gcc.dg/vect/slp-11b.c: XFAIL SLP test for load-lanes targets.
2022-07-04Revert update-ssa assert in vectorizerRichard Biener1-0/+4
The following reverts the just added assert that virtual SSA does not need updating. It instead goes for a select whitelist of transforms known to be prone to difficulties with virtual SSA update. * tree-vect-loop-manip.cc (vect_do_peeling): Revert assert and update virtual SSA form again. Assert we do so for a known set of transforms only. * tree-vectorizer.h (vec_info::any_known_not_updated_vssa): New. * tree-vect-stmts.cc (vectorizable_load): When vectorizing using load-lanes allow virtual SSA update.
2022-02-22tree-optimization/104582 - make SLP node available in vector cost hookRichard Biener1-11/+17
This adjusts the vectorizer costing API to allow passing down the SLP node the vector stmt is created from. 2022-02-18 Richard Biener <rguenther@suse.de> PR tree-optimization/104582 * tree-vectorizer.h (stmt_info_for_cost::node): New field. (vector_costs::add_stmt_cost): Add SLP node parameter. (dump_stmt_cost): Likewise. (add_stmt_cost): Likewise, new overload and adjust. (add_stmt_costs): Adjust. (record_stmt_cost): New overload. * tree-vectorizer.cc (dump_stmt_cost): Dump the SLP node. (vector_costs::add_stmt_cost): Adjust. * tree-vect-loop.cc (vect_estimate_min_profitable_iters): Adjust. * tree-vect-slp.cc (vect_prologue_cost_for_slp): Record the SLP node for costing. (vectorizable_slp_permutation): Likewise. * tree-vect-stmts.cc (record_stmt_cost): Adjust and add new overloads. * config/i386/i386.cc (ix86_vector_costs::add_stmt_cost): Adjust. * config/aarch64/aarch64.cc (aarch64_vector_costs::add_stmt_cost): Adjust. * config/rs6000/rs6000.cc (rs6000_vector_costs::add_stmt_cost): Adjust. (rs6000_cost_data::adjust_vect_cost_per_loop): Likewise.
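For backends, the practical consequence is sketched below; the parameter list is an assumption based on the ChangeLog above, not a verbatim copy of the committed interface:

/* Hypothetical target override: the SLP node (possibly NULL for
   non-SLP statements) now arrives alongside the stmt_info, so a
   backend can cost SLP-created vector stmts differently.  */
class my_vector_costs : public vector_costs
{
  unsigned add_stmt_cost (int count, vect_cost_for_stmt kind,
                          stmt_vec_info stmt_info, slp_tree node,
                          tree vectype, int misalign,
                          vect_cost_model_location where) override
  {
    if (node)
      {
        /* ... SLP-aware costing adjustments ...  */
      }
    return vector_costs::add_stmt_cost (count, kind, stmt_info, node,
                                        vectype, misalign, where);
  }
};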