path: root/gcc/tree-vectorizer.h
2020-12-13  middle-end: Support complex Addition  [Tamar Christina]  (1 file, -1/+83)

This patch adds support for Complex Addition with rotation of the second argument around the Argand plane.  Supported rotations are 90 and 270:

  c = a + (b * I)          (rotation by 90)
  c = a + (b * I * I * I)  (rotation by 270)

gcc/ChangeLog:

* tree-vect-slp-patterns.c: New file.
* Makefile.in: Add it.
* doc/passes.texi: Document it.
* internal-fn.def (COMPLEX_ADD_ROT90, COMPLEX_ADD_ROT270): New.
* optabs.def (cadd90_optab, cadd270_optab): New.
* doc/md.texi: Document them.
* tree-vect-loop.c (vect_analyze_loop_2): Add dissolve code.
* tree-vect-slp.c (vect_free_slp_instance, vect_create_new_slp_node): Export.
(vect_match_slp_patterns_2, vect_match_slp_patterns): New.
(vect_analyze_slp): Use it.
* tree-vectorizer.h (vect_free_slp_tree): Export.
(enum _complex_operation): Forward declare.
(class vect_pattern): New.

gcc/testsuite/ChangeLog:

* lib/target-supports.exp (check_effective_target_arm_v8_3a_complex_neon_ok_nocache): Fix it.
(check_effective_target_vect_complex_add_byte,
check_effective_target_vect_complex_add_int,
check_effective_target_vect_complex_add_short,
check_effective_target_vect_complex_add_long,
check_effective_target_vect_complex_add_half,
check_effective_target_vect_complex_add_float,
check_effective_target_vect_complex_add_double): New.
* gcc.dg/vect/complex/bb-slp-complex-add-pattern-byte.c: New test.
* gcc.dg/vect/complex/bb-slp-complex-add-pattern-int.c: New test.
* gcc.dg/vect/complex/bb-slp-complex-add-pattern-long.c: New test.
* gcc.dg/vect/complex/bb-slp-complex-add-pattern-short.c: New test.
* gcc.dg/vect/complex/bb-slp-complex-add-pattern-unsigned-byte.c: New test.
* gcc.dg/vect/complex/bb-slp-complex-add-pattern-unsigned-int.c: New test.
* gcc.dg/vect/complex/bb-slp-complex-add-pattern-unsigned-long.c: New test.
* gcc.dg/vect/complex/bb-slp-complex-add-pattern-unsigned-short.c: New test.
* gcc.dg/vect/complex/complex-add-pattern-template.c: New test.
* gcc.dg/vect/complex/complex-add-template.c: New test.
* gcc.dg/vect/complex/complex-operations-run.c: New test.
* gcc.dg/vect/complex/complex-operations.c: New test.
* gcc.dg/vect/complex/complex.exp: New test.
* gcc.dg/vect/complex/fast-math-bb-slp-complex-add-double.c: New test.
* gcc.dg/vect/complex/fast-math-bb-slp-complex-add-float.c: New test.
* gcc.dg/vect/complex/fast-math-bb-slp-complex-add-half-float.c: New test.
* gcc.dg/vect/complex/fast-math-bb-slp-complex-add-pattern-double.c: New test.
* gcc.dg/vect/complex/fast-math-bb-slp-complex-add-pattern-float.c: New test.
* gcc.dg/vect/complex/fast-math-bb-slp-complex-add-pattern-half-float.c: New test.
* gcc.dg/vect/complex/fast-math-complex-add-double.c: New test.
* gcc.dg/vect/complex/fast-math-complex-add-float.c: New test.
* gcc.dg/vect/complex/fast-math-complex-add-half-float.c: New test.
* gcc.dg/vect/complex/fast-math-complex-add-pattern-double.c: New test.
* gcc.dg/vect/complex/fast-math-complex-add-pattern-float.c: New test.
* gcc.dg/vect/complex/fast-math-complex-add-pattern-half-float.c: New test.
* gcc.dg/vect/complex/vect-complex-add-pattern-byte.c: New test.
* gcc.dg/vect/complex/vect-complex-add-pattern-int.c: New test.
* gcc.dg/vect/complex/vect-complex-add-pattern-long.c: New test.
* gcc.dg/vect/complex/vect-complex-add-pattern-short.c: New test.
* gcc.dg/vect/complex/vect-complex-add-pattern-unsigned-byte.c: New test.
* gcc.dg/vect/complex/vect-complex-add-pattern-unsigned-int.c: New test.
* gcc.dg/vect/complex/vect-complex-add-pattern-unsigned-long.c: New test.
* gcc.dg/vect/complex/vect-complex-add-pattern-unsigned-short.c: New test.
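A minimal sketch (mine, not taken from the patch) of the scalar form the new pattern matcher looks for.  On the lowered real/imaginary representation, rotation by 90 means c.real = a.real - b.imag and c.imag = a.imag + b.real, which a target providing the cadd90 optab (e.g. FCADD on Armv8.3-A) can do in one instruction:

  #include <complex.h>

  /* c[i] = a[i] + b[i] * I: complex addition with the second operand
     rotated by 90 degrees around the Argand plane.  Replacing "* I"
     with "* I * I * I" gives the 270-degree form.  */
  void
  complex_add_rot90 (_Complex float *restrict c,
                     const _Complex float *restrict a,
                     const _Complex float *restrict b, int n)
  {
    for (int i = 0; i < n; i++)
      c[i] = a[i] + b[i] * I;
  }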
2020-12-13  middle-end: Refactor and expose some vectorizer helper functions.  [Tamar Christina]  (1 file, -3/+10)

This is a small refactoring which exposes some helper functions in the vectorizer so they can be used in other places.

gcc/ChangeLog:

* tree-vect-patterns.c (vect_mark_pattern_stmts): Remove static inline.
* tree-vect-slp.c (vect_create_new_slp_node): Remove static and only set stmts if valid.
* tree-vectorizer.c (vec_info::add_pattern_stmt): New.
(vec_info::set_vinfo_for_stmt): Optionally enforce read-only.
* tree-vectorizer.h (struct _slp_tree): Use new types.
(lane_permutation_t, load_permutation_t): New.
(vect_create_new_slp_node, vect_mark_pattern_stmts): New.
2020-12-07  tree-optimization/98113 - vectorize a sequence of BIT_INSERT_EXPRs  [Richard Biener]  (1 file, -0/+12)

This adds the capability to handle a sequence of vector BIT_INSERT_EXPRs, vectorizing them similarly to how we vectorize vector constructors.

2020-12-03  Richard Biener  <rguenther@suse.de>

PR tree-optimization/98113
* tree-vectorizer.h (struct slp_root): New.
(_bb_vec_info::roots): New member.
* tree-vect-slp.c (vect_analyze_slp): Also walk BB info roots.
(_bb_vec_info::_bb_vec_info): Adjust.
(_bb_vec_info::~_bb_vec_info): Likewise.
(vld_cmp): New.
(vect_slp_is_lane_insert): Likewise.
(vect_slp_check_for_constructors): Match a series of BIT_INSERT_EXPRs as vector constructor.
(vect_slp_analyze_bb_1): Continue if BB info roots is not empty.
(vect_slp_analyze_bb_1): Mark the whole BIT_INSERT_EXPR root sequence as pure_slp.
* gcc.dg/vect/bb-slp-70.c: New testcase.
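An illustrative example (assumed, in the spirit of the new bb-slp-70.c testcase rather than copied from it): with GCC's vector extension, lane assignments like these become BIT_INSERT_EXPRs in GIMPLE, and the whole series can now be matched and vectorized like a vector constructor:

  /* Lane-by-lane writes into a vector variable; on SSA form each
     y[i] = ... is a BIT_INSERT_EXPR.  */
  typedef double v4df __attribute__ ((vector_size (32)));

  v4df
  build (double a, double b, double c, double d)
  {
    v4df y;
    y[0] = a;
    y[1] = b;
    y[2] = c;
    y[3] = d;
    return y;
  }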
2020-12-02  tree-optimization/97630 - fix SLP cycle memory leak  [Richard Biener]  (1 file, -3/+6)

This fixes SLP cycles leaking memory by maintaining a doubly-linked list of allocated SLP nodes we can zap when we free the alloc pool.

2020-12-02  Richard Biener  <rguenther@suse.de>

PR tree-optimization/97630
* tree-vectorizer.h (_slp_tree::next_node, _slp_tree::prev_node): New.
(vect_slp_init): Declare.
(vect_slp_fini): Likewise.
* tree-vectorizer.c (vectorize_loops): Call vect_slp_init/fini.
(pass_slp_vectorize::execute): Likewise.
* tree-vect-slp.c (vect_slp_init): New.
(vect_slp_fini): Likewise.
(slp_first_node): New global.
(_slp_tree::_slp_tree): Link node into the SLP tree list.
(_slp_tree::~_slp_tree): Delink node from the SLP tree list.
2020-11-16  Delay SLP instance loads gathering  [Richard Biener]  (1 file, -0/+1)

This delays filling SLP_INSTANCE_LOADS.

2020-11-16  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (vect_gather_slp_loads): Declare.
* tree-vect-loop.c (vect_analyze_loop_2): Call vect_gather_slp_loads.
* tree-vect-slp.c (vect_build_slp_instance): Do not gather SLP loads here.
(vect_gather_slp_loads): Remove wrapper, new function.
(vect_slp_analyze_bb_1): Call it.
2020-11-05  middle-end: Store and use the SLP instance kind when aborting load/store lanes  [Tamar Christina]  (1 file, -0/+13)

This patch stores the SLP instance kind in the SLP instance so that we can use it later when detecting load/store lanes support.  This also changes the load/store lane support check to only check if the SLP kind is a store.  This means that in order for load/store lanes to work, all instances must be of kind store.

gcc/ChangeLog:

* tree-vect-loop.c (vect_analyze_loop_2): Check kind.
* tree-vect-slp.c (vect_build_slp_instance): New.
(enum slp_instance_kind): Move to ...
* tree-vectorizer.h (enum slp_instance_kind): ... here.
(SLP_INSTANCE_KIND): New.
2020-11-04  add costing to SLP vectorized PHIs  [Richard Biener]  (1 file, -1/+2)

I forgot to cost vectorized PHIs.  Scalar PHIs are just costed as scalar_stmt, so the following costs vector PHIs as vector_stmt.

2020-11-04  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (vectorizable_phi): Adjust prototype.
* tree-vect-stmts.c (vect_transform_stmt): Adjust.
(vect_analyze_stmt): Pass cost_vec to vectorizable_phi.
* tree-vect-loop.c (vectorizable_phi): Do costing.
2020-10-29  vect: Fix load costs for SLP permutes  [Richard Sandiford]  (1 file, -1/+2)

For the following test case (compiled with load/store lanes disabled locally):

  void
  f (uint32_t *restrict x, uint8_t *restrict y, int n)
  {
    for (int i = 0; i < n; ++i)
      {
        x[i * 2] = x[i * 2] + y[i * 2];
        x[i * 2 + 1] = x[i * 2 + 1] + y[i * 2];
      }
  }

we have a redundant no-op permute on the x[] load node:

  node 0x4472350 (max_nunits=8, refcnt=2)
  stmt 0 _5 = *_4;
  stmt 1 _13 = *_12;
  load permutation { 0 1 }

Then, when costing it, we pick a cost of 1, even though we need 4 copies of the x[] load to match a single y[] load:

  ==> examining statement: _5 = *_4;
  Vectorizing an unaligned access.
  vect_model_load_cost: unaligned supported by hardware.
  vect_model_load_cost: inside_cost = 1, prologue_cost = 0 .

The problem is that the code only considers the permutation for the first scalar iteration, rather than for all VF iterations.  This patch tries to fix that by making vect_transform_slp_perm_load calculate the value instead.

gcc/
* tree-vectorizer.h (vect_transform_slp_perm_load): Take an optional extra parameter.
* tree-vect-slp.c (vect_transform_slp_perm_load): Calculate the number of loads as well as the number of permutes, taking the counting loop from...
* tree-vect-stmts.c (vect_model_load_cost): ...here.  Use the value computed by vect_transform_slp_perm_load for ncopies.
2020-10-27  SLP vectorize across PHI nodes  [Richard Biener]  (1 file, -0/+2)

This makes SLP discovery detect backedges by seeding the bst_map with the node to be analyzed so it can be picked up from recursive calls.  This removes the need to discover backedges in a separate walk.

This enables SLP build to handle PHI nodes in full, continuing the SLP build to non-backedges.  For loop vectorization this enables outer loop vectorization of nested SLP cycles, and for BB vectorization this enables vectorization of PHIs at CFG merges.

It also turns code generation into a SCC discovery walk to handle irreducible regions and nodes only reachable via backedges, where we now also fill in vectorized backedge defs.

This requires sanitizing the SLP tree for SLP reduction chains even more, manually filling the backedge SLP def.

This also exposes the fact that CFG copying (and edge splitting, until I fixed that) ends up with a different edge order in the copy, which doesn't play well with the desired 1:1 mapping of SLP PHI node children and edges for epilogue vectorization.  I've tried to fix up CFG copying here, but this really looks like a dead (or expensive) end there, so I've done the fixup in slpeel_tree_duplicate_loop_to_edge_cfg instead for the cases we can run into.

There are still NULLs in the SLP_TREE_CHILDREN vectors and I'm not sure it's possible to eliminate them all this stage1, so the patch has quite some checks for this case all over the place.

Bootstrapped and tested on x86_64-unknown-linux-gnu.  SPEC CPU 2017 and SPEC CPU 2006 successfully built and tested.

2020-10-27  Richard Biener  <rguenther@suse.de>

* gimple.h (gimple_expr_type): For PHIs return the type of the result.
* tree-vect-loop-manip.c (slpeel_tree_duplicate_loop_to_edge_cfg): Make sure edge order into copied loop headers line up with the originals.
* tree-vect-loop.c (vect_transform_cycle_phi): Handle nested loops with SLP.
(vectorizable_phi): New function.
(vectorizable_live_operation): For BB vectorization compute insert location here.
* tree-vect-slp.c (vect_free_slp_tree): Deal with NULL SLP_TREE_CHILDREN entries.
(vect_create_new_slp_node): Add overloads with pre-existing node argument.
(vect_print_slp_graph): Likewise.
(vect_mark_slp_stmts): Likewise.
(vect_mark_slp_stmts_relevant): Likewise.
(vect_gather_slp_loads): Likewise.
(vect_optimize_slp): Likewise.
(vect_slp_analyze_node_operations): Likewise.
(vect_bb_slp_scalar_cost): Likewise.
(vect_remove_slp_scalar_calls): Likewise.
(vect_get_and_check_slp_defs): Handle PHIs.
(vect_build_slp_tree_1): Handle PHIs.
(vect_build_slp_tree_2): Continue SLP build, following PHI arguments.  Fix memory leak.
(vect_build_slp_tree): Put stub node into the hash-map so we can discover cycles directly.
(vect_build_slp_instance): Set the backedge SLP def for reduction chains.
(vect_analyze_slp_backedges): Remove.
(vect_analyze_slp): Do not call it.
(vect_slp_convert_to_external): Release SLP_TREE_LOAD_PERMUTATION.
(vect_slp_analyze_node_operations): Handle stray failed backedge defs by failing.
(vect_slp_build_vertices): Adjust leaf condition.
(vect_bb_slp_mark_live_stmts): Handle PHIs, use visited hash-set to handle cycles.
(vect_slp_analyze_operations): Adjust.
(vect_bb_partition_graph_r): Likewise.
(vect_slp_function): Adjust split condition to allow CFG merges.
(vect_schedule_slp_instance): Rename to ...
(vect_schedule_slp_node): ... this.  Move DFS walk to ...
(vect_schedule_scc): ... this new function.
(vect_schedule_slp): Call it.  Remove ad-hoc vectorized backedge fill code.
* tree-vect-stmts.c (vect_analyze_stmt): Call vectorizable_phi.
(vect_transform_stmt): Likewise.
(vect_is_simple_use): Handle vect_backedge_def.
* tree-vectorizer.c (vec_info::new_stmt_vec_info): Only set loop header PHIs to vect_unknown_def_type for loop vectorization.
* tree-vectorizer.h (enum vect_def_type): Add vect_backedge_def.
(enum stmt_vec_info_type): Add phi_info_type.
(vectorizable_phi): Declare.
* gcc.dg/vect/bb-slp-54.c: New test.
* gcc.dg/vect/bb-slp-55.c: Likewise.
* gcc.dg/vect/bb-slp-56.c: Likewise.
* gcc.dg/vect/bb-slp-57.c: Likewise.
* gcc.dg/vect/bb-slp-58.c: Likewise.
* gcc.dg/vect/bb-slp-59.c: Likewise.
* gcc.dg/vect/bb-slp-60.c: Likewise.
* gcc.dg/vect/bb-slp-61.c: Likewise.
* gcc.dg/vect/bb-slp-62.c: Likewise.
* gcc.dg/vect/bb-slp-63.c: Likewise.
* gcc.dg/vect/bb-slp-64.c: Likewise.
* gcc.dg/vect/bb-slp-65.c: Likewise.
* gcc.dg/vect/bb-slp-66.c: Likewise.
* gcc.dg/vect/vect-outer-slp-1.c: Likewise.
* gfortran.dg/vect/O3-bb-slp-1.f: Likewise.
* gfortran.dg/vect/O3-bb-slp-2.f: Likewise.
* g++.dg/vect/simd-11.cc: Likewise.
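A hypothetical illustration (not one of the new testcases) of the BB vectorization case this enables: t0..t3 are PHI nodes at the CFG merge, and SLP build can now continue through them to the vectorized stores:

  double a[4], b[4], c[4];

  void
  f (int x)
  {
    double t0, t1, t2, t3;
    if (x)
      { t0 = a[0]; t1 = a[1]; t2 = a[2]; t3 = a[3]; }
    else
      { t0 = b[0]; t1 = b[1]; t2 = b[2]; t3 = b[3]; }
    /* t0..t3 are PHIs at the merge block; the store group can be SLP
       vectorized with a vector PHI selecting between the two sides.  */
    c[0] = t0; c[1] = t1; c[2] = t2; c[3] = t3;
  }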
2020-10-27  Move SLP nodes to an alloc-pool  [Richard Biener]  (1 file, -0/+9)

This introduces a global alloc-pool for SLP nodes to reduce overhead on SLP allocation churn, which will get worse, and to eventually release SLP cycles, which will retain a refcount of one and thus are never freed at the moment.

2020-10-26  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (slp_tree_pool): Declare.
(_slp_tree::operator new): Likewise.
(_slp_tree::operator delete): Likewise.
* tree-vectorizer.c (vectorize_loops): Allocate and free the slp_tree_pool.
(pass_slp_vectorize::execute): Likewise.
* tree-vect-slp.c (slp_tree_pool): Define.
(_slp_tree::operator new): Likewise.
(_slp_tree::operator delete): Likewise.
2020-10-13  Remove STMT_VINFO_SAME_ALIGN_REFS  [Richard Biener]  (1 file, -5/+0)

This makes the only consumer of STMT_VINFO_SAME_ALIGN_REFS, the loop peeling for alignment code, use locally computed data and then removes STMT_VINFO_SAME_ALIGN_REFS and its computation.

It also adjusts the auto_vec<> move CTOR/assignment so you can write

  auto_vec<..> foo = bar.copy ();

and have foo own the generated copy.

2020-10-13  Richard Biener  <rguenther@suse.de>

PR tree-optimization/97382
* tree-vectorizer.h (_stmt_vec_info::same_align_refs): Remove.
(STMT_VINFO_SAME_ALIGN_REFS): Likewise.
* tree-vectorizer.c (vec_info::new_stmt_vec_info): Do not allocate STMT_VINFO_SAME_ALIGN_REFS.
(vec_info::free_stmt_vec_info): Do not release STMT_VINFO_SAME_ALIGN_REFS.
* tree-vect-data-refs.c (vect_analyze_data_ref_dependences): Do not compute self and read-read dependences.
(vect_dr_aligned_if_related_peeled_dr_is): New helper.
(vect_dr_aligned_if_peeled_dr_is): Likewise.
(vect_update_misalignment_for_peel): Use it instead of iterating over STMT_VINFO_SAME_ALIGN_REFS.
(dr_align_group_sort_cmp): New function.
(vect_enhance_data_refs_alignment): Count the number of same aligned refs here and elide uses of STMT_VINFO_SAME_ALIGN_REFS.
(vect_find_same_alignment_drs): Remove.
(vect_analyze_data_refs_alignment): Do not call it.
* vec.h (auto_vec<T, 0>::auto_vec): Adjust CTOR to take a vec<>&&, assert it isn't using auto storage.
(auto_vec& operator=): Apply a similar change.
* gcc.dg/vect/no-vfa-vect-dv-2.c: Remove same align dump scanning.
* gcc.dg/vect/vect-103.c: Likewise.
* gcc.dg/vect/vect-91.c: Likewise.
* gfortran.dg/vect/vect-4.f90: Likewise.
2020-10-12  optimize permutes in SLP, remove vect_attempt_slp_rearrange_stmts  [Richard Biener]  (1 file, -0/+2)

This introduces a permute optimization phase for SLP which is intended to cover the existing permute eliding for SLP reductions plus handle commonizing of the easy cases.

It currently uses graphds to compute a postorder on the reverse SLP graph, and it handles all cases vect_attempt_slp_rearrange_stmts did (hopefully - I've adjusted most testcases that triggered it a few days ago).  It restricts itself to moving around bijective permutations to simplify things for now, mainly around constant nodes.

As a prerequisite it makes the SLP graph cyclic (ugh).  It looks like it would pay off to compute a PRE/POST order visit array once and elide all the recursive SLP graph walks and their visited hash-set, at least for the time where we do not change the SLP graph during such a walk.  I do not like using graphds too much, but at least I don't have to re-implement yet another RPO walk, so maybe it isn't too bad.

It now computes permute placement during iteration and thus should get cycles more obviously correct.

Richard.

2020-10-06  Richard Biener  <rguenther@suse.de>

* tree-vect-data-refs.c (vect_slp_analyze_instance_dependence): Use SLP_TREE_REPRESENTATIVE.
* tree-vectorizer.h (_slp_tree::vertex): New member used for graphds interfacing.
* tree-vect-slp.c (vect_build_slp_tree_2): Allocate space for PHI SLP children.
(vect_analyze_slp_backedges): New function filling in SLP node children for PHIs that correspond to backedge values.
(vect_analyze_slp): Call vect_analyze_slp_backedges for the graph.
(vect_slp_analyze_node_operations): Deal with a cyclic graph.
(vect_schedule_slp_instance): Likewise.
(vect_schedule_slp): Likewise.
(slp_copy_subtree): Remove.
(vect_slp_rearrange_stmts): Likewise.
(vect_attempt_slp_rearrange_stmts): Likewise.
(vect_slp_build_vertices): New functions.
(vect_slp_permute): Likewise.
(vect_slp_perms_eq): Likewise.
(vect_optimize_slp): Remove special code to elide permutations with SLP reductions.  Implement generic permute optimization.
* gcc.dg/vect/bb-slp-50.c: New testcase.
* gcc.dg/vect/bb-slp-51.c: Likewise.
2020-10-08  SLP vectorize multiple BBs at once  [Richard Biener]  (1 file, -86/+7)

This work from Martin Liska was motivated by gcc.dg/vect/bb-slp-22.c which shows how poorly we currently BB vectorize code like

  a0 = in[0] + 23;
  a1 = in[1] + 142;
  a2 = in[2] + 2;
  a3 = in[3] + 31;

  if (x > y)
    {
      b[0] = a0;
      b[1] = a1;
      b[2] = a2;
      b[3] = a3;
    }
  else
    {
      out[0] = a0 * (x + 1);
      out[1] = a1 * (y + 1);
      out[2] = a2 * (x + 1);
      out[3] = a3 * (y + 1);
    }

namely by vectorizing the stores but not the common load (and add) they are fed with.

Thus with the following patch we change the BB vectorizer from operating on a single basic-block at a time to considering somewhat larger regions (but not the whole function yet, because of issues with vector size iteration).  I took the opportunity to remove the fancy region iterations again now that we operate on BB granularity and in the end need to visit PHI nodes as well.

2020-10-08  Martin Liska  <mliska@suse.cz>
            Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (_bb_vec_info::const_iterator): Remove.
(_bb_vec_info::const_reverse_iterator): Likewise.
(_bb_vec_info::region_stmts): Likewise.
(_bb_vec_info::reverse_region_stmts): Likewise.
(_bb_vec_info::_bb_vec_info): Adjust.
(_bb_vec_info::bb): Remove.
(_bb_vec_info::region_begin): Remove.
(_bb_vec_info::region_end): Remove.
(_bb_vec_info::bbs): New vector of BBs.
(vect_slp_function): Declare.
* tree-vect-patterns.c (vect_determine_precisions): Use regular stmt iteration.
(vect_pattern_recog): Likewise.
* tree-vect-slp.c: Include cfganal.h, tree-eh.h and tree-cfg.h.
(vect_build_slp_tree_1): Properly refuse to vectorize volatile and throwing stmts.
(vect_build_slp_tree_2): Pass group-size down to get_vectype_for_scalar_type.
(_bb_vec_info::_bb_vec_info): Use regular stmt iteration, adjust for changed region specification.
(_bb_vec_info::~_bb_vec_info): Likewise.
(vect_slp_check_for_constructors): Likewise.
(vect_slp_region): Likewise.
(vect_slp_bbs): New worker operating on a vector of BBs.
(vect_slp_bb): Wrap it.
(vect_slp_function): New function splitting the function into multi-BB regions.
(vect_create_constant_vectors): Handle the case of inserting after a throwing def.
(vect_schedule_slp_instance): Adjust.
* tree-vectorizer.c (vec_info::remove_stmt): Simplify again.
(vec_info::insert_seq_on_entry): Adjust.
(pass_slp_vectorize::execute): Also init PHIs.  Call vect_slp_function.
* gcc.dg/vect/bb-slp-22.c: Adjust.
* gfortran.dg/pr68627.f: Likewise.
2020-09-30  middle-end: Refactor refcnt to use SLP_TREE_REF_COUNT for consistency  [Tamar Christina]  (1 file, -0/+1)

This is a small refactoring which introduces SLP_TREE_REF_COUNT and replaces the uses of refcnt with it, for consistency with the other properties.  A similar patch was pre-approved last year, but since there are more uses now I am sending it for review anyway.

gcc/ChangeLog:

* tree-vectorizer.h (SLP_TREE_REF_COUNT): New.
* tree-vect-slp.c (_slp_tree::_slp_tree, _slp_tree::~_slp_tree, vect_free_slp_tree, vect_build_slp_tree, vect_print_slp_tree, slp_copy_subtree, vect_attempt_slp_rearrange_stmts): Use it.
2020-09-29  move permute optimization to optimize-slp  [Richard Biener]  (1 file, -1/+0)

This moves optimizing permutes of SLP reductions to vect_optimize_slp, eliding the global slp_loads array.

2020-09-29  Richard Biener  <rguenther@suse.de>

* tree-vect-slp.c (vect_analyze_slp): Move SLP reduction re-arrangement and SLP graph load gathering...
(vect_optimize_slp): ... here.
* tree-vectorizer.h (vec_info::slp_loads): Remove.
2020-09-23  vect: Fix epilogue loop handling of partial vectors  [Richard Sandiford]  (1 file, -1/+2)

This patch fixes the fallout that Kewen reported on Power after the recent change to avoid unnecessary use of partial vectors.

As Kewen said, the problem is that vect_analyze_loop_2 doesn't know how many epilogue iterations there will be, and so it cannot make a final decision about whether the number of iterations forces an epilogue loop to use partial vectors.  This is similar to the current situation for peeling: we don't know during initial analysis whether an epilogue loop will itself require peeling.  Instead we decide that during vect_do_peeling, where the final number of epilogue loop iterations is known.

The patch takes a similar approach for the decision about whether to use partial vectors.  As the comments in the patch say, the idea is that vect_analyze_loop_2 should make peeling and partial-vector decisions based on the assumption that the loop_vinfo will be used as the main loop, while vect_do_peeling should make them in the knowledge that the loop_vinfo will be used as an epilogue loop.  This allows the same analysis to be used for both cases, which we rely on for implementing VECT_COMPARE_COSTS; see the big comment in vect_analyze_loop for details.

I hope the patch makes the (mostly preexisting) structure a bit more obvious.  It isn't what anyone would design from scratch, but that's the nature of working with a mature vector framework.

Arranging things this way means that vect_verify_full_masking and vect_verify_loop_lens now become part of the “can” rather than “will” test for partial vectors.  Also, while splitting out the logic that handles epilogues with constant iterations, I added a check to make sure that we don't try to use partial vectors to vectorise a single-scalar loop.  This required some changes to the Power tests.

gcc/
* tree-vectorizer.h (determine_peel_for_niter): Delete in favor of...
(vect_determine_partial_vectors_and_peeling): ...this new function.
* tree-vect-loop-manip.c (vect_update_epilogue_niters): New function.  Reject using vector epilogue loops for single iterations.  Install the constant number of epilogue loop iterations in the associated loop_vinfo.  Rely on vect_determine_partial_vectors_and_peeling to do the main part of the test.
(vect_do_peeling): Use vect_update_epilogue_niters to handle epilogue loops with a known number of iterations.  Skip recomputing the number of iterations later in that case.  Otherwise, use vect_determine_partial_vectors_and_peeling to decide whether the epilogue loop needs to use partial vectors or peeling.
* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Set the default can_use_partial_vectors_p to false if partial-vector-usage=0.
(determine_peel_for_niter): Remove in favor of...
(vect_determine_partial_vectors_and_peeling): ...this new function, split out from...
(vect_analyze_loop_2): ...here.  Reflect the vect_verify_full_masking and vect_verify_loop_lens results in CAN_USE_PARTIAL_VECTORS_P rather than USING_PARTIAL_VECTORS_P.

gcc/testsuite/
* gcc.target/powerpc/p9-vec-length-epil-1.c: Do not expect the single-iteration epilogues of the 64-bit loops to be vectorized.
* gcc.target/powerpc/p9-vec-length-epil-7.c: Likewise.
* gcc.target/powerpc/p9-vec-length-epil-8.c: Likewise.
2020-09-16  remove STMT_VINFO_NUM_SLP_USES  [Richard Biener]  (1 file, -5/+2)

This removes STMT_VINFO_NUM_SLP_USES by pushing the setting of the shared stmt_vec_info vector type to where we actually need it, which is alignment analysis and vectorizable_* analysis (where we could eventually elide it for non-load/store operations).  In particular "uses" in the cache and in disqualified SLP subgraphs should no longer provide conflicting vector types this way.

2020-09-16  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (_stmt_vec_info::num_slp_uses): Remove.
(STMT_VINFO_NUM_SLP_USES): Likewise.
(vect_free_slp_instance): Adjust.
(vect_update_shared_vectype): Declare.
* tree-vectorizer.c (vec_info::~vec_info): Adjust.
* tree-vect-loop.c (vect_analyze_loop_2): Likewise.
(vectorizable_live_operation): Use vector type from SLP_TREE_REPRESENTATIVE.
(vect_transform_loop): Adjust.
* tree-vect-data-refs.c (vect_slp_analyze_node_alignment): Set the shared vector type.
* tree-vect-slp.c (vect_free_slp_tree): Remove final_p parameter, remove STMT_VINFO_NUM_SLP_USES updating.
(vect_free_slp_instance): Adjust.
(vect_create_new_slp_node): Remove STMT_VINFO_NUM_SLP_USES updating.
(vect_update_shared_vectype): Always compare with the present vector type, update if NULL.
(vect_build_slp_tree_1): Do not update the shared vector type here.
(vect_build_slp_tree_2): Adjust.
(slp_copy_subtree): Likewise.
(vect_attempt_slp_rearrange_stmts): Likewise.
(vect_analyze_slp_instance): Likewise.
(vect_analyze_slp): Likewise.
(vect_slp_analyze_node_operations_1): Update the shared vector type.
(vect_slp_analyze_operations): Adjust.
(vect_slp_analyze_bb_1): Likewise.
2020-09-11  improve BB vectorization dump locations  [Richard Biener]  (1 file, -1/+3)

This tries to improve BB vectorization dumps by providing more precise locations.  Currently the vect_location is simply the very last stmt in a basic-block that has a location.  So for

  double a[4], b[4];
  int x[4], y[4];
  void foo()
  {
    a[0] = b[0]; // line 5
    a[1] = b[1];
    a[2] = b[2];
    a[3] = b[3];
    x[0] = y[0]; // line 9
    x[1] = y[1];
    x[2] = y[2];
    x[3] = y[3];
  } // line 13

we show the user with -O3 -fopt-info-vec

  t.c:13:1: optimized: basic block part vectorized using 16 byte vectors

while with the patch we point to both independently vectorized opportunities:

  t.c:5:8: optimized: basic block part vectorized using 16 byte vectors
  t.c:9:8: optimized: basic block part vectorized using 16 byte vectors

There's the possibility that the location regresses in case the root stmt in the SLP instance has no location.  For a SLP subgraph with multiple entries the location also chooses one entry at random; not sure in which case we want to dump both.

Still, as the plan is to extend the basic-block vectorization scope from a single basic-block to multiple ones, this is a first step to preserve something sensible.  Implementation-wise this makes both costing and code-generation happen on the subgraphs as analyzed.

2020-09-11  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (_slp_instance::location): New method.
(vect_schedule_slp): Adjust prototype.
* tree-vectorizer.c (vec_info::remove_stmt): Adjust the BB region begin if we removed the stmt it points to.
* tree-vect-loop.c (vect_transform_loop): Adjust.
* tree-vect-slp.c (_slp_instance::location): Implement.
(vect_analyze_slp_instance): For BB vectorization set vect_location to that of the instance.
(vect_slp_analyze_operations): Likewise.
(vect_bb_vectorization_profitable_p): Remove wrapper.
(vect_slp_analyze_bb_1): Remove cost check here.
(vect_slp_region): Cost check and code generate subgraphs separately, report optimized locations and missed optimizations due to profitability for each of them.
(vect_schedule_slp): Get the vector of SLP graph entries to vectorize as argument.
2020-09-10  tree-optimization/96043 - BB vectorization costing improvement  [Richard Biener]  (1 file, -1/+7)

This makes the BB vectorizer cost independent SLP subgraphs separately.  While on pristine trunk and for x86_64 I failed to distill a testcase where the vectorizer would think _any_ basic-block vectorization opportunity is not profitable, I do have pending work that would make the cost savings of a profitable opportunity get another, independently not profitable, opportunity vectorized.

2020-09-08  Richard Biener  <rguenther@suse.de>

PR tree-optimization/96043
* tree-vectorizer.h (_slp_instance::cost_vec): New.
(_slp_instance::subgraph_entries): Likewise.
(BB_VINFO_TARGET_COST_DATA): Remove.
* tree-vect-slp.c (vect_free_slp_instance): Free cost_vec and subgraph_entries.
(vect_analyze_slp_instance): Initialize them.
(vect_slp_analyze_operations): Defer passing costs to the target, instead record them in the SLP graph entry.
(get_ultimate_leader): New helper for graph partitioning.
(vect_bb_partition_graph_r): Likewise.
(vect_bb_partition_graph): New function to partition the SLP graph into independently costable parts.
(vect_bb_vectorization_profitable_p): Adjust to work on a subgraph.
(vect_bb_vectorization_profitable_p): New wrapper, discarding non-profitable vectorization of subgraphs.
(vect_slp_analyze_bb_1): Call vect_bb_partition_graph before costing.
* gcc.dg/vect/costmodel/x86_64/costmodel-pr69297.c: Adjust.
2020-09-07  code generate live lanes in basic-block vectorization  [Richard Biener]  (1 file, -1/+1)

The following adds the capability to code-generate live lanes in basic-block vectorization using lane extracts from vector stmts rather than keeping the original scalar code around for those.  This eventually makes previously not profitable vectorizations profitable (the live scalar code was appropriately costed, as are the lane extracts now); without considering the cost model this patch doesn't add or remove any basic-block vectorization capabilities.

The patch re/ab-uses STMT_VINFO_LIVE_P in basic-block vectorization mode to tell whether a live lane is vectorized or whether it is provided by means of keeping the scalar code live.

The patch is a first step towards vectorizing sequences of stmts that do not end up in stores or vector constructors though.

Bootstrapped and tested on x86_64-unknown-linux-gnu.

2020-09-04  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (vectorizable_live_operation): Adjust.
* tree-vect-loop.c (vectorizable_live_operation): Vectorize live lanes out of basic-block vectorization nodes.
* tree-vect-slp.c (vect_bb_slp_mark_live_stmts): New function.
(vect_slp_analyze_operations): Analyze live lanes and their vectorization possibility after the whole SLP graph is final.
(vect_bb_slp_scalar_cost): Adjust for vectorized live lanes.
* tree-vect-stmts.c (can_vectorize_live_stmts): Adjust.
(vect_transform_stmt): Call can_vectorize_live_stmts also for basic-block vectorization.
* gcc.dg/vect/bb-slp-46.c: New testcase.
* gcc.dg/vect/bb-slp-47.c: Likewise.
* gcc.dg/vect/bb-slp-32.c: Adjust.
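A sketch of a "live lane" (my own example, loosely in the style of the new bb-slp-46.c/47.c testcases rather than copied from them): a0 feeds the vectorized store group but is also used by scalar code, so it can now be code generated as a lane extract from the vector result instead of keeping the scalar computation alive:

  int x[4], y[4];

  int
  f (void)
  {
    int a0 = x[0] + 1;
    int a1 = x[1] + 2;
    int a2 = x[2] + 3;
    int a3 = x[3] + 4;
    y[0] = a0;
    y[1] = a1;
    y[2] = a2;
    y[3] = a3;
    /* a0 is "live" outside the SLP graph; it can now be extracted
       from lane 0 of the vectorized add instead of being recomputed
       by scalar code.  */
    return a0;
  }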
2020-09-04  tree-optimization/96920 - another ICE when vectorizing nested cycles  [Richard Biener]  (1 file, -5/+0)

This refines the previous fix for PR96698 by re-doing how and where we arrange for setting vectorized cycle PHI backedge values.

2020-09-04  Richard Biener  <rguenther@suse.de>

PR tree-optimization/96698
PR tree-optimization/96920
* tree-vectorizer.h (loop_vec_info::reduc_latch_defs): Remove.
(loop_vec_info::reduc_latch_slp_defs): Likewise.
* tree-vect-stmts.c (vect_transform_stmt): Remove vectorized cycle PHI latch code.
* tree-vect-loop.c (maybe_set_vectorized_backedge_value): New helper to set vectorized cycle PHI latch values.
(vect_transform_loop): Walk over all PHIs again after vectorizing them, calling maybe_set_vectorized_backedge_value.  Call maybe_set_vectorized_backedge_value for each vectorized stmt.  Remove delayed update code.
* tree-vect-slp.c (vect_analyze_slp_instance): Initialize SLP instance reduc_phis member.
(vect_schedule_slp): Set vectorized cycle PHI latch values.
* gfortran.dg/vect/pr96920.f90: New testcase.
* gcc.dg/vect/pr96920.c: Likewise.
2020-08-26  tree-optimization/96698 - fix ICE when vectorizing nested cycles  [Richard Biener]  (1 file, -0/+5)

This fixes vectorized PHI latch edge updating by delaying it until all of the loop is code generated, to deal with the case that the latch def is a PHI in the same block.

2020-08-26  Richard Biener  <rguenther@suse.de>

PR tree-optimization/96698
* tree-vectorizer.h (loop_vec_info::reduc_latch_defs): New.
(loop_vec_info::reduc_latch_slp_defs): Likewise.
* tree-vect-stmts.c (vect_transform_stmt): Only record stmts to update PHI latches from, perform the update ...
* tree-vect-loop.c (vect_transform_loop): ... here after vectorizing those PHIs.
(info_for_reduction): Properly handle non-reduction PHIs.
* gcc.dg/vect/pr96698.c: New testcase.
2020-08-24  SLP: support entire BB.  [Martin Liska]  (1 file, -2/+3)

gcc/ChangeLog:

* tree-vect-data-refs.c (dr_group_sort_cmp): Work on data_ref_pair.
(vect_analyze_data_ref_accesses): Work on groups.
(vect_find_stmt_data_reference): Add group_id argument and fill up dataref_groups vector.
* tree-vect-loop.c (vect_get_datarefs_in_loop): Pass new arguments.
(vect_analyze_loop_2): Likewise.
* tree-vect-slp.c (vect_slp_analyze_bb_1): Pass argument.
(vect_slp_bb_region): Likewise.
(vect_slp_region): Likewise.
(vect_slp_bb): Work on the entire BB.
* tree-vectorizer.h (vect_analyze_data_ref_accesses): Add new argument.
(vect_find_stmt_data_reference): Likewise.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/bb-slp-38.c: Adjust pattern as we now process a single vectorization region and not two partial ones.
* gcc.dg/vect/bb-slp-45.c: New test.
2020-08-06  vect/rs6000: Support vector with length cost modeling  [Kewen Lin]  (1 file, -0/+1)

This patch is to add the cost modeling for vector with length; it mainly follows what we generate for vector with length in functions vect_set_loop_controls_directly and vect_gen_len at the worst case.  For Power, the length is expected to be in bits 0-7 (high bits), so we have to model the cost of shifting bits, which is implemented in adjust_vect_cost_per_loop.

Bootstrapped/regtested on powerpc64le-linux-gnu (P9) with explicit param vect-partial-vector-usage=1.

gcc/ChangeLog:

* config/rs6000/rs6000.c (rs6000_adjust_vect_cost_per_loop): New function.
(rs6000_finish_cost): Call rs6000_adjust_vect_cost_per_loop.
* tree-vect-loop.c (vect_estimate_min_profitable_iters): Add cost modeling for vector with length.
(vect_rgroup_iv_might_wrap_p): New function, factored out from...
* tree-vect-loop-manip.c (vect_set_loop_controls_directly): ...this.  Update function comment.
* tree-vect-stmts.c (vect_gen_len): Update function comment.
* tree-vectorizer.h (vect_rgroup_iv_might_wrap_p): New declare.
2020-07-19  vect: Support length-based partial vectors approach  [Kewen Lin]  (1 file, -3/+32)

Power9 supports vector load/store instructions lxvl/stxvl which allow us to operate on partial vectors with one specific length.  This patch extends some of the current mask-based partial vectors support code for the length-based approach, and also adds some length-specific support code.  So far it assumes that we can only have one partial vectors approach at the same time; it will disable the use of partial vectors if both approaches co-exist.

Like the description of optab len_load/len_store, the length-based approach can have two flavors: one is length in bytes, the other is length in lanes.  This patch is mainly implemented and tested for length in bytes, but as Richard S. suggested, most of the code has considered both flavors.

This also introduces one parameter vect-partial-vector-usage to allow users to control when the loop vectorizer considers using partial vectors as an alternative to falling back to scalar code.

gcc/ChangeLog:

* config/rs6000/rs6000.c (rs6000_option_override_internal): Set param_vect_partial_vector_usage to 0 explicitly.
* doc/invoke.texi (vect-partial-vector-usage): Document new option.
* optabs-query.c (get_len_load_store_mode): New function.
* optabs-query.h (get_len_load_store_mode): New declare.
* params.opt (vect-partial-vector-usage): New.
* tree-vect-loop-manip.c (vect_set_loop_controls_directly): Add the handlings for vectorization using length-based partial vectors, call vect_gen_len for length generation, and rename some variables with items instead of scalars.
(vect_set_loop_condition_partial_vectors): Add the handlings for vectorization using length-based partial vectors.
(vect_do_peeling): Allow remaining eiters less than epilogue vf for LOOP_VINFO_USING_PARTIAL_VECTORS_P.
* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Init epil_using_partial_vectors_p.
(_loop_vec_info::~_loop_vec_info): Call release_vec_loop_controls for lengths destruction.
(vect_verify_loop_lens): New function.
(vect_analyze_loop): Add handlings for epilogue of loop when it's marked to use vectorization using partial vectors.
(vect_analyze_loop_2): Add the check to allow only one vectorization approach using partial vectorization at the same time.  Check param vect-partial-vector-usage for partial vectors decision.  Mark LOOP_VINFO_EPIL_USING_PARTIAL_VECTORS_P if the epilogue is considerable to use partial vectors.  Call release_vec_loop_controls for lengths destruction.
(vect_estimate_min_profitable_iters): Adjust for loop vectorization using length-based partial vectors.
(vect_record_loop_mask): Init factor to 1 for vectorization using mask-based partial vectors.
(vect_record_loop_len): New function.
(vect_get_loop_len): Likewise.
* tree-vect-stmts.c (check_load_store_for_partial_vectors): Add checks for vectorization using length-based partial vectors.  Factor some code to lambda function get_valid_nvectors.
(vectorizable_store): Add handlings when using length-based partial vectors.
(vectorizable_load): Likewise.
(vect_gen_len): New function.
* tree-vectorizer.h (struct rgroup_controls): Add field factor mainly for length-based partial vectors.
(vec_loop_lens): New typedef.
(_loop_vec_info): Add lens and epil_using_partial_vectors_p.
(LOOP_VINFO_EPIL_USING_PARTIAL_VECTORS_P): New macro.
(LOOP_VINFO_LENS): Likewise.
(LOOP_VINFO_FULLY_WITH_LENGTH_P): Likewise.
(vect_record_loop_len): New declare.
(vect_get_loop_len): Likewise.
(vect_gen_len): Likewise.
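A minimal sketch of the kind of loop the length-based approach targets (my example, not from the patch): on Power9, under a suitable --param vect-partial-vector-usage setting, the final n % VF iterations can be handled by length-controlled lxvl/stxvl loads/stores, whose byte length comes from the remaining iteration count, instead of by a scalar epilogue:

  /* The loop body runs with full vectors; the tail runs once with a
     partial vector whose length in bytes covers the leftover
     elements.  */
  void
  add_bytes (unsigned char *restrict a, const unsigned char *restrict b,
             int n)
  {
    for (int i = 0; i < n; i++)
      a[i] = a[i] + b[i];
  }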
2020-07-09  remove premature vect_verify_datarefs_alignment  [Richard Biener]  (1 file, -3/+1)

This followup removes vect_verify_datarefs_alignment and its premature cancellation of vectorization, leaving the actual decision whether alignment is supported to the functions deciding whether we can vectorize a load or store.

2020-07-08  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (vect_verify_datarefs_alignment): Remove.
(vect_slp_analyze_and_verify_instance_alignment): Rename to ...
(vect_slp_analyze_instance_alignment): ... this.
* tree-vect-data-refs.c (verify_data_ref_alignment): Remove.
(vect_verify_datarefs_alignment): Likewise.
(vect_enhance_data_refs_alignment): Do not call vect_verify_datarefs_alignment.
(vect_slp_analyze_node_alignment): Rename from vect_slp_analyze_and_verify_node_alignment and do not call verify_data_ref_alignment.
(vect_slp_analyze_instance_alignment): Rename from vect_slp_analyze_and_verify_instance_alignment.
* tree-vect-stmts.c (vectorizable_store): Dump when we vectorize an unaligned access.
(vectorizable_load): Likewise.
* tree-vect-loop.c (vect_analyze_loop_2): Do not call vect_verify_datarefs_alignment.
* tree-vect-slp.c (vect_slp_analyze_bb_1): Adjust.
* gcc.dg/vect/bb-slp-10.c: Adjust.
* gcc.dg/vect/slp-45.c: Likewise.
* gcc.dg/vect/vect-109.c: Likewise.
2020-07-03  refactor SLP constant insertion and provide entry insert helper  [Richard Biener]  (1 file, -0/+2)

This provides helpers to insert stmts on region entry, abstracted from loop/basic-block and split out from vec_init_vector, used from the SLP constant code generation path.  The SLP constant code generation path is also changed to avoid needless SSA copying, since we can store VECTOR_CSTs directly in the vectorized defs array, improving the IL from the vectorizer.

2020-07-03  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (vec_info::insert_on_entry): New.
(vec_info::insert_seq_on_entry): Likewise.
* tree-vectorizer.c (vec_info::insert_on_entry): Implement.
(vec_info::insert_seq_on_entry): Likewise.
* tree-vect-stmts.c (vect_init_vector_1): Use vec_info::insert_on_entry.
(vect_finish_stmt_generation): Set modified bit after adjusting VUSE.
* tree-vect-slp.c (vect_create_constant_vectors): Simplify by using vec_info::insert_seq_on_entry and bypassing vec_init_vector.
(vect_schedule_slp_instance): Deal with all-constant children later.
2020-06-29  do not include <utility> from tree-vectorizer.h  [Richard Biener]  (1 file, -1/+1)

This removes the duplicate <utility> include from tree-vectorizer.h.

2020-06-29  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h: Do not include <utility>.
2020-06-29  Use gsi_bb instead of iterator->bb.  [Martin Liska]  (1 file, -1/+1)

gcc/ChangeLog:

* tree-ssa-ccp.c (gsi_prev_dom_bb_nondebug): Use gsi_bb instead of gimple_stmt_iterator::bb.
* tree-ssa-math-opts.c (insert_reciprocals): Likewise.
* tree-vectorizer.h: Likewise.
2020-06-26  tree-optimization/95897 - fix fold-left SLP reduction insert place  [Richard Biener]  (1 file, -1/+0)

This fixes computation of the insertion place for fold-left SLP reductions where the PHIs do not have vectorized stmts.  The SLP representation isn't perfect here, thus the following.

2020-06-26  Richard Biener  <rguenther@suse.de>

PR tree-optimization/95897
* tree-vectorizer.h (vectorizable_induction): Remove unused gimple_stmt_iterator * parameter.
* tree-vect-loop.c (vectorizable_induction): Likewise.
(vect_analyze_loop_operations): Adjust.
* tree-vect-stmts.c (vect_analyze_stmt): Likewise.
(vect_transform_stmt): Likewise.
* tree-vect-slp.c (vect_schedule_slp_instance): Adjust for fold-left reductions, clarify existing reduction case.
* gcc.dg/vect/pr95897.c: New testcase.
2020-06-24  emit SLP vectorized loads earlier  [Richard Biener]  (1 file, -0/+1)

This makes sure to emit SLP vectorized loads where the first scalar load is.  This makes SLP dependence checking more powerful because hoisting loads can use TBAA, and it increases the freedom for vector placement when there are constraints from live lanes.

Vectorized shifts block inserting vectorized stmts always after vectorized defs because they end up using the original scalar operand even when the SLP graph indicates the shift operand is vectorized (and we actually emit and cost those stmts).

vect_slp_analyze_and_verify_node_alignment shows we need alignment in too many places; this is a temporary solution, and my plan is to have a single meta-info for a dataref group instead (also getting rid of DR_GROUP_FIRST/NEXT_ELEMENT).

2020-06-24  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (vect_find_first_scalar_stmt_in_slp): Declare.
* tree-vect-data-refs.c (vect_preserves_scalar_order_p): Simplify for new position of vectorized SLP loads.
(vect_slp_analyze_node_dependences): Adjust for it.
(vect_slp_analyze_and_verify_node_alignment): Compute alignment for the first stmt's dataref.
* tree-vect-slp.c (vect_find_first_scalar_stmt_in_slp): New.
(vect_schedule_slp_instance): Emit loads before the first scalar stmt.
* tree-vect-stmts.c (vectorizable_load): Do what the comment says and use vect_find_first_scalar_stmt_in_slp.
2020-06-18  vectorizer: add _bb_vec_info::region_stmts and reverse_region_stmts  [Martin Liska]  (1 file, -0/+82)

gcc/ChangeLog:

* coretypes.h (struct iterator_range): New type.
* tree-vect-patterns.c (vect_determine_precisions): Use range-based iterator.
(vect_pattern_recog): Likewise.
* tree-vect-slp.c (_bb_vec_info): Likewise.
(_bb_vec_info::~_bb_vec_info): Likewise.
(vect_slp_check_for_constructors): Likewise.
* tree-vectorizer.h: Add new iterators and functions that use them.
2020-06-18  remove SLP_TREE_TWO_OPERATORS, add SLP permutation node  [Richard Biener]  (1 file, -4/+9)

This removes the SLP_TREE_TWO_OPERATORS hack in favor of having explicit SLP nodes for both computations and the blend operation.  For this it introduces a generic merge + select + permute SLP node (with implementation limits).

Building upon earlier patches it adds vect_stmt_dominates_stmt_p and the ability to compute a vector insertion place from vectorized stmts (which now have UID zero) as needed for the permute node.

2020-06-17  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (_slp_tree::two_operators): Remove.
(_slp_tree::lane_permutation): New member.
(_slp_tree::code): Likewise.
(SLP_TREE_TWO_OPERATORS): Remove.
(SLP_TREE_LANE_PERMUTATION): New.
(SLP_TREE_CODE): Likewise.
(vect_stmt_dominates_stmt_p): Declare.
* tree-vectorizer.c (vect_stmt_dominates_stmt_p): New function.
* tree-vect-stmts.c (vect_model_simple_cost): Remove SLP_TREE_TWO_OPERATORS handling.
* tree-vect-slp.c (_slp_tree::_slp_tree): Amend.
(_slp_tree::~_slp_tree): Likewise.
(vect_two_operations_perm_ok_p): Remove.
(vect_build_slp_tree_1): Remove verification of two-operator permutation here.
(vect_build_slp_tree_2): When we have two different operators build two computation SLP nodes and a blend.
(vect_print_slp_tree): Print the lane permutation if it exists.
(slp_copy_subtree): Copy it.
(vect_slp_rearrange_stmts): Re-arrange it.
(vect_slp_analyze_node_operations_1): Handle SLP_TREE_CODE VEC_PERM_EXPR explicitly.
(vect_schedule_slp_instance): Likewise.  Remove old SLP_TREE_TWO_OPERATORS code.
(vectorizable_slp_permutation): New function.
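The classic "two operators" shape this reworks, as an illustrative sketch (mine, not from the patch): lanes alternate between two operation codes, so SLP build now creates one all-plus node, one all-minus node, and a VEC_PERM_EXPR node blending the even lanes of one with the odd lanes of the other:

  void
  addsub (double *restrict r, const double *restrict a,
          const double *restrict b)
  {
    /* Even lanes add, odd lanes subtract: previously expressed via
       the SLP_TREE_TWO_OPERATORS flag, now via an explicit permute
       node over two computation nodes.  */
    r[0] = a[0] + b[0];
    r[1] = a[1] - b[1];
    r[2] = a[2] + b[2];
    r[3] = a[3] - b[3];
  }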
2020-06-12  vect: Factor out and rename some functions/macros  [Kewen Lin]  (1 file, -9/+10)

Power supports vector memory access with length (in bytes) instructions.  Like the existing full masking for SVE, it is another approach to vectorize the loop using partially-populated vectors.  As Richard Sandiford suggested, we should share the code between the partial-vector approaches where possible.  This patch is to:

1) factor out two functions:
   - vect_min_prec_for_max_niters
   - vect_known_niters_smaller_than_vf
2) rename four functions:
   - vect_iv_limit_for_full_masking
   - check_load_store_masking
   - vect_set_loop_condition_masked
   - vect_set_loop_condition_unmasked
3) rename the macros LOOP_VINFO_MASK_COMPARE_TYPE and LOOP_VINFO_MASK_IV_TYPE.

Bootstrapped/regtested on aarch64-linux-gnu.

gcc/ChangeLog:

* tree-vect-loop-manip.c (vect_set_loop_controls_directly): Rename LOOP_VINFO_MASK_COMPARE_TYPE to LOOP_VINFO_RGROUP_COMPARE_TYPE.  Rename LOOP_VINFO_MASK_IV_TYPE to LOOP_VINFO_RGROUP_IV_TYPE.
(vect_set_loop_condition_masked): Renamed to ...
(vect_set_loop_condition_partial_vectors): ... this.  Rename LOOP_VINFO_MASK_COMPARE_TYPE to LOOP_VINFO_RGROUP_COMPARE_TYPE.  Rename vect_iv_limit_for_full_masking to vect_iv_limit_for_partial_vectors.
(vect_set_loop_condition_unmasked): Renamed to ...
(vect_set_loop_condition_normal): ... this.
(vect_set_loop_condition): Rename vect_set_loop_condition_unmasked to vect_set_loop_condition_normal.  Rename vect_set_loop_condition_masked to vect_set_loop_condition_partial_vectors.
(vect_prepare_for_masked_peels): Rename LOOP_VINFO_MASK_COMPARE_TYPE to LOOP_VINFO_RGROUP_COMPARE_TYPE.
* tree-vect-loop.c (vect_known_niters_smaller_than_vf): New, factored out from ...
(vect_analyze_loop_costing): ... this.
(_loop_vec_info::_loop_vec_info): Rename mask_compare_type to compare_type.
(vect_min_prec_for_max_niters): New, factored out from ...
(vect_verify_full_masking): ... this.  Rename vect_iv_limit_for_full_masking to vect_iv_limit_for_partial_vectors.  Rename LOOP_VINFO_MASK_COMPARE_TYPE to LOOP_VINFO_RGROUP_COMPARE_TYPE.  Rename LOOP_VINFO_MASK_IV_TYPE to LOOP_VINFO_RGROUP_IV_TYPE.
(vectorizable_reduction): Update some dumpings with partial vectors instead of fully-masked.
(vectorizable_live_operation): Likewise.
(vect_iv_limit_for_full_masking): Renamed to ...
(vect_iv_limit_for_partial_vectors): ... this.
* tree-vect-stmts.c (check_load_store_masking): Renamed to ...
(check_load_store_for_partial_vectors): ... this.  Update some dumpings with partial vectors instead of fully-masked.
(vectorizable_store): Rename check_load_store_masking to check_load_store_for_partial_vectors.
(vectorizable_load): Likewise.
* tree-vectorizer.h (LOOP_VINFO_MASK_COMPARE_TYPE): Renamed to ...
(LOOP_VINFO_RGROUP_COMPARE_TYPE): ... this.
(LOOP_VINFO_MASK_IV_TYPE): Renamed to ...
(LOOP_VINFO_RGROUP_IV_TYPE): ... this.
(vect_iv_limit_for_full_masking): Renamed to ...
(vect_iv_limit_for_partial_vectors): ... this.
(_loop_vec_info): Rename mask_compare_type to rgroup_compare_type.  Rename iv_type to rgroup_iv_type.
2020-06-11  vect: Rename things related to rgroup_masks  [Kewen Lin]  (1 file, -23/+25)

Power supports vector memory access with length (in bytes) instructions.  Like the existing full masking for SVE, it is another approach to vectorize the loop using partially-populated vectors.

As Richard Sandiford pointed out, we can rename the rgroup struct rgroup_masks to rgroup_controls, and rename its members mask_type to type and masks to controls to be more generic.  Besides, this patch also renames some functions like vect_set_loop_mask to vect_set_loop_control, release_vec_loop_masks to release_vec_loop_controls, and vect_set_loop_masks_directly to vect_set_loop_controls_directly.

Bootstrapped/regtested on aarch64-linux-gnu.

gcc/ChangeLog:

* tree-vect-loop-manip.c (vect_set_loop_mask): Renamed to ...
(vect_set_loop_control): ... this.
(vect_maybe_permute_loop_masks): Rename rgroup_masks related things.
(vect_set_loop_masks_directly): Renamed to ...
(vect_set_loop_controls_directly): ... this.  Also rename some variables with ctrl instead of mask.  Rename vect_set_loop_mask to vect_set_loop_control.
(vect_set_loop_condition_masked): Rename rgroup_masks related things.  Also rename some variables with ctrl instead of mask.
* tree-vect-loop.c (release_vec_loop_masks): Renamed to ...
(release_vec_loop_controls): ... this.  Rename rgroup_masks related things.
(_loop_vec_info::~_loop_vec_info): Rename release_vec_loop_masks to release_vec_loop_controls.
(can_produce_all_loop_masks_p): Rename rgroup_masks related things.
(vect_get_max_nscalars_per_iter): Likewise.
(vect_estimate_min_profitable_iters): Likewise.
(vect_record_loop_mask): Likewise.
(vect_get_loop_mask): Likewise.
* tree-vectorizer.h (struct rgroup_masks): Renamed to ...
(struct rgroup_controls): ... this.  Also rename mask_type to type and rename masks to controls.
2020-06-11  vect: Rename fully_masked_p to using_partial_vectors_p  [Kewen Lin]  (1 file, -3/+8)

Power supports vector memory access with length (in bytes) instructions.  Like the existing full masking for SVE, it is another approach to vectorize the loop using partially-populated vectors.

As Richard Sandiford suggested, this patch is to update the existing fully_masked_p field to using_partial_vectors_p.  It introduces one macro LOOP_VINFO_USING_PARTIAL_VECTORS_P for partial vectorization checking usage, updates LOOP_VINFO_FULLY_MASKED_P to LOOP_VINFO_USING_PARTIAL_VECTORS_P && !masks.is_empty(), and still uses that for checks specific to the mask-based partial vectors approach.

Bootstrapped/regtested on aarch64-linux-gnu.

gcc/ChangeLog:

* tree-vect-loop-manip.c (vect_set_loop_condition): Rename LOOP_VINFO_FULLY_MASKED_P to LOOP_VINFO_USING_PARTIAL_VECTORS_P.
(vect_gen_vector_loop_niters): Likewise.
(vect_do_peeling): Likewise.
* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Rename fully_masked_p to using_partial_vectors_p.
(vect_analyze_loop_costing): Rename LOOP_VINFO_FULLY_MASKED_P to LOOP_VINFO_USING_PARTIAL_VECTORS_P.
(determine_peel_for_niter): Likewise.
(vect_estimate_min_profitable_iters): Likewise.
(vect_transform_loop): Likewise.
* tree-vectorizer.h (LOOP_VINFO_FULLY_MASKED_P): Updated.
(LOOP_VINFO_USING_PARTIAL_VECTORS_P): New macro.
2020-06-11  vect: Rename can_fully_mask_p to can_use_partial_vectors_p  [Kewen Lin]  (1 file, -3/+6)

Power supports vector memory access with length (in bytes) instructions.  Like the existing full masking for SVE, it is another approach to vectorize the loop using partially-populated vectors.

As Richard Sandiford pointed out, we should extend the existing flag can_fully_mask_p to be more generic, to indicate whether we have any chances with partial vectors for this loop.  So this patch is to rename this flag to can_use_partial_vectors_p to be more meaningful, and also rename the macro LOOP_VINFO_CAN_FULLY_MASK_P to LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P.

Bootstrapped/regtested on aarch64-linux-gnu.

gcc/ChangeLog:

* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Rename can_fully_mask_p to can_use_partial_vectors_p.
(vect_analyze_loop_2): Rename LOOP_VINFO_CAN_FULLY_MASK_P to LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P.  Rename saved_can_fully_mask_p to saved_can_use_partial_vectors_p.
(vectorizable_reduction): Rename LOOP_VINFO_CAN_FULLY_MASK_P to LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P.
(vectorizable_live_operation): Likewise.
* tree-vect-stmts.c (permute_vec_elements): Likewise.
(check_load_store_masking): Likewise.
(vectorizable_operation): Likewise.
(vectorizable_store): Likewise.
(vectorizable_load): Likewise.
(vectorizable_condition): Likewise.
* tree-vectorizer.h (LOOP_VINFO_CAN_FULLY_MASK_P): Renamed to ...
(LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P): ... this.
(_loop_vec_info): Rename can_fully_mask_p to can_use_partial_vectors_p.
2020-06-10  Make {SLP_TREE,STMT_VINFO}_VEC_STMTS a vector of gimple *  [Richard Biener]  (1 file, -14/+12)

This makes {SLP_TREE,STMT_VINFO}_VEC_STMTS a vector of gimple * and does not allocate a stmt_vec_info for vectorizer generated stmts, since this is now possible after removing the only use, which was chaining of vector stmts via STMT_VINFO_RELATED_STMT.  This also removes all stmt_vec_info allocations done for vector stmts; the remaining ones are for stmts in the scalar IL and for patterns which are not part of the IL.  Thus after this the stmt UIDs inside a basic-block are suitable for dominance checking if you ignore (or lazy-fill) UIDs of zero of the vector stmts inserted during transform.  This property is ensured by a new flag set when pattern analysis is complete.

2020-06-10  Richard Biener  <rguenther@suse.de>

* tree-vectorizer.h (_slp_tree::vec_stmts): Make it a vector of gimple * stmts.
(_stmt_vec_info::vec_stmts): Likewise.
(vec_info::stmt_vec_info_ro): New flag.
(vect_finish_replace_stmt): Adjust declaration.
(vect_finish_stmt_generation): Likewise.
(vectorizable_induction): Likewise.
(vect_transform_reduction): Likewise.
(vectorizable_lc_phi): Likewise.
* tree-vect-data-refs.c (vect_create_data_ref_ptr): Do not allocate stmt infos for increments.
(vect_record_grouped_load_vectors): Adjust.
* tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise.
(vectorize_fold_left_reduction): Likewise.
(vect_transform_reduction): Likewise.
(vect_transform_cycle_phi): Likewise.
(vectorizable_lc_phi): Likewise.
(vectorizable_induction): Likewise.
(vectorizable_live_operation): Likewise.
(vect_transform_loop): Likewise.
* tree-vect-patterns.c (vect_pattern_recog): Set stmt_vec_info_ro.
* tree-vect-slp.c (vect_get_slp_vect_def): Adjust.
(vect_get_slp_defs): Likewise.
(vect_transform_slp_perm_load): Likewise.
(vect_schedule_slp_instance): Likewise.
(vectorize_slp_instance_root_stmt): Likewise.
* tree-vect-stmts.c (vect_get_vec_defs_for_operand): Likewise.
(vect_finish_stmt_generation_1): Do not allocate a stmt info.
(vect_finish_replace_stmt): Do not return anything.
(vect_finish_stmt_generation): Likewise.
(vect_build_gather_load_calls): Adjust.
(vectorizable_bswap): Likewise.
(vectorizable_call): Likewise.
(vectorizable_simd_clone_call): Likewise.
(vect_create_vectorized_demotion_stmts): Likewise.
(vectorizable_conversion): Likewise.
(vectorizable_assignment): Likewise.
(vectorizable_shift): Likewise.
(vectorizable_operation): Likewise.
(vectorizable_scan_store): Likewise.
(vectorizable_store): Likewise.
(vectorizable_load): Likewise.
(vectorizable_condition): Likewise.
(vectorizable_comparison): Likewise.
(vect_transform_stmt): Likewise.
* tree-vectorizer.c (vec_info::vec_info): Initialize stmt_vec_info_ro.
(vec_info::replace_stmt): Copy over stmt UID rather than unsetting/setting a stmt info allocating a new UID.
(vec_info::set_vinfo_for_stmt): Assert !stmt_vec_info_ro.
2020-06-10  Introduce STMT_VINFO_VEC_STMTS  [Richard Biener]  (1 file, -12/+18)

This gets rid of the linked list of STMT_VINFO_VECT_STMT and STMT_VINFO_RELATED_STMT in preparation for vectorized stmts no longer needing a stmt_vec_info (just for this chaining).  This has ripple-down effects in all places we gather vectorized defs.  For this, new interfaces are introduced and used throughout vectorization, simplifying code in a lot of places and merging it with the SLP way of gathering vectorized operands.  There is vect_get_vec_defs as the new recommended unified interface and vect_get_vec_defs_for_operand as one for non-SLP operation.  I've resorted to keeping the structure of the code the same where using vect_get_vec_defs would have been too disruptive for this already large patch.

2020-06-10  Richard Biener  <rguenther@suse.de>

* tree-vect-data-refs.c (vect_vfa_access_size): Adjust.
(vect_record_grouped_load_vectors): Likewise.
* tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise.
(vectorize_fold_left_reduction): Likewise.
(vect_transform_reduction): Likewise.
(vect_transform_cycle_phi): Likewise.
(vectorizable_lc_phi): Likewise.
(vectorizable_induction): Likewise.
(vectorizable_live_operation): Likewise.
(vect_transform_loop): Likewise.
* tree-vect-slp.c (vect_get_slp_defs): New function, split out from overload.
* tree-vect-stmts.c (vect_get_vec_def_for_operand_1): Remove.
(vect_get_vec_def_for_operand): Likewise.
(vect_get_vec_def_for_stmt_copy): Likewise.
(vect_get_vec_defs_for_stmt_copy): Likewise.
(vect_get_vec_defs_for_operand): New function.
(vect_get_vec_defs): Likewise.
(vect_build_gather_load_calls): Adjust.
(vect_get_gather_scatter_ops): Likewise.
(vectorizable_bswap): Likewise.
(vectorizable_call): Likewise.
(vectorizable_simd_clone_call): Likewise.
(vect_get_loop_based_defs): Remove.
(vect_create_vectorized_demotion_stmts): Adjust.
(vectorizable_conversion): Likewise.
(vectorizable_assignment): Likewise.
(vectorizable_shift): Likewise.
(vectorizable_operation): Likewise.
(vectorizable_scan_store): Likewise.
(vectorizable_store): Likewise.
(vectorizable_load): Likewise.
(vectorizable_condition): Likewise.
(vectorizable_comparison): Likewise.
(vect_transform_stmt): Adjust and remove no longer applicable sanity checks.
* tree-vectorizer.c (vec_info::new_stmt_vec_info): Initialize STMT_VINFO_VEC_STMTS.
(vec_info::free_stmt_vec_info): Release it.
* tree-vectorizer.h (_stmt_vec_info::vectorized_stmt): Remove.
(_stmt_vec_info::vec_stmts): Add.
(STMT_VINFO_VEC_STMT): Remove.
(STMT_VINFO_VEC_STMTS): New.
(vect_get_vec_def_for_operand_1): Remove.
(vect_get_vec_def_for_operand): Likewise.
(vect_get_vec_defs_for_stmt_copy): Likewise.
(vect_get_vec_def_for_stmt_copy): Likewise.
(vect_get_vec_defs): New overloads.
(vect_get_vec_defs_for_operand): New.
(vect_get_slp_defs): Declare.
2020-06-04add vect_get_slp_vect_defRichard Biener1-0/+1
This adds vect_get_slp_vect_def to get at an SLP node's vectorized def, abstracting away the details. It also fixes one stray failure to use SLP_TREE_REPRESENTATIVE. 2020-05-04 Richard Biener <rguenther@suse.de> * tree-vectorizer.h (vect_get_slp_vect_def): Declare. * tree-vect-loop.c (vect_create_epilog_for_reduction): Use it. * tree-vect-stmts.c (vect_transform_stmt): Likewise. (vect_is_simple_use): Use SLP_TREE_REPRESENTATIVE. * tree-vect-slp.c (vect_get_slp_vect_defs): Fold into single use ... (vect_get_slp_defs): ... here. (vect_get_slp_vect_def): New function.
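As an illustration only (stand-in types and signature, not the real GCC one), the accessor amounts to:

#include <vector>

struct gstmt {};   // stand-in for gimple *

struct slp_tree_sketch
{
  std::vector<gstmt *> vec_stmts;   // the node's vectorized stmts
};

// Callers ask for vectorized def number I and stay ignorant of how the
// node stores its defs.
static gstmt *
get_slp_vect_def (slp_tree_sketch *node, unsigned i)
{
  return node->vec_stmts[i];
}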
2020-06-04Add explicit SLP_TREE_LANESRichard Biener1-0/+3
This adds an explicit number of scalar lanes to the SLP node, avoiding the need to dispatch between stmts/ops and eventually making those vectors unnecessary altogether. 2020-05-27 Richard Biener <rguenther@suse.de> * tree-vectorizer.h (_slp_tree::lanes): New. (SLP_TREE_LANES): Likewise. * tree-vect-loop.c (vect_create_epilog_for_reduction): Use it. (vectorizable_reduction): Likewise. (vect_transform_cycle_phi): Likewise. (vectorizable_induction): Likewise. (vectorizable_live_operation): Likewise. * tree-vect-slp.c (_slp_tree::_slp_tree): Initialize lanes. (vect_create_new_slp_node): Likewise. (slp_copy_subtree): Copy it. (vect_optimize_slp): Use it. (vect_slp_analyze_node_operations_1): Likewise. (vect_slp_convert_to_external): Likewise. (vect_bb_vectorization_profitable_p): Likewise. * tree-vect-stmts.c (vectorizable_load): Likewise. (get_vectype_for_scalar_type): Likewise.
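A minimal sketch of the idea, with invented stand-in names (slp_tree_sketch, num_lanes) rather than GCC's real types:

#include <vector>

struct gstmt {};

struct slp_tree_sketch
{
  std::vector<gstmt *> stmts;   // scalar stmts; may be empty for invariants
  unsigned lanes = 0;           // explicit lane count (SLP_TREE_LANES)
};

// Consumers read the lane count directly instead of dispatching between
// the stmts and ops vectors; it stays valid even when both are empty.
static unsigned
num_lanes (const slp_tree_sketch *node)
{
  return node->lanes;
}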
2020-05-29tree-optimization/95272 - add SLP_TREE_REPRESENTATIVERichard Biener1-0/+4
This adds SLP_TREE_REPRESENTATIVE - a representative stmt-info used by SLP analysis and code generation. This avoids the need for the hack in vect_slp_rearrange_stmts which previously avoided re-arranging stmts that might not have been isomorphic because of operand swapping. It also plays nicely with future directions of SLP and, for the foreseeable future, is easier than replicating more and more info in the SLP node as long as non-SLP is in-tree. 2020-05-29 Richard Biener <rguenther@suse.de> PR tree-optimization/95272 * tree-vectorizer.h (_slp_tree::representative): Add. (SLP_TREE_REPRESENTATIVE): Likewise. * tree-vect-loop.c (vectorizable_reduction): Adjust SLP node gathering. (vectorizable_live_operation): Use the representative to attach the reduction info to. * tree-vect-slp.c (_slp_tree::_slp_tree): Initialize SLP_TREE_REPRESENTATIVE. (vect_create_new_slp_node): Likewise. (slp_copy_subtree): Copy it. (vect_slp_rearrange_stmts): Re-arrange even COND_EXPR stmts. (vect_slp_analyze_node_operations_1): Pass the representative to vect_analyze_stmt. (vect_schedule_slp_instance): Pass the representative to vect_transform_stmt. * gcc.dg/vect/pr95272.c: New testcase.
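A sketch of what a representative buys, using invented stand-ins (stmt_info, slp_tree_sketch, create_node): the node designates one stmt-info up front, so later analysis does not have to assume stmts[0] remains meaningful after re-arranging or operand swapping.

#include <vector>

struct stmt_info {};   // stand-in for stmt_vec_info

struct slp_tree_sketch
{
  std::vector<stmt_info *> stmts;        // order may later be swapped
  stmt_info *representative = nullptr;   // SLP_TREE_REPRESENTATIVE
};

// Node creation designates the representative once ...
static slp_tree_sketch *
create_node (const std::vector<stmt_info *> &stmts)
{
  slp_tree_sketch *node = new slp_tree_sketch;
  node->stmts = stmts;
  node->representative = stmts.empty () ? nullptr : stmts[0];
  return node;
}

// ... and analysis/codegen consult it instead of poking at stmts[0].
static stmt_info *
stmt_for_analysis (slp_tree_sketch *node)
{
  return node->representative;
}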
2020-05-28Code generate externals/invariants during the SLP graph walkRichard Biener1-0/+2
This generates vector defs for externals and invariants during the SLP walk rather than as part of getting vectorized defs when vectorizing the users. This is a requirement for making the sharing of external/invariant nodes visible in actual code generation. It temporarily adds an SLP_TREE_VEC_DEFS vector alongside the SLP_TREE_VEC_STMTS one; eventually the latter can go away. 2020-05-27 Richard Biener <rguenther@suse.de> * tree-vectorizer.h (_slp_tree::vec_defs): Add. (SLP_TREE_VEC_DEFS): Likewise. * tree-vect-slp.c (_slp_tree::_slp_tree): Adjust. (_slp_tree::~_slp_tree): Likewise. (vect_mask_constant_operand_p): Remove unused function. (vect_get_constant_vectors): Rename to... (vect_create_constant_vectors): ... this. Take the invariant node as argument and code generate it. Remove dead code, remove temporary asserts. Pass a NULL stmt_info to vect_init_vector. (vect_get_slp_defs): Simplify. (vect_schedule_slp_instance): Code-generate externals and invariants using vect_create_constant_vectors.
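A simplified standalone sketch of the scheduling-walk idea; slp_tree_sketch, tree_node and build_constant_vector are made up for illustration. The point is that a shared invariant node materializes its defs once, during the walk, rather than per user.

#include <vector>

struct tree_node {};   // stand-in for a tree (a vector constant def)
struct gstmt {};

struct slp_tree_sketch
{
  bool invariant = false;
  std::vector<slp_tree_sketch *> children;
  std::vector<tree_node *> vec_defs;   // SLP_TREE_VEC_DEFS (new)
  std::vector<gstmt *> vec_stmts;      // SLP_TREE_VEC_STMTS (existing)
};

static tree_node *build_constant_vector () { return new tree_node; }

// Code generation walks the graph once; invariant nodes emit their defs
// here, so a node shared by several users is only generated once.
static void
schedule (slp_tree_sketch *node)
{
  if (node->invariant)
    {
      if (node->vec_defs.empty ())     // shared node: emit only once
        node->vec_defs.push_back (build_constant_vector ());
      return;
    }
  for (slp_tree_sketch *child : node->children)
    schedule (child);
  // ... then vectorize NODE itself into node->vec_stmts ...
}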
2020-05-22enforce SLP_TREE_VECTYPE for invariantsRichard Biener1-0/+5
This tries to enforce a set SLP_TREE_VECTYPE in vect_get_constant_vectors and provides some infrastructure for setting it in the vectorizable_* functions, amending those. 2020-05-22 Richard Biener <rguenther@suse.de> * tree-vectorizer.h (vect_is_simple_use): New overload. (vect_maybe_update_slp_op_vectype): New. * tree-vect-stmts.c (vect_is_simple_use): New overload accessing operands of SLP vs. non-SLP operation transparently. (vect_maybe_update_slp_op_vectype): New function updating the possibly shared SLP operands vector type. (vectorizable_operation): Be a bit more SLP vs non-SLP agnostic using the new vect_is_simple_use overload; update SLP invariant operand nodes vector type. (vectorizable_comparison): Likewise. (vectorizable_call): Likewise. (vectorizable_conversion): Likewise. (vectorizable_shift): Likewise. (vectorizable_store): Likewise. (vectorizable_condition): Likewise. (vectorizable_assignment): Likewise. * tree-vect-loop.c (vectorizable_reduction): Likewise. * tree-vect-slp.c (vect_get_constant_vectors): Enforce present SLP_TREE_VECTYPE and check it matches previous behavior.
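A minimal sketch of the enforcement idea, assuming an invariant operand node can be shared by several users; slp_tree_sketch and maybe_update_slp_op_vectype are illustrative stand-ins, not the real signatures.

struct vectype {};

struct slp_tree_sketch
{
  vectype *vtype = nullptr;   // SLP_TREE_VECTYPE, shared by all users
};

// Set the vector type on a (possibly shared) invariant operand node;
// fail if another user already chose an incompatible type.
static bool
maybe_update_slp_op_vectype (slp_tree_sketch *op, vectype *vt)
{
  if (op->vtype && op->vtype != vt)
    return false;             // conflicting requirements: not vectorizable
  op->vtype = vt;
  return true;
}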
2020-05-22add ctor/dtor to slp_treeRichard Biener1-0/+3
This adds a constructor and destructor to slp_tree, factoring out common code. I've not changed the wrappers to overloaded CTORs since I hope to use object_allocator<> and am not sure whether that can be done in any fancy way yet. 2020-05-22 Richard Biener <rguenther@suse.de> * tree-vectorizer.h (_slp_tree::_slp_tree): New. (_slp_tree::~_slp_tree): Likewise. * tree-vect-slp.c (_slp_tree::_slp_tree): Factor out code from allocators. (_slp_tree::~_slp_tree): Implement. (vect_free_slp_tree): Simplify. (vect_create_new_slp_node): Likewise. Add nops parameter. (vect_build_slp_tree_2): Adjust. (vect_analyze_slp_instance): Likewise.
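The general shape, as an invented stand-in (slp_tree_sketch) rather than the real _slp_tree: allocation-time setup and teardown move into one ctor/dtor pair instead of being repeated in every allocation wrapper.

#include <vector>

struct stmt_info {};

struct slp_tree_sketch
{
  std::vector<stmt_info *> stmts;
  std::vector<slp_tree_sketch *> children;
  unsigned refcnt;

  // All allocation wrappers share this setup instead of repeating it.
  slp_tree_sketch () : refcnt (1) {}

  // Teardown lives in one place too.  (Sketch only: the real code
  // refcounts shared children instead of deleting unconditionally.)
  ~slp_tree_sketch ()
  {
    for (slp_tree_sketch *child : children)
      delete child;
  }
};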
2020-05-19cost invariant nodes from vect_slp_analyze_node_operations SLP walkRichard Biener1-0/+2
2020-05-19 Richard Biener <rguenther@suse.de> * tree-vectorizer.h (_slp_tree::vectype): Add field. (SLP_TREE_VECTYPE): New. * tree-vect-slp.c (vect_create_new_slp_node): Initialize SLP_TREE_VECTYPE. (vect_create_new_slp_node): Likewise. (vect_prologue_cost_for_slp): Move here from tree-vect-stmts.c and simplify. (vect_slp_analyze_node_operations): Walk nodes children for invariant costing. (vect_get_constant_vectors): Use local scope op variable. * tree-vect-stmts.c (vect_prologue_cost_for_slp_op): Remove here. (vect_model_simple_cost): Adjust. (vect_model_store_cost): Likewise. (vectorizable_store): Likewise.
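To illustrate the costing move above with invented stand-ins (slp_tree_sketch, record_prologue_cost): invariant operands are costed where the SLP walk reaches them, not from each user's vect_model_*_cost.

#include <vector>

struct slp_tree_sketch
{
  bool invariant = false;
  std::vector<slp_tree_sketch *> children;
};

// Stand-in for vect_prologue_cost_for_slp: account for building the
// invariant vector once, in the prologue.
static void record_prologue_cost (slp_tree_sketch *) {}

// The node-operations walk costs invariant children as it encounters
// them; a shared invariant child would naturally be costed per reach.
static void
analyze_node_operations (slp_tree_sketch *node)
{
  for (slp_tree_sketch *child : node->children)
    if (child->invariant)
      record_prologue_cost (child);
    else
      analyze_node_operations (child);
}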
2020-05-13add vectype parameter to add_stmt_cost hookRichard Biener1-6/+21
This adds a vectype parameter to add_stmt_cost, which avoids the need to pass down a (wrong) stmt_info just to carry this information. This is useful for invariants, which have no associated stmt_info. 2020-05-13 Richard Biener <rguenther@suse.de> * target.def (add_stmt_cost): Add new vectype parameter. * targhooks.c (default_add_stmt_cost): Adjust. * targhooks.h (default_add_stmt_cost): Likewise. * config/aarch64/aarch64.c (aarch64_add_stmt_cost): Take new vectype parameter. * config/arm/arm.c (arm_add_stmt_cost): Likewise. * config/i386/i386.c (ix86_add_stmt_cost): Likewise. * config/rs6000/rs6000.c (rs6000_add_stmt_cost): Likewise. * tree-vectorizer.h (stmt_info_for_cost::vectype): Add. (dump_stmt_cost): Add new vectype parameter. (add_stmt_cost): Likewise. (record_stmt_cost): Likewise. (record_stmt_cost): Add overload with old signature. * tree-vect-loop.c (vect_compute_single_scalar_iteration_cost): Adjust. (vect_get_known_peeling_cost): Likewise. (vect_estimate_min_profitable_iters): Likewise. * tree-vectorizer.c (dump_stmt_cost): Add new vectype parameter. * tree-vect-stmts.c (record_stmt_cost): Likewise. (vect_prologue_cost_for_slp_op): Remove stmt_vec_info parameter and pass down correct vectype and NULL stmt_info. (vect_model_simple_cost): Adjust. (vect_model_store_cost): Likewise.
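A sketch of the hook-shape change; the types, enum and relative costs below are all made up for illustration and do not reflect the real target hook signature. Previously the hook had to dig the vector type out of stmt_info, which forced callers costing invariants to pass a wrong stmt_info just to carry it.

struct stmt_info {};
struct vectype {};
enum cost_kind { scalar_stmt, vector_stmt, vec_construct };

// The vector type now travels alongside; stmt_info may legitimately be
// null for invariants.
static unsigned
add_stmt_cost (cost_kind kind, stmt_info *info, vectype *vt, int misalign)
{
  (void) info;       // null is fine now
  (void) misalign;
  return (kind == vec_construct && vt) ? 2 : 1;   // made-up relative costs
}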
2020-05-13Remove SLP_INSTANCE_GROUP_SIZERichard Biener1-4/+4
This removes the SLP_INSTANCE_GROUP_SIZE member since the number of lanes throughout an SLP subgraph is not necessarily constant. 2020-05-13 Richard Biener <rguenther@suse.de> * tree-vectorizer.h (SLP_INSTANCE_GROUP_SIZE): Remove. (_slp_instance::group_size): Likewise. * tree-vect-loop.c (vectorizable_reduction): The group size is the number of lanes in the node. * tree-vect-slp.c (vect_attempt_slp_rearrange_stmts): Likewise. (vect_analyze_slp_instance): Do not set SLP_INSTANCE_GROUP_SIZE, verify it matches the instance tree's number of lanes. (vect_slp_analyze_node_operations_1): Use the number of lanes in the node as group size. (vect_bb_vectorization_profitable_p): Use the instance root number of lanes for the size of life. (vect_schedule_slp_instance): Use the number of lanes as group_size. * tree-vect-stmts.c (vectorizable_load): Remove SLP instance parameter. Use the number of lanes of the load for the group size in the gap adjustment code. (vect_analyze_stmt): Adjust. (vect_transform_stmt): Likewise.
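A sketch of why the member goes away, with invented stand-ins (slp_tree_sketch, slp_instance_sketch): nodes below one entry point can have differing lane counts, so a single per-instance constant would be wrong.

#include <vector>

struct slp_tree_sketch
{
  unsigned lanes = 0;
  std::vector<slp_tree_sketch *> children;
};

// An instance is only an entry point into the SLP graph.
struct slp_instance_sketch
{
  slp_tree_sketch *root = nullptr;
  // unsigned group_size;   // removed: not constant across the subgraph
};

// Whoever needs a "group size" asks the node at hand:
static unsigned
group_size_for (const slp_tree_sketch *node)
{
  return node->lanes;
}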
2020-05-08move permutation validity checkRichard Biener1-1/+3
This delays the SLP permutation check to vectorizable_load and optimizes permutations only after all SLP instances have been generated and the vectorization factor is determined. 2020-05-08 Richard Biener <rguenther@suse.de> * tree-vectorizer.h (vec_info::slp_loads): New. (vect_optimize_slp): Declare. * tree-vect-slp.c (vect_attempt_slp_rearrange_stmts): Do nothing when there are no loads. (vect_gather_slp_loads): Gather loads into a vector. (vect_supported_load_permutation_p): Remove. (vect_analyze_slp_instance): Do not verify permutation validity here. (vect_analyze_slp): Optimize permutations of reductions after all SLP instances have been gathered and gather all loads. (vect_optimize_slp): New function split out from vect_supported_load_permutation_p. Elide some permutations. (vect_slp_analyze_bb_1): Call vect_optimize_slp. * tree-vect-loop.c (vect_analyze_loop_2): Likewise. * tree-vect-stmts.c (vectorizable_load): Check whether the load can be permuted. When generating code assert we can. * gcc.dg/vect/bb-slp-pr68892.c: Adjust for not supported SLP permutations becoming builds from scalars. * gcc.dg/vect/bb-slp-pr78205.c: Likewise. * gcc.dg/vect/bb-slp-34.c: Likewise.
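As an illustration of the post-analysis optimization (a standalone sketch; slp_load_sketch and the flat load list are invented stand-ins, and the real vect_optimize_slp does considerably more): once all instances exist, identity load permutations can simply be elided, and vectorizable_load later checks whatever remains.

#include <numeric>
#include <vector>

struct slp_load_sketch
{
  std::vector<unsigned> load_permutation;   // lane -> group element
};

// Walk all gathered load nodes and drop permutations that are the
// identity; an empty permutation means "no permutation needed".
static void
optimize_slp (std::vector<slp_load_sketch *> &loads)
{
  for (slp_load_sketch *load : loads)
    {
      std::vector<unsigned> identity (load->load_permutation.size ());
      std::iota (identity.begin (), identity.end (), 0u);
      if (load->load_permutation == identity)
        load->load_permutation.clear ();
    }
}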
2020-05-06Prepare removal of SLP_INSTANCE_GROUP_SIZERichard Biener1-1/+1
This removes trivial instances of SLP_INSTANCE_GROUP_SIZE and refrains from using an "SLP instance", which nowadays is just one of the possibly many entries into the SLP graph. 2020-05-06 Richard Biener <rguenther@suse.de> * tree-vectorizer.h (vect_transform_slp_perm_load): Adjust. * tree-vect-data-refs.c (vect_slp_analyze_node_dependences): Remove slp_instance parameter, just iterate over all scalar stmts. (vect_slp_analyze_instance_dependence): Adjust and likewise. * tree-vect-slp.c (vect_bb_slp_scalar_cost): Remove unused BB parameter. (vect_schedule_slp): Just iterate over all scalar stmts. (vect_supported_load_permutation_p): Adjust. (vect_transform_slp_perm_load): Remove slp_instance parameter, instead use the number of lanes in the node as group size. * tree-vect-stmts.c (vect_model_load_cost): Get vectorization factor instead of slp_instance as parameter. (vectorizable_load): Adjust.