path: root/gcc/internal-fn.c
Age | Commit message | Author | Files | Lines
2020-10-12 | SLP: fix SVE issues | Martin Liska | 1 | -0/+1
The patch fixes the following two issues:

    .MASK_STORE_LANES (&a, 4B, max_mask_34, vect_array.12);

Here we failed to return the last argument as the stored value.

    ivtmp_32 = ivtmp_31 + POLY_INT_CST [4, 4];

Here we were missing a bail-out in vect_recog_over_widening_pattern.

gcc/ChangeLog:

    PR tree-optimization/97079
    * internal-fn.c (internal_fn_stored_value_index): Handle also
    .MASK_STORE_LANES.
    * tree-vect-patterns.c (vect_recog_over_widening_pattern): Bail
    out for unsupported TREE_TYPE.

gcc/testsuite/ChangeLog:

    PR tree-optimization/97079
    * gcc.target/aarch64/sve/pr97079.c: New test.
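To illustrate the first fix, here is a self-contained sketch (with a hypothetical cut-down internal_fn enum; the real one lives in internal-fn.h) of the shape internal_fn_stored_value_index takes after the change:

    // Self-contained sketch; the IFN_* values are a hypothetical
    // cut-down mirror of GCC's internal_fn enum.  The stored value of
    // a masked store is argument 3, and after the fix that also covers
    // the store-lanes variant (previously it fell through to -1, losing
    // the stored value).
    enum internal_fn { IFN_MASK_STORE, IFN_MASK_STORE_LANES, IFN_LAST };

    int
    internal_fn_stored_value_index (internal_fn fn)
    {
      switch (fn)
        {
        case IFN_MASK_STORE:
        case IFN_MASK_STORE_LANES:
          return 3;
        default:
          return -1;
        }
    }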
2020-10-06 | divmod: Match and expand DIVMOD even in some cases of constant divisor [PR97282] | Jakub Jelinek | 1 | -3/+64
As written in the comment, tree-ssa-math-opts.c wouldn't create a DIVMOD ifn call for division + modulo by constant, for fear that during expansion we could generate better code for those cases. If the divisor is a power of two, that is certainly always the case, but otherwise expand_divmod can punt in many cases, e.g. if the division type's precision is above HOST_BITS_PER_WIDE_INT we don't even call choose_multiplier, because it works on HOST_WIDE_INTs (true, something we should fix eventually now that we have wide_ints), or if the pre/post shift is larger than BITS_PER_WORD.

So, the following patch recognizes DIVMOD with a constant last argument even when it is unclear whether expand_divmod will be able to optimize it, and then during DIVMOD expansion, if the divisor is constant, attempts to expand it as division + modulo; if those actually don't contain any libcalls or division/modulo, they are kept as is, otherwise that sequence is thrown away and the divmod optab or libcall is used.

2020-10-06  Jakub Jelinek  <jakub@redhat.com>

    PR rtl-optimization/97282
    * tree-ssa-math-opts.c (divmod_candidate_p): Don't return false for
    constant op2 if it is not a power of two and the type has precision
    larger than HOST_BITS_PER_WIDE_INT or BITS_PER_WORD.
    * internal-fn.c (contains_call_div_mod): New function.
    (expand_DIVMOD): If last argument is a constant, try to expand it as
    TRUNC_DIV_EXPR followed by TRUNC_MOD_EXPR, but if the sequence
    contains any calls or {,U}{DIV,MOD} rtxes, throw it away and use
    divmod optab or divmod libfunc.
    * gcc.target/i386/pr97282.c: New test.
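At the source level, the newly recognized candidates are simply a division and a modulo by the same constant; a hypothetical example with a type wider than HOST_BITS_PER_WIDE_INT, where the multiply-by-reciprocal path punts, is:

    // Hypothetical example: quotient and remainder by the same constant.
    // With a 128-bit type the open-coded reciprocal expansion punts, so
    // pairing both operations into one .DIVMOD call (and ultimately one
    // libcall) avoids dividing twice.
    void
    quot_rem (unsigned __int128 x, unsigned __int128 *q, unsigned __int128 *r)
    {
      *q = x / 123;
      *r = x % 123;
    }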
2020-10-01 | Fix handling of fnspec for internal functions. | Jan Hubicka | 1 | -1/+1
    * internal-fn.c (DEF_INTERNAL_FN): Fix handling of fnspec.
2020-09-27 | IFN: Implement IFN_VEC_SET for ARRAY_REF with VIEW_CONVERT_EXPR | Xionghu Luo | 1 | -0/+41
This patch enables the transformation from ARRAY_REF (VIEW_CONVERT_EXPR) to the VEC_SET internal function in the gimple-isel pass, if the target supports vec_set with a variable index, checked by can_vec_set_var_idx_p.

gcc/ChangeLog:

2020-09-27  Xionghu Luo  <luoxhu@linux.ibm.com>

    * gimple-isel.cc (gimple_expand_vec_set_expr): New function.
    (gimple_expand_vec_cond_exprs): Rename to ...
    (gimple_expand_vec_exprs): ... this and call
    gimple_expand_vec_set_expr.
    * internal-fn.c (vec_set_direct): New define.
    (expand_vec_set_optab_fn): New function.
    (direct_vec_set_optab_supported_p): New define.
    * internal-fn.def (VEC_SET): New DEF_INTERNAL_OPTAB_FN.
    * optabs.c (can_vec_set_var_idx_p): New function.
    * optabs.h (can_vec_set_var_idx_p): New declaration.
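At the source level the pattern arises from storing to a GNU vector through a variable subscript; a hypothetical example that gimple-isel can now turn into a single .VEC_SET call (on targets whose vec_set optab accepts a variable index) is:

    typedef int v4si __attribute__ ((vector_size (16)));

    // Hypothetical example: v[i] with non-constant i is represented as
    // an ARRAY_REF of a VIEW_CONVERT_EXPR of v, the shape the new
    // gimple_expand_vec_set_expr looks for.
    v4si
    set_elem (v4si v, int i, int x)
    {
      v[i] = x;
      return v;
    }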
2020-09-23 | middle-end/96466 - fix VEC_COND isel/expansion issue | Richard Biener | 1 | -1/+1
We need to avoid forcing BLKmode for truth vectors; instead, do as other code does and use VOIDmode so layout_type can pick a suitable and consistent mode. RTL expansion of vect_cond_mask also needs to deal with CONST_INT operands, which means passing the mode explicitly.

2020-09-23  Richard Biener  <rguenther@suse.de>

    PR middle-end/96466
    * internal-fn.c (expand_vect_cond_mask_optab_fn): Use appropriate
    mode for force_reg.
    * tree.c (build_truth_vector_type_for): Pass VOIDmode to
    make_vector_type.

    * gcc.dg/pr96466.c: New testcase.
2020-07-11 | middle-end: Improve RTL expansion in expand_mul_overflow, | Roger Sayle | 1 | -0/+3
This patch improves the RTL that the middle-end generates for testing signed overflow following a widening multiplication. During this expansion the middle-end generates a truncation which can get used multiple times. Placing this intermediate value in a pseudo register reduces the amount of code generated on platforms where this truncation requires an explicit instruction.

2020-07-11  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog:

    * internal-fn.c (expand_mul_overflow): When checking for signed
    overflow from a widening multiplication, we access the truncated
    lowpart RES twice, so keep this value in a pseudo register.
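The affected path is reached from the overflow builtins; a hypothetical example whose expansion performs a widening multiply and then uses the truncated lowpart more than once is:

    // Hypothetical example: on a 64-bit target the 32x32 multiply can
    // be expanded as a widening multiply; the truncated 32-bit lowpart
    // is both the stored result and an input to the overflow check, so
    // keeping it in a pseudo register avoids re-truncating.
    int
    checked_mul (int a, int b, int *res)
    {
      return __builtin_mul_overflow (a, b, res);
    }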
2020-07-08 | IFN/optabs: Support vector load/store with length | Kewen Lin | 1 | -6/+29
This patch adds the internal function and optabs support for vector load/store with length.

For the vector load/store with length optab, the length item is measured in lanes by default. Targets which support length measured in bytes, like Power, should only define VnQI modes to wrap the other same-size vector modes. If the length is larger than the total lane/byte count of the given mode, the behavior is undefined. The remaining lanes/bytes which aren't covered by the length are taken as undefined values.

gcc/ChangeLog:

    * doc/md.texi (len_load_@var{m}): Document.
    (len_store_@var{m}): Likewise.
    * internal-fn.c (len_load_direct): New macro.
    (len_store_direct): Likewise.
    (expand_len_load_optab_fn): Likewise.
    (expand_len_store_optab_fn): Likewise.
    (direct_len_load_optab_supported_p): Likewise.
    (direct_len_store_optab_supported_p): Likewise.
    (expand_mask_load_optab_fn): New macro.  Original renamed to ...
    (expand_partial_load_optab_fn): ... here.  Add handlings for
    len_load_optab.
    (expand_mask_store_optab_fn): New macro.  Original renamed to ...
    (expand_partial_store_optab_fn): ... here.  Add handlings for
    len_store_optab.
    (internal_load_fn_p): Handle IFN_LEN_LOAD.
    (internal_store_fn_p): Handle IFN_LEN_STORE.
    (internal_fn_stored_value_index): Handle IFN_LEN_STORE.
    * internal-fn.def (LEN_LOAD): New internal function.
    (LEN_STORE): Likewise.
    * optabs.def (len_load_optab, len_store_optab): New optab.
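A hypothetical loop that the feature targets: with len_load/len_store, a partial vector iteration can be handled by a length-controlled load/store (e.g. Power's lxvl/stxvl, with the length in bytes via the VnQI modes) instead of a scalar epilogue:

    // Hypothetical example: for n not a multiple of the vector length,
    // the tail iterations can use a vector load/store whose active
    // length is computed at run time; lanes beyond the length are
    // undefined, exactly as the optab specifies.
    void
    vadd (float *__restrict a, const float *__restrict b, int n)
    {
      for (int i = 0; i < n; i++)
        a[i] += b[i];
    }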
2020-06-27 | IFN: Fix mask_{load,store} optab support macros | Kewen Lin | 1 | -2/+2
While working on IFNs for vector load/store with length, I noticed that the current optab support query for mask_load/mask_store looks unexpected. The mask_load/mask_store requires two modes for the convert_optab query, but the macros direct_mask_{load,store}_optab_supported_p use direct_optab_supported_p, which asserts that the type pair has the same mode.

I'm not sure whether we have some special reason here or it is just a typo, since everything goes well now: the mask_{load,store} optab check is mainly handled by can_vec_mask_load_store_p. But if we have some code as below (e.g. one checking for all IFNs eventually):

    tree_pair types = direct_internal_fn_types (ifn, call);
    if (direct_internal_fn_supported_p (ifn, types, OPTIMIZE_FOR_SPEED))
      ...

it will cause an ICE.

gcc/ChangeLog:

    * internal-fn.c (direct_mask_load_optab_supported_p): Use
    convert_optab_supported_p instead of direct_optab_supported_p.
    (direct_mask_store_optab_supported_p): Likewise.
2020-06-18 | middle-end/95739 - fix vector condition IFN expansion | Richard Biener | 1 | -0/+4
This fixes the omission of moving the expansion result to the target.

2020-06-18  Richard Biener  <rguenther@suse.de>

    PR middle-end/95739
    * internal-fn.c (expand_vect_cond_optab_fn): Move the result to the
    target if necessary.
    (expand_vect_cond_mask_optab_fn): Likewise.
2020-06-17 | Lower VEC_COND_EXPR into internal functions. | Martin Liska | 1 | -0/+89
gcc/ChangeLog:

    * Makefile.in: Add new file.
    * expr.c (expand_expr_real_2): Add gcc_unreachable as we should not
    meet this condition.
    (do_store_flag): Likewise.
    * gimplify.c (gimplify_expr): Gimplify first argument of
    VEC_COND_EXPR to be a SSA name.
    * internal-fn.c (vec_cond_mask_direct): New.
    (vec_cond_direct): Likewise.
    (vec_condu_direct): Likewise.
    (vec_condeq_direct): Likewise.
    (expand_vect_cond_optab_fn): New.
    (expand_vec_cond_optab_fn): Likewise.
    (expand_vec_condu_optab_fn): Likewise.
    (expand_vec_condeq_optab_fn): Likewise.
    (expand_vect_cond_mask_optab_fn): Likewise.
    (expand_vec_cond_mask_optab_fn): Likewise.
    (direct_vec_cond_mask_optab_supported_p): Likewise.
    (direct_vec_cond_optab_supported_p): Likewise.
    (direct_vec_condu_optab_supported_p): Likewise.
    (direct_vec_condeq_optab_supported_p): Likewise.
    * internal-fn.def (VCOND): New OPTAB.
    (VCONDU): Likewise.
    (VCONDEQ): Likewise.
    (VCOND_MASK): Likewise.
    * optabs.c (get_rtx_code): Make it global.
    (expand_vec_cond_mask_expr): Removed.
    (expand_vec_cond_expr): Removed.
    * optabs.h (expand_vec_cond_expr): Likewise.
    (vector_compare_rtx): Make it global.
    * passes.def: Add new pass_gimple_isel pass.
    * tree-cfg.c (verify_gimple_assign_ternary): Add check for
    VEC_COND_EXPR about first argument.
    * tree-pass.h (make_pass_gimple_isel): New.
    * tree-ssa-forwprop.c (pass_forwprop::execute): Prevent propagation
    of the first argument of a VEC_COND_EXPR.
    * tree-ssa-reassoc.c (ovce_extract_ops): Support SSA_NAME as first
    argument of a VEC_COND_EXPR.
    (optimize_vec_cond_expr): Likewise.
    * tree-vect-generic.c (expand_vector_divmod): Make SSA_NAME for a
    first argument of created VEC_COND_EXPR.
    (expand_vector_condition): Fix coding style.
    * tree-vect-stmts.c (vectorizable_condition): Gimplify first
    argument.
    * gimple-isel.cc: New file.

gcc/testsuite/ChangeLog:

    * g++.dg/vect/vec-cond-expr-eh.C: New test.
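A hypothetical source-level origin of a VEC_COND_EXPR, which the new gimple-isel pass now matches to one of the .VCOND{,U,EQ} or .VCOND_MASK internal functions before expansion:

    typedef int v4si __attribute__ ((vector_size (16)));

    // Hypothetical example: after gimplification the comparison a > b
    // is a separate SSA name (the new requirement on the first operand
    // of VEC_COND_EXPR), and isel picks the internal function that the
    // target's vcond* patterns support.
    v4si
    vmax (v4si a, v4si b)
    {
      return a > b ? a : b;
    }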
2020-05-04 | internal-fn: Avoid dropping the lhs of some calls [PR94941] | Richard Sandiford | 1 | -0/+6
create_output_operand coerces an output operand to the insn's predicates, using a suggested rtx location if convenient. But if that rtx location is actually required rather than optional, the builder of the insn has to emit a move afterwards. (We could instead add a new interface that does this automatically, but that's future work.)

This PR shows that we were failing to emit the move for some of the vector load internal functions. I think there are other routines in internal-fn.c that potentially have the same problem, but this patch is supposed to be a conservative subset suitable for backporting to GCC 10.

2020-05-04  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    PR middle-end/94941
    * internal-fn.c (expand_load_lanes_optab_fn): Emit a move if the
    chosen lhs is different from the gcall lhs.
    (expand_mask_load_optab_fn): Likewise.
    (expand_gather_load_optab_fn): Likewise.

gcc/testsuite/
    PR middle-end/94941
    * gcc.target/aarch64/sve/acle/general/unoptimized_1.c: New test.
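A minimal sketch of the idiom the fix applies inside the affected expanders (a fragment using GCC's own expand_insn, rtx_equal_p and emit_move_insn; not standalone, and the surrounding names are simplified):

    /* After expanding the insn, the output may not be in the rtx we
       suggested for the gcall's lhs; copy it back rather than silently
       dropping the lhs.  */
    expand_insn (icode, nops, ops);
    if (!rtx_equal_p (target, ops[0].value))
      emit_move_insn (target, ops[0].value);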
2020-01-18 | [C++ coroutines] Initial implementation. | Iain Sandoe | 1 | -0/+26
This is the squashed version of the first 6 patches that were split to facilitate review. The changes to libiberty (7th patch) to support demangling the co_await operator stand alone and are applied separately.

The patch series is an initial implementation of a coroutine feature, expected to be standardised in C++20.

Standardisation status (and potential impact on this implementation)
--------------------------------------------------------------------

The facility was accepted into the working draft for C++20 by WG21 in February 2019. During following WG21 meetings, design and national body comments have been reviewed, with no significant change resulting. The current GCC implementation is against n4835 [1].

At this stage, the remaining potential for change comes from:
* Areas of national body comments that were not resolved in the version we have worked to:
  (a) handling of the situation where aligned allocation is available.
  (b) handling of the situation where a user wants coroutines, but does not want exceptions (e.g. a GPU).
* Agreed changes that have not yet been worded in a draft standard that we have worked to.

It is not expected that the resolution to these can produce any major change at this phase of the standardisation process. Such changes should be limited to the coroutine-specific code.

ABI
---

The various compiler developers 'vendors' have discussed a minimal ABI to allow one implementation to call coroutines compiled by another. This amounts to:

1. The layout of a public portion of the coroutine frame. Coroutines need to preserve state across suspension points; the storage for this is called a "coroutine frame". The ABI mandates that pointers into the coroutine frame point to an area beginning with two function pointers (to the resume and destroy functions described below); these are immediately followed by the "promise object" described in the standard. This is sufficient that the builtins can take a coroutine frame pointer and determine the address of the promise (or call the resume/destroy functions).

2. A number of compiler builtins that the standard library might use. These are implemented by this patch series.

3. This introduces a new operator 'co_await', the mangling for which is also agreed between vendors (and has an issue filed for that against the upstream c++abi). Demangling for this is added to libiberty in a separate patch.

The ABI has currently no target-specific content (a given psABI might elect to mandate alignment, but the common ABI does not do this).

Standard Library impact
-----------------------

The current implementation requires the addition of only a single header to the standard library (no change to the runtime). This header is part of the patch.

GCC Implementation outline
--------------------------

The standard's design for coroutines does not decorate the definition of a coroutine in any way, so that a function is only known to be a coroutine when one of the keywords (co_await, co_yield, co_return) is encountered. This means that we cannot special-case such functions from the outset, but must process them differently when they are finalised - which we do from "finish_function ()".

At a high level, this design of coroutine produces four pieces from the original user's function:

1. A coroutine state frame (taking the logical place of the activation record for a regular function). One item stored in that state is the index of the current suspend point.
2. A "ramp" function. This is what the user calls to construct the coroutine frame and start the coroutine execution. This will return some object representing the coroutine's eventual return value (or the means to continue it when it is suspended).

3. A "resume" function. This is what gets called when a suspended coroutine is resumed.

4. A "destroy" function. This is what gets called when the coroutine state should be destroyed and its memory released.

The standard's coroutines involve cooperation of the user's authored function with a provided "promise" class, which includes mandatory methods for handling the state transitions and providing output values. Most realistic coroutines will also have one or more 'awaiter' classes that implement the user's actions for each suspend point. As we parse (or during template expansion) the types of the promise and awaiter classes become known, and can then be verified against the signatures expected by the standard.

Once the function is parsed (and templates expanded) we are able to make the transformation into the four pieces noted above.

The implementation here takes the approach of a series of AST transforms. The state machine suspend points are encoded in three internal functions (one of which represents an exit from scope without cleanups). These three IFNs are lowered early in the middle end, such that the majority of GCC's optimisers can be run on the resulting output.

As a design choice, we have carried out the outlining of the user's function in the front end, and taken advantage of the existing middle end's abilities to inline and DCE where that is profitable. Since the state machine is actually common to both resumer and destroyer functions, we make only a single function "actor" that contains both the resume and destroy paths. The destroy function is represented by a small stub that sets a value to signal the use of the destroy path and calls the actor. The idea is that optimisation of the state machine need only be done once - and then the resume and destroy paths can be identified, allowing the middle end's inline and DCE machinery to optimise as profitable as noted above.

The middle end components for this implementation are:

A pass that:
1. Lowers the coroutine builtins that allow the standard library header to interact with the coroutine frame (these are fairly simple logical or numerical substitutions of values, given a coroutine frame pointer).
2. Lowers the IFN that represents the exit from state without cleanup. Essentially, this becomes a gimple goto.
3. Sets the final size of the coroutine frame at this stage.

A second pass (which requires the revised CFG that results from the lowering of the scope exit IFNs in the first):
1. Lowers the IFNs that represent the state machine paths for the resume and destroy cases.

Patches squashed into this commit:

[C++ coroutines 1] Common code and base definitions.

This part of the patch series provides the gating flag, the keywords, cpp defines etc.

[C++ coroutines 2] Define builtins and internal functions.

This part of the patch series provides the builtin functions used by the standard library code and the internal functions used to implement lowering of the coroutine state machine.

[C++ coroutines 3] Front end parsing and transforms.

There are two parts to this.

1. Parsing, template instantiation and diagnostics for the standard-mandated class entries. The user authors a function that becomes a coroutine (lazily) by making use of any of the co_await, co_yield or co_return keywords.
Unlike a regular function, where the activation record is placed on the stack and is destroyed on function exit, a coroutine has some state that persists between calls - the 'coroutine frame' (thus analogous to a stack frame). We transform the user's function into three pieces:

1. A so-called ramp function, that establishes the coroutine frame and begins execution of the coroutine.
2. An actor function that contains the state machine corresponding to the user's suspend/resume structure.
3. A stub function that calls the actor function in 'destroy' mode.

(A minimal compilable example of a function outlined this way is sketched after this log entry.)

The actor function is executed:
* from "resume point 0" by the ramp.
* from resume point N ( > 0 ) for handle.resume() calls.
* from the destroy stub for destroy point N for handle.destroy() calls.

The C++ coroutine design described in the standard makes use of some helper methods that are authored in a so-called "promise" class provided by the user. At parse time (or post substitution) the type of the coroutine promise will be determined. At that point, we can look up the required promise class methods and issue diagnostics if they are missing or incorrect. To avoid repeating these actions at code-gen time, we make use of temporary 'proxy' variables for the coroutine handle and the promise - which will eventually be instantiated in the coroutine frame.

Each of the keywords will expand to a code sequence (although co_yield is just syntactic sugar for a co_await). We defer the analysis and transformation until template expansion is complete so that we have complete types at that time.

2. AST analysis and transformation which performs the code-gen for the outlined state machine. The entry point here is morph_fn_to_coro (), which is called from finish_function () when we have completed any template expansion. This is preceded by helper functions that implement the phases below. The process proceeds in four phases.

A. Initial framing. The user's function body is wrapped in the initial and final suspend points and we begin building the coroutine frame. We build empty decls for the actor and destroyer functions at this time too. When exceptions are enabled, the user's function body will also be wrapped in a try-catch block with the catch invoking the promise class 'unhandled_exception' method.

B. Analysis. The user's function body is analysed to determine the suspend points, if any, and to capture local variables that might persist across such suspensions. In most cases, it is not necessary to capture compiler temporaries, since the tree-lowering nests the suspensions correctly. However, in the case of a captured reference, there is a lifetime extension to the end of the full expression - which can mean across a suspend point, in which case it must be promoted to a frame variable. At the conclusion of analysis, we have a conservative frame layout and maps of the local variables to their frame entry points.

C. Build the ramp function. Carry out the allocation for the coroutine frame (NOTE: the actual size computation is deferred until late in the middle end to allow for future optimisations that will be allowed to elide unused frame entries). We build the return object.

D. Build and expand the actor and destroyer function bodies. The destroyer is a trivial shim that sets a bit to indicate that the destroy dispatcher should be used and then calls into the actor. The actor function is the implementation of the user's state machine. The current suspend point is noted in an index.
Each suspend point is encoded as a pair of internal functions, one in the relevant dispatcher, and one representing the suspend point. During this process, the user's local variables and the proxies for the self-handle and the promise class instance are re-written to their coroutine frame equivalents. The complete bodies for the ramp, actor and destroy function are passed back to finish_function for folding and gimplification.

[C++ coroutines 4] Middle end expanders and transforms.

The first part of this is a pass that provides:
* expansion of the library support builtins; these are simple boolean or numerical substitutions.
* the functionality of implementing an exit from scope without cleanup, performed here by lowering an IFN to a gimple goto.

This pass has to run for non-coroutine functions, since functions calling the builtins are not necessarily coroutines (i.e. they are implementing the library interfaces which may be called from anywhere).

The second part is the expansion of the coroutine IFNs that describe the state machine connections to the dispatchers. This only has to be run for functions that are coroutine components. The work done by this pass is:

In the front end we construct a single actor function that contains the coroutine state machine. The actor function has three entry conditions:
1. from the ramp, resume point 0 - to initial-suspend.
2. when resume () is executed (resume point N).
3. from the destroy () shim when that is executed.

The actor function begins with two dispatchers; one for resume and one for destroy (where the initial entry from the ramp is a special case of resume point 0). Each suspend point and each dispatch entry is marked with an IFN such that we can connect the relevant dispatchers to their target labels. So, if we have:

    CO_YIELD (NUM, FINAL, RES_LAB, DEST_LAB, FRAME_PTR)

this is await point NUM, and is the final await if FINAL is non-zero. The resume point is RES_LAB, and the destroy point is DEST_LAB. We expect to find a CO_ACTOR (NUM) in the resume dispatcher and a CO_ACTOR (NUM+1) in the destroy dispatcher.

Initially, the intent of keeping the resume and destroy paths together is that the conditionals controlling them are identical, and thus there would be duplication of any optimisation of those paths if the split were earlier. Subsequent inlining of the actor (and DCE) is then able to extract the resume and destroy paths as separate functions if that is found profitable by the optimisers. Once we have remade the connections to their correct positions, we elide the labels that the front end inserted.

[C++ coroutines 5] Standard library header.

This provides the interfaces mandated by the standard and implements the interaction with the coroutine frame by means of inline use of builtins expanded at compile-time. There should be a 1:1 correspondence with the standard sections, which are cross-referenced. There is no runtime content. At this stage, we have the content in an inline namespace "__n4835" for the CD we worked to.

[C++ coroutines 6] Testsuite.

There are two categories of test:
1. Checks for correctly formed source code and the error reporting.
2. Checks for transformation and code-gen.

The second set are run as 'torture' tests for the standard options set, including LTO. These are also intentionally run with no options provided (from the coroutines.exp script).

gcc/ChangeLog:

2020-01-18  Iain Sandoe  <iain@sandoe.co.uk>

    * Makefile.in: Add coroutine-passes.o. * builtin-types.def (BT_CONST_SIZE): New. (BT_FN_BOOL_PTR): New.
    (BT_FN_PTR_PTR_CONST_SIZE_BOOL): New. * builtins.def (DEF_COROUTINE_BUILTIN): New. * coroutine-builtins.def: New file. * coroutine-passes.cc: New file. * function.h (struct GTY function): Add a bit to indicate that the function is a coroutine component. * internal-fn.c (expand_CO_FRAME): New. (expand_CO_YIELD): New. (expand_CO_SUSPN): New. (expand_CO_ACTOR): New. * internal-fn.def (CO_ACTOR): New. (CO_YIELD): New. (CO_SUSPN): New. (CO_FRAME): New. * passes.def: Add pass_coroutine_lower_builtins, pass_coroutine_early_expand_ifns. * tree-pass.h (make_pass_coroutine_lower_builtins): New. (make_pass_coroutine_early_expand_ifns): New. * doc/invoke.texi: Document the fcoroutines command line switch.

gcc/c-family/ChangeLog:

2020-01-18  Iain Sandoe  <iain@sandoe.co.uk>

    * c-common.c (co_await, co_yield, co_return): New. * c-common.h (RID_CO_AWAIT, RID_CO_YIELD, RID_CO_RETURN): New enumeration values. (D_CXX_COROUTINES): Bit to identify coroutines are active. (D_CXX_COROUTINES_FLAGS): Guard for coroutine keywords. * c-cppbuiltin.c (__cpp_coroutines): New cpp define. * c.opt (fcoroutines): New command-line switch.

gcc/cp/ChangeLog:

2020-01-18  Iain Sandoe  <iain@sandoe.co.uk>

    * Make-lang.in: Add coroutines.o. * cp-tree.h (lang_decl-fn): coroutine_p, new bit. (DECL_COROUTINE_P): New. * lex.c (init_reswords): Enable keywords when the coroutine flag is set. * operators.def (co_await): New operator. * call.c (add_builtin_candidates): Handle CO_AWAIT_EXPR. (op_error): Likewise. (build_new_op_1): Likewise. (build_new_function_call): Validate coroutine builtin arguments. * constexpr.c (potential_constant_expression_1): Handle CO_AWAIT_EXPR, CO_YIELD_EXPR, CO_RETURN_EXPR. * coroutines.cc: New file. * cp-objcp-common.c (cp_common_init_ts): Add CO_AWAIT_EXPR, CO_YIELD_EXPR, CO_RETURN_EXPR as TS expressions. * cp-tree.def (CO_AWAIT_EXPR, CO_YIELD_EXPR, CO_RETURN_EXPR): New. * cp-tree.h (coro_validate_builtin_call): New. * decl.c (emit_coro_helper): New. (finish_function): Handle the case when a function is found to be a coroutine, perform the outlining and emit the outlined functions. Set a bit to signal that this is a coroutine component. * parser.c (enum required_token): New enumeration RT_CO_YIELD. (cp_parser_unary_expression): Handle co_await. (cp_parser_assignment_expression): Handle co_yield. (cp_parser_statement): Handle RID_CO_RETURN. (cp_parser_jump_statement): Handle co_return. (cp_parser_operator): Handle co_await operator. (cp_parser_yield_expression): New. (cp_parser_required_error): Handle RT_CO_YIELD. * pt.c (tsubst_copy): Handle CO_AWAIT_EXPR. (tsubst_expr): Handle CO_AWAIT_EXPR, CO_YIELD_EXPR and CO_RETURN_EXPRs. * tree.c (cp_walk_subtrees): Likewise.

libstdc++-v3/ChangeLog:

2020-01-18  Iain Sandoe  <iain@sandoe.co.uk>

    * include/Makefile.am: Add coroutine to the std set. * include/Makefile.in: Regenerated. * include/std/coroutine: New file.

gcc/testsuite/ChangeLog:

2020-01-18  Iain Sandoe  <iain@sandoe.co.uk>

    * g++.dg/coroutines/co-await-syntax-00-needs-expr.C: New test. * g++.dg/coroutines/co-await-syntax-01-outside-fn.C: New test. * g++.dg/coroutines/co-await-syntax-02-outside-fn.C: New test. * g++.dg/coroutines/co-await-syntax-03-auto.C: New test. * g++.dg/coroutines/co-await-syntax-04-ctor-dtor.C: New test. * g++.dg/coroutines/co-await-syntax-05-constexpr.C: New test. * g++.dg/coroutines/co-await-syntax-06-main.C: New test. * g++.dg/coroutines/co-await-syntax-07-varargs.C: New test. * g++.dg/coroutines/co-await-syntax-08-lambda-auto.C: New test.
* g++.dg/coroutines/co-return-syntax-01-outside-fn.C: New test. * g++.dg/coroutines/co-return-syntax-02-outside-fn.C: New test. * g++.dg/coroutines/co-return-syntax-03-auto.C: New test. * g++.dg/coroutines/co-return-syntax-04-ctor-dtor.C: New test. * g++.dg/coroutines/co-return-syntax-05-constexpr-fn.C: New test. * g++.dg/coroutines/co-return-syntax-06-main.C: New test. * g++.dg/coroutines/co-return-syntax-07-vararg.C: New test. * g++.dg/coroutines/co-return-syntax-08-bad-return.C: New test. * g++.dg/coroutines/co-return-syntax-09-lambda-auto.C: New test. * g++.dg/coroutines/co-yield-syntax-00-needs-expr.C: New test. * g++.dg/coroutines/co-yield-syntax-01-outside-fn.C: New test. * g++.dg/coroutines/co-yield-syntax-02-outside-fn.C: New test. * g++.dg/coroutines/co-yield-syntax-03-auto.C: New test. * g++.dg/coroutines/co-yield-syntax-04-ctor-dtor.C: New test. * g++.dg/coroutines/co-yield-syntax-05-constexpr.C: New test. * g++.dg/coroutines/co-yield-syntax-06-main.C: New test. * g++.dg/coroutines/co-yield-syntax-07-varargs.C: New test. * g++.dg/coroutines/co-yield-syntax-08-needs-expr.C: New test. * g++.dg/coroutines/co-yield-syntax-09-lambda-auto.C: New test. * g++.dg/coroutines/coro-builtins.C: New test. * g++.dg/coroutines/coro-missing-gro.C: New test. * g++.dg/coroutines/coro-missing-promise-yield.C: New test. * g++.dg/coroutines/coro-missing-ret-value.C: New test. * g++.dg/coroutines/coro-missing-ret-void.C: New test. * g++.dg/coroutines/coro-missing-ueh-1.C: New test. * g++.dg/coroutines/coro-missing-ueh-2.C: New test. * g++.dg/coroutines/coro-missing-ueh-3.C: New test. * g++.dg/coroutines/coro-missing-ueh.h: New test. * g++.dg/coroutines/coro-pre-proc.C: New test. * g++.dg/coroutines/coro.h: New file. * g++.dg/coroutines/coro1-ret-int-yield-int.h: New file. * g++.dg/coroutines/coroutines.exp: New file. * g++.dg/coroutines/torture/alloc-00-gro-on-alloc-fail.C: New test. * g++.dg/coroutines/torture/alloc-01-overload-newdel.C: New test. * g++.dg/coroutines/torture/call-00-co-aw-arg.C: New test. * g++.dg/coroutines/torture/call-01-multiple-co-aw.C: New test. * g++.dg/coroutines/torture/call-02-temp-co-aw.C: New test. * g++.dg/coroutines/torture/call-03-temp-ref-co-aw.C: New test. * g++.dg/coroutines/torture/class-00-co-ret.C: New test. * g++.dg/coroutines/torture/class-01-co-ret-parm.C: New test. * g++.dg/coroutines/torture/class-02-templ-parm.C: New test. * g++.dg/coroutines/torture/class-03-operator-templ-parm.C: New test. * g++.dg/coroutines/torture/class-04-lambda-1.C: New test. * g++.dg/coroutines/torture/class-05-lambda-capture-copy-local.C: New test. * g++.dg/coroutines/torture/class-06-lambda-capture-ref.C: New test. * g++.dg/coroutines/torture/co-await-00-trivial.C: New test. * g++.dg/coroutines/torture/co-await-01-with-value.C: New test. * g++.dg/coroutines/torture/co-await-02-xform.C: New test. * g++.dg/coroutines/torture/co-await-03-rhs-op.C: New test. * g++.dg/coroutines/torture/co-await-04-control-flow.C: New test. * g++.dg/coroutines/torture/co-await-05-loop.C: New test. * g++.dg/coroutines/torture/co-await-06-ovl.C: New test. * g++.dg/coroutines/torture/co-await-07-tmpl.C: New test. * g++.dg/coroutines/torture/co-await-08-cascade.C: New test. * g++.dg/coroutines/torture/co-await-09-pair.C: New test. * g++.dg/coroutines/torture/co-await-10-template-fn-arg.C: New test. * g++.dg/coroutines/torture/co-await-11-forwarding.C: New test. * g++.dg/coroutines/torture/co-await-12-operator-2.C: New test. * g++.dg/coroutines/torture/co-await-13-return-ref.C: New test. 
* g++.dg/coroutines/torture/co-ret-00-void-return-is-ready.C: New test. * g++.dg/coroutines/torture/co-ret-01-void-return-is-suspend.C: New test. * g++.dg/coroutines/torture/co-ret-03-different-GRO-type.C: New test. * g++.dg/coroutines/torture/co-ret-04-GRO-nontriv.C: New test. * g++.dg/coroutines/torture/co-ret-05-return-value.C: New test. * g++.dg/coroutines/torture/co-ret-06-template-promise-val-1.C: New test. * g++.dg/coroutines/torture/co-ret-07-void-cast-expr.C: New test. * g++.dg/coroutines/torture/co-ret-08-template-cast-ret.C: New test. * g++.dg/coroutines/torture/co-ret-09-bool-await-susp.C: New test. * g++.dg/coroutines/torture/co-ret-10-expression-evaluates-once.C: New test. * g++.dg/coroutines/torture/co-ret-11-co-ret-co-await.C: New test. * g++.dg/coroutines/torture/co-ret-12-co-ret-fun-co-await.C: New test. * g++.dg/coroutines/torture/co-ret-13-template-2.C: New test. * g++.dg/coroutines/torture/co-ret-14-template-3.C: New test. * g++.dg/coroutines/torture/co-yield-00-triv.C: New test. * g++.dg/coroutines/torture/co-yield-01-multi.C: New test. * g++.dg/coroutines/torture/co-yield-02-loop.C: New test. * g++.dg/coroutines/torture/co-yield-03-tmpl.C: New test. * g++.dg/coroutines/torture/co-yield-04-complex-local-state.C: New test. * g++.dg/coroutines/torture/co-yield-05-co-aw.C: New test. * g++.dg/coroutines/torture/co-yield-06-fun-parm.C: New test. * g++.dg/coroutines/torture/co-yield-07-template-fn-param.C: New test. * g++.dg/coroutines/torture/co-yield-08-more-refs.C: New test. * g++.dg/coroutines/torture/co-yield-09-more-templ-refs.C: New test. * g++.dg/coroutines/torture/coro-torture.exp: New file. * g++.dg/coroutines/torture/exceptions-test-0.C: New test. * g++.dg/coroutines/torture/func-params-00.C: New test. * g++.dg/coroutines/torture/func-params-01.C: New test. * g++.dg/coroutines/torture/func-params-02.C: New test. * g++.dg/coroutines/torture/func-params-03.C: New test. * g++.dg/coroutines/torture/func-params-04.C: New test. * g++.dg/coroutines/torture/func-params-05.C: New test. * g++.dg/coroutines/torture/func-params-06.C: New test. * g++.dg/coroutines/torture/lambda-00-co-ret.C: New test. * g++.dg/coroutines/torture/lambda-01-co-ret-parm.C: New test. * g++.dg/coroutines/torture/lambda-02-co-yield-values.C: New test. * g++.dg/coroutines/torture/lambda-03-auto-parm-1.C: New test. * g++.dg/coroutines/torture/lambda-04-templ-parm.C: New test. * g++.dg/coroutines/torture/lambda-05-capture-copy-local.C: New test. * g++.dg/coroutines/torture/lambda-06-multi-capture.C: New test. * g++.dg/coroutines/torture/lambda-07-multi-yield.C: New test. * g++.dg/coroutines/torture/lambda-08-co-ret-parm-ref.C: New test. * g++.dg/coroutines/torture/local-var-0.C: New test. * g++.dg/coroutines/torture/local-var-1.C: New test. * g++.dg/coroutines/torture/local-var-2.C: New test. * g++.dg/coroutines/torture/local-var-3.C: New test. * g++.dg/coroutines/torture/local-var-4.C: New test. * g++.dg/coroutines/torture/mid-suspend-destruction-0.C: New test. * g++.dg/coroutines/torture/pr92933.C: New test.
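As a usage illustration (hypothetical example, not part of the patch): with -std=c++2a -fcoroutines, merely using a coroutine keyword turns a function into a coroutine, provided its return type supplies the promise_type interface described above; the front end then outlines it into the ramp, actor and destroy-stub pieces.

    #include <coroutine>

    // Hypothetical minimal coroutine return type; the nested
    // promise_type provides the methods mandated by n4835.
    struct task {
      struct promise_type {
        task get_return_object () { return {}; }
        std::suspend_never initial_suspend () { return {}; }
        std::suspend_never final_suspend () noexcept { return {}; }
        void return_void () {}
        void unhandled_exception () {}
      };
    };

    // The co_return keyword alone makes this a coroutine; it is only
    // discovered to be one at finish_function () time.
    task
    trivial ()
    {
      co_return;
    }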
2020-01-01 | Update copyright years. | Jakub Jelinek | 1 | -1/+1
From-SVN: r279813
2019-11-19 | re PR middle-end/91450 (__builtin_mul_overflow(A,B,R) wrong code if product < 0, *R is unsigned, and !(A&B)) | Jakub Jelinek | 1 | -11/+16
    PR middle-end/91450
    * internal-fn.c (expand_mul_overflow): For s1 * s2 -> ur, if one
    operand is negative and one non-negative, compare the non-negative
    one against 0 rather than comparing s1 & s2 against 0.  Otherwise,
    don't compare (s1 & s2) == 0, but compare separately both s1 == 0
    and s2 == 0, unless one of them is known to be negative.  Remove
    tem2 variable, use tem where tem2 has been used before.

    * gcc.c-torture/execute/pr91450-1.c: New test.
    * gcc.c-torture/execute/pr91450-2.c: New test.

From-SVN: r278437
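A plausible trigger shape (hypothetical values; the committed testcases are pr91450-1.c and pr91450-2.c): with one negative and one non-negative operand, s1 & s2 can be zero even though the product is negative, so using that AND in the zero test gave wrong code:

    // Hypothetical illustration: a = -2, b = 1 gives (a & b) == 0 even
    // though the product -2 cannot be represented in *r, so overflow
    // must be reported.
    int
    mul_s_s_to_u (long long a, long long b, unsigned *r)
    {
      return __builtin_mul_overflow (a, b, r);
    }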
2019-11-18 | Add optabs for accelerating RAW and WAR alias checks | Richard Sandiford | 1 | -0/+23
This patch adds optabs that check whether a read followed by a write or a write followed by a read can be divided into interleaved byte accesses without changing the dependencies between the bytes. This is one of the uses of the SVE2 WHILERW and WHILEWR instructions. (The instructions can also be used to limit the VF at runtime, but that's future work.)

2019-11-18  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * doc/sourcebuild.texi (vect_check_ptrs): Document.
    * optabs.def (check_raw_ptrs_optab, check_war_ptrs_optab): New
    optabs.
    * doc/md.texi: Document them.
    * internal-fn.def (IFN_CHECK_RAW_PTRS, IFN_CHECK_WAR_PTRS): New
    internal functions.
    * internal-fn.h (internal_check_ptrs_fn_supported_p): Declare.
    * internal-fn.c (check_ptrs_direct): New macro.
    (expand_check_ptrs_optab_fn): Likewise.
    (direct_check_ptrs_optab_supported_p): Likewise.
    (internal_check_ptrs_fn_supported_p): New function.
    * tree-data-ref.c: Include internal-fn.h.
    (create_ifn_alias_checks): New function.
    (create_intersect_range_checks): Use it.
    * config/aarch64/iterators.md (SVE2_WHILE_PTR): New int iterator.
    (optab, cmp_op): Handle it.
    (raw_war, unspec): New int attributes.
    * config/aarch64/aarch64.md (UNSPEC_WHILERW, UNSPEC_WHILE_WR): New
    constants.
    * config/aarch64/predicates.md (aarch64_bytes_per_sve_vector_operand):
    New predicate.
    * config/aarch64/aarch64-sve2.md (check_<raw_war>_ptrs<mode>): New
    expander.
    (@aarch64_sve2_while<cmp_op><GPI:mode><PRED_ALL:mode>_ptest): New
    pattern.

gcc/testsuite/
    * lib/target-supports.exp (check_effective_target_vect_check_ptrs):
    New procedure.
    * gcc.dg/vect/vect-alias-check-14.c: Expect IFN_CHECK_WAR to be
    used, if available.
    * gcc.dg/vect/vect-alias-check-15.c: Likewise.
    * gcc.dg/vect/vect-alias-check-16.c: Likewise IFN_CHECK_RAW.
    * gcc.target/aarch64/sve2/whilerw_1.c: New test.
    * gcc.target/aarch64/sve2/whilewr_1.c: Likewise.
    * gcc.target/aarch64/sve2/whilewr_2.c: Likewise.

From-SVN: r278414
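A hypothetical loop needing a runtime alias check: b is read and a is written, so the vectoriser can now emit an IFN_CHECK_WAR_PTRS check (SVE2 WHILEWR), which asks whether the accesses can be interleaved bytewise without changing the dependencies, rather than a plain address-range overlap test:

    // Hypothetical example: if a and b may alias, the loop is versioned
    // on a runtime check; the WAR-pointer check can accept some
    // overlapping cases that a range-overlap test would reject.
    void
    scale (int *a, int *b, int n)
    {
      for (int i = 0; i < n; i++)
        a[i] = b[i] * 2;
    }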
2019-11-08 | Generalise gather and scatter optabs | Richard Sandiford | 1 | -19/+22
The gather and scatter optabs required the vector offset to be the integer equivalent of the vector mode being loaded or stored. This patch generalises them so that the two vectors can have different element sizes, although they still need to have the same number of elements.

One consequence of this is that it's possible (if unlikely) for two IFN_GATHER_LOADs to have the same arguments but different return types. E.g. the same scalar base and vector of 32-bit offsets could be used to load 8-bit elements and to load 16-bit elements. From just looking at the arguments, we could wrongly deduce that they're equivalent. I know we saw this happen at one point with IFN_WHILE_ULT, and we dealt with it there by passing a zero of the return type as an extra argument. Doing the same here also makes the load and store functions have the same argument assignment.

For now this patch should be a no-op, but later SVE patches take advantage of the new flexibility.

2019-11-08  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * optabs.def (gather_load_optab, mask_gather_load_optab)
    (scatter_store_optab, mask_scatter_store_optab): Turn into
    conversion optabs, with the offset mode given explicitly.
    * doc/md.texi: Update accordingly.
    * config/aarch64/aarch64-sve-builtins-base.cc
    (svld1_gather_impl::expand): Likewise.
    (svst1_scatter_impl::expand): Likewise.
    * internal-fn.c (gather_load_direct, scatter_store_direct): Likewise.
    (expand_scatter_store_optab_fn): Likewise.
    (direct_gather_load_optab_supported_p): Likewise.
    (direct_scatter_store_optab_supported_p): Likewise.
    (expand_gather_load_optab_fn): Likewise.  Expect the mask argument
    to be argument 4.
    (internal_fn_mask_index): Return 4 for IFN_MASK_GATHER_LOAD.
    (internal_gather_scatter_fn_supported_p): Replace the offset sign
    argument with the offset vector type.  Require the two vector types
    to have the same number of elements but allow their element sizes
    to be different.  Treat the optabs as conversion optabs.
    * internal-fn.h (internal_gather_scatter_fn_supported_p): Update
    prototype accordingly.
    * optabs-query.c (supports_at_least_one_mode_p): Replace with...
    (supports_vec_convert_optab_p): ...this new function.
    (supports_vec_gather_load_p): Update accordingly.
    (supports_vec_scatter_store_p): Likewise.
    * tree-vectorizer.h (vect_gather_scatter_fn_p): Take a vec_info.
    Replace the offset sign and bits parameters with a scalar type tree.
    * tree-vect-data-refs.c (vect_gather_scatter_fn_p): Likewise.  Pass
    back the offset vector type instead of the scalar element type.
    Allow the offset to be wider than the memory elements.  Search for
    an offset type that the target supports, stopping once we've
    reached the maximum of the element size and pointer size.  Update
    call to internal_gather_scatter_fn_supported_p.
    (vect_check_gather_scatter): Update calls accordingly.  When
    testing a new scale before knowing the final offset type, check
    whether the scale is supported for any signed or unsigned offset
    type.  Check whether the target supports the source and target
    types of a conversion before deciding whether to look through the
    conversion.  Record the chosen offset_vectype.
    * tree-vect-patterns.c (vect_get_gather_scatter_offset_type): Delete.
    (vect_recog_gather_scatter_pattern): Get the scalar offset type
    directly from the gs_info's offset_vectype instead.  Pass a zero of
    the result type to IFN_GATHER_LOAD and IFN_MASK_GATHER_LOAD.
    * tree-vect-stmts.c (check_load_store_masking): Update call to
    internal_gather_scatter_fn_supported_p, passing the offset vector
    type recorded in the gs_info.
    (vect_truncate_gather_scatter_offset): Update call to
    vect_check_gather_scatter, leaving it to search for a valid offset
    vector type.
    (vect_use_strided_gather_scatters_p): Convert the offset to the
    element type of the gs_info's offset_vectype.
    (vect_get_gather_scatter_ops): Get the offset vector type directly
    from the gs_info.
    (vect_get_strided_load_store_ops): Likewise.
    (vectorizable_load): Pass a zero of the result type to
    IFN_GATHER_LOAD and IFN_MASK_GATHER_LOAD.
    * config/aarch64/aarch64-sve.md (gather_load<mode>): Rename to...
    (gather_load<mode><v_int_equiv>): ...this.
    (mask_gather_load<mode>): Rename to...
    (mask_gather_load<mode><v_int_equiv>): ...this.
    (scatter_store<mode>): Rename to...
    (scatter_store<mode><v_int_equiv>): ...this.
    (mask_scatter_store<mode>): Rename to...
    (mask_scatter_store<mode><v_int_equiv>): ...this.

From-SVN: r277949
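A hypothetical example of the new flexibility: 16-bit elements gathered with 32-bit offsets, so the offset vector and the data vector share a lane count but not an element size:

    // Hypothetical gather loop: before this change the offset vector
    // had to be the integer equivalent of the data vector's mode; now
    // only the number of elements must match.
    void
    gather16 (short *__restrict dst, const short *base,
              const int *__restrict idx, int n)
    {
      for (int i = 0; i < n; i++)
        dst[i] = base[idx[i]];
    }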
2019-09-12 | Vectorise multiply high with scaling operations (PR 89386) | Yuliang Wang | 1 | -0/+2
2019-09-12  Yuliang Wang  <yuliang.wang@arm.com>

gcc/
    PR tree-optimization/89386
    * config/aarch64/aarch64-sve2.md (<su>mull<bt><Vwide>)
    (<r>shrnb<mode>, <r>shrnt<mode>): New SVE2 patterns.
    (<su>mulh<r>s<mode>3): New pattern for MULHRS.
    * config/aarch64/iterators.md (UNSPEC_SMULLB, UNSPEC_SMULLT)
    (UNSPEC_UMULLB, UNSPEC_UMULLT, UNSPEC_SHRNB, UNSPEC_SHRNT)
    (UNSPEC_RSHRNB, UNSPEC_RSHRNT, UNSPEC_SMULHS, UNSPEC_SMULHRS)
    (UNSPEC_UMULHS, UNSPEC_UMULHRS): New unspecs.
    (MULLBT, SHRNB, SHRNT, MULHRS): New int iterators.
    (su, r): Handle the unspecs above.
    (bt): New int attribute.
    * internal-fn.def (IFN_MULHS, IFN_MULHRS): New internal functions.
    * internal-fn.c (first_commutative_argument): Commutativity info
    for above.
    * optabs.def (smulhs_optab, smulhrs_optab, umulhs_optab)
    (umulhrs_optab): New optabs.
    * doc/md.texi (smulhs@var{m3}, umulhs@var{m3})
    (smulhrs@var{m3}, umulhrs@var{m3}): Documentation for the above.
    * tree-vect-patterns.c (vect_recog_mulhs_pattern): New pattern
    function.
    (vect_vect_recog_func_ptrs): Add it.
    * testsuite/gcc.target/aarch64/sve2/mulhrs_1.c: New test.
    * testsuite/gcc.dg/vect/vect-mulhrs-1.c: As above.
    * testsuite/gcc.dg/vect/vect-mulhrs-2.c: As above.
    * testsuite/gcc.dg/vect/vect-mulhrs-3.c: As above.
    * testsuite/gcc.dg/vect/vect-mulhrs-4.c: As above.
    * doc/sourcebuild.texi (vect_mulhrs_hi): Document new target
    selector.
    * testsuite/lib/target-supports.exp
    (check_effective_target_vect_mulhrs_hi): Return true for AArch64
    with SVE2.

From-SVN: r275682
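The recognized shape is fixed-point scaling; a hypothetical Q15-style loop that vect_recog_mulhs_pattern can rewrite to the rounding high-half multiply (IFN_MULHRS) is:

    // Hypothetical example: widening multiply, then a rounding shift
    // that keeps the high half - the MULHRS shape, which SVE2 can issue
    // directly instead of a full widening-multiply sequence.
    void
    q15_mul (short *__restrict r, const short *__restrict a,
             const short *__restrict b, int n)
    {
      for (int i = 0; i < n; i++)
        r[i] = ((((int) a[i] * b[i]) >> 14) + 1) >> 1;
    }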
2019-08-15 | Add support for conditional shifts | Richard Sandiford | 1 | -1/+3
This patch adds support for IFN_COND shifts left and shifts right. This is mostly mechanical, but since we try to handle conditional operations in the same way as unconditional operations in match.pd, we need to support IFN_COND shifts by scalars as well as vectors. E.g.:

    IFN_COND_SHL (cond, a, { 1, 1, ... }, fallback)

and:

    IFN_COND_SHL (cond, a, 1, fallback)

are the same operation, with:

    (for shiftrotate (lrotate rrotate lshift rshift)
     ...
     /* Prefer vector1 << scalar to vector1 << vector2
        if vector2 is uniform.  */
     (for vec (VECTOR_CST CONSTRUCTOR)
      (simplify
       (shiftrotate @0 vec@1)
       (with { tree tem = uniform_vector_p (@1); }
        (if (tem)
         (shiftrotate @0 { tem; }))))))

preferring the latter. The patch copes with this by extending create_convert_operand_from to handle scalar-to-vector conversions.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>
            Prathamesh Kulkarni  <prathamesh.kulkarni@linaro.org>

gcc/
    * internal-fn.def (IFN_COND_SHL, IFN_COND_SHR): New internal
    functions.
    * internal-fn.c (FOR_EACH_CODE_MAPPING): Handle shifts.
    * match.pd (UNCOND_BINARY, COND_BINARY): Likewise.
    * optabs.def (cond_ashl_optab, cond_ashr_optab, cond_lshr_optab):
    New optabs.
    * optabs.h (create_convert_operand_from): Expand comment.
    * optabs.c (maybe_legitimize_operand): Allow implicit broadcasts
    when mapping scalar rtxes to vector operands.
    * config/aarch64/iterators.md (SVE_INT_BINARY): Add ashift,
    ashiftrt and lshiftrt.
    (sve_int_op, sve_int_op_rev, sve_pred_int_rhs2_operand): Handle them.
    * config/aarch64/aarch64-sve.md (*cond_<optab><mode>_2_const)
    (*cond_<optab><mode>_any_const): New patterns.

gcc/testsuite/
    * gcc.target/aarch64/sve/cond_shift_1.c: New test.
    * gcc.target/aarch64/sve/cond_shift_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_4_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_5.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_5_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_6.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_6_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_7.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_7_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_8.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_8_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_9.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_9_run.c: Likewise.

Co-Authored-By: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>

From-SVN: r274505
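A hypothetical if-converted loop that maps to IFN_COND_SHL with a scalar (uniform) shift amount, the unmodified a[i] acting as the fallback value for inactive lanes:

    // Hypothetical example: lanes where b[i] > 0 is false keep their
    // old value, which is exactly the fallback operand of the
    // conditional shift internal function.
    void
    cond_shl (int *__restrict a, const int *__restrict b, int n)
    {
      for (int i = 0; i < n; i++)
        if (b[i] > 0)
          a[i] <<= 2;
    }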
2019-07-09 | PR c++/61339 - add mismatch between struct and class [-Wmismatched-tags] to non-bugs | Martin Sebor | 1 | -17/+17
gcc/c/ChangeLog:

    PR c++/61339
    * c-decl.c (xref_tag): Change class-key of PODs to struct and others to class. (field_decl_cmp): Same. * c-parser.c (c_parser_struct_or_union_specifier): Same. * c-tree.h: Same. * gimple-parser.c (c_parser_gimple_compound_statement): Same.

gcc/c-family/ChangeLog:

    PR c++/61339
    * c-opts.c (handle_deferred_opts): Change class-key of PODs to struct and others to class. * c-pretty-print.h: Same.

gcc/cp/ChangeLog:

    PR c++/61339
    * cp-tree.h: Change class-key of PODs to struct and others to class. * search.c: Same. * semantics.c (finalize_nrv_r): Same.

gcc/lto/ChangeLog:

    PR c++/61339
    * lto-common.c (lto_splay_tree_new): Change class-key of PODs to struct and others to class. (mentions_vars_p): Same. (register_resolution): Same. (lto_register_var_decl_in_symtab): Same. (lto_register_function_decl_in_symtab): Same. (cmp_tree): Same. (lto_read_decls): Same.

gcc/ChangeLog:

    PR c++/61339
    * auto-profile.c: Change class-key of PODs to struct and others to class. * basic-block.h: Same. * bitmap.c (bitmap_alloc): Same. * bitmap.h: Same. * builtins.c (expand_builtin_prefetch): Same. (expand_builtin_interclass_mathfn): Same. (expand_builtin_strlen): Same. (expand_builtin_mempcpy_args): Same. (expand_cmpstr): Same. (expand_builtin___clear_cache): Same. (expand_ifn_atomic_bit_test_and): Same. (expand_builtin_thread_pointer): Same. (expand_builtin_set_thread_pointer): Same. * caller-save.c (setup_save_areas): Same. (replace_reg_with_saved_mem): Same. (insert_restore): Same. (insert_save): Same. (add_used_regs): Same. * cfg.c (get_bb_copy): Same. (set_loop_copy): Same. * cfg.h: Same. * cfganal.h: Same. * cfgexpand.c (alloc_stack_frame_space): Same. (add_stack_var): Same. (add_stack_var_conflict): Same. (add_scope_conflicts_1): Same. (update_alias_info_with_stack_vars): Same. (expand_used_vars): Same. * cfghooks.c (redirect_edge_and_branch_force): Same. (delete_basic_block): Same. (split_edge): Same. (make_forwarder_block): Same. (force_nonfallthru): Same. (duplicate_block): Same. (lv_flush_pending_stmts): Same. * cfghooks.h: Same. * cfgloop.c (flow_loops_cfg_dump): Same. (flow_loop_nested_p): Same. (superloop_at_depth): Same. (get_loop_latch_edges): Same. (flow_loop_dump): Same. (flow_loops_dump): Same. (flow_loops_free): Same. (flow_loop_nodes_find): Same. (establish_preds): Same. (flow_loop_tree_node_add): Same. (flow_loop_tree_node_remove): Same. (flow_loops_find): Same. (find_subloop_latch_edge_by_profile): Same. (find_subloop_latch_edge_by_ivs): Same. (mfb_redirect_edges_in_set): Same. (form_subloop): Same. (merge_latch_edges): Same. (disambiguate_multiple_latches): Same. (disambiguate_loops_with_multiple_latches): Same. (flow_bb_inside_loop_p): Same. (glb_enum_p): Same. (get_loop_body_with_size): Same. (get_loop_body): Same. (fill_sons_in_loop): Same. (get_loop_body_in_dom_order): Same. (get_loop_body_in_custom_order): Same. (release_recorded_exits): Same. (get_loop_exit_edges): Same. (num_loop_branches): Same. (remove_bb_from_loops): Same. (find_common_loop): Same. (delete_loop): Same. (cancel_loop): Same. (verify_loop_structure): Same. (loop_preheader_edge): Same. (loop_exit_edge_p): Same. (single_exit): Same. (loop_exits_to_bb_p): Same. (loop_exits_from_bb_p): Same. (get_loop_location): Same. (record_niter_bound): Same. (get_estimated_loop_iterations_int): Same. (max_stmt_executions_int): Same. (likely_max_stmt_executions_int): Same. (get_estimated_loop_iterations): Same. (get_max_loop_iterations): Same. (get_max_loop_iterations_int): Same.
(get_likely_max_loop_iterations): Same. * cfgloop.h (simple_loop_desc): Same. (get_loop): Same. (loop_depth): Same. (loop_outer): Same. (loop_iterator::next): Same. (loop_outermost): Same. * cfgloopanal.c (mark_irreducible_loops): Same. (num_loop_insns): Same. (average_num_loop_insns): Same. (expected_loop_iterations_unbounded): Same. (expected_loop_iterations): Same. (mark_loop_exit_edges): Same. (single_likely_exit): Same. * cfgloopmanip.c (fix_bb_placement): Same. (fix_bb_placements): Same. (remove_path): Same. (place_new_loop): Same. (add_loop): Same. (scale_loop_frequencies): Same. (scale_loop_profile): Same. (create_empty_if_region_on_edge): Same. (create_empty_loop_on_edge): Same. (loopify): Same. (unloop): Same. (fix_loop_placements): Same. (copy_loop_info): Same. (duplicate_loop): Same. (duplicate_subloops): Same. (loop_redirect_edge): Same. (can_duplicate_loop_p): Same. (duplicate_loop_to_header_edge): Same. (mfb_keep_just): Same. (has_preds_from_loop): Same. (create_preheader): Same. (create_preheaders): Same. (lv_adjust_loop_entry_edge): Same. (loop_version): Same. * cfgloopmanip.h: Same. * cgraph.h: Same. * cgraphbuild.c: Same. * combine.c (make_extraction): Same. * config/i386/i386-features.c: Same. * config/i386/i386-features.h: Same. * config/i386/i386.c (ix86_emit_outlined_ms2sysv_save): Same. (ix86_emit_outlined_ms2sysv_restore): Same. (ix86_noce_conversion_profitable_p): Same. (ix86_init_cost): Same. (ix86_simd_clone_usable): Same. * configure.ac: Same. * coretypes.h: Same. * data-streamer-in.c (string_for_index): Same. (streamer_read_indexed_string): Same. (streamer_read_string): Same. (bp_unpack_indexed_string): Same. (bp_unpack_string): Same. (streamer_read_uhwi): Same. (streamer_read_hwi): Same. (streamer_read_gcov_count): Same. (streamer_read_wide_int): Same. * data-streamer.h (streamer_write_bitpack): Same. (bp_unpack_value): Same. (streamer_write_char_stream): Same. (streamer_write_hwi_in_range): Same. (streamer_write_record_start): Same. * ddg.c (create_ddg_dep_from_intra_loop_link): Same. (add_cross_iteration_register_deps): Same. (build_intra_loop_deps): Same. * df-core.c (df_analyze): Same. (loop_post_order_compute): Same. (loop_inverted_post_order_compute): Same. * df-problems.c (df_rd_alloc): Same. (df_rd_simulate_one_insn): Same. (df_rd_local_compute): Same. (df_rd_init_solution): Same. (df_rd_confluence_n): Same. (df_rd_transfer_function): Same. (df_rd_free): Same. (df_rd_dump_defs_set): Same. (df_rd_top_dump): Same. (df_lr_alloc): Same. (df_lr_reset): Same. (df_lr_local_compute): Same. (df_lr_init): Same. (df_lr_confluence_n): Same. (df_lr_free): Same. (df_lr_top_dump): Same. (df_lr_verify_transfer_functions): Same. (df_live_alloc): Same. (df_live_reset): Same. (df_live_init): Same. (df_live_confluence_n): Same. (df_live_finalize): Same. (df_live_free): Same. (df_live_top_dump): Same. (df_live_verify_transfer_functions): Same. (df_mir_alloc): Same. (df_mir_reset): Same. (df_mir_init): Same. (df_mir_confluence_n): Same. (df_mir_free): Same. (df_mir_top_dump): Same. (df_word_lr_alloc): Same. (df_word_lr_reset): Same. (df_word_lr_init): Same. (df_word_lr_confluence_n): Same. (df_word_lr_free): Same. (df_word_lr_top_dump): Same. (df_md_alloc): Same. (df_md_simulate_one_insn): Same. (df_md_reset): Same. (df_md_init): Same. (df_md_free): Same. (df_md_top_dump): Same. * df-scan.c (df_insn_delete): Same. (df_insn_rescan): Same. (df_notes_rescan): Same. (df_sort_and_compress_mws): Same. (df_install_mws): Same. (df_refs_add_to_chains): Same. 
(df_ref_create_structure): Same. (df_ref_record): Same. (df_def_record_1): Same. (df_find_hard_reg_defs): Same. (df_uses_record): Same. (df_get_conditional_uses): Same. (df_get_call_refs): Same. (df_recompute_luids): Same. (df_get_entry_block_def_set): Same. (df_entry_block_defs_collect): Same. (df_get_exit_block_use_set): Same. (df_exit_block_uses_collect): Same. (df_mws_verify): Same. (df_bb_verify): Same. * df.h (df_scan_get_bb_info): Same. * doc/tm.texi: Same. * dse.c (record_store): Same. * dumpfile.h: Same. * emit-rtl.c (const_fixed_hasher::equal): Same. (set_mem_attributes_minus_bitpos): Same. (change_address): Same. (adjust_address_1): Same. (offset_address): Same. * emit-rtl.h: Same. * except.c (dw2_build_landing_pads): Same. (sjlj_emit_dispatch_table): Same. * explow.c (allocate_dynamic_stack_space): Same. (emit_stack_probe): Same. (probe_stack_range): Same. * expmed.c (store_bit_field_using_insv): Same. (store_bit_field_1): Same. (store_integral_bit_field): Same. (extract_bit_field_using_extv): Same. (extract_bit_field_1): Same. (emit_cstore): Same. * expr.c (emit_block_move_via_cpymem): Same. (expand_cmpstrn_or_cmpmem): Same. (set_storage_via_setmem): Same. (emit_single_push_insn_1): Same. (expand_assignment): Same. (store_constructor): Same. (expand_expr_real_2): Same. (expand_expr_real_1): Same. (try_casesi): Same. * flags.h: Same. * function.c (try_fit_stack_local): Same. (assign_stack_local_1): Same. (assign_stack_local): Same. (cut_slot_from_list): Same. (insert_slot_to_list): Same. (max_slot_level): Same. (move_slot_to_level): Same. (temp_address_hasher::equal): Same. (remove_unused_temp_slot_addresses): Same. (assign_temp): Same. (combine_temp_slots): Same. (update_temp_slot_address): Same. (preserve_temp_slots): Same. * function.h: Same. * fwprop.c: Same. * gcc-rich-location.h: Same. * gcov.c: Same. * genattrtab.c (check_attr_test): Same. (check_attr_value): Same. (convert_set_attr_alternative): Same. (convert_set_attr): Same. (check_defs): Same. (copy_boolean): Same. (get_attr_value): Same. (expand_delays): Same. (make_length_attrs): Same. (min_fn): Same. (make_alternative_compare): Same. (simplify_test_exp): Same. (tests_attr_p): Same. (get_attr_order): Same. (clear_struct_flag): Same. (gen_attr): Same. (compares_alternatives_p): Same. (gen_insn): Same. (gen_delay): Same. (find_attrs_to_cache): Same. (write_test_expr): Same. (walk_attr_value): Same. (write_attr_get): Same. (eliminate_known_true): Same. (write_insn_cases): Same. (write_attr_case): Same. (write_attr_valueq): Same. (write_attr_value): Same. (write_dummy_eligible_delay): Same. (next_comma_elt): Same. (find_attr): Same. (make_internal_attr): Same. (copy_rtx_unchanging): Same. (gen_insn_reserv): Same. (check_tune_attr): Same. (make_automaton_attrs): Same. (handle_arg): Same. * genextract.c (gen_insn): Same. (VEC_char_to_string): Same. * genmatch.c (print_operand): Same. (lower): Same. (parser::parse_operation): Same. (parser::parse_capture): Same. (parser::parse_c_expr): Same. (parser::parse_simplify): Same. (main): Same. * genoutput.c (output_operand_data): Same. (output_get_insn_name): Same. (compare_operands): Same. (place_operands): Same. (process_template): Same. (validate_insn_alternatives): Same. (validate_insn_operands): Same. (gen_expand): Same. (note_constraint): Same. * genpreds.c (write_one_predicate_function): Same. (add_constraint): Same. (process_define_register_constraint): Same. (write_lookup_constraint_1): Same. (write_lookup_constraint_array): Same. (write_insn_constraint_len): Same. 
(write_reg_class_for_constraint_1): Same. (write_constraint_satisfied_p_array): Same. * genrecog.c (optimize_subroutine_group): Same. * gensupport.c (process_define_predicate): Same. (queue_pattern): Same. (remove_from_queue): Same. (process_rtx): Same. (is_predicable): Same. (change_subst_attribute): Same. (subst_pattern_match): Same. (alter_constraints): Same. (alter_attrs_for_insn): Same. (shift_output_template): Same. (alter_output_for_subst_insn): Same. (process_one_cond_exec): Same. (subst_dup): Same. (process_define_cond_exec): Same. (mnemonic_htab_callback): Same. (gen_mnemonic_attr): Same. (read_md_rtx): Same. * ggc-page.c: Same. * gimple-loop-interchange.cc (dump_reduction): Same. (dump_induction): Same. (loop_cand::~loop_cand): Same. (free_data_refs_with_aux): Same. (tree_loop_interchange::interchange_loops): Same. (tree_loop_interchange::map_inductions_to_loop): Same. (tree_loop_interchange::move_code_to_inner_loop): Same. (compute_access_stride): Same. (compute_access_strides): Same. (proper_loop_form_for_interchange): Same. (tree_loop_interchange_compute_ddrs): Same. (prune_datarefs_not_in_loop): Same. (prepare_data_references): Same. (pass_linterchange::execute): Same. * gimple-loop-jam.c (bb_prevents_fusion_p): Same. (unroll_jam_possible_p): Same. (fuse_loops): Same. (adjust_unroll_factor): Same. (tree_loop_unroll_and_jam): Same. * gimple-loop-versioning.cc (loop_versioning::~loop_versioning): Same. (loop_versioning::expensive_stmt_p): Same. (loop_versioning::version_for_unity): Same. (loop_versioning::dump_inner_likelihood): Same. (loop_versioning::find_per_loop_multiplication): Same. (loop_versioning::analyze_term_using_scevs): Same. (loop_versioning::record_address_fragment): Same. (loop_versioning::analyze_expr): Same. (loop_versioning::analyze_blocks): Same. (loop_versioning::prune_conditions): Same. (loop_versioning::merge_loop_info): Same. (loop_versioning::add_loop_to_queue): Same. (loop_versioning::decide_whether_loop_is_versionable): Same. (loop_versioning::make_versioning_decisions): Same. (loop_versioning::implement_versioning_decisions): Same. * gimple-ssa-evrp-analyze.c (evrp_range_analyzer::record_ranges_from_phis): Same. * gimple-ssa-store-merging.c (split_store::split_store): Same. (count_multiple_uses): Same. (split_group): Same. (imm_store_chain_info::output_merged_store): Same. (pass_store_merging::process_store): Same. * gimple-ssa-strength-reduction.c (slsr_process_phi): Same. * gimple-ssa-warn-alloca.c (adjusted_warn_limit): Same. (is_max): Same. (alloca_call_type): Same. (pass_walloca::execute): Same. * gimple-streamer-in.c (input_phi): Same. (input_gimple_stmt): Same. * gimple-streamer.h: Same. * godump.c (go_force_record_alignment): Same. (go_format_type): Same. (go_output_type): Same. (go_output_fndecl): Same. (go_output_typedef): Same. (keyword_hash_init): Same. (find_dummy_types): Same. * graph.c (draw_cfg_nodes_no_loops): Same. (draw_cfg_nodes_for_loop): Same. * hard-reg-set.h (hard_reg_set_iter_next): Same. * hsa-brig.c: Same. * hsa-common.h (hsa_internal_fn_hasher::equal): Same. * hsa-dump.c (dump_hsa_cfun): Same. * hsa-gen.c (gen_function_def_parameters): Same. * hsa-regalloc.c (dump_hsa_cfun_regalloc): Same. * input.c (dump_line_table_statistics): Same. (test_lexer): Same. * input.h: Same. * internal-fn.c (get_multi_vector_move): Same. (expand_load_lanes_optab_fn): Same. (expand_GOMP_SIMT_ENTER_ALLOC): Same. (expand_GOMP_SIMT_EXIT): Same. (expand_GOMP_SIMT_LAST_LANE): Same. (expand_GOMP_SIMT_ORDERED_PRED): Same. 
(expand_GOMP_SIMT_VOTE_ANY): Same. (expand_GOMP_SIMT_XCHG_BFLY): Same. (expand_GOMP_SIMT_XCHG_IDX): Same. (expand_addsub_overflow): Same. (expand_neg_overflow): Same. (expand_mul_overflow): Same. (expand_call_mem_ref): Same. (expand_mask_load_optab_fn): Same. (expand_scatter_store_optab_fn): Same. (expand_gather_load_optab_fn): Same. * ipa-cp.c (ipa_get_parm_lattices): Same. (print_all_lattices): Same. (ignore_edge_p): Same. (build_toporder_info): Same. (free_toporder_info): Same. (push_node_to_stack): Same. (ipcp_lattice<valtype>::set_contains_variable): Same. (set_agg_lats_to_bottom): Same. (ipcp_bits_lattice::meet_with): Same. (set_single_call_flag): Same. (initialize_node_lattices): Same. (ipa_get_jf_ancestor_result): Same. (ipcp_verify_propagated_values): Same. (propagate_scalar_across_jump_function): Same. (propagate_context_across_jump_function): Same. (propagate_bits_across_jump_function): Same. (ipa_vr_operation_and_type_effects): Same. (propagate_vr_across_jump_function): Same. (set_check_aggs_by_ref): Same. (set_chain_of_aglats_contains_variable): Same. (merge_aggregate_lattices): Same. (agg_pass_through_permissible_p): Same. (propagate_aggs_across_jump_function): Same. (call_passes_through_thunk_p): Same. (propagate_constants_across_call): Same. (devirtualization_time_bonus): Same. (good_cloning_opportunity_p): Same. (context_independent_aggregate_values): Same. (gather_context_independent_values): Same. (perform_estimation_of_a_value): Same. (estimate_local_effects): Same. (value_topo_info<valtype>::add_val): Same. (add_all_node_vals_to_toposort): Same. (value_topo_info<valtype>::propagate_effects): Same. (ipcp_propagate_stage): Same. (ipcp_discover_new_direct_edges): Same. (same_node_or_its_all_contexts_clone_p): Same. (cgraph_edge_brings_value_p): Same. (gather_edges_for_value): Same. (create_specialized_node): Same. (find_more_scalar_values_for_callers_subset): Same. (find_more_contexts_for_caller_subset): Same. (copy_plats_to_inter): Same. (intersect_aggregates_with_edge): Same. (find_aggregate_values_for_callers_subset): Same. (cgraph_edge_brings_all_agg_vals_for_node): Same. (decide_about_value): Same. (decide_whether_version_node): Same. (spread_undeadness): Same. (identify_dead_nodes): Same. (ipcp_store_vr_results): Same. * ipa-devirt.c (final_warning_record::grow_type_warnings): Same. * ipa-fnsummary.c (ipa_fn_summary::account_size_time): Same. (redirect_to_unreachable): Same. (edge_set_predicate): Same. (evaluate_conditions_for_known_args): Same. (evaluate_properties_for_edge): Same. (ipa_fn_summary_t::duplicate): Same. (ipa_call_summary_t::duplicate): Same. (dump_ipa_call_summary): Same. (ipa_dump_fn_summary): Same. (eliminated_by_inlining_prob): Same. (set_cond_stmt_execution_predicate): Same. (set_switch_stmt_execution_predicate): Same. (compute_bb_predicates): Same. (will_be_nonconstant_expr_predicate): Same. (phi_result_unknown_predicate): Same. (analyze_function_body): Same. (compute_fn_summary): Same. (estimate_edge_devirt_benefit): Same. (estimate_edge_size_and_time): Same. (estimate_calls_size_and_time): Same. (estimate_node_size_and_time): Same. (remap_edge_change_prob): Same. (remap_edge_summaries): Same. (ipa_merge_fn_summary_after_inlining): Same. (ipa_fn_summary_generate): Same. (inline_read_section): Same. (ipa_fn_summary_read): Same. (ipa_fn_summary_write): Same. * ipa-fnsummary.h: Same. * ipa-hsa.c (ipa_hsa_read_section): Same. * ipa-icf-gimple.c (func_checker::compare_loops): Same. * ipa-icf.c (sem_function::param_used_p): Same. 
* ipa-inline-analysis.c (do_estimate_edge_time): Same. * ipa-inline.c (edge_badness): Same. (inline_small_functions): Same. * ipa-polymorphic-call.c (ipa_polymorphic_call_context::stream_out): Same. * ipa-predicate.c (predicate::remap_after_duplication): Same. (predicate::remap_after_inlining): Same. (predicate::stream_out): Same. * ipa-predicate.h: Same. * ipa-profile.c (ipa_profile_read_summary): Same. * ipa-prop.c (ipa_get_param_decl_index_1): Same. (count_formal_params): Same. (ipa_dump_param): Same. (ipa_alloc_node_params): Same. (ipa_print_node_jump_functions_for_edge): Same. (ipa_print_node_jump_functions): Same. (ipa_load_from_parm_agg): Same. (get_ancestor_addr_info): Same. (ipa_compute_jump_functions_for_edge): Same. (ipa_analyze_virtual_call_uses): Same. (ipa_analyze_stmt_uses): Same. (ipa_analyze_params_uses_in_bb): Same. (update_jump_functions_after_inlining): Same. (try_decrement_rdesc_refcount): Same. (ipa_impossible_devirt_target): Same. (update_indirect_edges_after_inlining): Same. (combine_controlled_uses_counters): Same. (ipa_edge_args_sum_t::duplicate): Same. (ipa_write_jump_function): Same. (ipa_write_indirect_edge_info): Same. (ipa_write_node_info): Same. (ipa_read_edge_info): Same. (ipa_prop_read_section): Same. (read_replacements_section): Same. * ipa-prop.h (ipa_get_param_count): Same. (ipa_get_param): Same. (ipa_get_type): Same. (ipa_get_param_move_cost): Same. (ipa_set_param_used): Same. (ipa_get_controlled_uses): Same. (ipa_set_controlled_uses): Same. (ipa_get_cs_argument_count): Same. * ipa-pure-const.c (analyze_function): Same. (pure_const_read_summary): Same. * ipa-ref.h: Same. * ipa-reference.c (ipa_reference_read_optimization_summary): Same. * ipa-split.c (test_nonssa_use): Same. (dump_split_point): Same. (dominated_by_forbidden): Same. (split_part_set_ssa_name_p): Same. (find_split_points): Same. * ira-build.c (finish_loop_tree_nodes): Same. (low_pressure_loop_node_p): Same. * ira-color.c (ira_reuse_stack_slot): Same. * ira-int.h: Same. * ira.c (setup_reg_equiv): Same. (print_insn_chain): Same. (ira): Same. * loop-doloop.c (doloop_condition_get): Same. (add_test): Same. (record_reg_sets): Same. (doloop_optimize): Same. * loop-init.c (loop_optimizer_init): Same. (fix_loop_structure): Same. * loop-invariant.c (merge_identical_invariants): Same. (compute_always_reached): Same. (find_exits): Same. (may_assign_reg_p): Same. (find_invariants_bb): Same. (find_invariants_body): Same. (replace_uses): Same. (can_move_invariant_reg): Same. (free_inv_motion_data): Same. (move_single_loop_invariants): Same. (change_pressure): Same. (mark_ref_regs): Same. (calculate_loop_reg_pressure): Same. * loop-iv.c (biv_entry_hasher::equal): Same. (iv_extend_to_rtx_code): Same. (check_iv_ref_table_size): Same. (clear_iv_info): Same. (latch_dominating_def): Same. (iv_get_reaching_def): Same. (iv_constant): Same. (iv_subreg): Same. (iv_extend): Same. (iv_neg): Same. (iv_add): Same. (iv_mult): Same. (get_biv_step): Same. (record_iv): Same. (analyzed_for_bivness_p): Same. (record_biv): Same. (iv_analyze_biv): Same. (iv_analyze_expr): Same. (iv_analyze_def): Same. (iv_analyze_op): Same. (iv_analyze): Same. (iv_analyze_result): Same. (biv_p): Same. (eliminate_implied_conditions): Same. (simplify_using_initial_values): Same. (shorten_into_mode): Same. (canonicalize_iv_subregs): Same. (determine_max_iter): Same. (check_simple_exit): Same. (find_simple_exit): Same. (get_simple_loop_desc): Same. * loop-unroll.c (report_unroll): Same. (decide_unrolling): Same. (unroll_loops): Same. 
(loop_exit_at_end_p): Same. (decide_unroll_constant_iterations): Same. (unroll_loop_constant_iterations): Same. (compare_and_jump_seq): Same. (unroll_loop_runtime_iterations): Same. (decide_unroll_stupid): Same. (unroll_loop_stupid): Same. (referenced_in_one_insn_in_loop_p): Same. (reset_debug_uses_in_loop): Same. (analyze_iv_to_split_insn): Same. * lra-eliminations.c (lra_debug_elim_table): Same. (setup_can_eliminate): Same. (form_sum): Same. (lra_get_elimination_hard_regno): Same. (lra_eliminate_regs_1): Same. (eliminate_regs_in_insn): Same. (update_reg_eliminate): Same. (init_elimination): Same. (lra_eliminate): Same. * lra-int.h: Same. * lra-lives.c (initiate_live_solver): Same. * lra-remat.c (create_remat_bb_data): Same. * lra-spills.c (lra_spill): Same. * lra.c (lra_set_insn_recog_data): Same. (lra_set_used_insn_alternative_by_uid): Same. (init_reg_info): Same. (expand_reg_info): Same. * lto-cgraph.c (output_symtab): Same. (read_identifier): Same. (get_alias_symbol): Same. (input_node): Same. (input_varpool_node): Same. (input_ref): Same. (input_edge): Same. (input_cgraph_1): Same. (input_refs): Same. (input_symtab): Same. (input_offload_tables): Same. (output_cgraph_opt_summary): Same. (input_edge_opt_summary): Same. (input_cgraph_opt_section): Same. * lto-section-in.c (lto_free_raw_section_data): Same. (lto_create_simple_input_block): Same. (lto_free_function_in_decl_state_for_node): Same. * lto-streamer-in.c (lto_tag_check_set): Same. (lto_location_cache::revert_location_cache): Same. (lto_location_cache::input_location): Same. (lto_input_location): Same. (stream_input_location_now): Same. (lto_input_tree_ref): Same. (lto_input_eh_catch_list): Same. (input_eh_region): Same. (lto_init_eh): Same. (make_new_block): Same. (input_cfg): Same. (fixup_call_stmt_edges): Same. (input_struct_function_base): Same. (input_function): Same. (lto_read_body_or_constructor): Same. (lto_read_tree_1): Same. (lto_read_tree): Same. (lto_input_scc): Same. (lto_input_tree_1): Same. (lto_input_toplevel_asms): Same. (lto_input_mode_table): Same. (lto_reader_init): Same. (lto_data_in_create): Same. * lto-streamer-out.c (output_cfg): Same. * lto-streamer.h: Same. * modulo-sched.c (duplicate_insns_of_cycles): Same. (generate_prolog_epilog): Same. (mark_loop_unsched): Same. (dump_insn_location): Same. (loop_canon_p): Same. (sms_schedule): Same. * omp-expand.c (expand_omp_for_ordered_loops): Same. (expand_omp_for_generic): Same. (expand_omp_for_static_nochunk): Same. (expand_omp_for_static_chunk): Same. (expand_omp_simd): Same. (expand_omp_taskloop_for_inner): Same. (expand_oacc_for): Same. (expand_omp_atomic_pipeline): Same. (mark_loops_in_oacc_kernels_region): Same. * omp-offload.c (oacc_xform_loop): Same. * omp-simd-clone.c (simd_clone_adjust): Same. * optabs-query.c (get_traditional_extraction_insn): Same. * optabs.c (expand_vector_broadcast): Same. (expand_binop_directly): Same. (expand_twoval_unop): Same. (expand_twoval_binop): Same. (expand_unop_direct): Same. (emit_indirect_jump): Same. (emit_conditional_move): Same. (emit_conditional_neg_or_complement): Same. (emit_conditional_add): Same. (vector_compare_rtx): Same. (expand_vec_perm_1): Same. (expand_vec_perm_const): Same. (expand_vec_cond_expr): Same. (expand_vec_series_expr): Same. (maybe_emit_atomic_exchange): Same. (maybe_emit_sync_lock_test_and_set): Same. (expand_atomic_compare_and_swap): Same. (expand_atomic_load): Same. (expand_atomic_store): Same. (maybe_emit_op): Same. (valid_multiword_target_p): Same. (create_integer_operand): Same. 
(maybe_legitimize_operand_same_code): Same. (maybe_legitimize_operand): Same. (create_convert_operand_from_type): Same. (can_reuse_operands_p): Same. (maybe_legitimize_operands): Same. (maybe_gen_insn): Same. (maybe_expand_insn): Same. (maybe_expand_jump_insn): Same. (expand_insn): Same. * optabs.h (create_expand_operand): Same. (create_fixed_operand): Same. (create_output_operand): Same. (create_input_operand): Same. (create_convert_operand_to): Same. (create_convert_operand_from): Same. * optinfo.h: Same. * poly-int.h: Same. * predict.c (optimize_insn_for_speed_p): Same. (optimize_loop_for_size_p): Same. (optimize_loop_for_speed_p): Same. (optimize_loop_nest_for_speed_p): Same. (get_base_value): Same. (predicted_by_loop_heuristics_p): Same. (predict_extra_loop_exits): Same. (predict_loops): Same. (predict_paths_for_bb): Same. (predict_paths_leading_to): Same. (propagate_freq): Same. (pass_profile::execute): Same. * predict.h: Same. * profile-count.c (profile_count::differs_from_p): Same. (profile_probability::differs_lot_from_p): Same. * profile-count.h: Same. * profile.c (branch_prob): Same. * regrename.c (free_chain_data): Same. (mark_conflict): Same. (create_new_chain): Same. (merge_overlapping_regs): Same. (init_rename_info): Same. (merge_chains): Same. (regrename_analyze): Same. (regrename_do_replace): Same. (scan_rtx_reg): Same. (record_out_operands): Same. (build_def_use): Same. * regrename.h: Same. * reload.h: Same. * reload1.c (init_reload): Same. (maybe_fix_stack_asms): Same. (copy_reloads): Same. (count_pseudo): Same. (count_spilled_pseudo): Same. (find_reg): Same. (find_reload_regs): Same. (select_reload_regs): Same. (spill_hard_reg): Same. (fixup_eh_region_note): Same. (set_reload_reg): Same. (allocate_reload_reg): Same. (compute_reload_subreg_offset): Same. (reload_adjust_reg_for_icode): Same. (emit_input_reload_insns): Same. (emit_output_reload_insns): Same. (do_input_reload): Same. (inherit_piecemeal_p): Same. * rtl.h: Same. * sanopt.c (maybe_get_dominating_check): Same. (maybe_optimize_ubsan_ptr_ifn): Same. (can_remove_asan_check): Same. (maybe_optimize_asan_check_ifn): Same. (sanopt_optimize_walker): Same. * sched-deps.c (add_dependence_list): Same. (chain_to_prev_insn): Same. (add_insn_mem_dependence): Same. (create_insn_reg_set): Same. (maybe_extend_reg_info_p): Same. (sched_analyze_reg): Same. (sched_analyze_1): Same. (get_implicit_reg_pending_clobbers): Same. (chain_to_prev_insn_p): Same. (deps_analyze_insn): Same. (deps_start_bb): Same. (sched_free_deps): Same. (init_deps): Same. (init_deps_reg_last): Same. (free_deps): Same. * sched-ebb.c: Same. * sched-int.h: Same. * sched-rgn.c (add_branch_dependences): Same. (concat_insn_mem_list): Same. (deps_join): Same. (sched_rgn_compute_dependencies): Same. * sel-sched-ir.c (reset_target_context): Same. (copy_deps_context): Same. (init_id_from_df): Same. (has_dependence_p): Same. (change_loops_latches): Same. (bb_top_order_comparator): Same. (make_region_from_loop_preheader): Same. (sel_init_pipelining): Same. (get_loop_nest_for_rgn): Same. (make_regions_from_the_rest): Same. (sel_is_loop_preheader_p): Same. * sel-sched-ir.h (inner_loop_header_p): Same. (get_all_loop_exits): Same. * selftest.h: Same. * sese.c (sese_build_liveouts): Same. (sese_insert_phis_for_liveouts): Same. * sese.h (defined_in_sese_p): Same. * sreal.c (sreal::stream_out): Same. * sreal.h: Same. * streamer-hooks.h: Same. * target-globals.c (save_target_globals): Same. * target-globals.h: Same. * target.def: Same. * target.h: Same. 
* targhooks.c (default_has_ifunc_p): Same. (default_empty_mask_is_expensive): Same. (default_init_cost): Same. * targhooks.h: Same. * toplev.c: Same. * tree-affine.c (aff_combination_mult): Same. (aff_combination_expand): Same. (aff_combination_constant_multiple_p): Same. * tree-affine.h: Same. * tree-cfg.c (build_gimple_cfg): Same. (replace_loop_annotate_in_block): Same. (replace_uses_by): Same. (remove_bb): Same. (dump_cfg_stats): Same. (gimple_duplicate_sese_region): Same. (gimple_duplicate_sese_tail): Same. (move_block_to_fn): Same. (replace_block_vars_by_duplicates): Same. (move_sese_region_to_fn): Same. (print_loops_bb): Same. (print_loop): Same. (print_loops): Same. (debug): Same. (debug_loops): Same. * tree-cfg.h: Same. * tree-chrec.c (chrec_fold_plus_poly_poly): Same. (chrec_fold_multiply_poly_poly): Same. (chrec_evaluate): Same. (chrec_component_in_loop_num): Same. (reset_evolution_in_loop): Same. (is_multivariate_chrec): Same. (chrec_contains_symbols): Same. (nb_vars_in_chrec): Same. (chrec_convert_1): Same. (chrec_convert_aggressive): Same. * tree-chrec.h: Same. * tree-core.h: Same. * tree-data-ref.c (dump_data_dependence_relation): Same. (canonicalize_base_object_address): Same. (data_ref_compare_tree): Same. (prune_runtime_alias_test_list): Same. (get_segment_min_max): Same. (create_intersect_range_checks): Same. (conflict_fn_no_dependence): Same. (object_address_invariant_in_loop_p): Same. (analyze_ziv_subscript): Same. (analyze_siv_subscript_cst_affine): Same. (analyze_miv_subscript): Same. (analyze_overlapping_iterations): Same. (build_classic_dist_vector_1): Same. (add_other_self_distances): Same. (same_access_functions): Same. (build_classic_dir_vector): Same. (subscript_dependence_tester_1): Same. (subscript_dependence_tester): Same. (access_functions_are_affine_or_constant_p): Same. (get_references_in_stmt): Same. (loop_nest_has_data_refs): Same. (graphite_find_data_references_in_stmt): Same. (find_data_references_in_bb): Same. (get_base_for_alignment): Same. (find_loop_nest_1): Same. (find_loop_nest): Same. * tree-data-ref.h (dr_alignment): Same. (ddr_dependence_level): Same. * tree-if-conv.c (fold_build_cond_expr): Same. (add_to_predicate_list): Same. (add_to_dst_predicate_list): Same. (phi_convertible_by_degenerating_args): Same. (idx_within_array_bound): Same. (all_preds_critical_p): Same. (pred_blocks_visited_p): Same. (predicate_bbs): Same. (build_region): Same. (if_convertible_loop_p_1): Same. (is_cond_scalar_reduction): Same. (predicate_scalar_phi): Same. (remove_conditions_and_labels): Same. (combine_blocks): Same. (version_loop_for_if_conversion): Same. (versionable_outer_loop_p): Same. (ifcvt_local_dce): Same. (tree_if_conversion): Same. (pass_if_conversion::gate): Same. * tree-if-conv.h: Same. * tree-inline.c (maybe_move_debug_stmts_to_successors): Same. * tree-loop-distribution.c (bb_top_order_cmp): Same. (free_rdg): Same. (stmt_has_scalar_dependences_outside_loop): Same. (copy_loop_before): Same. (create_bb_after_loop): Same. (const_with_all_bytes_same): Same. (generate_memset_builtin): Same. (generate_memcpy_builtin): Same. (destroy_loop): Same. (build_rdg_partition_for_vertex): Same. (compute_access_range): Same. (data_ref_segment_size): Same. (latch_dominated_by_data_ref): Same. (compute_alias_check_pairs): Same. (fuse_memset_builtins): Same. (finalize_partitions): Same. (find_seed_stmts_for_distribution): Same. (prepare_perfect_loop_nest): Same. * tree-parloops.c (lambda_transform_legal_p): Same. (loop_parallel_p): Same. (reduc_stmt_res): Same. 
(add_field_for_name): Same. (create_call_for_reduction_1): Same. (replace_uses_in_bb_by): Same. (transform_to_exit_first_loop_alt): Same. (try_transform_to_exit_first_loop_alt): Same. (transform_to_exit_first_loop): Same. (num_phis): Same. (gen_parallel_loop): Same. (gather_scalar_reductions): Same. (get_omp_data_i_param): Same. (try_create_reduction_list): Same. (oacc_entry_exit_single_gang): Same. (parallelize_loops): Same. * tree-pass.h: Same. * tree-predcom.c (determine_offset): Same. (last_always_executed_block): Same. (split_data_refs_to_components): Same. (suitable_component_p): Same. (valid_initializer_p): Same. (find_looparound_phi): Same. (insert_looparound_copy): Same. (add_looparound_copies): Same. (determine_roots_comp): Same. (predcom_tmp_var): Same. (initialize_root_vars): Same. (initialize_root_vars_store_elim_1): Same. (initialize_root_vars_store_elim_2): Same. (finalize_eliminated_stores): Same. (initialize_root_vars_lm): Same. (remove_stmt): Same. (determine_unroll_factor): Same. (execute_pred_commoning_cbck): Same. (base_names_in_chain_on): Same. (combine_chains): Same. (pcom_stmt_dominates_stmt_p): Same. (try_combine_chains): Same. (prepare_initializers_chain_store_elim): Same. (prepare_initializers_chain): Same. (prepare_initializers): Same. (prepare_finalizers_chain): Same. (prepare_finalizers): Same. (insert_init_seqs): Same. * tree-scalar-evolution.c (loop_phi_node_p): Same. (compute_overall_effect_of_inner_loop): Same. (add_to_evolution_1): Same. (add_to_evolution): Same. (follow_ssa_edge_binary): Same. (follow_ssa_edge_expr): Same. (backedge_phi_arg_p): Same. (follow_ssa_edge_in_condition_phi_branch): Same. (follow_ssa_edge_in_condition_phi): Same. (follow_ssa_edge_inner_loop_phi): Same. (follow_ssa_edge): Same. (analyze_evolution_in_loop): Same. (analyze_initial_condition): Same. (interpret_loop_phi): Same. (interpret_condition_phi): Same. (interpret_rhs_expr): Same. (interpret_expr): Same. (interpret_gimple_assign): Same. (analyze_scalar_evolution_1): Same. (analyze_scalar_evolution): Same. (analyze_scalar_evolution_for_address_of): Same. (get_instantiated_value_entry): Same. (loop_closed_phi_def): Same. (instantiate_scev_name): Same. (instantiate_scev_poly): Same. (instantiate_scev_binary): Same. (instantiate_scev_convert): Same. (instantiate_scev_not): Same. (instantiate_scev_r): Same. (instantiate_scev): Same. (resolve_mixers): Same. (initialize_scalar_evolutions_analyzer): Same. (scev_reset_htab): Same. (scev_reset): Same. (derive_simple_iv_with_niters): Same. (simple_iv_with_niters): Same. (expression_expensive_p): Same. (final_value_replacement_loop): Same. * tree-scalar-evolution.h (block_before_loop): Same. * tree-ssa-address.h: Same. * tree-ssa-dce.c (find_obviously_necessary_stmts): Same. * tree-ssa-dom.c (edge_info::record_simple_equiv): Same. (record_edge_info): Same. * tree-ssa-live.c (var_map_base_fini): Same. (remove_unused_locals): Same. * tree-ssa-live.h: Same. * tree-ssa-loop-ch.c (should_duplicate_loop_header_p): Same. (pass_ch_vect::execute): Same. (pass_ch::process_loop_p): Same. * tree-ssa-loop-im.c (mem_ref_hasher::hash): Same. (movement_possibility): Same. (outermost_invariant_loop): Same. (stmt_cost): Same. (determine_max_movement): Same. (invariantness_dom_walker::before_dom_children): Same. (move_computations): Same. (may_move_till): Same. (force_move_till_op): Same. (force_move_till): Same. (memref_free): Same. (record_mem_ref_loc): Same. (set_ref_stored_in_loop): Same. (mark_ref_stored): Same. (sort_bbs_in_loop_postorder_cmp): Same. 
(sort_locs_in_loop_postorder_cmp): Same. (analyze_memory_references): Same. (mem_refs_may_alias_p): Same. (find_ref_loc_in_loop_cmp): Same. (rewrite_mem_ref_loc::operator): Same. (first_mem_ref_loc_1::operator): Same. (sm_set_flag_if_changed::operator): Same. (execute_sm_if_changed_flag_set): Same. (execute_sm): Same. (hoist_memory_references): Same. (ref_always_accessed::operator): Same. (refs_independent_p): Same. (record_dep_loop): Same. (ref_indep_loop_p_1): Same. (ref_indep_loop_p): Same. (can_sm_ref_p): Same. (find_refs_for_sm): Same. (loop_suitable_for_sm): Same. (store_motion_loop): Same. (store_motion): Same. (fill_always_executed_in): Same. * tree-ssa-loop-ivcanon.c (constant_after_peeling): Same. (estimated_unrolled_size): Same. (loop_edge_to_cancel): Same. (remove_exits_and_undefined_stmts): Same. (remove_redundant_iv_tests): Same. (unloop_loops): Same. (estimated_peeled_sequence_size): Same. (try_peel_loop): Same. (canonicalize_loop_induction_variables): Same. (canonicalize_induction_variables): Same. * tree-ssa-loop-ivopts.c (iv_inv_expr_hasher::equal): Same. (name_info): Same. (stmt_after_inc_pos): Same. (contains_abnormal_ssa_name_p): Same. (niter_for_exit): Same. (find_bivs): Same. (mark_bivs): Same. (find_givs_in_bb): Same. (find_induction_variables): Same. (find_interesting_uses_cond): Same. (outermost_invariant_loop_for_expr): Same. (idx_find_step): Same. (add_candidate_1): Same. (add_iv_candidate_derived_from_uses): Same. (alloc_use_cost_map): Same. (prepare_decl_rtl): Same. (generic_predict_doloop_p): Same. (computation_cost): Same. (determine_common_wider_type): Same. (get_computation_aff_1): Same. (get_use_type): Same. (determine_group_iv_cost_address): Same. (iv_period): Same. (difference_cannot_overflow_p): Same. (may_eliminate_iv): Same. (determine_set_costs): Same. (cheaper_cost_pair): Same. (compare_cost_pair): Same. (iv_ca_cand_for_group): Same. (iv_ca_recount_cost): Same. (iv_ca_set_remove_invs): Same. (iv_ca_set_no_cp): Same. (iv_ca_set_add_invs): Same. (iv_ca_set_cp): Same. (iv_ca_add_group): Same. (iv_ca_cost): Same. (iv_ca_compare_deps): Same. (iv_ca_delta_reverse): Same. (iv_ca_delta_commit): Same. (iv_ca_cand_used_p): Same. (iv_ca_delta_free): Same. (iv_ca_new): Same. (iv_ca_free): Same. (iv_ca_dump): Same. (iv_ca_extend): Same. (iv_ca_narrow): Same. (iv_ca_prune): Same. (cheaper_cost_with_cand): Same. (iv_ca_replace): Same. (try_add_cand_for): Same. (get_initial_solution): Same. (try_improve_iv_set): Same. (find_optimal_iv_set_1): Same. (create_new_iv): Same. (rewrite_use_compare): Same. (remove_unused_ivs): Same. (determine_scaling_factor): Same. * tree-ssa-loop-ivopts.h: Same. * tree-ssa-loop-manip.c (create_iv): Same. (compute_live_loop_exits): Same. (add_exit_phi): Same. (add_exit_phis): Same. (find_uses_to_rename_use): Same. (find_uses_to_rename_def): Same. (find_uses_to_rename_in_loop): Same. (rewrite_into_loop_closed_ssa): Same. (check_loop_closed_ssa_bb): Same. (split_loop_exit_edge): Same. (ip_end_pos): Same. (ip_normal_pos): Same. (copy_phi_node_args): Same. (gimple_duplicate_loop_to_header_edge): Same. (can_unroll_loop_p): Same. (determine_exit_conditions): Same. (scale_dominated_blocks_in_loop): Same. (niter_for_unrolled_loop): Same. (tree_transform_and_unroll_loop): Same. (rewrite_all_phi_nodes_with_iv): Same. * tree-ssa-loop-manip.h: Same. * tree-ssa-loop-niter.c (number_of_iterations_ne_max): Same. (number_of_iterations_ne): Same. (assert_no_overflow_lt): Same. (assert_loop_rolls_lt): Same. (number_of_iterations_lt): Same. 
(adjust_cond_for_loop_until_wrap): Same. (tree_simplify_using_condition): Same. (simplify_using_initial_conditions): Same. (simplify_using_outer_evolutions): Same. (loop_only_exit_p): Same. (ssa_defined_by_minus_one_stmt_p): Same. (number_of_iterations_popcount): Same. (number_of_iterations_exit): Same. (find_loop_niter): Same. (finite_loop_p): Same. (chain_of_csts_start): Same. (get_val_for): Same. (loop_niter_by_eval): Same. (derive_constant_upper_bound_ops): Same. (do_warn_aggressive_loop_optimizations): Same. (record_estimate): Same. (get_cst_init_from_scev): Same. (record_nonwrapping_iv): Same. (idx_infer_loop_bounds): Same. (infer_loop_bounds_from_ref): Same. (infer_loop_bounds_from_array): Same. (infer_loop_bounds_from_pointer_arith): Same. (infer_loop_bounds_from_signedness): Same. (bound_index): Same. (discover_iteration_bound_by_body_walk): Same. (maybe_lower_iteration_bound): Same. (estimate_numbers_of_iterations): Same. (estimated_loop_iterations): Same. (estimated_loop_iterations_int): Same. (max_loop_iterations): Same. (max_loop_iterations_int): Same. (likely_max_loop_iterations): Same. (likely_max_loop_iterations_int): Same. (estimated_stmt_executions_int): Same. (max_stmt_executions): Same. (likely_max_stmt_executions): Same. (estimated_stmt_executions): Same. (stmt_dominates_stmt_p): Same. (nowrap_type_p): Same. (loop_exits_before_overflow): Same. (scev_var_range_cant_overflow): Same. (scev_probably_wraps_p): Same. (free_numbers_of_iterations_estimates): Same. * tree-ssa-loop-niter.h: Same. * tree-ssa-loop-prefetch.c (release_mem_refs): Same. (idx_analyze_ref): Same. (analyze_ref): Same. (gather_memory_references_ref): Same. (mark_nontemporal_store): Same. (emit_mfence_after_loop): Same. (may_use_storent_in_loop_p): Same. (mark_nontemporal_stores): Same. (should_unroll_loop_p): Same. (volume_of_dist_vector): Same. (add_subscript_strides): Same. (self_reuse_distance): Same. (insn_to_prefetch_ratio_too_small_p): Same. * tree-ssa-loop-split.c (split_at_bb_p): Same. (patch_loop_exit): Same. (find_or_create_guard_phi): Same. (easy_exit_values): Same. (connect_loop_phis): Same. (connect_loops): Same. (compute_new_first_bound): Same. (split_loop): Same. (tree_ssa_split_loops): Same. * tree-ssa-loop-unswitch.c (tree_ssa_unswitch_loops): Same. (is_maybe_undefined): Same. (tree_may_unswitch_on): Same. (simplify_using_entry_checks): Same. (tree_unswitch_single_loop): Same. (tree_unswitch_loop): Same. (tree_unswitch_outer_loop): Same. (empty_bb_without_guard_p): Same. (used_outside_loop_p): Same. (get_vop_from_header): Same. (hoist_guard): Same. * tree-ssa-loop.c (gate_oacc_kernels): Same. (get_lsm_tmp_name): Same. * tree-ssa-loop.h: Same. * tree-ssa-reassoc.c (add_repeat_to_ops_vec): Same. (build_and_add_sum): Same. (no_side_effect_bb): Same. (get_ops): Same. (linearize_expr): Same. (should_break_up_subtract): Same. (linearize_expr_tree): Same. * tree-ssa-scopedtables.c: Same. * tree-ssa-scopedtables.h: Same. * tree-ssa-structalias.c (condense_visit): Same. (label_visit): Same. (dump_pred_graph): Same. (perform_var_substitution): Same. (move_complex_constraints): Same. (remove_preds_and_fake_succs): Same. * tree-ssa-threadupdate.c (dbds_continue_enumeration_p): Same. (determine_bb_domination_status): Same. (duplicate_thread_path): Same. (thread_through_all_blocks): Same. * tree-ssa-threadupdate.h: Same. * tree-streamer-in.c (streamer_read_string_cst): Same. (input_identifier): Same. (unpack_ts_type_common_value_fields): Same. (unpack_ts_block_value_fields): Same. 
(unpack_ts_translation_unit_decl_value_fields): Same. (unpack_ts_omp_clause_value_fields): Same. (streamer_read_tree_bitfields): Same. (streamer_alloc_tree): Same. (lto_input_ts_common_tree_pointers): Same. (lto_input_ts_vector_tree_pointers): Same. (lto_input_ts_poly_tree_pointers): Same. (lto_input_ts_complex_tree_pointers): Same. (lto_input_ts_decl_minimal_tree_pointers): Same. (lto_input_ts_decl_common_tree_pointers): Same. (lto_input_ts_decl_non_common_tree_pointers): Same. (lto_input_ts_decl_with_vis_tree_pointers): Same. (lto_input_ts_field_decl_tree_pointers): Same. (lto_input_ts_function_decl_tree_pointers): Same. (lto_input_ts_type_common_tree_pointers): Same. (lto_input_ts_type_non_common_tree_pointers): Same. (lto_input_ts_list_tree_pointers): Same. (lto_input_ts_vec_tree_pointers): Same. (lto_input_ts_exp_tree_pointers): Same. (lto_input_ts_block_tree_pointers): Same. (lto_input_ts_binfo_tree_pointers): Same. (lto_input_ts_constructor_tree_pointers): Same. (lto_input_ts_omp_clause_tree_pointers): Same. (streamer_read_tree_body): Same. * tree-streamer.h: Same. * tree-switch-conversion.c (bit_test_cluster::is_beneficial): Same. * tree-vect-data-refs.c (vect_get_smallest_scalar_type): Same. (vect_analyze_possibly_independent_ddr): Same. (vect_analyze_data_ref_dependence): Same. (vect_compute_data_ref_alignment): Same. (vect_enhance_data_refs_alignment): Same. (vect_analyze_data_ref_access): Same. (vect_check_gather_scatter): Same. (vect_find_stmt_data_reference): Same. (vect_create_addr_base_for_vector_ref): Same. (vect_setup_realignment): Same. (vect_supportable_dr_alignment): Same. * tree-vect-loop-manip.c (rename_variables_in_bb): Same. (adjust_phi_and_debug_stmts): Same. (vect_set_loop_mask): Same. (add_preheader_seq): Same. (vect_maybe_permute_loop_masks): Same. (vect_set_loop_masks_directly): Same. (vect_set_loop_condition_masked): Same. (vect_set_loop_condition_unmasked): Same. (slpeel_duplicate_current_defs_from_edges): Same. (slpeel_add_loop_guard): Same. (slpeel_can_duplicate_loop_p): Same. (create_lcssa_for_virtual_phi): Same. (iv_phi_p): Same. (vect_update_ivs_after_vectorizer): Same. (vect_gen_vector_loop_niters_mult_vf): Same. (slpeel_update_phi_nodes_for_loops): Same. (slpeel_update_phi_nodes_for_guard1): Same. (find_guard_arg): Same. (slpeel_update_phi_nodes_for_guard2): Same. (slpeel_update_phi_nodes_for_lcssa): Same. (vect_do_peeling): Same. (vect_create_cond_for_alias_checks): Same. (vect_loop_versioning): Same. * tree-vect-loop.c (vect_determine_vf_for_stmt): Same. (vect_inner_phi_in_double_reduction_p): Same. (vect_analyze_scalar_cycles_1): Same. (vect_fixup_scalar_cycles_with_patterns): Same. (vect_get_loop_niters): Same. (bb_in_loop_p): Same. (vect_get_max_nscalars_per_iter): Same. (vect_verify_full_masking): Same. (vect_compute_single_scalar_iteration_cost): Same. (vect_analyze_loop_form_1): Same. (vect_analyze_loop_form): Same. (vect_active_double_reduction_p): Same. (vect_analyze_loop_operations): Same. (neutral_op_for_slp_reduction): Same. (vect_is_simple_reduction): Same. (vect_model_reduction_cost): Same. (get_initial_def_for_reduction): Same. (get_initial_defs_for_reduction): Same. (vect_create_epilog_for_reduction): Same. (vectorize_fold_left_reduction): Same. (vectorizable_reduction): Same. (vectorizable_induction): Same. (vectorizable_live_operation): Same. (loop_niters_no_overflow): Same. (vect_get_loop_mask): Same. (vect_transform_loop_stmt): Same. (vect_transform_loop): Same. * tree-vect-patterns.c (vect_reassociating_reduction_p): Same. 
(vect_determine_precisions): Same. (vect_pattern_recog_1): Same. * tree-vect-slp.c (vect_analyze_slp_instance): Same. * tree-vect-stmts.c (stmt_vectype): Same. (process_use): Same. (vect_init_vector_1): Same. (vect_truncate_gather_scatter_offset): Same. (get_group_load_store_type): Same. (vect_build_gather_load_calls): Same. (vect_get_strided_load_store_ops): Same. (vectorizable_simd_clone_call): Same. (vectorizable_store): Same. (permute_vec_elements): Same. (vectorizable_load): Same. (vect_transform_stmt): Same. (supportable_widening_operation): Same. * tree-vectorizer.c (vec_info::replace_stmt): Same. (vec_info::free_stmt_vec_info): Same. (vect_free_loop_info_assumptions): Same. (vect_loop_vectorized_call): Same. (set_uid_loop_bbs): Same. (vectorize_loops): Same. * tree-vectorizer.h (STMT_VINFO_BB_VINFO): Same. * tree.c (add_tree_to_fld_list): Same. (fld_type_variant_equal_p): Same. (fld_decl_context): Same. (fld_incomplete_type_of): Same. (free_lang_data_in_binfo): Same. (need_assembler_name_p): Same. (find_decls_types_r): Same. (get_eh_types_for_runtime): Same. (find_decls_types_in_eh_region): Same. (find_decls_types_in_node): Same. (assign_assembler_name_if_needed): Same. * value-prof.c (stream_out_histogram_value): Same. * value-prof.h: Same. * var-tracking.c (use_narrower_mode): Same. (prepare_call_arguments): Same. (vt_expand_loc_callback): Same. (resolve_expansions_pending_recursion): Same. (vt_expand_loc): Same. * varasm.c (const_hash_1): Same. (compare_constant): Same. (tree_output_constant_def): Same. (simplify_subtraction): Same. (get_pool_constant): Same. (output_constant_pool_2): Same. (output_constant_pool_1): Same. (mark_constants_in_pattern): Same. (mark_constant_pool): Same. (get_section_anchor): Same. * vr-values.c (compare_range_with_value): Same. (vr_values::extract_range_from_phi_node): Same. * vr-values.h: Same. * web.c (unionfind_union): Same. * wide-int.h: Same. From-SVN: r273311
2019-06-18[Vectorizer] Support masking fold left reductionsAlejandro Martinez1-0/+5
This patch adds support in the vectorizer for masking fold left reductions. This avoids the need to insert a conditional assignment with some identity value. From-SVN: r272407
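For illustration, a minimal sketch (hypothetical, not taken from the commit) of the kind of in-order reduction this targets:

  /* A strict FP reduction: the additions must happen in source order,
     so the vectorizer uses a fold-left reduction.  */
  double
  sum (double *x, int n)
  {
    double res = 0.0;
    for (int i = 0; i < n; ++i)
      res += x[i];
    return res;
  }

Without masking support, vectorizing this with partial vectors would need a conditional assignment substituting the identity value 0.0 for inactive lanes; with this patch the predicate is applied to the fold-left reduction directly.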
2019-06-03Fix ICE in vect_slp_analyze_node_operations_1Alejandro Martinez1-1/+1
This patch fixes bug 90681. It was caused by trying to SLP vectorize a non-grouped load. We've fixed it by tweaking the implementation a bit: mark masked loads as not vectorizable, but support them as a special case. We then detect them in the test for normal non-grouped loads that was already there. From-SVN: r271856
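A hedged sketch (hypothetical, not the PR's actual testcase) of a loop whose if-conversion produces a masked, non-grouped load of the kind involved here:

  void
  f (int *restrict x, int *restrict y, int *restrict z, int n)
  {
    for (int i = 0; i < n; ++i)
      x[i] = y[i] ? z[i] : 0;  /* z[i] becomes a .MASK_LOAD after if-conversion */
  }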
2019-05-28Current vectoriser doesn't support masked loads for SLP.Alejandro Martinez1-1/+1
Current vectoriser doesn't support masked loads for SLP. We should add that, to allow things like:

  void f (int *restrict x, int *restrict y, int *restrict z, int n)
  {
    for (int i = 0; i < n; i += 2)
      {
        x[i] = y[i] ? z[i] : 1;
        x[i + 1] = y[i + 1] ? z[i + 1] : 2;
      }
  }

to be vectorized using contiguous loads rather than LD2 and ST2. This patch was motivated by SVE, but it is completely generic and should apply to any architecture with masked loads. From-SVN: r271704
2019-04-17re PR middle-end/90095 (wrong code with -Os -fno-tree-bit-ccp)Jakub Jelinek1-15/+2
PR middle-end/90095 * internal-fn.c (expand_mul_overflow): Don't set SUBREG_PROMOTED_VAR_P on lowpart SUBREGs. * gcc.dg/pr90095-1.c: New test. * gcc.dg/pr90095-2.c: New test. From-SVN: r270410
2019-01-07re PR c++/85052 (Implement support for clang's __builtin_convertvector)Jakub Jelinek1-0/+9
PR c++/85052 * tree-vect-generic.c: Include insn-config.h and recog.h. (expand_vector_piecewise): Add defaulted ret_type argument, if non-NULL, use that in preference to type for the result type. (expand_vector_parallel): Formatting fix. (do_vec_conversion, do_vec_narrowing_conversion, expand_vector_conversion): New functions. (expand_vector_operations_1): Call expand_vector_conversion for VEC_CONVERT ifn calls. * internal-fn.def (VEC_CONVERT): New internal function. * internal-fn.c (expand_VEC_CONVERT): New function. * fold-const-call.c (fold_const_vec_convert): New function. (fold_const_call): Use it for CFN_VEC_CONVERT. * doc/extend.texi (__builtin_convertvector): Document. c-family/ * c-common.h (enum rid): Add RID_BUILTIN_CONVERTVECTOR. (c_build_vec_convert): Declare. * c-common.c (c_build_vec_convert): New function. c/ * c-parser.c (c_parser_postfix_expression): Parse __builtin_convertvector. cp/ * cp-tree.h (cp_build_vec_convert): Declare. * parser.c (cp_parser_postfix_expression): Parse __builtin_convertvector. * constexpr.c: Include fold-const-call.h. (cxx_eval_internal_function): Handle IFN_VEC_CONVERT. (potential_constant_expression_1): Likewise. * semantics.c (cp_build_vec_convert): New function. * pt.c (tsubst_copy_and_build): Handle CALL_EXPR to IFN_VEC_CONVERT. testsuite/ * c-c++-common/builtin-convertvector-1.c: New test. * c-c++-common/torture/builtin-convertvector-1.c: New test. * g++.dg/ext/builtin-convertvector-1.C: New test. * g++.dg/cpp0x/constexpr-builtin4.C: New test. From-SVN: r267632
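A short usage example of the new builtin (assumed from the documentation added to doc/extend.texi):

  typedef int v4si __attribute__ ((vector_size (16)));
  typedef float v4sf __attribute__ ((vector_size (16)));

  v4sf
  to_float (v4si x)
  {
    /* Convert each int element to float, elementwise, which is what
       IFN_VEC_CONVERT expands internally.  */
    return __builtin_convertvector (x, v4sf);
  }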
2019-01-01Update copyright years.Jakub Jelinek1-1/+1
From-SVN: r267494
2018-10-15[PR87563][AARCH64-SVE]: Don't keep ifcvt loop when COND_<OP> ifn could not ↵Renlin Li1-0/+12
be vectorized. ifcvt will create a versioned loop and permissively generate scalar COND_<OP> ifns. If, in the loop vectorize pass, a COND_<OP> cannot be vectorized, the if-converted loop should be abandoned when the target doesn't support such an ifn. gcc/ 2018-10-12 Renlin Li <renlin.li@arm.com> PR target/87563 * tree-vectorizer.c (try_vectorize_loop_1): Don't use if-converted loop when it contains an ifn with types not supported by backend. * internal-fn.c (expand_direct_optab_fn): Add an assert. (direct_internal_fn_supported_p): New helper function. * internal-fn.h (direct_internal_fn_supported_p): Declare. gcc/testsuite/ 2018-10-12 Renlin Li <renlin.li@arm.com> PR target/87563 * gcc.target/aarch64/sve/pr87563.c: New. From-SVN: r265172
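As a hypothetical illustration (not the PR testcase), a loop like the following is if-converted using a scalar conditional ifn; if the target cannot provide a vector form, the if-converted copy has to be thrown away:

  void
  f (double *x, double *y, int n)
  {
    for (int i = 0; i < n; ++i)
      if (y[i] != 0.0)
        x[i] = x[i] / y[i];  /* if-conversion emits a scalar .COND_RDIV */
  }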
2018-08-03Handle SLP of call pattern statementsRichard Sandiford1-0/+36
We couldn't vectorise:

  for (int j = 0; j < n; ++j)
    {
      for (int i = 0; i < 16; ++i)
        a[i] = (b[i] + c[i]) >> 1;
      a += step;
      b += step;
      c += step;
    }

at -O3 because cunrolli unrolled the inner loop and SLP couldn't handle AVG_FLOOR patterns (see also PR86504). The problem was some overly strict checking of pattern statements compared to normal statements in vect_get_and_check_slp_defs:

  switch (gimple_code (def_stmt))
    {
    case GIMPLE_PHI:
    case GIMPLE_ASSIGN:
      break;

    default:
      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
                         "unsupported defining stmt:\n");
      return -1;
    }

The easy fix would have been to add GIMPLE_CALL to the switch, but I don't think the switch is doing anything useful. We only create pattern statements that the rest of the vectoriser can handle, and the other checks in this function and elsewhere check whether SLP is possible. I'm also not sure why:

  if (!first && !oprnd_info->first_pattern
      /* Allow different pattern state for the defs of the first
         stmt in reduction chains.  */
      && (oprnd_info->first_dt != vect_reduction_def

is necessary. All that should matter is that the statements in the node are "similar enough". It turned out to be quite hard to find a convincing example that used a mixture of pattern and non-pattern statements, so bb-slp-pow-1.c is the best I could come up with. But it does show that the combination of "xi * xi" statements and "pow (xj, 2) -> xj * xj" patterns is handled correctly. The patch therefore just removes the whole if block. The loop also needed commutative swapping to be extended to at least AVG_FLOOR. This gives +3.9% on 525.x264_r at -O3. 2018-08-03 Richard Sandiford <richard.sandiford@arm.com> gcc/ * internal-fn.h (first_commutative_argument): Declare. * internal-fn.c (first_commutative_argument): New function. * tree-vect-slp.c (vect_get_and_check_slp_defs): Remove extra restrictions for pattern statements. Use first_commutative_argument to look for commutative operands in calls to internal functions. gcc/testsuite/ * gcc.dg/vect/bb-slp-over-widen-1.c: Expect AVG_FLOOR to be used on vect_avg_qi targets. * gcc.dg/vect/bb-slp-over-widen-2.c: Likewise. * gcc.dg/vect/bb-slp-pow-1.c: New test. * gcc.dg/vect/vect-avg-15.c: Likewise. From-SVN: r263290
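A sketch of the kind of pattern/non-pattern mixture described for bb-slp-pow-1.c (this reconstruction is hypothetical, not the actual test):

  void
  f (double *x)
  {
    x[0] = x[0] * x[0];              /* ordinary statement */
    x[1] = __builtin_pow (x[1], 2);  /* pattern statement: becomes x[1] * x[1] */
  }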
2018-07-12Use conditional internal functions in if-conversionRichard Sandiford1-1/+22
This patch uses IFN_COND_* to vectorise conditionally-executed, potentially-trapping arithmetic, such as most floating-point ops with -ftrapping-math. E.g.:

  if (cond) { ... x = a + b; ... }

becomes:

  ... x = .COND_ADD (cond, a, b, else_value); ...

When this transformation is done on its own, the value of x for !cond isn't important, so else_value is simply the target's preferred_else_value (i.e. the value it can handle the most efficiently). However, the patch also looks for the equivalent of:

  y = cond ? x : c;

in which the "then" value is the result of the conditionally-executed operation and the "else" value "c" is some value that is available at x. In that case we can instead use:

  x = .COND_ADD (cond, a, b, c);

and replace uses of y with uses of x. The patch also looks for:

  y = !cond ? c : x;

which can be transformed in the same way. This involved adding a new utility function inverse_conditions_p, which was already open-coded in a more limited way in match.pd. 2018-07-12 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * fold-const.h (inverse_conditions_p): Declare. * fold-const.c (inverse_conditions_p): New function. * match.pd: Use inverse_conditions_p. Add folds of view_converts that test the inverse condition of a conditional internal function. * internal-fn.h (vectorized_internal_fn_supported_p): Declare. * internal-fn.c (internal_fn_mask_index): Handle conditional internal functions. (vectorized_internal_fn_supported_p): New function. * tree-if-conv.c: Include internal-fn.h and fold-const.h. (any_pred_load_store): Replace with... (need_to_predicate): ...this new variable. (redundant_ssa_names): New variable. (ifcvt_can_use_mask_load_store): Move initial checks to... (ifcvt_can_predicate): ...this new function. Handle tree codes for which a conditional internal function exists. (if_convertible_gimple_assign_stmt_p): Use ifcvt_can_predicate instead of ifcvt_can_use_mask_load_store. Update after variable name change. (predicate_load_or_store): New function, split out from predicate_mem_writes. (check_redundant_cond_expr): New function. (value_available_p): Likewise. (predicate_rhs_code): Likewise. (predicate_mem_writes): Rename to... (predicate_statements): ...this. Use predicate_load_or_store and predicate_rhs_code. (combine_blocks, tree_if_conversion): Update after above name changes. (ifcvt_local_dce): Handle redundant_ssa_names. * tree-vect-patterns.c (vect_recog_mask_conversion_pattern): Handle general conditional functions. * tree-vect-stmts.c (vectorizable_call): Likewise. gcc/testsuite/ * gcc.dg/vect/vect-cond-arith-4.c: New test. * gcc.dg/vect/vect-cond-arith-5.c: Likewise. * gcc.target/aarch64/sve/cond_arith_1.c: Likewise. * gcc.target/aarch64/sve/cond_arith_1_run.c: Likewise. * gcc.target/aarch64/sve/cond_arith_2.c: Likewise. * gcc.target/aarch64/sve/cond_arith_2_run.c: Likewise. * gcc.target/aarch64/sve/cond_arith_3.c: Likewise. * gcc.target/aarch64/sve/cond_arith_3_run.c: Likewise. From-SVN: r262589
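At the source level, the second case corresponds to something like this hypothetical example:

  void
  f (double *x, double *y, double *c, int n)
  {
    for (int i = 0; i < n; ++i)
      x[i] = y[i] > 0.0 ? x[i] + y[i] : c[i];
    /* Vectorized as .COND_ADD (y[i] > 0.0, x[i], y[i], c[i]), reusing
       c[i] as the else value instead of a separate select.  */
  }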
2018-07-12Support fused multiply-adds in fully-masked reductionsRichard Sandiford1-0/+56
This patch adds support for fusing a conditional add or subtract with a multiplication, so that we can use fused multiply-add and multiply-subtract operations for fully-masked reductions. E.g. for SVE we vectorise:

  double res = 0.0;
  for (int i = 0; i < n; ++i)
    res += x[i] * y[i];

using a fully-masked loop in which the loop body has the form:

  res_1 = PHI<0(preheader), res_2(latch)>;
  avec = .MASK_LOAD (loop_mask, a)
  bvec = .MASK_LOAD (loop_mask, b)
  prod = avec * bvec;
  res_2 = .COND_ADD (loop_mask, res_1, prod, res_1);

where the last statement does the equivalent of:

  res_2 = loop_mask ? res_1 + prod : res_1;

(operating elementwise). The point of the patch is to convert the last two statements into:

  res_2 = .COND_FMA (loop_mask, avec, bvec, res_1, res_1);

which is equivalent to:

  res_2 = loop_mask ? fma (avec, bvec, res_1) : res_1;

(again operating elementwise). 2018-07-12 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * internal-fn.h (can_interpret_as_conditional_op_p): Declare. * internal-fn.c (can_interpret_as_conditional_op_p): New function. * tree-ssa-math-opts.c (convert_mult_to_fma_1): Handle conditional plus and minus and convert them into IFN_COND_FMA-based sequences. (convert_mult_to_fma): Handle conditional plus and minus. gcc/testsuite/ * gcc.dg/vect/vect-fma-2.c: New test. * gcc.target/aarch64/sve/reduc_4.c: Likewise. * gcc.target/aarch64/sve/reduc_6.c: Likewise. * gcc.target/aarch64/sve/reduc_7.c: Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r262588
2018-07-12Add IFN_COND_FMA functionsRichard Sandiford1-0/+56
This patch adds conditional equivalents of the IFN_FMA built-in functions. Most of it is just a mechanical extension of the binary stuff. 2018-07-12 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * doc/md.texi (cond_fma, cond_fms, cond_fnma, cond_fnms): Document. * optabs.def (cond_fma_optab, cond_fms_optab, cond_fnma_optab) (cond_fnms_optab): New optabs. * internal-fn.def (COND_FMA, COND_FMS, COND_FNMA, COND_FNMS): New internal functions. (FMA): Use DEF_INTERNAL_FLT_FN rather than DEF_INTERNAL_FLT_FLOATN_FN. * internal-fn.h (get_conditional_internal_fn): Declare. (get_unconditional_internal_fn): Likewise. * internal-fn.c (cond_ternary_direct): New macro. (expand_cond_ternary_optab_fn): Likewise. (direct_cond_ternary_optab_supported_p): Likewise. (FOR_EACH_COND_FN_PAIR): Likewise. (get_conditional_internal_fn): New function. (get_unconditional_internal_fn): Likewise. * gimple-match.h (gimple_match_op::MAX_NUM_OPS): Bump to 5. (gimple_match_op::gimple_match_op): Add a new overload for 5 operands. (gimple_match_op::set_op): Likewise. (gimple_resimplify5): Declare. * genmatch.c (decision_tree::gen): Generate simplifications for 5 operands. * gimple-match-head.c (gimple_simplify): Define an overload for 5 operands. Handle calls with 5 arguments in the top-level overload. (convert_conditional_op): Handle conversions from unconditional internal functions to conditional ones. (gimple_resimplify5): New function. (build_call_internal): Pass a fifth operand. (maybe_push_res_to_seq): Likewise. (try_conditional_simplification): Try converting conditional internal functions to unconditional internal functions. Handle 3-operand unconditional forms. * match.pd (UNCOND_TERNARY, COND_TERNARY): Operator lists. Define ternary equivalents of the current rules for binary conditional internal functions. * config/aarch64/aarch64.c (aarch64_preferred_else_value): Handle ternary operations. * config/aarch64/iterators.md (UNSPEC_COND_FMLA, UNSPEC_COND_FMLS) (UNSPEC_COND_FNMLA, UNSPEC_COND_FNMLS): New unspecs. (optab): Handle them. (SVE_COND_FP_TERNARY): New int iterator. (sve_fmla_op, sve_fmad_op): New int attributes. * config/aarch64/aarch64-sve.md (cond_<optab><mode>) (*cond_<optab><mode>_2, *cond_<optab><mode_4) (*cond_<optab><mode>_any): New SVE_COND_FP_TERNARY patterns. gcc/testsuite/ * gcc.dg/vect/vect-cond-arith-3.c: New test. * gcc.target/aarch64/sve/vcond_13.c: Likewise. * gcc.target/aarch64/sve/vcond_13_run.c: Likewise. * gcc.target/aarch64/sve/vcond_14.c: Likewise. * gcc.target/aarch64/sve/vcond_14_run.c: Likewise. * gcc.target/aarch64/sve/vcond_15.c: Likewise. * gcc.target/aarch64/sve/vcond_15_run.c: Likewise. * gcc.target/aarch64/sve/vcond_16.c: Likewise. * gcc.target/aarch64/sve/vcond_16_run.c: Likewise. From-SVN: r262587
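A scalar model of the per-lane semantics (a sketch only; the real functions operate elementwise on vectors under a mask):

  #include <math.h>

  /* .COND_FMA (mask, a, b, c, else_value) computes, per element:  */
  double
  cond_fma_lane (int mask, double a, double b, double c, double else_value)
  {
    return mask ? fma (a, b, c) : else_value;
  }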
2018-07-12Extend tree code folds to IFN_COND_*Richard Sandiford1-20/+34
This patch adds match.pd support for applying normal folds to their IFN_COND_* forms. E.g. the rule:

  (plus @0 (negate @1)) -> (minus @0 @1)

also allows the fold:

  (IFN_COND_ADD @0 @1 (negate @2) @3) -> (IFN_COND_SUB @0 @1 @2 @3)

Actually doing this by direct matches in gimple-match.c would probably lead to combinatorial explosion, so instead, the patch makes gimple_match_op carry a condition under which the operation happens ("cond"), and the value to use when the condition is false ("else_value"). Thus in the example above we'd do the following:

(a) convert:

      cond:NULL_TREE (IFN_COND_ADD @0 @1 @4 @3) else_value:NULL_TREE

    to:

      cond:@0 (plus @1 @4) else_value:@3

(b) apply gimple_resimplify to (plus @1 @4)

(c) reintroduce cond and else_value when constructing the result.

Nested operations inherit the condition of the outer operation (so that we don't introduce extra faults) but have a null else_value. If we try to build such an operation, the target gets to choose what else_value it can handle efficiently: obvious choices include one of the operands or a zero constant. (The alternative would be to have some representation for an undefined value, but that seems a bit invasive, and isn't likely to be useful here.) I've made the condition a mandatory part of the gimple_match_op constructor so that it doesn't accidentally get dropped. 2018-07-12 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * target.def (preferred_else_value): New target hook. * doc/tm.texi.in (TARGET_PREFERRED_ELSE_VALUE): New hook. * doc/tm.texi: Regenerate. * targhooks.h (default_preferred_else_value): Declare. * targhooks.c (default_preferred_else_value): New function. * internal-fn.h (conditional_internal_fn_code): Declare. * internal-fn.c (FOR_EACH_CODE_MAPPING): New macro. (get_conditional_internal_fn): Use it. (conditional_internal_fn_code): New function. * gimple-match.h (gimple_match_cond): New struct. (gimple_match_op): Add a cond member function. (gimple_match_op::gimple_match_op): Update all forms to take a gimple_match_cond. * genmatch.c (expr::gen_transform): Use the same condition as res_op for the suboperation, but don't specify a particular else_value. * tree-ssa-sccvn.c (vn_nary_simplify, vn_reference_lookup_3) (visit_nary_op, visit_reference_op_load): Pass gimple_match_cond::UNCOND to the gimple_match_op constructor. * gimple-match-head.c: Include tree-eh.h. (convert_conditional_op): New function. (maybe_resimplify_conditional_op): Likewise. (gimple_resimplify1): Call maybe_resimplify_conditional_op. (gimple_resimplify2): Likewise. (gimple_resimplify3): Likewise. (gimple_resimplify4): Likewise. (maybe_push_res_to_seq): Return null for conditional operations. (try_conditional_simplification): New function. (gimple_simplify): Call it. Pass conditions to the gimple_match_op constructor. * match.pd: Fold VEC_COND_EXPRs of an IFN_COND_* call to a new IFN_COND_* call. * config/aarch64/aarch64.c (aarch64_preferred_else_value): New function. (TARGET_PREFERRED_ELSE_VALUE): Redefine. gcc/testsuite/ * gcc.dg/vect/vect-cond-arith-2.c: New test. * gcc.target/aarch64/sve/loop_add_6.c: Likewise. From-SVN: r262586
2018-05-25Add IFN_COND_{MUL,DIV,MOD,RDIV}Richard Sandiford1-0/+6
This patch adds support for conditional multiplication and division. It's mostly mechanical, but a few notes:

* The *_optab name and the .md names are the same as the unconditional forms, just with "cond_" added to the front. This means we still have the awkward difference between sdiv and div, etc.

* It was easier to retain the difference between integer and FP division in the function names, given that they map to different tree codes (TRUNC_DIV_EXPR and RDIV_EXPR).

* SVE has no direct support for IFN_COND_MOD, but it seemed more consistent to add it anyway.

* Adding IFN_COND_MUL enables an extra fully-masked reduction in gcc.dg/vect/pr53773.c.

* In practice we don't actually use the integer division forms without if-conversion support (added by a later patch).

2018-05-25 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * doc/sourcebuild.texi (vect_double_cond_arith): Include multiplication and division. * doc/md.texi (cond_mul@var{m}, cond_div@var{m}, cond_mod@var{m}) (cond_udiv@var{m}, cond_umod@var{m}): Document. * optabs.def (cond_smul_optab, cond_sdiv_optab, cond_smod_optab) (cond_udiv_optab, cond_umod_optab): New optabs. * internal-fn.def (IFN_COND_MUL, IFN_COND_DIV, IFN_COND_MOD) (IFN_COND_RDIV): New internal functions. * internal-fn.c (get_conditional_internal_fn): Handle TRUNC_DIV_EXPR, TRUNC_MOD_EXPR and RDIV_EXPR. * match.pd (UNCOND_BINARY, COND_BINARY): Handle them. * config/aarch64/iterators.md (UNSPEC_COND_MUL, UNSPEC_COND_DIV): New unspecs. (SVE_INT_BINARY): Include mult. (SVE_COND_FP_BINARY): Include UNSPEC_MUL and UNSPEC_DIV. (optab, sve_int_op): Handle mult. (optab, sve_fp_op, commutative): Handle UNSPEC_COND_MUL and UNSPEC_COND_DIV. * config/aarch64/aarch64-sve.md (cond_<optab><mode>): New pattern for SVE_INT_BINARY_SD. gcc/testsuite/ * lib/target-supports.exp (check_effective_target_vect_double_cond_arith): Include multiplication and division. * gcc.dg/vect/pr53773.c: Do not expect a scalar tail when using fully-masked loops with a fixed vector length. * gcc.dg/vect/vect-cond-arith-1.c: Add multiplication and division tests. * gcc.target/aarch64/sve/vcond_8.c: Likewise. * gcc.target/aarch64/sve/vcond_9.c: Likewise. * gcc.target/aarch64/sve/vcond_12.c: Add multiplication tests. From-SVN: r260713
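As a hedged illustration of the extra fully-masked reduction mentioned for gcc.dg/vect/pr53773.c (this is a reconstruction, not the actual test):

  int
  product (int *x, int n)
  {
    int res = 1;
    for (int i = 0; i < n; ++i)
      res *= x[i];  /* with full masking: res = .COND_MUL (mask, res, x[i], res) */
    return res;
  }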
2018-05-25Add an "else" argument to IFN_COND_* functionsRichard Sandiford1-6/+13
As suggested by Richard B, this patch changes the IFN_COND_* functions so that they take the else value of the ?: operation as a final argument, rather than always using argument 1. All current callers will still use the equivalent of argument 1, so this patch makes the SVE code assert that for now. Later patches add the general case. 2018-05-25 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * doc/md.texi: Update the documentation of the cond_* optabs to mention the new final operand. Fix GET_MODE_NUNITS call. Describe the scalar case too. * internal-fn.def (IFN_EXTRACT_LAST): Change type to fold_left. * internal-fn.c (expand_cond_unary_optab_fn): Expect 3 operands instead of 2. (expand_cond_binary_optab_fn): Expect 4 operands instead of 3. (get_conditional_internal_fn): Update comment. * tree-vect-loop.c (vectorizable_reduction): Pass the original accumulator value as a final argument to conditional functions. * config/aarch64/aarch64-sve.md (cond_<optab><mode>): Turn into a define_expand and add an "else" operand. Assert for now that the else operand is equal to operand 2. Use SVE_INT_BINARY and SVE_COND_FP_BINARY instead of SVE_COND_INT_OP and SVE_COND_FP_OP. (*cond_<optab><mode>): New patterns. * config/aarch64/iterators.md (UNSPEC_COND_SMAX, UNSPEC_COND_UMAX) (UNSPEC_COND_SMIN, UNSPEC_COND_UMIN, UNSPEC_COND_AND, UNSPEC_COND_ORR) (UNSPEC_COND_EOR): Delete. (optab): Remove associated mappings. (SVE_INT_BINARY): New code iterator. (sve_int_op): Remove int attribute and add "minus" to the code attribute. (SVE_COND_INT_OP): Delete. (SVE_COND_FP_OP): Rename to... (SVE_COND_FP_BINARY): ...this. From-SVN: r260707
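In scalar terms the extended interface behaves like this sketch (my notation rather than GCC source):

    /* Model of the new 4-operand form:
       .COND_ADD (mask, a, b, else_val) computes mask ? a + b : else_val.  */
    double
    cond_add (_Bool mask, double a, double b, double else_val)
    {
      return mask ? a + b : else_val;
    }

Existing callers pass the equivalent of a as else_val, which is why the SVE patterns can assert for now that the else operand equals operand 2.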
2018-05-22Handle a null lhs in expand_direct_optab_fn (PR85862)Richard Sandiford1-6/+7
This PR showed that the normal function for expanding directly-mapped internal functions didn't handle the case in which the call was only being kept for its side-effects. 2018-05-22 Richard Sandiford <richard.sandiford@linaro.org> gcc/ PR middle-end/85862 * internal-fn.c (expand_direct_optab_fn): Cope with a null lhs. gcc/testsuite/ PR middle-end/85862 * gcc.dg/torture/pr85862.c: New test. From-SVN: r260504
2018-05-18Replace FMA_EXPR with one internal fn per optabRichard Sandiford1-0/+5
There are four optabs for various forms of fused multiply-add: fma, fms, fnma and fnms. Of these, only fma had a direct gimple representation. For the other three we relied on special pattern-matching during expand, although tree-ssa-math-opts.c did have some code to try to second-guess what expand would do. This patch removes the old FMA_EXPR representation of fma and introduces four new internal functions, one for each optab. IFN_FMA is tied to BUILT_IN_FMA* while the other three are independent directly-mapped internal functions. It's then possible to do the pattern-matching in match.pd and tree-ssa-math-opts.c (via folding) can select the exact FMA-based operation. The BRIG & HSA parts are a best guess, but seem relatively simple. 2018-05-18 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * doc/sourcebuild.texi (scalar_all_fma): Document. * tree.def (FMA_EXPR): Delete. * internal-fn.def (FMA, FMS, FNMA, FNMS): New internal functions. * internal-fn.c (ternary_direct): New macro. (expand_ternary_optab_fn): Likewise. (direct_ternary_optab_supported_p): Likewise. * Makefile.in (build/genmatch.o): Depend on case-fn-macros.h. * builtins.c (fold_builtin_fma): Delete. (fold_builtin_3): Don't call it. * cfgexpand.c (expand_debug_expr): Remove FMA_EXPR handling. * expr.c (expand_expr_real_2): Likewise. * fold-const.c (operand_equal_p): Likewise. (fold_ternary_loc): Likewise. * gimple-pretty-print.c (dump_ternary_rhs): Likewise. * gimple.c (DEFTREECODE): Likewise. * gimplify.c (gimplify_expr): Likewise. * optabs-tree.c (optab_for_tree_code): Likewise. * tree-cfg.c (verify_gimple_assign_ternary): Likewise. * tree-eh.c (operation_could_trap_p): Likewise. (stmt_could_throw_1_p): Likewise. * tree-inline.c (estimate_operator_cost): Likewise. * tree-pretty-print.c (dump_generic_node): Likewise. (op_code_prio): Likewise. * tree-ssa-loop-im.c (stmt_cost): Likewise. * tree-ssa-operands.c (get_expr_operands): Likewise. * tree.c (commutative_ternary_tree_code, add_expr): Likewise. * fold-const-call.h (fold_fma): Delete. * fold-const-call.c (fold_const_call_ssss): Handle CFN_FMS, CFN_FNMA and CFN_FNMS. (fold_fma): Delete. * genmatch.c (combined_fn): New enum. (commutative_ternary_tree_code): Remove FMA_EXPR handling. (commutative_op): New function. (commutate): Use it. Handle more than 2 operands. (dt_operand::gen_gimple_expr): Use commutative_op. (parser::parse_expr): Allow :c to be used with non-binary operators if the commutative operand is known. * gimple-ssa-backprop.c (backprop::process_builtin_call_use): Handle CFN_FMS, CFN_FNMA and CFN_FNMS. (backprop::process_assign_use): Remove FMA_EXPR handling. * hsa-gen.c (gen_hsa_insns_for_operation_assignment): Likewise. (gen_hsa_fma): New function. (gen_hsa_insn_for_internal_fn_call): Use it for IFN_FMA, IFN_FMS, IFN_FNMA and IFN_FNMS. * match.pd: Add folds for IFN_FMS, IFN_FNMA and IFN_FNMS. * gimple-fold.h (follow_all_ssa_edges): Declare. * gimple-fold.c (follow_all_ssa_edges): New function. * tree-ssa-math-opts.c (convert_mult_to_fma_1): Use the gimple_build interface and use follow_all_ssa_edges to fold the result. (convert_mult_to_fma): Use direct_internal_fn_supported_p instead of checking for optabs directly. * config/i386/i386.c (ix86_add_stmt_cost): Recognize FMAs as calls rather than FMA_EXPRs. * config/rs6000/rs6000.c (rs6000_gimple_fold_builtin): Create a call to IFN_FMA instead of an FMA_EXPR. gcc/brig/ * brigfrontend/brig-function.cc (brig_function::get_builtin_for_hsa_opcode): Use BUILT_IN_FMA for BRIG_OPCODE_FMA.
(brig_function::get_tree_code_for_hsa_opcode): Treat BUILT_IN_FMA as a call. gcc/c/ * gimple-parser.c (c_parser_gimple_postfix_expression): Remove __FMA_EXPR handling. gcc/cp/ * constexpr.c (cxx_eval_constant_expression): Remove FMA_EXPR handling. (potential_constant_expression_1): Likewise. gcc/testsuite/ * lib/target-supports.exp (check_effective_target_scalar_all_fma): New proc. * gcc.dg/fma-1.c: New test. * gcc.dg/fma-2.c: Likewise. * gcc.dg/fma-3.c: Likewise. * gcc.dg/fma-4.c: Likewise. * gcc.dg/fma-5.c: Likewise. * gcc.dg/fma-6.c: Likewise. * gcc.dg/fma-7.c: Likewise. * gcc.dg/gimplefe-26.c: Use .FMA instead of __FMA and require scalar_all_fma. * gfortran.dg/reassoc_7.f: Pass -ffp-contract=off. * gfortran.dg/reassoc_8.f: Likewise. * gfortran.dg/reassoc_9.f: Likewise. * gfortran.dg/reassoc_10.f: Likewise. From-SVN: r260348
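In scalar terms the four operations are as below (a sketch using C99 fma to preserve the single rounding step; the wrapper names are illustrative, not GCC's):

    #include <math.h>

    double do_fma  (double a, double b, double c) { return fma (a, b, c); }   /* .FMA:   a * b + c  */
    double do_fms  (double a, double b, double c) { return fma (a, b, -c); }  /* .FMS:   a * b - c  */
    double do_fnma (double a, double b, double c) { return fma (-a, b, c); }  /* .FNMA: -a * b + c  */
    double do_fnms (double a, double b, double c) { return -fma (a, b, c); }  /* .FNMS: -a * b - c  */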
2018-05-17Gimple FE support for internal functionsRichard Sandiford1-0/+20
This patch gets the gimple FE to parse calls to internal functions. The only non-obvious thing was how the functions should be written to avoid clashes with real function names. One option would be to go the magic number of underscores route, but we already do that for built-in functions, and it would be good to keep them visually distinct. In the end I borrowed the local/internal label convention from asm and used: x = .SQRT (y); 2018-05-17 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * internal-fn.h (lookup_internal_fn): Declare. * internal-fn.c (lookup_internal_fn): New function. * gimple.c (gimple_build_call_from_tree): Handle calls to internal functions. * gimple-pretty-print.c (dump_gimple_call): Print "." before internal function names. * tree-pretty-print.c (dump_generic_node): Likewise. * tree-ssa-scopedtables.c (expr_hash_elt::print): Likewise. gcc/c/ * gimple-parser.c: Include internal-fn.h. (c_parser_gimple_statement): Treat a leading CPP_DOT as a call. (c_parser_gimple_call_internal): New function. (c_parser_gimple_postfix_expression): Use it to handle CPP_DOT. Fix typos in comment. gcc/testsuite/ * gcc.dg/gimplefe-28.c: New test. * gcc.dg/asan/use-after-scope-9.c: Adjust expected output for internal function calls. * gcc.dg/goacc/loop-processing-1.c: Likewise. From-SVN: r260316
2018-02-20Fix incorrect TARGET_MEM_REF alignment (PR 84419)Richard Sandiford1-1/+4
expand_call_mem_ref checks for TARGET_MEM_REFs that have compatible type, but it didn't then go on to install the specific type we need, which might have different alignment due to: if (TYPE_ALIGN (type) != align) type = build_aligned_type (type, align); This was causing masked stores to be incorrectly marked as aligned on AVX512. 2018-02-20 Richard Sandiford <richard.sandiford@linaro.org> gcc/ PR tree-optimization/84419 * internal-fn.c (expand_call_mem_ref): Create a TARGET_MEM_REF with the required type if its current type is compatible but different. gcc/testsuite/ PR tree-optimization/84419 * gcc.dg/vect/pr84419.c: New test. From-SVN: r257847
2018-01-13Add support for SVE scatter storesRichard Sandiford1-2/+85
This is mostly a mechanical extension of the previous gather load support to scatter stores. The internal functions in this case are: IFN_SCATTER_STORE (base, offsets, scale, values) IFN_MASK_SCATTER_STORE (base, offsets, scale, values, mask) However, one nonobvious change is to vect_analyze_data_ref_access. If we're treating an access as a gather load or scatter store (i.e. if STMT_VINFO_GATHER_SCATTER_P is true), the existing code would create a dummy data_reference whose step is 0. There's not really much else it could do, since the whole point is that the step isn't predictable from iteration to iteration. We then went into this code in vect_analyze_data_ref_access: /* Allow loads with zero step in inner-loop vectorization. */ if (loop_vinfo && integer_zerop (step)) { GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL; if (!nested_in_vect_loop_p (loop, stmt)) return DR_IS_READ (dr); I.e. we'd take the step literally and assume that this is a load or store to an invariant address. Loads from invariant addresses are supported but stores to them aren't. The code therefore had the effect of disabling all scatter stores. AFAICT this is true of AVX too: although tests like avx512f-scatter-1.c test for the correctness of a scatter-like loop, they don't seem to check whether a scatter instruction is actually used. The patch therefore makes vect_analyze_data_ref_access return true for scatters. We do seem to handle the aliasing correctly; that's tested by other functions, and is symmetrical to the already-working gather case. 2018-01-13 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * doc/sourcebuild.texi (vect_scatter_store): Document. * optabs.def (scatter_store_optab, mask_scatter_store_optab): New optabs. * doc/md.texi (scatter_store@var{m}, mask_scatter_store@var{m}): Document. * genopinit.c (main): Add supports_vec_scatter_store and supports_vec_scatter_store_cached to target_optabs. * gimple.h (gimple_expr_type): Handle IFN_SCATTER_STORE and IFN_MASK_SCATTER_STORE. * internal-fn.def (SCATTER_STORE, MASK_SCATTER_STORE): New internal functions. * internal-fn.h (internal_store_fn_p): Declare. (internal_fn_stored_value_index): Likewise. * internal-fn.c (scatter_store_direct): New macro. (expand_scatter_store_optab_fn): New function. (direct_scatter_store_optab_supported_p): New macro. (internal_store_fn_p): New function. (internal_gather_scatter_fn_p): Handle IFN_SCATTER_STORE and IFN_MASK_SCATTER_STORE. (internal_fn_mask_index): Likewise. (internal_fn_stored_value_index): New function. (internal_gather_scatter_fn_supported_p): Adjust operand numbers for scatter stores. * optabs-query.h (supports_vec_scatter_store_p): Declare. * optabs-query.c (supports_vec_scatter_store_p): New function. * tree-vectorizer.h (vect_get_store_rhs): Declare. * tree-vect-data-refs.c (vect_analyze_data_ref_access): Return true for scatter stores. (vect_gather_scatter_fn_p): Handle scatter stores too. (vect_check_gather_scatter): Consider using scatter stores if supports_vec_scatter_store_p. * tree-vect-patterns.c (vect_try_gather_scatter_pattern): Handle scatter stores too. * tree-vect-stmts.c (exist_non_indexing_operands_for_use_p): Use internal_fn_stored_value_index. (check_load_store_masking): Handle scatter stores too. (vect_get_store_rhs): Make public. (vectorizable_call): Use internal_store_fn_p. (vectorizable_store): Handle scatter store internal functions. 
(vect_transform_stmt): Compare GROUP_STORE_COUNT with GROUP_SIZE when deciding whether the end of the group has been reached. * config/aarch64/aarch64.md (UNSPEC_ST1_SCATTER): New unspec. * config/aarch64/aarch64-sve.md (scatter_store<mode>): New expander. (mask_scatter_store<mode>): New insns. gcc/testsuite/ * lib/target-supports.exp (check_effective_target_vect_scatter_store): New proc. * gcc.dg/vect/pr25413a.c: Expect both loops to be optimized on targets with scatter stores. * gcc.dg/vect/vect-71.c: Restrict XFAIL to targets without scatter stores. * gcc.target/aarch64/sve/mask_scatter_store_1.c: New test. * gcc.target/aarch64/sve/mask_scatter_store_2.c: Likewise. * gcc.target/aarch64/sve/scatter_store_1.c: Likewise. * gcc.target/aarch64/sve/scatter_store_2.c: Likewise. * gcc.target/aarch64/sve/scatter_store_3.c: Likewise. * gcc.target/aarch64/sve/scatter_store_4.c: Likewise. * gcc.target/aarch64/sve/scatter_store_5.c: Likewise. * gcc.target/aarch64/sve/scatter_store_6.c: Likewise. * gcc.target/aarch64/sve/scatter_store_7.c: Likewise. * gcc.target/aarch64/sve/strided_store_1.c: Likewise. * gcc.target/aarch64/sve/strided_store_2.c: Likewise. * gcc.target/aarch64/sve/strided_store_3.c: Likewise. * gcc.target/aarch64/sve/strided_store_4.c: Likewise. * gcc.target/aarch64/sve/strided_store_5.c: Likewise. * gcc.target/aarch64/sve/strided_store_6.c: Likewise. * gcc.target/aarch64/sve/strided_store_7.c: Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256643
2018-01-13Add support for SVE gather loadsRichard Sandiford1-0/+134
This patch adds support for SVE gather loads. It uses basically the same analysis code as the AVX gather support, but after that there are two major differences: - It uses new internal functions rather than target built-ins. The interface is: IFN_GATHER_LOAD (base, offsets, scale) IFN_MASK_GATHER_LOAD (base, offsets, scale, mask) which should be reasonably generic. One of the advantages of using internal functions is that other passes can understand what the functions do, but a more immediate advantage is that we can query the underlying target pattern to see which scales it supports. - It uses pattern recognition to convert the offset to the right width, if it was originally narrower than that. This avoids having to do a widening operation as part of the gather expansion itself. 2018-01-13 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * doc/md.texi (gather_load@var{m}): Document. (mask_gather_load@var{m}): Likewise. * genopinit.c (main): Add supports_vec_gather_load and supports_vec_gather_load_cached to target_optabs. * optabs-tree.c (init_tree_optimization_optabs): Use ggc_cleared_alloc to allocate target_optabs. * optabs.def (gather_load_optab, mask_gather_load_optab): New optabs. * internal-fn.def (GATHER_LOAD, MASK_GATHER_LOAD): New internal functions. * internal-fn.h (internal_load_fn_p): Declare. (internal_gather_scatter_fn_p): Likewise. (internal_fn_mask_index): Likewise. (internal_gather_scatter_fn_supported_p): Likewise. * internal-fn.c (gather_load_direct): New macro. (expand_gather_load_optab_fn): New function. (direct_gather_load_optab_supported_p): New macro. (direct_internal_fn_optab): New function. (internal_load_fn_p): Likewise. (internal_gather_scatter_fn_p): Likewise. (internal_fn_mask_index): Likewise. (internal_gather_scatter_fn_supported_p): Likewise. * optabs-query.c (supports_at_least_one_mode_p): New function. (supports_vec_gather_load_p): Likewise. * optabs-query.h (supports_vec_gather_load_p): Declare. * tree-vectorizer.h (gather_scatter_info): Add ifn, element_type and memory_type fields. (NUM_PATTERNS): Bump to 15. * tree-vect-data-refs.c: Include internal-fn.h. (vect_gather_scatter_fn_p): New function. (vect_describe_gather_scatter_call): Likewise. (vect_check_gather_scatter): Try using internal functions for gather loads. Recognize existing calls to a gather load function. (vect_analyze_data_refs): Consider using gather loads if supports_vec_gather_load_p. * tree-vect-patterns.c (vect_get_load_store_mask): New function. (vect_get_gather_scatter_offset_type): Likewise. (vect_convert_mask_for_vectype): Likewise. (vect_add_conversion_to_pattern): Likewise. (vect_try_gather_scatter_pattern): Likewise. (vect_recog_gather_scatter_pattern): New pattern recognizer. (vect_vect_recog_func_ptrs): Add it. * tree-vect-stmts.c (exist_non_indexing_operands_for_use_p): Use internal_fn_mask_index and internal_gather_scatter_fn_p. (check_load_store_masking): Take the gather_scatter_info as an argument and handle gather loads. (vect_get_gather_scatter_ops): New function. (vectorizable_call): Check internal_load_fn_p. (vectorizable_load): Likewise. Handle gather load internal functions. (vectorizable_store): Update call to check_load_store_masking. * config/aarch64/aarch64.md (UNSPEC_LD1_GATHER): New unspec. * config/aarch64/iterators.md (SVE_S, SVE_D): New mode iterators. * config/aarch64/predicates.md (aarch64_gather_scale_operand_w) (aarch64_gather_scale_operand_d): New predicates.
* config/aarch64/aarch64-sve.md (gather_load<mode>): New expander. (mask_gather_load<mode>): New insns. gcc/testsuite/ * gcc.target/aarch64/sve/gather_load_1.c: New test. * gcc.target/aarch64/sve/gather_load_2.c: Likewise. * gcc.target/aarch64/sve/gather_load_3.c: Likewise. * gcc.target/aarch64/sve/gather_load_4.c: Likewise. * gcc.target/aarch64/sve/gather_load_5.c: Likewise. * gcc.target/aarch64/sve/gather_load_6.c: Likewise. * gcc.target/aarch64/sve/gather_load_7.c: Likewise. * gcc.target/aarch64/sve/mask_gather_load_1.c: Likewise. * gcc.target/aarch64/sve/mask_gather_load_2.c: Likewise. * gcc.target/aarch64/sve/mask_gather_load_3.c: Likewise. * gcc.target/aarch64/sve/mask_gather_load_4.c: Likewise. * gcc.target/aarch64/sve/mask_gather_load_5.c: Likewise. * gcc.target/aarch64/sve/mask_gather_load_6.c: Likewise. * gcc.target/aarch64/sve/mask_gather_load_7.c: Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256640
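For reference, the corresponding gather shape is (again a sketch rather than one of the new tests):

    void
    f (float *restrict dst, int *restrict idx, float *restrict src, int n)
    {
      for (int i = 0; i < n; ++i)
        dst[i] = src[idx[i]];   /* per-lane load addresses: IFN_GATHER_LOAD  */
    }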
2018-01-13Add support for in-order addition reduction using SVE FADDARichard Sandiford1-0/+5
This patch adds support for in-order floating-point addition reductions, which are suitable even in strict IEEE mode. Previously vect_is_simple_reduction would reject any cases that forbid reassociation. The idea is instead to tentatively accept them as "FOLD_LEFT_REDUCTIONs" and only fail later if there is no support for them. Although this patch only handles the particular case of plus and minus on floating-point types, there's no reason in principle why we couldn't handle other cases. The reductions use a new fold_left_plus_optab if available, otherwise they fall back to elementwise additions or subtractions. The vect_force_simple_reduction change makes it easier for parloops to read the type of reduction. 2018-01-13 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * optabs.def (fold_left_plus_optab): New optab. * doc/md.texi (fold_left_plus_@var{m}): Document. * internal-fn.def (IFN_FOLD_LEFT_PLUS): New internal function. * internal-fn.c (fold_left_direct): Define. (expand_fold_left_optab_fn): Likewise. (direct_fold_left_optab_supported_p): Likewise. * fold-const-call.c (fold_const_fold_left): New function. (fold_const_call): Use it to fold CFN_FOLD_LEFT_PLUS. * tree-parloops.c (valid_reduction_p): New function. (gather_scalar_reductions): Use it. * tree-vectorizer.h (FOLD_LEFT_REDUCTION): New vect_reduction_type. (vect_finish_replace_stmt): Declare. * tree-vect-loop.c (fold_left_reduction_fn): New function. (needs_fold_left_reduction_p): New function, split out from... (vect_is_simple_reduction): ...here. Accept reductions that forbid reassociation, but give them type FOLD_LEFT_REDUCTION. (vect_force_simple_reduction): Also store the reduction type in the assignment's STMT_VINFO_REDUC_TYPE. (vect_model_reduction_cost): Handle FOLD_LEFT_REDUCTION. (merge_with_identity): New function. (vect_expand_fold_left): Likewise. (vectorize_fold_left_reduction): Likewise. (vectorizable_reduction): Handle FOLD_LEFT_REDUCTION. Leave the scalar phi in place for it. Check for target support and reject cases that would reassociate the operation. Defer the transform phase to vectorize_fold_left_reduction. * config/aarch64/aarch64.md (UNSPEC_FADDA): New unspec. * config/aarch64/aarch64-sve.md (fold_left_plus_<mode>): New expander. (*fold_left_plus_<mode>, *pred_fold_left_plus_<mode>): New insns. gcc/testsuite/ * gcc.dg/vect/no-fast-math-vect16.c: Expect the test to pass and check for a message about using in-order reductions. * gcc.dg/vect/pr79920.c: Expect both loops to be vectorized and check for a message about using in-order reductions. * gcc.dg/vect/trapv-vect-reduc-4.c: Expect all three loops to be vectorized and check for a message about using in-order reductions. Expect targets with variable-length vectors to fall back to the fixed-length minimum. * gcc.dg/vect/vect-reduc-6.c: Expect the loop to be vectorized and check for a message about using in-order reductions. * gcc.dg/vect/vect-reduc-in-order-1.c: New test. * gcc.dg/vect/vect-reduc-in-order-2.c: Likewise. * gcc.dg/vect/vect-reduc-in-order-3.c: Likewise. * gcc.dg/vect/vect-reduc-in-order-4.c: Likewise. * gcc.target/aarch64/sve/reduc_strict_1.c: New test. * gcc.target/aarch64/sve/reduc_strict_1_run.c: Likewise. * gcc.target/aarch64/sve/reduc_strict_2.c: Likewise. * gcc.target/aarch64/sve/reduc_strict_2_run.c: Likewise. * gcc.target/aarch64/sve/reduc_strict_3.c: Likewise. * gcc.target/aarch64/sve/slp_13.c: Add floating-point types.
* gfortran.dg/vect/vect-8.f90: Expect 22 loops to be vectorized if vect_fold_left_plus. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256639
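The motivating case is a plain accumulation compiled without -ffast-math (a sketch):

    double
    sum (const double *x, int n)
    {
      double s = 0.0;
      for (int i = 0; i < n; ++i)
        s += x[i];   /* additions must stay in source order in strict IEEE mode  */
      return s;
    }

Previously vect_is_simple_reduction rejected this outright; with fold_left_plus_optab it can now vectorize as a chain of in-order FADDA operations.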
2018-01-13Add support for conditional reductions using SVE CLASTBRichard Sandiford1-0/+5
This patch uses SVE CLASTB to optimise conditional reductions. It means that we no longer need to maintain a separate index vector to record the most recent valid value, and no longer need to worry about overflow cases. 2018-01-13 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * doc/md.texi (fold_extract_last_@var{m}): Document. * doc/sourcebuild.texi (vect_fold_extract_last): Likewise. * optabs.def (fold_extract_last_optab): New optab. * internal-fn.def (FOLD_EXTRACT_LAST): New internal function. * internal-fn.c (fold_extract_direct): New macro. (expand_fold_extract_optab_fn): Likewise. (direct_fold_extract_optab_supported_p): Likewise. * tree-vectorizer.h (EXTRACT_LAST_REDUCTION): New vect_reduction_type. * tree-vect-loop.c (vect_model_reduction_cost): Handle EXTRACT_LAST_REDUCTION. (get_initial_def_for_reduction): Do not create an initial vector for EXTRACT_LAST_REDUCTION reductions. (vectorizable_reduction): Leave the scalar phi in place for EXTRACT_LAST_REDUCTIONs. Try using EXTRACT_LAST_REDUCTION ahead of INTEGER_INDUC_COND_REDUCTION. Do not check for an epilogue code for EXTRACT_LAST_REDUCTION and defer the transform phase to vectorizable_condition. * tree-vect-stmts.c (vect_finish_stmt_generation_1): New function, split out from... (vect_finish_stmt_generation): ...here. (vect_finish_replace_stmt): New function. (vectorizable_condition): Handle EXTRACT_LAST_REDUCTION. * config/aarch64/aarch64-sve.md (fold_extract_last_<mode>): New pattern. * config/aarch64/aarch64.md (UNSPEC_CLASTB): New unspec. gcc/testsuite/ * lib/target-supports.exp (check_effective_target_vect_fold_extract_last): New proc. * gcc.dg/vect/pr65947-1.c: Update dump messages. Add markup for fold_extract_last. * gcc.dg/vect/pr65947-2.c: Likewise. * gcc.dg/vect/pr65947-3.c: Likewise. * gcc.dg/vect/pr65947-4.c: Likewise. * gcc.dg/vect/pr65947-5.c: Likewise. * gcc.dg/vect/pr65947-6.c: Likewise. * gcc.dg/vect/pr65947-9.c: Likewise. * gcc.dg/vect/pr65947-10.c: Likewise. * gcc.dg/vect/pr65947-12.c: Likewise. * gcc.dg/vect/pr65947-14.c: Likewise. * gcc.dg/vect/pr80631-1.c: Likewise. * gcc.target/aarch64/sve/clastb_1.c: New test. * gcc.target/aarch64/sve/clastb_1_run.c: Likewise. * gcc.target/aarch64/sve/clastb_2.c: Likewise. * gcc.target/aarch64/sve/clastb_2_run.c: Likewise. * gcc.target/aarch64/sve/clastb_3.c: Likewise. * gcc.target/aarch64/sve/clastb_3_run.c: Likewise. * gcc.target/aarch64/sve/clastb_4.c: Likewise. * gcc.target/aarch64/sve/clastb_4_run.c: Likewise. * gcc.target/aarch64/sve/clastb_5.c: Likewise. * gcc.target/aarch64/sve/clastb_5_run.c: Likewise. * gcc.target/aarch64/sve/clastb_6.c: Likewise. * gcc.target/aarch64/sve/clastb_6_run.c: Likewise. * gcc.target/aarch64/sve/clastb_7.c: Likewise. * gcc.target/aarch64/sve/clastb_7_run.c: Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256633
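A typical conditional reduction has this shape (a sketch along the lines of the existing pr65947 tests):

    int
    last_match (const int *a, const int *b, int n)
    {
      int last = -1;
      for (int i = 0; i < n; ++i)
        if (b[i] > 0)
          last = a[i];   /* CLASTB keeps the value from the most recent active lane  */
      return last;
    }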
2018-01-13Add support for vectorising live-out values using SVE LASTBRichard Sandiford1-0/+5
This patch uses the SVE LASTB instruction to optimise cases in which a value produced by the final scalar iteration of a vectorised loop is live outside the loop. Previously this situation would stop us from using a fully-masked loop. 2018-01-13 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * doc/md.texi (extract_last_@var{m}): Document. * optabs.def (extract_last_optab): New optab. * internal-fn.def (EXTRACT_LAST): New internal function. * internal-fn.c (cond_unary_direct): New macro. (expand_cond_unary_optab_fn): Likewise. (direct_cond_unary_optab_supported_p): Likewise. * tree-vect-loop.c (vectorizable_live_operation): Allow fully-masked loops using EXTRACT_LAST. * config/aarch64/aarch64-sve.md (aarch64_sve_lastb<mode>): Rename to... (extract_last_<mode>): ...this optab. (vec_extract<mode><Vel>): Update accordingly. gcc/testsuite/ * gcc.target/aarch64/sve/live_1.c: New test. * gcc.target/aarch64/sve/live_1_run.c: Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256632
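The shape being optimized is roughly (a sketch):

    int
    f (const int *a, int n)
    {
      int x = 0;
      for (int i = 0; i < n; ++i)
        x = a[i] + 1;
      return x;   /* x is live after the loop; LASTB extracts the final lane's value  */
    }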
2018-01-13Allow ADDR_EXPRs of TARGET_MEM_REFsRichard Sandiford1-14/+44
This patch allows ADDR_EXPR <TARGET_MEM_REF ...>, which is useful when calling internal functions that take pointers to memory that is conditionally loaded or stored. This is a prerequisite to the following ivopts patch. 2018-01-13 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * expr.c (expand_expr_addr_expr_1): Handle ADDR_EXPRs of TARGET_MEM_REFs. * gimple-expr.h (is_gimple_addressable): Likewise. * gimple-expr.c (is_gimple_address): Likewise. * internal-fn.c (expand_call_mem_ref): New function. (expand_mask_load_optab_fn): Use it. (expand_mask_store_optab_fn): Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256627
2018-01-13Add support for reductions in fully-masked loopsRichard Sandiford1-0/+36
This patch removes the restriction that fully-masked loops cannot have reductions. The key thing here is to make sure that the reduction accumulator doesn't include any values associated with inactive lanes; the patch adds a bunch of conditional binary operations for doing that. 2018-01-13 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * doc/md.texi (cond_add@var{mode}, cond_sub@var{mode}) (cond_and@var{mode}, cond_ior@var{mode}, cond_xor@var{mode}) (cond_smin@var{mode}, cond_smax@var{mode}, cond_umin@var{mode}) (cond_umax@var{mode}): Document. * optabs.def (cond_add_optab, cond_sub_optab, cond_and_optab) (cond_ior_optab, cond_xor_optab, cond_smin_optab, cond_smax_optab) (cond_umin_optab, cond_umax_optab): New optabs. * internal-fn.def (COND_ADD, COND_SUB, COND_MIN, COND_MAX, COND_AND) (COND_IOR, COND_XOR): New internal functions. * internal-fn.h (get_conditional_internal_fn): Declare. * internal-fn.c (cond_binary_direct): New macro. (expand_cond_binary_optab_fn): Likewise. (direct_cond_binary_optab_supported_p): Likewise. (get_conditional_internal_fn): New function. * tree-vect-loop.c (vectorizable_reduction): Handle fully-masked loops. Cope with reduction statements that are vectorized as calls rather than assignments. * config/aarch64/aarch64-sve.md (cond_<optab><mode>): New insns. * config/aarch64/iterators.md (UNSPEC_COND_ADD, UNSPEC_COND_SUB) (UNSPEC_COND_SMAX, UNSPEC_COND_UMAX, UNSPEC_COND_SMIN) (UNSPEC_COND_UMIN, UNSPEC_COND_AND, UNSPEC_COND_ORR) (UNSPEC_COND_EOR): New unspecs. (optab): Add mappings for them. (SVE_COND_INT_OP, SVE_COND_FP_OP): New int iterators. (sve_int_op, sve_fp_op): New int attributes. gcc/testsuite/ * gcc.dg/vect/pr60482.c: Remove XFAIL for variable-length vectors. * gcc.target/aarch64/sve/reduc_1.c: Expect the loop operations to be predicated. * gcc.target/aarch64/sve/slp_5.c: Check for a fully-masked loop. * gcc.target/aarch64/sve/slp_7.c: Likewise. * gcc.target/aarch64/sve/reduc_5.c: New test. * gcc.target/aarch64/sve/slp_13.c: Likewise. * gcc.target/aarch64/sve/slp_13_run.c: Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256626
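Per lane, the conditional operations behave like this scalar model (my sketch, not GCC source), which is exactly what keeps inactive lanes out of the accumulator:

    /* One masked reduction step: inactive lanes leave the accumulator alone.
       .COND_ADD (mask, acc, x, acc) computes mask ? acc + x : acc.  */
    int
    reduce_step (_Bool mask, int acc, int x)
    {
      return mask ? acc + x : acc;
    }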
2018-01-13Add support for fully-predicated loopsRichard Sandiford1-0/+44
This patch adds support for using a single fully-predicated loop instead of a vector loop and a scalar tail. An SVE WHILELO instruction generates the predicate for each iteration of the loop, given the current scalar iv value and the loop bound. This operation is wrapped up in a new internal function called WHILE_ULT. E.g.: WHILE_ULT (0, 3, { 0, 0, 0, 0 }) -> { 1, 1, 1, 0 } WHILE_ULT (UINT_MAX - 1, UINT_MAX, { 0, 0, 0, 0 }) -> { 1, 0, 0, 0 } The third WHILE_ULT argument is needed to make the operation unambiguous: without it, WHILE_ULT (0, 3) for one vector type would seem equivalent to WHILE_ULT (0, 3) for another, even if the types have different numbers of elements. Note that the patch uses "mask" and "fully-masked" instead of "predicate" and "fully-predicated", to follow existing GCC terminology. This patch just handles the simple cases, punting for things like reductions and live-out values. Later patches remove most of these restrictions. 2018-01-13 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * optabs.def (while_ult_optab): New optab. * doc/md.texi (while_ult@var{m}@var{n}): Document. * internal-fn.def (WHILE_ULT): New internal function. * internal-fn.h (direct_internal_fn_supported_p): New override that takes two types as argument. * internal-fn.c (while_direct): New macro. (expand_while_optab_fn): New function. (convert_optab_supported_p): Likewise. (direct_while_optab_supported_p): New macro. * wide-int.h (wi::udiv_ceil): New function. * tree-vectorizer.h (rgroup_masks): New structure. (vec_loop_masks): New typedef. (_loop_vec_info): Add masks, mask_compare_type, can_fully_mask_p and fully_masked_p. (LOOP_VINFO_CAN_FULLY_MASK_P, LOOP_VINFO_FULLY_MASKED_P) (LOOP_VINFO_MASKS, LOOP_VINFO_MASK_COMPARE_TYPE): New macros. (vect_max_vf): New function. (slpeel_make_loop_iterate_ntimes): Delete. (vect_set_loop_condition, vect_get_loop_mask_type, vect_gen_while) (vect_halve_mask_nunits, vect_double_mask_nunits): Declare. (vect_record_loop_mask, vect_get_loop_mask): Likewise. * tree-vect-loop-manip.c: Include tree-ssa-loop-niter.h, internal-fn.h, stor-layout.h and optabs-query.h. (vect_set_loop_mask): New function. (add_preheader_seq): Likewise. (add_header_seq): Likewise. (interleave_supported_p): Likewise. (vect_maybe_permute_loop_masks): Likewise. (vect_set_loop_masks_directly): Likewise. (vect_set_loop_condition_masked): Likewise. (vect_set_loop_condition_unmasked): New function, split out from slpeel_make_loop_iterate_ntimes. (slpeel_make_loop_iterate_ntimes): Rename to... (vect_set_loop_condition): ...this. Use vect_set_loop_condition_masked for fully-masked loops and vect_set_loop_condition_unmasked otherwise. (vect_do_peeling): Update call accordingly. (vect_gen_vector_loop_niters): Use VF as the step for fully-masked loops. * tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Initialize mask_compare_type, can_fully_mask_p and fully_masked_p. (release_vec_loop_masks): New function. (_loop_vec_info): Use it to free the loop masks. (can_produce_all_loop_masks_p): New function. (vect_get_max_nscalars_per_iter): Likewise. (vect_verify_full_masking): Likewise. (vect_analyze_loop_2): Save LOOP_VINFO_CAN_FULLY_MASK_P around retries, and free the mask rgroups before retrying. Check loop-wide reasons for disallowing fully-masked loops. Make the final decision about whether to use a fully-masked loop or not.
(vect_estimate_min_profitable_iters): Do not assume that peeling for the number of iterations will be needed for fully-masked loops. (vectorizable_reduction): Disable fully-masked loops. (vectorizable_live_operation): Likewise. (vect_halve_mask_nunits): New function. (vect_double_mask_nunits): Likewise. (vect_record_loop_mask): Likewise. (vect_get_loop_mask): Likewise. (vect_transform_loop): Handle the case in which the final loop iteration might handle a partial vector. Call vect_set_loop_condition instead of slpeel_make_loop_iterate_ntimes. * tree-vect-stmts.c: Include tree-ssa-loop-niter.h and gimple-fold.h. (check_load_store_masking): New function. (prepare_load_store_mask): Likewise. (vectorizable_store): Handle fully-masked loops. (vectorizable_load): Likewise. (supportable_widening_operation): Use vect_halve_mask_nunits for booleans. (supportable_narrowing_operation): Likewise vect_double_mask_nunits. (vect_gen_while): New function. * config/aarch64/aarch64.md (umax<mode>3): New expander. (aarch64_uqdec<mode>): New insn. gcc/testsuite/ * gcc.dg/tree-ssa/cunroll-10.c: Disable vectorization. * gcc.dg/tree-ssa/peel1.c: Likewise. * gcc.dg/vect/vect-load-lanes-peeling-1.c: Remove XFAIL for variable-length vectors. * gcc.target/aarch64/sve/vcond_6.c: XFAIL test for AND. * gcc.target/aarch64/sve/vec_bool_cmp_1.c: Expect BIC instead of NOT. * gcc.target/aarch64/sve/slp_1.c: Check for a fully-masked loop. * gcc.target/aarch64/sve/slp_2.c: Likewise. * gcc.target/aarch64/sve/slp_3.c: Likewise. * gcc.target/aarch64/sve/slp_4.c: Likewise. * gcc.target/aarch64/sve/slp_6.c: Likewise. * gcc.target/aarch64/sve/slp_8.c: New test. * gcc.target/aarch64/sve/slp_8_run.c: Likewise. * gcc.target/aarch64/sve/slp_9.c: Likewise. * gcc.target/aarch64/sve/slp_9_run.c: Likewise. * gcc.target/aarch64/sve/slp_10.c: Likewise. * gcc.target/aarch64/sve/slp_10_run.c: Likewise. * gcc.target/aarch64/sve/slp_11.c: Likewise. * gcc.target/aarch64/sve/slp_11_run.c: Likewise. * gcc.target/aarch64/sve/slp_12.c: Likewise. * gcc.target/aarch64/sve/slp_12_run.c: Likewise. * gcc.target/aarch64/sve/ld1r_2.c: Likewise. * gcc.target/aarch64/sve/ld1r_2_run.c: Likewise. * gcc.target/aarch64/sve/while_1.c: Likewise. * gcc.target/aarch64/sve/while_2.c: Likewise. * gcc.target/aarch64/sve/while_3.c: Likewise. * gcc.target/aarch64/sve/while_4.c: Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256625
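A scalar model of the mask generation (a sketch; the real comparison is effectively done without wraparound):

    /* mask = WHILE_ULT (i, n, <mask type>): lane j is active iff i + j < n,
       so the final iteration of a fully-masked loop gets a partial mask.  */
    void
    while_ult (unsigned char *mask, unsigned long i, unsigned long n, int nunits)
    {
      for (int j = 0; j < nunits; ++j)
        mask[j] = (i + j < n);
    }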
2018-01-13Add support for masked load/store_lanesRichard Sandiford1-8/+26
This patch adds support for vectorising groups of IFN_MASK_LOADs and IFN_MASK_STOREs using conditional load/store-lanes instructions. This requires new internal functions to represent the result (IFN_MASK_{LOAD,STORE}_LANES), as well as associated optabs. The normal IFN_{LOAD,STORE}_LANES functions are const operations that logically just perform the permute: the load or store is encoded as a MEM operand to the call statement. In contrast, the IFN_MASK_{LOAD,STORE}_LANES functions use the same kind of interface as IFN_MASK_{LOAD,STORE}, since the memory is only conditionally accessed. The AArch64 patterns were added as part of the main LD[234]/ST[234] patch. 2018-01-13 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * doc/md.texi (vec_mask_load_lanes@var{m}@var{n}): Document. (vec_mask_store_lanes@var{m}@var{n}): Likewise. * optabs.def (vec_mask_load_lanes_optab): New optab. (vec_mask_store_lanes_optab): Likewise. * internal-fn.def (MASK_LOAD_LANES): New internal function. (MASK_STORE_LANES): Likewise. * internal-fn.c (mask_load_lanes_direct): New macro. (mask_store_lanes_direct): Likewise. (expand_mask_load_optab_fn): Handle masked operations. (expand_mask_load_lanes_optab_fn): New macro. (expand_mask_store_optab_fn): Handle masked operations. (expand_mask_store_lanes_optab_fn): New macro. (direct_mask_load_lanes_optab_supported_p): Likewise. (direct_mask_store_lanes_optab_supported_p): Likewise. * tree-vectorizer.h (vect_store_lanes_supported): Take a masked_p parameter. (vect_load_lanes_supported): Likewise. * tree-vect-data-refs.c (strip_conversion): New function. (can_group_stmts_p): Likewise. (vect_analyze_data_ref_accesses): Use it instead of checking for a pair of assignments. (vect_store_lanes_supported): Take a masked_p parameter. (vect_load_lanes_supported): Likewise. * tree-vect-loop.c (vect_analyze_loop_2): Update calls to vect_store_lanes_supported and vect_load_lanes_supported. * tree-vect-slp.c (vect_analyze_slp_instance): Likewise. * tree-vect-stmts.c (get_group_load_store_type): Take a masked_p parameter. Don't allow gaps for masked accesses. Use vect_get_store_rhs. Update calls to vect_store_lanes_supported and vect_load_lanes_supported. (get_load_store_type): Take a masked_p parameter and update call to get_group_load_store_type. (vectorizable_store): Update call to get_load_store_type. Handle IFN_MASK_STORE_LANES. (vectorizable_load): Update call to get_load_store_type. Handle IFN_MASK_LOAD_LANES. gcc/testsuite/ * gcc.dg/vect/vect-ooo-group-1.c: New test. * gcc.target/aarch64/sve/mask_struct_load_1.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_1_run.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_2.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_2_run.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_3.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_3_run.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_4.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_5.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_6.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_7.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_8.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_1.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_1_run.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_2.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_2_run.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_3.c: Likewise. 
* gcc.target/aarch64/sve/mask_struct_store_3_run.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_4.c: Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256620
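The target shape is a conditional access to interleaved data, e.g. (a sketch in the style of the new mask_struct_load tests):

    void
    f (int *restrict dst, const int *restrict src, const int *restrict cond, int n)
    {
      for (int i = 0; i < n; ++i)
        if (cond[i])
          dst[i] = src[i * 2] + src[i * 2 + 1];   /* masked LD2: IFN_MASK_LOAD_LANES  */
    }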
2018-01-03Update copyright years.Jakub Jelinek1-1/+1
From-SVN: r256169
2018-01-03poly_int: expand_vector_ubsan_overflowRichard Sandiford1-7/+10
This patch makes expand_vector_ubsan_overflow cope with a polynomial number of elements. 2018-01-03 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * internal-fn.c (expand_vector_ubsan_overflow): Handle polynomial numbers of elements. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r256148
2017-12-15re PR sanitizer/83388 (reference statement index not found error with -fsanitize=null)Richard Biener1-0/+8
2017-12-15 Richard Biener <rguenther@suse.de> PR lto/83388 * internal-fn.def (IFN_NOP): Add. * internal-fn.c (expand_NOP): Do nothing. * lto-streamer-in.c (input_function): Instead of removing sanitizer calls replace them with IFN_NOP calls. * gcc.dg/lto/pr83388_0.c: New testcase. From-SVN: r255694
2017-11-30re PR target/83210 (__builtin_mul_overflow() generates suboptimal code when exactly one argument is the constant 2)Jakub Jelinek1-0/+43
PR target/83210 * internal-fn.c (expand_mul_overflow): Optimize unsigned multiplication by power of 2 constant into two shifts + comparison. * gcc.target/i386/pr83210.c: New test. From-SVN: r255269
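The expansion strategy amounts to the following (a sketch of the emitted logic for multiplication by 2, not the GCC source):

    /* Model of __builtin_mul_overflow (x, 2u, &res):
       two shifts plus a comparison instead of a widening multiply.  */
    _Bool
    mul2_overflow (unsigned x, unsigned *res)
    {
      *res = x << 1;
      return (*res >> 1) != x;   /* overflow iff the shifted-out bit was set  */
    }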