As suggested by Richard B., this patch makes vect_is_simple_use check
whether a defining statement has been replaced by a pattern statement,
and if so returns the pattern statement instead.
The reason for doing this is that the main patch for PR85694
makes the over_widening pattern handle more general cases. These
over-widened patterns can still be useful when matching later statements;
e.g. an over-widened MULT_EXPR could be the input to a DOT_PROD_EXPR.
The patch doesn't do anything with the STMT_VINFO_IN_PATTERN_P checks
in vect_recog_over_widening_pattern or vect_recog_widen_shift_pattern
since later patches rewrite them anyway.
Doing this fixed an XFAIL in vect-reduc-dot-u16b.c.
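In rough terms, the change amounts to something like the following sketch
inside vect_is_simple_use (def_stmt here stands for the pointed-to defining
statement; the accessors are the usual stmt_vec_info ones):
  /* Sketch: if the definition has been subsumed into a pattern,
     hand back the pattern statement instead of the original.  */
  stmt_vec_info def_stmt_info = vinfo_for_stmt (def_stmt);
  if (def_stmt_info && STMT_VINFO_IN_PATTERN_P (def_stmt_info))
    def_stmt = STMT_VINFO_RELATED_STMT (def_stmt_info);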
2018-06-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-loop.c (vectorizable_reduction): Assert that the
phi is not a pattern statement and has not been replaced by
a pattern statement.
* tree-vect-patterns.c (type_conversion_p): Don't check
STMT_VINFO_IN_PATTERN_P.
(vect_recog_vector_vector_shift_pattern): Likewise.
(vect_recog_dot_prod_pattern): Expect vect_is_simple_use to return
the pattern statement rather than the original statement; check
directly for a WIDEN_MULT_EXPR here.
* tree-vect-slp.c (vect_get_and_check_slp_defs): Expect
vect_is_simple_use to return the pattern statement rather
than the original statement; use is_pattern_stmt_p to check
for such a pattern statement.
* tree-vect-stmts.c (process_use): Expect vect_is_simple_use
to return the pattern statement rather than the original statement;
don't do the same transformation here.
(vect_is_simple_use): If the defining statement has been replaced
by a pattern statement, return the pattern statement instead.
Remove the corresponding (local) transformation from the vectype
overload.
gcc/testsuite/
* gcc.dg/vect/vect-reduc-dot-u16b.c: Remove xfail and update the
test for vectorization along the lines described in the comment.
From-SVN: r262273
|
|
As suggested by Richard B., this patch reorders the arguments to
vect_is_simple_use so that def_stmt comes last and is optional.
Many callers can then drop it, making it more obvious which of
the remaining calls would be affected by the next patch.
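Sketched out, the declaration ends up along these lines (illustrative only;
the exact prototype lives in tree-vectorizer.h):
  /* def_stmt_out is now the last parameter and defaults to null, so
     callers that do not need the defining statement can omit it.  */
  extern bool vect_is_simple_use (tree operand, vec_info *vinfo,
                                  enum vect_def_type *dt,
                                  gimple **def_stmt_out = NULL);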
2018-06-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_is_simple_use): Move the gimple ** to the
end and default to null.
* tree-vect-loop.c (vect_create_epilog_for_reduction)
(vectorizable_reduction): Update calls accordingly, dropping the
gimple ** argument if the passed-back statement isn't needed.
* tree-vect-patterns.c (vect_get_internal_def, type_conversion_p)
(vect_recog_rotate_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
* tree-vect-slp.c (vect_get_and_check_slp_defs): Likewise.
(vect_mask_constant_operand_p): Likewise.
* tree-vect-stmts.c (is_simple_and_all_uses_invariant, process_use)
(vect_model_simple_cost, vect_get_vec_def_for_operand): Likewise.
(get_group_load_store_type, get_load_store_type): Likewise.
(vect_check_load_store_mask, vect_check_store_rhs): Likewise.
(vectorizable_call, vectorizable_simd_clone_call): Likewise.
(vectorizable_conversion, vectorizable_assignment): Likewise.
(vectorizable_shift, vectorizable_operation): Likewise.
(vectorizable_store, vect_is_simple_cond): Likewise.
(vectorizable_condition, vectorizable_comparison): Likewise.
(get_same_sized_vectype, vect_get_mask_type_for_stmt): Likewise.
(vect_is_simple_use): Rename the def_stmt argument to def_stmt_out
and move it to the end. Cope with null def_stmt_outs.
From-SVN: r262272
|
|
2018-06-21 Richard Biener <rguenther@suse.de>
* tree-data-ref.c (dr_step_indicator): Handle NULL DR_STEP.
* tree-vect-data-refs.c (vect_analyze_possibly_independent_ddr):
Avoid calling vect_mark_for_runtime_alias_test with gathers or scatters.
(vect_analyze_data_ref_dependence): Re-order checks to deal with
NULL DR_STEP.
(vect_record_base_alignments): Do not record base alignment
for gathers or scatters.
(vect_compute_data_ref_alignment): Drop return value that is always
true. Bail out early for gathers or scatters.
(vect_enhance_data_refs_alignment): Bail out early for gathers
or scatters.
(vect_find_same_alignment_drs): Likewise.
(vect_analyze_data_refs_alignment): Remove dead code.
(vect_slp_analyze_and_verify_node_alignment): Likewise.
(vect_analyze_data_refs): For possible gathers or scatters do
not create an alternate DR, just check their possible validity
and mark them. Adjust DECL_NONALIASED handling to not rely
on DR_BASE_ADDRESS.
* tree-vect-loop-manip.c (vect_update_inits_of_drs): Do not
update inits of gathers or scatters.
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern):
Also copy gather/scatter flag to pattern vinfo.
From-SVN: r261834
|
|
This patch makes pattern recognisers do their own checking for vector
types and target support. Previously some recognisers did this
themselves and some left it to vect_pattern_recog_1.
Doing this means we can get rid of the type_in argument, which was
ignored if the recogniser did its own checking. It also means
we create fewer junk statements.
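Roughly speaking, the recogniser hook drops from three arguments to two,
something like this sketch (not the verbatim declaration):
  /* Each recogniser now checks target support itself and reports the
     chosen vector type only through type_out; type_in is gone.  */
  typedef gimple *(*vect_recog_func_ptr) (vec<gimple *> *stmts,
                                          tree *type_out);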
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (NUM_PATTERNS, vect_recog_func_ptr): Move to
tree-vect-patterns.c.
* tree-vect-patterns.c (vect_supportable_direct_optab_p): New function.
(vect_recog_dot_prod_pattern): Use it. Remove the type_in argument.
(vect_recog_sad_pattern): Likewise.
(vect_recog_widen_sum_pattern): Likewise.
(vect_recog_pow_pattern): Likewise. Check for a null vectype.
(vect_recog_widen_shift_pattern): Remove the type_in argument.
(vect_recog_rotate_pattern): Likewise.
(vect_recog_mult_pattern): Likewise.
(vect_recog_vector_vector_shift_pattern): Likewise.
(vect_recog_divmod_pattern): Likewise.
(vect_recog_mixed_size_cond_pattern): Likewise.
(vect_recog_bool_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
(vect_try_gather_scatter_pattern): Likewise.
(vect_recog_widen_mult_pattern): Likewise. Check for a null vectype.
(vect_recog_over_widening_pattern): Likewise.
(vect_recog_gather_scatter_pattern): Likewise.
(vect_recog_func_ptr): Move from tree-vectorizer.h.
(vect_vect_recog_func_ptrs): Move further down the file.
(vect_recog_func): Likewise. Remove the third argument.
(NUM_PATTERNS): Define based on vect_vect_recog_func_ptrs.
(vect_pattern_recog_1): Expect the pattern function to do any
necessary target tests. Also expect it to provide a vector type.
Remove the type_in handling.
From-SVN: r261791
|
|
This message is a long write-up for a patch that simply adds a common
routine for printing the "vector_foo_pattern: detected:" messages.
The reason for doing this is that some routines check for target support
themselves and some leave it to vect_pattern_recog_1. Those that leave
it to vect_pattern_recog_1 currently print these "detected:" messages if
the statements have the right form, even if the pattern is eventually
discarded. IMO that's useful, and a lot of existing scan tests rely on it.
However, a later patch makes patterns do their own testing, and stops
them creating pattern statements until the tests have passed. This means
(a) they need to print the "detected:" message earlier and (b) the pattern
statement won't be around to print.
The patch therefore makes all routines print the original statement
rather than the pattern one. That information isn't obvious otherwise,
whereas vect_pattern_recog_1 already prints the pattern statement
in the case of a successful match. This also avoids the previous
situation in which a routine could print "detected:" and then
silently bail out before saying what had been detected.
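A plausible shape for such a helper, following the usual vectoriser dumping
idiom (a sketch, not necessarily the exact code):
  /* Report that pattern NAME matched the original statement STMT.  */
  static void
  vect_pattern_detected (const char *name, gimple *stmt)
  {
    if (dump_enabled_p ())
      {
        dump_printf_loc (MSG_NOTE, vect_location, "%s: detected: ", name);
        dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
      }
  }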
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_pattern_detected): New function.
(vect_recog_dot_prod_pattern, vect_recog_sad_pattern)
(vect_recog_widen_mult_pattern, vect_recog_widen_sum_pattern)
(vect_recog_over_widening_pattern, vect_recog_widen_shift_pattern)
(vect_recog_rotate_pattern, vect_recog_vector_vector_shift_pattern)
(vect_recog_mult_pattern, vect_recog_divmod_pattern)
(vect_recog_mixed_size_cond_pattern, vect_recog_bool_pattern)
(vect_recog_mask_conversion_pattern)
(vect_try_gather_scatter_pattern): Likewise.
From-SVN: r261790
|
|
This patch adds a helper for pattern code that wants to find an
internal (vectorisable) definition of an SSA name.
A later patch will make more use of this, and alter the definition.
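For illustration, the helper might look roughly like this (a sketch;
vect_is_simple_use is shown with its pre-reordering argument order):
  /* Return the statement that defines NAME if NAME is defined by a
     vectorisable statement in this vec_info, otherwise return NULL.  */
  static gimple *
  vect_get_internal_def (vec_info *vinfo, tree name)
  {
    enum vect_def_type dt;
    gimple *def_stmt;
    if (TREE_CODE (name) == SSA_NAME
        && vect_is_simple_use (name, vinfo, &def_stmt, &dt)
        && dt == vect_internal_def)
      return def_stmt;
    return NULL;
  }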
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_get_internal_def): New function.
(vect_recog_dot_prod_pattern, vect_recog_sad_pattern)
(vect_recog_vector_vector_shift_pattern, check_bool_pattern)
(search_type_for_mask_1): Use it.
From-SVN: r261789
|
|
vect_recog_dot_prod_pattern and vect_recog_sad_pattern both checked
whether the statement passed in had already been recognised as a
WIDEN_SUM_EXPR pattern. That isn't possible (any more?), since the
first recognised pattern wins, and since vect_recog_widen_sum_pattern
never matches a later statement than the one it's given.
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_recog_dot_prod_pattern): Remove
redundant WIDEN_SUM_EXPR handling.
(vect_recog_sad_pattern): Likewise.
From-SVN: r261788
|
|
tree-vect-patterns.c checked that operands to primitive arithmetic ops
are compatible with each other and with the result. The checks date
back years and have long been redundant with verify_gimple_stmt.
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_recog_dot_prod_pattern): Remove
redundant check that the types of a PLUS_EXPR or MULT_EXPR agree.
(vect_recog_sad_pattern): Likewise PLUS_EXPR, ABS_EXPR and MINUS_EXPR.
(vect_recog_widen_mult_pattern): Likewise MULT_EXPR.
(vect_recog_widen_sum_pattern): Likewise PLUS_EXPR.
From-SVN: r261787
|
|
A pattern's PATTERN_DEF_SEQ was attached to both the original statement
and the main pattern statement, which made it harder to update later.
This patch attaches it to just the original statement. In practice,
anything that cared had ready access to both.
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (_stmt_vec_info): Note above pattern_def_seq
that the sequence is attached to the original statement rather
than the pattern statement.
* tree-vect-loop.c (vect_determine_vf_for_stmt): Take the
PATTERN_DEF_SEQ from the original statement rather than
the main pattern statement.
* tree-vect-stmts.c (free_stmt_vec_info): Likewise.
* tree-vect-patterns.c (vect_recog_dot_prod_pattern): Likewise.
(vect_mark_pattern_stmts): Don't copy the PATTERN_DEF_SEQ.
From-SVN: r261785
|
|
2018-06-19 Richard Biener <rguenther@suse.de>
PR tree-optimization/86179
* tree-vect-patterns.c (vect_pattern_recog_1): Clean up
after failed recognition.
* gcc.dg/pr86179.c: New testcase.
From-SVN: r261731
|
|
gcc/ChangeLog:
* tree-vect-data-refs.c (vect_analyze_data_ref_dependences):
Replace dump_printf_loc call with DUMP_VECT_SCOPE.
(vect_slp_analyze_instance_dependence): Likewise.
(vect_enhance_data_refs_alignment): Likewise.
(vect_analyze_data_refs_alignment): Likewise.
(vect_slp_analyze_and_verify_instance_alignment): Likewise.
(vect_analyze_data_ref_accesses): Likewise.
(vect_prune_runtime_alias_test_list): Likewise.
(vect_analyze_data_refs): Likewise.
* tree-vect-loop-manip.c (vect_update_inits_of_drs): Likewise.
* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
(vect_analyze_scalar_cycles_1): Likewise.
(vect_get_loop_niters): Likewise.
(vect_analyze_loop_form_1): Likewise.
(vect_update_vf_for_slp): Likewise.
(vect_analyze_loop_operations): Likewise.
(vect_analyze_loop): Likewise.
(vectorizable_induction): Likewise.
(vect_transform_loop): Likewise.
* tree-vect-patterns.c (vect_pattern_recog): Likewise.
* tree-vect-slp.c (vect_analyze_slp): Likewise.
(vect_make_slp_decision): Likewise.
(vect_detect_hybrid_slp): Likewise.
(vect_slp_analyze_operations): Likewise.
(vect_slp_bb): Likewise.
* tree-vect-stmts.c (vect_mark_stmts_to_be_vectorized): Likewise.
(vectorizable_bswap): Likewise.
(vectorizable_call): Likewise.
(vectorizable_simd_clone_call): Likewise.
(vectorizable_conversion): Likewise.
(vectorizable_assignment): Likewise.
(vectorizable_shift): Likewise.
(vectorizable_operation): Likewise.
* tree-vectorizer.h (DUMP_VECT_SCOPE): New macro.
From-SVN: r261710
|
|
PR middle-end/64946 (gcc.target/aarch64/vect-abs-compile.c - "abs" vectorization fails for char/short types)
gcc/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
PR middle-end/64946
* cfgexpand.c (expand_debug_expr): Handle ABSU_EXPR.
* config/i386/i386.c (ix86_add_stmt_cost): Likewise.
* dojump.c (do_jump): Likewise.
* expr.c (expand_expr_real_2): Check operand type's sign.
* fold-const.c (const_unop): Handle ABSU_EXPR.
(fold_abs_const): Likewise.
* gimple-pretty-print.c (dump_unary_rhs): Likewise.
* gimple-ssa-backprop.c (backprop::process_assign_use): Likewise.
(strip_sign_op_1): Likewise.
* match.pd: Add new pattern to generate ABSU_EXPR.
* optabs-tree.c (optab_for_tree_code): Handle ABSU_EXPR.
* tree-cfg.c (verify_gimple_assign_unary): Likewise.
* tree-eh.c (operation_could_trap_helper_p): Likewise.
* tree-inline.c (estimate_operator_cost): Likewise.
* tree-pretty-print.c (dump_generic_node): Likewise.
* tree-vect-patterns.c (vect_recog_sad_pattern): Likewise.
* tree.def (ABSU_EXPR): New.
gcc/c-family/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
* c-common.c (c_common_truthvalue_conversion): Handle ABSU_EXPR.
gcc/c/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
* c-typeck.c (build_unary_op): Handle ABSU_EXPR.
* gimple-parser.c (c_parser_gimple_statement): Likewise.
(c_parser_gimple_unary_expression): Likewise.
gcc/cp/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
* constexpr.c (potential_constant_expression_1): Handle ABSU_EXPR.
* cp-gimplify.c (cp_fold): Likewise.
gcc/testsuite/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
PR middle-end/64946
* gcc.dg/absu.c: New test.
* gcc.dg/gimplefe-29.c: New test.
* gcc.target/aarch64/pr64946.c: New test.
From-SVN: r261681
|
|
tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern): Properly set
vector type of the intermediate stmt.
2018-06-13 Richard Biener <rguenther@suse.de>
* tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern):
Properly set vector type of the intermediate stmt.
* tree-vect-stmts.c (vectorizable_operation): The destination
var always has vectype_out type.
From-SVN: r261553
|
|
2018-06-01 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vect_dr_stmt): New function.
(vect_get_load_cost): Adjust.
(vect_get_store_cost): Likewise.
* tree-vect-data-refs.c (vect_analyze_data_ref_dependence):
Use vect_dr_stmt instead of DR_STMT.
(vect_record_base_alignments): Likewise.
(vect_calculate_target_alignment): Likewise.
(vect_compute_data_ref_alignment): Likewise and make static.
(vect_update_misalignment_for_peel): Likewise.
(vect_verify_datarefs_alignment): Likewise.
(vector_alignment_reachable_p): Likewise.
(vect_get_data_access_cost): Likewise. Pass down
vinfo to vect_get_load_cost/vect_get_store_cost instead of DR.
(vect_get_peeling_costs_all_drs): Likewise.
(vect_peeling_hash_get_lowest_cost): Likewise.
(vect_enhance_data_refs_alignment): Likewise.
(vect_find_same_alignment_drs): Likewise.
(vect_analyze_data_refs_alignment): Likewise.
(vect_analyze_group_access_1): Likewise.
(vect_analyze_group_access): Likewise.
(vect_analyze_data_ref_access): Likewise.
(vect_analyze_data_ref_accesses): Likewise.
(vect_vfa_segment_size): Likewise.
(vect_small_gap_p): Likewise.
(vectorizable_with_step_bound_p): Likewise.
(vect_prune_runtime_alias_test_list): Likewise.
(vect_analyze_data_refs): Likewise.
(vect_supportable_dr_alignment): Likewise.
* tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
(vect_gen_prolog_loop_niters): Likewise.
* tree-vect-loop.c (vect_analyze_loop_2): Likewise.
* tree-vect-patterns.c (vect_recog_bool_pattern): Do not
modify DR_STMT.
(vect_recog_mask_conversion_pattern): Likewise.
(vect_try_gather_scatter_pattern): Likewise.
* tree-vect-stmts.c (vect_model_store_cost): Pass stmt_info
to vect_get_store_cost.
(vect_get_store_cost): Get stmt_info instead of DR.
(vect_model_load_cost): Pass stmt_info to vect_get_load_cost.
(vect_get_load_cost): Get stmt_info instead of DR.
From-SVN: r261062
|
|
vect_recog_divmod_pattern currently bails out if the target has
native support for integer division, but I think in practice
it's always going to be better to open-code it anyway, just as
we usually open-code scalar divisions by constants.
I think the only currently affected targets are MIPS MSA and
powerpcspe (which is currently marked obsolete). For:
void
foo (int *x)
{
for (int i = 0; i < 100; ++i)
x[i] /= 2;
}
the MSA port previously preferred to use division for powers of 2:
.set noreorder
bnz.w $w1,1f
div_s.w $w0,$w0,$w1
break 7
.set reorder
1:
(or just the div_s.w for -mno-check-zero-division), but after the patch
it open-codes them using shifts:
clt_s.w $w1,$w0,$w2
subv.w $w0,$w0,$w1
srai.w $w0,$w0,1
MSA doesn't define a high-part pattern, so it still uses a division
instruction for the non-power-of-2 case.
Richard B pointed out that this would disable SLP of division by
different amounts, but I think in practice that's a price worth paying,
since the current cost model can't really tell whether using a general
vector division is better than using open-coded scalar divisions.
The fix would be either to support SLP of mixed open-coded divisions
or to improve the cost model and try SLP again without the patterns.
The patch adds an XFAILed test for this.
2018-05-23 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* tree-vect-patterns.c: Include predict.h.
(vect_recog_divmod_pattern): Restrict check for division support
to when optimizing for size.
gcc/testsuite/
* gcc.dg/vect/bb-slp-div-1.c: New XFAILed test.
From-SVN: r260711
|
|
2018-05-25 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (STMT_VINFO_GROUP_*, GROUP_*): Remove.
(DR_GROUP_*): New, assert we have non-NULL ->data_ref_info.
(REDUC_GROUP_*): New, assert we have NULL ->data_ref_info.
(STMT_VINFO_GROUPED_ACCESS): Adjust.
* tree-vect-data-refs.c (everywhere): Adjust users.
* tree-vect-loop.c (everywhere): Likewise.
* tree-vect-slp.c (everywhere): Likewise.
* tree-vect-stmts.c (everywhere): Likewise.
* tree-vect-patterns.c (vect_reassociating_reduction_p): Likewise.
From-SVN: r260709
|
|
2018-05-01 Tom de Vries <tom@codesourcery.com>
PR other/83786
* vec.h (VEC_ORDERED_REMOVE_IF, VEC_ORDERED_REMOVE_IF_FROM_TO): Define.
* vec.c (test_ordered_remove_if): New function.
(vec_c_tests): Call test_ordered_remove_if.
* dwarf2cfi.c (connect_traces): Use VEC_ORDERED_REMOVE_IF_FROM_TO.
* lto-streamer-out.c (prune_offload_funcs): Use VEC_ORDERED_REMOVE_IF.
* tree-vect-patterns.c (vect_pattern_recog_1): Use
VEC_ORDERED_REMOVE_IF.
From-SVN: r259808
|
|
This patch prevents pattern-matching of fold-left SLP reduction chains,
which the previous patch for 83965 didn't handle properly. It only
stops the last statement in the group from being matched, but that's
enough to cause the group to be dissolved later.
A better fix would be to put all the information about the reduction
on the first statement in the reduction chain, so that every
statement in the group can tell what the group is doing. That doesn't
seem like stage 4 material though.
2018-02-26 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
PR tree-optimization/83965
* tree-vect-patterns.c (vect_reassociating_reduction_p): Assume
that grouped statements are part of a reduction chain. Return
true if the statement is not marked as a reduction itself but
is part of a group.
(vect_recog_dot_prod_pattern): Don't check whether the statement
is part of a group here.
(vect_recog_sad_pattern): Likewise.
(vect_recog_widen_sum_pattern): Likewise.
gcc/testsuite/
PR tree-optimization/83965
* gcc.dg/vect/pr83965-2.c: New test.
From-SVN: r257995
|
|
PR tree-optimization/84452 (ICE in expand_simd_clones at gcc/omp-simd-clone.c:1612)
PR tree-optimization/84452
* tree-vect-patterns.c (vect_recog_pow_pattern): Don't call
expand_simd_clones if targetm.simd_clone.compute_vecsize_and_simdlen
is NULL.
* gcc.dg/pr84452.c: New test.
From-SVN: r257819
|
|
PR middle-end/84309
* match.pd (pow(C,x) -> exp(log(C)*x)): Optimize instead into
exp2(log2(C)*x) if C is a power of 2 and c99 runtime is available.
* generic-match-head.c (canonicalize_math_after_vectorization_p): New
inline function.
* gimple-match-head.c (canonicalize_math_after_vectorization_p): New
inline function.
* omp-simd-clone.h: New file.
* omp-simd-clone.c: Include omp-simd-clone.h.
(expand_simd_clones): No longer static.
* tree-vect-patterns.c: Include fold-const-call.h, attribs.h,
cgraph.h and omp-simd-clone.h.
(vect_recog_pow_pattern): Optimize pow(C,x) to exp(log(C)*x).
(vect_recog_widen_shift_pattern): Formatting fix.
(vect_pattern_recog_1): Don't check optab for calls.
* gcc.dg/pr84309.c: New test.
* gcc.target/i386/pr84309.c: New test.
From-SVN: r257617
|
|
In this PR we recognised a PLUS_EXPR as a fold-left reduction,
then applied pattern matching to convert it to a WIDEN_SUM_EXPR.
We need to keep the original code in this case since we implement
the reduction using scalar rather than vector operations.
2018-01-23 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
PR tree-optimization/83965
* tree-vect-patterns.c (vect_reassociating_reduction_p): New function.
(vect_recog_dot_prod_pattern, vect_recog_sad_pattern): Use it
instead of checking only for a reduction.
(vect_recog_widen_sum_pattern): Likewise.
gcc/testsuite/
PR tree-optimization/83965
* gcc.dg/vect/pr83965.c: New test.
From-SVN: r256976
|
|
This is mostly a mechanical extension of the previous gather load
support to scatter stores. The internal functions in this case are:
IFN_SCATTER_STORE (base, offsets, scale, values)
IFN_MASK_SCATTER_STORE (base, offsets, scale, values, mask)
However, one nonobvious change is to vect_analyze_data_ref_access.
If we're treating an access as a gather load or scatter store
(i.e. if STMT_VINFO_GATHER_SCATTER_P is true), the existing code
would create a dummy data_reference whose step is 0. There's not
really much else it could do, since the whole point is that the
step isn't predictable from iteration to iteration. We then
went into this code in vect_analyze_data_ref_access:
/* Allow loads with zero step in inner-loop vectorization. */
if (loop_vinfo && integer_zerop (step))
{
GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL;
if (!nested_in_vect_loop_p (loop, stmt))
return DR_IS_READ (dr);
I.e. we'd take the step literally and assume that this is a load
or store to an invariant address. Loads from invariant addresses
are supported but stores to them aren't.
The code therefore had the effect of disabling all scatter stores.
AFAICT this is true of AVX too: although tests like avx512f-scatter-1.c
test for the correctness of a scatter-like loop, they don't seem to
check whether a scatter instruction is actually used.
The patch therefore makes vect_analyze_data_ref_access return true
for scatters. We do seem to handle the aliasing correctly;
that's tested by other functions, and is symmetrical to the
already-working gather case.
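For reference, a typical loop that this work targets looks something like
the following illustrative example (not one of the new testcases):
  /* Each iteration stores to x[index[i]], so the store addresses are
     not a simple function of i: a scatter-store candidate.  */
  void
  scatter (float *restrict x, int *restrict index,
           float *restrict val, int n)
  {
    for (int i = 0; i < n; ++i)
      x[index[i]] = val[i];
  }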
2018-01-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* doc/sourcebuild.texi (vect_scatter_store): Document.
* optabs.def (scatter_store_optab, mask_scatter_store_optab): New
optabs.
* doc/md.texi (scatter_store@var{m}, mask_scatter_store@var{m}):
Document.
* genopinit.c (main): Add supports_vec_scatter_store and
supports_vec_scatter_store_cached to target_optabs.
* gimple.h (gimple_expr_type): Handle IFN_SCATTER_STORE and
IFN_MASK_SCATTER_STORE.
* internal-fn.def (SCATTER_STORE, MASK_SCATTER_STORE): New internal
functions.
* internal-fn.h (internal_store_fn_p): Declare.
(internal_fn_stored_value_index): Likewise.
* internal-fn.c (scatter_store_direct): New macro.
(expand_scatter_store_optab_fn): New function.
(direct_scatter_store_optab_supported_p): New macro.
(internal_store_fn_p): New function.
(internal_gather_scatter_fn_p): Handle IFN_SCATTER_STORE and
IFN_MASK_SCATTER_STORE.
(internal_fn_mask_index): Likewise.
(internal_fn_stored_value_index): New function.
(internal_gather_scatter_fn_supported_p): Adjust operand numbers
for scatter stores.
* optabs-query.h (supports_vec_scatter_store_p): Declare.
* optabs-query.c (supports_vec_scatter_store_p): New function.
* tree-vectorizer.h (vect_get_store_rhs): Declare.
* tree-vect-data-refs.c (vect_analyze_data_ref_access): Return
true for scatter stores.
(vect_gather_scatter_fn_p): Handle scatter stores too.
(vect_check_gather_scatter): Consider using scatter stores if
supports_vec_scatter_store_p.
* tree-vect-patterns.c (vect_try_gather_scatter_pattern): Handle
scatter stores too.
* tree-vect-stmts.c (exist_non_indexing_operands_for_use_p): Use
internal_fn_stored_value_index.
(check_load_store_masking): Handle scatter stores too.
(vect_get_store_rhs): Make public.
(vectorizable_call): Use internal_store_fn_p.
(vectorizable_store): Handle scatter store internal functions.
(vect_transform_stmt): Compare GROUP_STORE_COUNT with GROUP_SIZE
when deciding whether the end of the group has been reached.
* config/aarch64/aarch64.md (UNSPEC_ST1_SCATTER): New unspec.
* config/aarch64/aarch64-sve.md (scatter_store<mode>): New expander.
(mask_scatter_store<mode>): New insns.
gcc/testsuite/
* lib/target-supports.exp (check_effective_target_vect_scatter_store):
New proc.
* gcc.dg/vect/pr25413a.c: Expect both loops to be optimized on
targets with scatter stores.
* gcc.dg/vect/vect-71.c: Restrict XFAIL to targets without scatter
stores.
* gcc.target/aarch64/sve/mask_scatter_store_1.c: New test.
* gcc.target/aarch64/sve/mask_scatter_store_2.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_1.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_2.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_3.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_4.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_5.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_6.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_7.c: Likewise.
* gcc.target/aarch64/sve/strided_store_1.c: Likewise.
* gcc.target/aarch64/sve/strided_store_2.c: Likewise.
* gcc.target/aarch64/sve/strided_store_3.c: Likewise.
* gcc.target/aarch64/sve/strided_store_4.c: Likewise.
* gcc.target/aarch64/sve/strided_store_5.c: Likewise.
* gcc.target/aarch64/sve/strided_store_6.c: Likewise.
* gcc.target/aarch64/sve/strided_store_7.c: Likewise.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256643
|
|
This patch adds support for SVE gather loads. It uses basically
the same analysis code as the AVX gather support, but after that
there are two major differences:
- It uses new internal functions rather than target built-ins.
The interface is:
IFN_GATHER_LOAD (base, offsets, scale)
IFN_MASK_GATHER_LOAD (base, offsets, scale, mask)
which should be reasonably generic. One of the advantages of
using internal functions is that other passes can understand what
the functions do, but a more immediate advantage is that we can
query the underlying target pattern to see which scales it supports.
- It uses pattern recognition to convert the offset to the right width,
if it was originally narrower than that. This avoids having to do
a widening operation as part of the gather expansion itself.
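A typical candidate loop, again purely for illustration (not one of the
added tests):
  /* Each iteration loads from src[index[i]]: a gather-load candidate.
     If index is narrower than the offset type the target expects, the
     new pattern widens it before the gather itself.  */
  void
  gather (float *restrict dst, float *restrict src,
          int *restrict index, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[i] = src[index[i]];
  }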
2018-01-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* doc/md.texi (gather_load@var{m}): Document.
(mask_gather_load@var{m}): Likewise.
* genopinit.c (main): Add supports_vec_gather_load and
supports_vec_gather_load_cached to target_optabs.
* optabs-tree.c (init_tree_optimization_optabs): Use
ggc_cleared_alloc to allocate target_optabs.
* optabs.def (gather_load_optab, mask_gather_load_optab): New optabs.
* internal-fn.def (GATHER_LOAD, MASK_GATHER_LOAD): New internal
functions.
* internal-fn.h (internal_load_fn_p): Declare.
(internal_gather_scatter_fn_p): Likewise.
(internal_fn_mask_index): Likewise.
(internal_gather_scatter_fn_supported_p): Likewise.
* internal-fn.c (gather_load_direct): New macro.
(expand_gather_load_optab_fn): New function.
(direct_gather_load_optab_supported_p): New macro.
(direct_internal_fn_optab): New function.
(internal_load_fn_p): Likewise.
(internal_gather_scatter_fn_p): Likewise.
(internal_fn_mask_index): Likewise.
(internal_gather_scatter_fn_supported_p): Likewise.
* optabs-query.c (supports_at_least_one_mode_p): New function.
(supports_vec_gather_load_p): Likewise.
* optabs-query.h (supports_vec_gather_load_p): Declare.
* tree-vectorizer.h (gather_scatter_info): Add ifn, element_type
and memory_type fields.
(NUM_PATTERNS): Bump to 15.
* tree-vect-data-refs.c: Include internal-fn.h.
(vect_gather_scatter_fn_p): New function.
(vect_describe_gather_scatter_call): Likewise.
(vect_check_gather_scatter): Try using internal functions for
gather loads. Recognize existing calls to a gather load function.
(vect_analyze_data_refs): Consider using gather loads if
supports_vec_gather_load_p.
* tree-vect-patterns.c (vect_get_load_store_mask): New function.
(vect_get_gather_scatter_offset_type): Likewise.
(vect_convert_mask_for_vectype): Likewise.
(vect_add_conversion_to_patterm): Likewise.
(vect_try_gather_scatter_pattern): Likewise.
(vect_recog_gather_scatter_pattern): New pattern recognizer.
(vect_vect_recog_func_ptrs): Add it.
* tree-vect-stmts.c (exist_non_indexing_operands_for_use_p): Use
internal_fn_mask_index and internal_gather_scatter_fn_p.
(check_load_store_masking): Take the gather_scatter_info as an
argument and handle gather loads.
(vect_get_gather_scatter_ops): New function.
(vectorizable_call): Check internal_load_fn_p.
(vectorizable_load): Likewise. Handle gather load internal
functions.
(vectorizable_store): Update call to check_load_store_masking.
* config/aarch64/aarch64.md (UNSPEC_LD1_GATHER): New unspec.
* config/aarch64/iterators.md (SVE_S, SVE_D): New mode iterators.
* config/aarch64/predicates.md (aarch64_gather_scale_operand_w)
(aarch64_gather_scale_operand_d): New predicates.
* config/aarch64/aarch64-sve.md (gather_load<mode>): New expander.
(mask_gather_load<mode>): New insns.
gcc/testsuite/
* gcc.target/aarch64/sve/gather_load_1.c: New test.
* gcc.target/aarch64/sve/gather_load_2.c: Likewise.
* gcc.target/aarch64/sve/gather_load_3.c: Likewise.
* gcc.target/aarch64/sve/gather_load_4.c: Likewise.
* gcc.target/aarch64/sve/gather_load_5.c: Likewise.
* gcc.target/aarch64/sve/gather_load_6.c: Likewise.
* gcc.target/aarch64/sve/gather_load_7.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_1.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_2.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_3.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_4.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_5.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_6.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_7.c: Likewise.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256640
|
|
This patch allows us to recognise:
... = bool1 != bool2 ? x : y
as equivalent to:
bool tmp = bool1 ^ bool2;
... = tmp ? x : y
For the latter we were already able to find the natural number
of vector units for tmp based on the types that feed bool1 and
bool2, whereas with the former we would simply treat bool1 and
bool2 as vectorised 8-bit values, possibly requiring them to
be packed and unpacked from their natural width.
This is used by a later SVE patch.
2018-01-03 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern): When
handling COND_EXPRs with boolean comparisons, try to find a better
basis for the mask type than the boolean itself.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256207
|
|
This patch changes GET_MODE_BITSIZE from an unsigned short
to a poly_uint16.
2018-01-03 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (mode_to_bits): Return a poly_uint16 rather than an
unsigned short.
(GET_MODE_BITSIZE): Return a constant if ONLY_FIXED_SIZE_MODES,
or if measurement_type is polynomial.
* calls.c (shift_return_value): Treat GET_MODE_BITSIZE as polynomial.
* combine.c (make_extraction): Likewise.
* dse.c (find_shift_sequence): Likewise.
* dwarf2out.c (mem_loc_descriptor): Likewise.
* expmed.c (store_integral_bit_field, extract_bit_field_1): Likewise.
(extract_bit_field, extract_low_bits): Likewise.
* expr.c (convert_move, convert_modes, emit_move_insn_1): Likewise.
(optimize_bitfield_assignment_op, expand_assignment): Likewise.
(store_expr_with_bounds, store_field, expand_expr_real_1): Likewise.
* fold-const.c (optimize_bit_field_compare, merge_ranges): Likewise.
* gimple-fold.c (optimize_atomic_compare_exchange_p): Likewise.
* reload.c (find_reloads): Likewise.
* reload1.c (alter_reg): Likewise.
* stor-layout.c (bitwise_mode_for_mode, compute_record_mode): Likewise.
* targhooks.c (default_secondary_memory_needed_mode): Likewise.
* tree-if-conv.c (predicate_mem_writes): Likewise.
* tree-ssa-strlen.c (handle_builtin_memcmp): Likewise.
* tree-vect-patterns.c (adjust_bool_pattern): Likewise.
* tree-vect-stmts.c (vectorizable_simd_clone_call): Likewise.
* valtrack.c (dead_debug_insert_temp): Likewise.
* varasm.c (mergeable_constant_section): Likewise.
* config/sh/sh.h (LOCAL_ALIGNMENT): Use as_a <fixed_size_mode>.
gcc/ada/
* gcc-interface/misc.c (enumerate_modes): Treat GET_MODE_BITSIZE
as polynomial.
gcc/c-family/
* c-ubsan.c (ubsan_instrument_shift): Treat GET_MODE_BITSIZE
as polynomial.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256200
|
|
This patch changes TYPE_VECTOR_SUBPARTS to a poly_uint64. The value is
encoded in the 10-bit precision field and was previously always stored
as a simple log2 value. The challenge was to use this 10 bits to
encode the number of elements in variable-length vectors, so that
we didn't need to increase the size of the tree.
In practice the number of vector elements should always have the form
N + N * X (where X is the runtime value), and as for constant-length
vectors, N must be a power of 2 (even though X itself might not be).
The patch therefore uses the low 8 bits to encode log2(N) and bit
8 to select between constant-length and variable-length vectors.
Targets without variable-length vectors continue to use the old scheme.
A new valid_vector_subparts_p function tests whether a given number
of elements can be encoded. This is false for the vector modes that
represent an LD3 or ST3 vector triple (which we want to treat as arrays
of vectors rather than single vectors).
Most of the patch is mechanical; previous patches handled the changes
that weren't entirely straightforward.
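As a purely illustrative sketch of the encoding just described (not the
actual tree.c code):
  /* The low 8 bits hold log2(N); bit 8 says whether the number of units
     is the constant N or the runtime value N + N * X.  */
  static unsigned int
  encode_subparts (unsigned int log2_n, int variable_p)
  {
    return log2_n | (variable_p ? 0x100 : 0);
  }
  static unsigned int
  constant_coeff_of_subparts (unsigned int encoded)
  {
    return 1u << (encoded & 0xff);
  }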
2018-01-03 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (TYPE_VECTOR_SUBPARTS): Turn into a function and handle
polynomial numbers of units.
(SET_TYPE_VECTOR_SUBPARTS): Likewise.
(valid_vector_subparts_p): New function.
(build_vector_type): Remove temporary shim and take the number
of units as a poly_uint64 rather than an int.
(build_opaque_vector_type): Take the number of units as a
poly_uint64 rather than an int.
* tree.c (build_vector_from_ctor): Handle polynomial
TYPE_VECTOR_SUBPARTS.
(type_hash_canon_hash, type_cache_hasher::equal): Likewise.
(uniform_vector_p, vector_type_mode, build_vector): Likewise.
(build_vector_from_val): If the number of units is variable,
use build_vec_duplicate_cst for constant operands and
VEC_DUPLICATE_EXPR otherwise.
(make_vector_type): Remove temporary is_constant ().
(build_vector_type, build_opaque_vector_type): Take the number of
units as a poly_uint64 rather than an int.
(check_vector_cst): Handle polynomial TYPE_VECTOR_SUBPARTS and
VECTOR_CST_NELTS.
* cfgexpand.c (expand_debug_expr): Likewise.
* expr.c (count_type_elements, categorize_ctor_elements_1): Likewise.
(store_constructor, expand_expr_real_1): Likewise.
(const_scalar_mask_from_tree): Likewise.
* fold-const-call.c (fold_const_reduction): Likewise.
* fold-const.c (const_binop, const_unop, fold_convert_const): Likewise.
(operand_equal_p, fold_vec_perm, fold_ternary_loc): Likewise.
(native_encode_vector, vec_cst_ctor_to_array): Likewise.
(fold_relational_const): Likewise.
(native_interpret_vector): Likewise. Change the size from an
int to an unsigned int.
* gimple-fold.c (gimple_fold_stmt_to_constant_1): Handle polynomial
TYPE_VECTOR_SUBPARTS.
(gimple_fold_indirect_ref, gimple_build_vector): Likewise.
(gimple_build_vector_from_val): Use VEC_DUPLICATE_EXPR when
duplicating a non-constant operand into a variable-length vector.
* hsa-brig.c (hsa_op_immed::emit_to_buffer): Handle polynomial
TYPE_VECTOR_SUBPARTS and VECTOR_CST_NELTS.
* ipa-icf.c (sem_variable::equals): Likewise.
* match.pd: Likewise.
* omp-simd-clone.c (simd_clone_subparts): Likewise.
* print-tree.c (print_node): Likewise.
* stor-layout.c (layout_type): Likewise.
* targhooks.c (default_builtin_vectorization_cost): Likewise.
* tree-cfg.c (verify_gimple_comparison): Likewise.
(verify_gimple_assign_binary): Likewise.
(verify_gimple_assign_ternary): Likewise.
(verify_gimple_assign_single): Likewise.
* tree-pretty-print.c (dump_generic_node): Likewise.
* tree-ssa-forwprop.c (simplify_vector_constructor): Likewise.
(simplify_bitfield_ref, is_combined_permutation_identity): Likewise.
* tree-vect-data-refs.c (vect_permute_store_chain): Likewise.
(vect_grouped_load_supported, vect_permute_load_chain): Likewise.
(vect_shift_permute_load_chain): Likewise.
* tree-vect-generic.c (nunits_for_known_piecewise_op): Likewise.
(expand_vector_condition, optimize_vector_constructor): Likewise.
(lower_vec_perm, get_compute_type): Likewise.
* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
(get_initial_defs_for_reduction, vect_transform_loop): Likewise.
* tree-vect-patterns.c (vect_recog_bool_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
* tree-vect-slp.c (vect_supported_load_permutation_p): Likewise.
(vect_get_constant_vectors, vect_transform_slp_perm_load): Likewise.
* tree-vect-stmts.c (perm_mask_for_reverse): Likewise.
(get_group_load_store_type, vectorizable_mask_load_store): Likewise.
(vectorizable_bswap, simd_clone_subparts, vectorizable_assignment)
(vectorizable_shift, vectorizable_operation, vectorizable_store)
(vectorizable_load, vect_is_simple_cond, vectorizable_comparison)
(supportable_widening_operation): Likewise.
(supportable_narrowing_operation): Likewise.
* tree-vector-builder.c (tree_vector_builder::binary_encoded_nelts):
Likewise.
* varasm.c (output_constant): Likewise.
gcc/ada/
* gcc-interface/utils.c (gnat_types_compatible_p): Handle
polynomial TYPE_VECTOR_SUBPARTS.
gcc/brig/
* brigfrontend/brig-to-generic.cc (get_unsigned_int_type): Handle
polynomial TYPE_VECTOR_SUBPARTS.
* brigfrontend/brig-util.h (gccbrig_type_vector_subparts): Likewise.
gcc/c-family/
* c-common.c (vector_types_convertible_p, c_build_vec_perm_expr)
(convert_vector_to_array_for_subscript): Handle polynomial
TYPE_VECTOR_SUBPARTS.
(c_common_type_for_mode): Check valid_vector_subparts_p.
* c-pretty-print.c (pp_c_initializer_list): Handle polynomial
VECTOR_CST_NELTS.
gcc/c/
* c-typeck.c (comptypes_internal, build_binary_op): Handle polynomial
TYPE_VECTOR_SUBPARTS.
gcc/cp/
* constexpr.c (cxx_eval_array_reference): Handle polynomial
VECTOR_CST_NELTS.
(cxx_fold_indirect_ref): Handle polynomial TYPE_VECTOR_SUBPARTS.
* call.c (build_conditional_expr_1): Likewise.
* decl.c (cp_finish_decomp): Likewise.
* mangle.c (write_type): Likewise.
* typeck.c (structural_comptypes): Likewise.
(cp_build_binary_op): Likewise.
* typeck2.c (process_init_constructor_array): Likewise.
gcc/fortran/
* trans-types.c (gfc_type_for_mode): Check valid_vector_subparts_p.
gcc/lto/
* lto-lang.c (lto_type_for_mode): Check valid_vector_subparts_p.
* lto.c (hash_canonical_type): Handle polynomial TYPE_VECTOR_SUBPARTS.
gcc/go/
* go-lang.c (go_langhook_type_for_mode): Check valid_vector_subparts_p.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256197
|
|
From-SVN: r256169
|
|
2017-12-08 Richard Biener <rguenther@suse.de>
PR tree-optimization/81303
* tree-vect-stmts.c (vect_is_simple_cond): For invariant
conditions try to create a comparison vector type matching
the data vector type.
(vectorizable_condition): Adjust.
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern):
Leave invariant conditions alone in case we can vectorize those.
* gcc.target/i386/vectorize9.c: New testcase.
* gcc.target/i386/vectorize10.c: New testcase.
From-SVN: r255497
|
|
The wide_int routines allow things like:
wi::add (t, 1)
to add 1 to an INTEGER_CST T in its native precision. But we also have:
wi::to_offset (t) // Treat T as an offset_int
wi::to_widest (t) // Treat T as a widest_int
Recently we also gained:
wi::to_wide (t, prec) // Treat T as a wide_int in precision PREC
This patch therefore requires:
wi::to_wide (t)
when operating on INTEGER_CSTs in their native precision. This is
just as efficient, and makes it clearer that a deliberate choice is
being made to treat the tree as a wide_int in its native precision.
This also removes the inconsistency that
a) INTEGER_CSTs in their native precision can be used without an accessor
but must use wi:: functions instead of C++ operators
b) the other forms need an explicit accessor but the result can be used
with C++ operators.
It also helps with SVE, where there's the additional possibility
that the tree could be a runtime value.
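A minimal before/after illustration, where t is an INTEGER_CST and the
arithmetic itself is unchanged:
  /* Before: T was implicitly treated as a wide_int in its own precision.  */
  wide_int w1 = wi::add (t, 1);
  /* After this patch: the native-precision view is requested explicitly.  */
  wide_int w2 = wi::add (wi::to_wide (t), 1);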
2017-10-10 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* wide-int.h (wide_int_ref_storage): Make host_dependent_precision
a template parameter.
(WIDE_INT_REF_FOR): Update accordingly.
* tree.h (wi::int_traits <const_tree>): Delete.
(wi::tree_to_widest_ref, wi::tree_to_offset_ref): New typedefs.
(wi::to_widest, wi::to_offset): Use them. Expand commentary.
(wi::tree_to_wide_ref): New typedef.
(wi::to_wide): New function.
* calls.c (get_size_range): Use wi::to_wide when operating on
trees as wide_ints.
* cgraph.c (cgraph_node::create_thunk): Likewise.
* config/i386/i386.c (ix86_data_alignment): Likewise.
(ix86_local_alignment): Likewise.
* dbxout.c (stabstr_O): Likewise.
* dwarf2out.c (add_scalar_info, gen_enumeration_type_die): Likewise.
* expr.c (const_vector_from_tree): Likewise.
* fold-const-call.c (host_size_t_cst_p, fold_const_call_1): Likewise.
* fold-const.c (may_negate_without_overflow_p, negate_expr_p)
(fold_negate_expr_1, int_const_binop_1, const_binop)
(fold_convert_const_int_from_real, optimize_bit_field_compare)
(all_ones_mask_p, sign_bit_p, unextend, extract_muldiv_1)
(fold_div_compare, fold_single_bit_test, fold_plusminus_mult_expr)
(pointer_may_wrap_p, expr_not_equal_to, fold_binary_loc)
(fold_ternary_loc, multiple_of_p, fold_negate_const, fold_abs_const)
(fold_not_const, round_up_loc): Likewise.
* gimple-fold.c (gimple_fold_indirect_ref): Likewise.
* gimple-ssa-warn-alloca.c (alloca_call_type_by_arg): Likewise.
(alloca_call_type): Likewise.
* gimple.c (preprocess_case_label_vec_for_gimple): Likewise.
* godump.c (go_output_typedef): Likewise.
* graphite-sese-to-poly.c (tree_int_to_gmp): Likewise.
* internal-fn.c (get_min_precision): Likewise.
* ipa-cp.c (ipcp_store_vr_results): Likewise.
* ipa-polymorphic-call.c
(ipa_polymorphic_call_context::ipa_polymorphic_call_context): Likewise.
* ipa-prop.c (ipa_print_node_jump_functions_for_edge): Likewise.
(ipa_modify_call_arguments): Likewise.
* match.pd: Likewise.
* omp-low.c (scan_omp_1_op, lower_omp_ordered_clauses): Likewise.
* print-tree.c (print_node_brief, print_node): Likewise.
* stmt.c (expand_case): Likewise.
* stor-layout.c (layout_type): Likewise.
* tree-affine.c (tree_to_aff_combination): Likewise.
* tree-cfg.c (group_case_labels_stmt): Likewise.
* tree-data-ref.c (dr_analyze_indices): Likewise.
(prune_runtime_alias_test_list): Likewise.
* tree-dump.c (dequeue_and_dump): Likewise.
* tree-inline.c (remap_gimple_op_r, copy_tree_body_r): Likewise.
* tree-predcom.c (is_inv_store_elimination_chain): Likewise.
* tree-pretty-print.c (dump_generic_node): Likewise.
* tree-scalar-evolution.c (iv_can_overflow_p): Likewise.
(simple_iv_with_niters): Likewise.
* tree-ssa-address.c (addr_for_mem_ref): Likewise.
* tree-ssa-ccp.c (ccp_finalize, evaluate_stmt): Likewise.
* tree-ssa-loop-ivopts.c (constant_multiple_of): Likewise.
* tree-ssa-loop-niter.c (split_to_var_and_offset)
(refine_value_range_using_guard, number_of_iterations_ne_max)
(number_of_iterations_lt_to_ne, number_of_iterations_lt)
(get_cst_init_from_scev, record_nonwrapping_iv)
(scev_var_range_cant_overflow): Likewise.
* tree-ssa-phiopt.c (minmax_replacement): Likewise.
* tree-ssa-pre.c (compute_avail): Likewise.
* tree-ssa-sccvn.c (vn_reference_fold_indirect): Likewise.
(vn_reference_maybe_forwprop_address, valueized_wider_op): Likewise.
* tree-ssa-structalias.c (get_constraint_for_ptr_offset): Likewise.
* tree-ssa-uninit.c (is_pred_expr_subset_of): Likewise.
* tree-ssanames.c (set_nonzero_bits, get_nonzero_bits): Likewise.
* tree-switch-conversion.c (collect_switch_conv_info, array_value_type)
(dump_case_nodes, try_switch_expansion): Likewise.
* tree-vect-loop-manip.c (vect_gen_vector_loop_niters): Likewise.
(vect_do_peeling): Likewise.
* tree-vect-patterns.c (vect_recog_bool_pattern): Likewise.
* tree-vect-stmts.c (vectorizable_load): Likewise.
* tree-vrp.c (compare_values_warnv, vrp_int_const_binop): Likewise.
(zero_nonzero_bits_from_vr, ranges_from_anti_range): Likewise.
(extract_range_from_binary_expr_1, adjust_range_with_scev): Likewise.
(overflow_comparison_p_1, register_edge_assert_for_2): Likewise.
(is_masked_range_test, find_switch_asserts, maybe_set_nonzero_bits)
(vrp_evaluate_conditional_warnv_with_ops, intersect_ranges): Likewise.
(range_fits_type_p, two_valued_val_range_p, vrp_finalize): Likewise.
(evrp_dom_walker::before_dom_children): Likewise.
* tree.c (cache_integer_cst, real_value_from_int_cst, integer_zerop)
(integer_all_onesp, integer_pow2p, integer_nonzerop, tree_log2)
(tree_floor_log2, tree_ctz, mem_ref_offset, tree_int_cst_sign_bit)
(tree_int_cst_sgn, get_unwidened, int_fits_type_p): Likewise.
(get_type_static_bounds, num_ending_zeros, drop_tree_overflow)
(get_range_pos_neg): Likewise.
* ubsan.c (ubsan_expand_ptr_ifn): Likewise.
* config/darwin.c (darwin_mergeable_constant_section): Likewise.
* config/aarch64/aarch64.c (aapcs_vfp_sub_candidate): Likewise.
* config/arm/arm.c (aapcs_vfp_sub_candidate): Likewise.
* config/avr/avr.c (avr_fold_builtin): Likewise.
* config/bfin/bfin.c (bfin_local_alignment): Likewise.
* config/msp430/msp430.c (msp430_attr): Likewise.
* config/nds32/nds32.c (nds32_insert_attributes): Likewise.
* config/powerpcspe/powerpcspe-c.c
(altivec_resolve_overloaded_builtin): Likewise.
* config/powerpcspe/powerpcspe.c (rs6000_aggregate_candidate)
(rs6000_expand_ternop_builtin): Likewise.
* config/rs6000/rs6000-c.c
(altivec_resolve_overloaded_builtin): Likewise.
* config/rs6000/rs6000.c (rs6000_aggregate_candidate): Likewise.
(rs6000_expand_ternop_builtin): Likewise.
* config/s390/s390.c (s390_handle_hotpatch_attribute): Likewise.
gcc/ada/
* gcc-interface/decl.c (annotate_value): Use wi::to_wide when
operating on trees as wide_ints.
gcc/c/
* c-parser.c (c_parser_cilk_clause_vectorlength): Use wi::to_wide when
operating on trees as wide_ints.
* c-typeck.c (build_c_cast, c_finish_omp_clauses): Likewise.
(c_tree_equal): Likewise.
gcc/c-family/
* c-ada-spec.c (dump_generic_ada_node): Use wi::to_wide when
operating on trees as wide_ints.
* c-common.c (pointer_int_sum): Likewise.
* c-pretty-print.c (pp_c_integer_constant): Likewise.
* c-warn.c (match_case_to_enum_1): Likewise.
(c_do_switch_warnings): Likewise.
(maybe_warn_shift_overflow): Likewise.
gcc/cp/
* cvt.c (ignore_overflows): Use wi::to_wide when
operating on trees as wide_ints.
* decl.c (check_array_designated_initializer): Likewise.
* mangle.c (write_integer_cst): Likewise.
* semantics.c (cp_finish_omp_clause_depend_sink): Likewise.
gcc/fortran/
* target-memory.c (gfc_interpret_logical): Use wi::to_wide when
operating on trees as wide_ints.
* trans-const.c (gfc_conv_tree_to_mpz): Likewise.
* trans-expr.c (gfc_conv_cst_int_power): Likewise.
* trans-intrinsic.c (trans_this_image): Likewise.
(gfc_conv_intrinsic_bound): Likewise.
(conv_intrinsic_cobound): Likewise.
gcc/lto/
* lto.c (compare_tree_sccs_1): Use wi::to_wide when
operating on trees as wide_ints.
gcc/objc/
* objc-act.c (objc_decl_method_attributes): Use wi::to_wide when
operating on trees as wide_ints.
From-SVN: r253595
|
|
2017-09-25 Richard Biener <rguenther@suse.de>
PR tree-optimization/82285
* tree-vect-patterns.c (vect_recog_bool_pattern): Also handle
enumeral types.
* gcc.dg/torture/pr82285.c: New testcase.
From-SVN: r253146
|
|
tree-vect-patterns.c (vect_pattern_recog_1): Use VECTOR_TYPE_P instead of
VECTOR_MODE_P check.
* tree-vect-patterns.c (vect_pattern_recog_1): Use VECTOR_TYPE_P instead
of VECTOR_MODE_P check.
* tree-vect-stmts.c (get_vectype_for_scalar_type_and_size): Allow single
element vector types.
Co-Authored-By: Richard Biener <rguenther@suse.de>
From-SVN: r251538
|
|
This patch adds a SCALAR_TYPE_MODE macro, along the same lines as
SCALAR_INT_TYPE_MODE and SCALAR_FLOAT_TYPE_MODE. It also adds
two instances of as_a <scalar_mode> to c_common_type, when converting
an unsigned fixed-point SCALAR_TYPE_MODE to the equivalent signed mode.
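Illustrative usage (the variable names here are arbitrary):
  /* Checks that TYPE has a scalar mode and hands it back with the more
     precise C++ type instead of a bare machine_mode.  */
  scalar_mode smode = SCALAR_TYPE_MODE (type);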
2017-08-30 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (SCALAR_TYPE_MODE): New macro.
* expr.c (expand_expr_addr_expr_1): Use it.
(expand_expr_real_2): Likewise.
* fold-const.c (fold_convert_const_fixed_from_fixed): Likewise.
(fold_convert_const_fixed_from_int): Likewise.
(fold_convert_const_fixed_from_real): Likewise.
(native_encode_fixed): Likewise.
(native_encode_complex): Likewise.
(native_encode_vector): Likewise.
(native_interpret_fixed): Likewise.
(native_interpret_real): Likewise.
(native_interpret_complex): Likewise.
(native_interpret_vector): Likewise.
* omp-simd-clone.c (simd_clone_adjust_return_type): Likewise.
(simd_clone_adjust_argument_types): Likewise.
(simd_clone_init_simd_arrays): Likewise.
(simd_clone_adjust): Likewise.
* stor-layout.c (layout_type): Likewise.
* tree.c (build_minus_one_cst): Likewise.
* tree-cfg.c (verify_gimple_assign_ternary): Likewise.
* tree-inline.c (estimate_move_cost): Likewise.
* tree-ssa-math-opts.c (convert_plusminus_to_widen): Likewise.
* tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise.
(vectorizable_reduction): Likewise.
* tree-vect-patterns.c (vect_recog_widen_mult_pattern): Likewise.
(vect_recog_mixed_size_cond_pattern): Likewise.
(check_bool_pattern): Likewise.
(adjust_bool_pattern): Likewise.
(search_type_for_mask_1): Likewise.
* tree-vect-slp.c (vect_schedule_slp_instance): Likewise.
* tree-vect-stmts.c (vectorizable_conversion): Likewise.
(vectorizable_load): Likewise.
(vectorizable_store): Likewise.
* ubsan.c (ubsan_encode_value): Likewise.
* varasm.c (output_constant): Likewise.
gcc/c-family/
* c-lex.c (interpret_fixed): Use SCALAR_TYPE_MODE.
* c-common.c (c_build_vec_perm_expr): Likewise.
gcc/c/
* c-typeck.c (build_binary_op): Use SCALAR_TYPE_MODE.
(c_common_type): Likewise. Use as_a <scalar_mode> when setting
m1 and m2 to the signed equivalent of a fixed-point
SCALAR_TYPE_MODE.
gcc/cp/
* typeck.c (cp_build_binary_op): Use SCALAR_TYPE_MODE.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r251516
|
|
This patch adds a SCALAR_INT_TYPE_MODE macro that asserts
that the type has a scalar integer mode and returns it as
a scalar_int_mode.
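Illustrative usage, analogous to the other *_TYPE_MODE macros:
  /* Asserts (in checking builds) that TYPE has a scalar integer mode.  */
  scalar_int_mode imode = SCALAR_INT_TYPE_MODE (type);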
2017-08-30 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (SCALAR_INT_TYPE_MODE): New macro.
* builtins.c (expand_builtin_signbit): Use it.
* cfgexpand.c (expand_debug_expr): Likewise.
* dojump.c (do_jump): Likewise.
(do_compare_and_jump): Likewise.
* dwarf2cfi.c (expand_builtin_init_dwarf_reg_sizes): Likewise.
* expmed.c (make_tree): Likewise.
* expr.c (expand_expr_real_2): Likewise.
(expand_expr_real_1): Likewise.
(try_casesi): Likewise.
* fold-const-call.c (fold_const_call_ss): Likewise.
* fold-const.c (unextend): Likewise.
(extract_muldiv_1): Likewise.
(fold_single_bit_test): Likewise.
(native_encode_int): Likewise.
(native_encode_string): Likewise.
(native_interpret_int): Likewise.
* gimple-fold.c (gimple_fold_builtin_memset): Likewise.
* internal-fn.c (expand_addsub_overflow): Likewise.
(expand_neg_overflow): Likewise.
(expand_mul_overflow): Likewise.
(expand_arith_overflow): Likewise.
* match.pd: Likewise.
* stor-layout.c (layout_type): Likewise.
* tree-cfg.c (verify_gimple_assign_ternary): Likewise.
* tree-ssa-math-opts.c (convert_mult_to_widen): Likewise.
* tree-ssanames.c (get_range_info): Likewise.
* tree-switch-conversion.c (array_value_type): Likewise.
* tree-vect-patterns.c (vect_recog_rotate_pattern): Likewise.
(vect_recog_divmod_pattern): Likewise.
(vect_recog_mixed_size_cond_pattern): Likewise.
* tree-vrp.c (extract_range_basic): Likewise.
(simplify_float_conversion_using_ranges): Likewise.
* tree.c (int_fits_type_p): Likewise.
* ubsan.c (instrument_bool_enum_load): Likewise.
* varasm.c (mergeable_string_section): Likewise.
(narrowing_initializer_constant_valid_p): Likewise.
(output_constant): Likewise.
gcc/cp/
* cvt.c (cp_convert_to_pointer): Use SCALAR_INT_TYPE_MODE.
gcc/fortran/
* target-memory.c (size_integer): Use SCALAR_INT_TYPE_MODE.
(size_logical): Likewise.
gcc/objc/
* objc-encoding.c (encode_type): Use SCALAR_INT_TYPE_MODE.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r251486
|
|
This patch sets the nothrow flag for various calls to internal functions
that are not inherently NOTHROW (and so can't be declared that way in
internal-fn.def) but that are used in contexts that can guarantee
NOTHROWness.
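Concretely, the affected callers do something along these lines when
building such a call (a sketch; IFN_DIVMOD is just one example of an
internal function that is not inherently nothrow):
  /* The surrounding context is assumed to guarantee that this call
     cannot throw, so mark it as nothrow explicitly.  */
  gcall *call = gimple_build_call_internal (IFN_DIVMOD, 2, op0, op1);
  gimple_call_set_nothrow (call, true);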
2017-08-29 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* gimplify.c (gimplify_call_expr): Copy the nothrow flag to
calls to internal functions.
(gimplify_modify_expr): Likewise.
* tree-call-cdce.c (use_internal_fn): Likewise.
* tree-ssa-math-opts.c (pass_cse_reciprocals::execute): Likewise.
(convert_to_divmod): Set the nothrow flag.
* tree-if-conv.c (predicate_mem_writes): Likewise.
* tree-vect-stmts.c (vectorizable_mask_load_store): Likewise.
(vectorizable_call): Likewise.
(vectorizable_store): Likewise.
(vectorizable_load): Likewise.
* tree-vect-patterns.c (vect_recog_pow_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
From-SVN: r251401
|
|
...to replace instances of:
TYPE_PRECISION (t) == GET_MODE_PRECISION (TYPE_MODE (t))
These conditions would need to be rewritten with variable-sized
modes anyway.
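In other words, the helper is essentially a named form of that condition,
roughly:
  /* Sketch of the new predicate.  */
  static inline bool
  type_has_mode_precision_p (const_tree t)
  {
    return TYPE_PRECISION (t) == GET_MODE_PRECISION (TYPE_MODE (t));
  }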
2017-08-21 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* tree.h (type_has_mode_precision_p): New function.
* convert.c (convert_to_integer_1): Use it.
* expr.c (expand_expr_real_2): Likewise.
(expand_expr_real_1): Likewise.
* fold-const.c (fold_single_bit_test_into_sign_test): Likewise.
* match.pd: Likewise.
* tree-ssa-forwprop.c (simplify_rotate): Likewise.
* tree-ssa-math-opts.c (convert_mult_to_fma): Likewise.
* tree-tailcall.c (process_assignment): Likewise.
* tree-vect-loop.c (vectorizable_reduction): Likewise.
* tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern)
(vect_recog_mult_pattern, vect_recog_divmod_pattern): Likewise.
* tree-vect-stmts.c (vectorizable_conversion): Likewise.
(vectorizable_assignment): Likewise.
(vectorizable_shift): Likewise.
(vectorizable_operation): Likewise.
* tree-vrp.c (register_edge_assert_for_2): Likewise.
From-SVN: r251231
|
|
This patch replaces the individual stmt_vinfo dr_* fields with
an innermost_loop_behavior, so that the changes in later patches
get picked up automatically. It also adds a helper function for
getting the behavior of a data reference wrt the vectorised loop.
2017-07-03 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* tree-vectorizer.h (_stmt_vec_info): Replace individual dr_*
fields with dr_wrt_vec_loop.
(STMT_VINFO_DR_BASE_ADDRESS, STMT_VINFO_DR_INIT, STMT_VINFO_DR_OFFSET)
(STMT_VINFO_DR_STEP, STMT_VINFO_DR_ALIGNED_TO): Update accordingly.
(STMT_VINFO_DR_WRT_VEC_LOOP): New macro.
(vect_dr_behavior): New function.
(vect_create_addr_base_for_vector_ref): Remove loop parameter.
* tree-vect-data-refs.c (vect_compute_data_ref_alignment): Use
vect_dr_behavior. Use a step_preserves_misalignment_p boolean to
track whether the step preserves the misalignment.
(vect_create_addr_base_for_vector_ref): Remove loop parameter.
Use vect_dr_behavior.
(vect_setup_realignment): Update call accordingly.
(vect_create_data_ref_ptr): Likewise. Use vect_dr_behavior.
* tree-vect-loop-manip.c (vect_gen_prolog_loop_niters): Update
call to vect_create_addr_base_for_vector_ref.
(vect_create_cond_for_align_checks): Likewise.
* tree-vect-patterns.c (vect_recog_bool_pattern): Copy
STMT_VINFO_DR_WRT_VEC_LOOP as a block.
(vect_recog_mask_conversion_pattern): Likewise.
* tree-vect-stmts.c (compare_step_with_zero): Use vect_dr_behavior.
(new_stmt_vec_info): Remove redundant zeroing.
From-SVN: r249911
|
|
verify_gimple failed)
PR tree-optimization/79284
* tree-vectorizer.h (VECT_SCALAR_BOOLEAN_TYPE_P): Define.
* tree-vect-stmts.c (vect_get_vec_def_for_operand,
vectorizable_mask_load_store, vectorizable_operation,
vect_is_simple_cond, get_same_sized_vectype): Use it instead
of comparing TREE_CODE of a type against BOOLEAN_TYPE.
* tree-vect-patterns.c (check_bool_pattern, search_type_for_mask_1,
vect_recog_bool_pattern, vect_recog_mask_conversion_pattern): Likewise.
* tree-vect-slp.c (vect_get_constant_vectors): Likewise.
* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
Remove redundant gimple_code (stmt) == GIMPLE_ASSIGN test after
is_gimple_assign (stmt). Replace another such test with
is_gimple_assign (stmt).
testsuite/
* gcc.c-torture/compile/pr79284.c: New test.
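The intent of the new VECT_SCALAR_BOOLEAN_TYPE_P macro, sketched from the description above (treat 1-bit unsigned integral types like BOOLEAN_TYPE; the precise definition in tree-vectorizer.h is assumed):
  /* True if TYPE acts as a scalar boolean for vectorization purposes:
     a real BOOLEAN_TYPE or a 1-bit unsigned integral type.  */
  #define VECT_SCALAR_BOOLEAN_TYPE_P(TYPE) \
    (TREE_CODE (TYPE) == BOOLEAN_TYPE \
     || ((TREE_CODE (TYPE) == INTEGER_TYPE \
          || TREE_CODE (TYPE) == ENUMERAL_TYPE) \
         && TYPE_PRECISION (TYPE) == 1 \
         && TYPE_UNSIGNED (TYPE)))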
From-SVN: r245214
|
|
From-SVN: r243994
|
|
PR target/78102
* optabs.def (vcondeq_optab, vec_cmpeq_optab): New optabs.
* optabs.c (expand_vec_cond_expr): For comparison codes
EQ_EXPR and NE_EXPR, attempt vcondeq_optab as fallback.
(expand_vec_cmp_expr): For comparison codes
EQ_EXPR and NE_EXPR, attempt vec_cmpeq_optab as fallback.
* optabs-tree.h (expand_vec_cmp_expr_p, expand_vec_cond_expr_p):
Add enum tree_code argument.
* optabs-query.h (get_vec_cmp_eq_icode, get_vcond_eq_icode): New
inline functions.
* optabs-tree.c (expand_vec_cmp_expr_p): Add CODE argument. For
CODE EQ_EXPR or NE_EXPR, attempt to use vec_cmpeq_optab as
fallback.
(expand_vec_cond_expr_p): Add CODE argument. For CODE EQ_EXPR or
NE_EXPR, attempt to use vcondeq_optab as fallback.
* tree-vect-generic.c (expand_vector_comparison,
expand_vector_divmod, expand_vector_condition): Adjust
expand_vec_cmp_expr_p and expand_vec_cond_expr_p callers.
* tree-vect-stmts.c (vectorizable_condition,
vectorizable_comparison): Likewise.
* tree-vect-patterns.c (vect_recog_mixed_size_cond_pattern,
check_bool_pattern, search_type_for_mask_1): Likewise.
* expr.c (do_store_flag): Likewise.
* doc/md.texi (@code{vec_cmpeq@var{m}@var{n}},
@code{vcondeq@var{m}@var{n}}): Document.
* config/i386/sse.md (vec_cmpeqv2div2di, vcondeq<VI8F_128:mode>v2di):
New expanders.
testsuite/
* gcc.target/i386/pr78102.c: New test.
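A sketch of the fallback logic this entry describes for the comparison case; the exact optabs-tree.c code is assumed rather than quoted:
  /* Can a vector comparison of VALUE_TYPE producing MASK_TYPE be expanded?
     For EQ/NE, fall back to the new vec_cmpeq optab when the general
     vec_cmp/vec_cmpu optab is unavailable.  */
  bool
  expand_vec_cmp_expr_p (tree value_type, tree mask_type, enum tree_code code)
  {
    if (get_vec_cmp_icode (TYPE_MODE (value_type), TYPE_MODE (mask_type),
                           TYPE_UNSIGNED (value_type)) != CODE_FOR_nothing)
      return true;
    return ((code == EQ_EXPR || code == NE_EXPR)
            && (get_vec_cmp_eq_icode (TYPE_MODE (value_type),
                                      TYPE_MODE (mask_type))
                != CODE_FOR_nothing));
  }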
From-SVN: r241525
|
|
* hwint.h (least_bit_hwi, pow2_or_zerop, pow2p_hwi, ctz_or_zero):
New.
* hwint.c (exact_log2): Use pow2p_hwi.
(ctz_hwi, ffs_hwi): Use least_bit_hwi.
* alias.c (memrefs_conflict_p): Use pow2_or_zerop.
* builtins.c (get_object_alignment_2, get_object_alignment)
(get_pointer_alignment, fold_builtin_atomic_always_lock_free): Use
least_bit_hwi.
* calls.c (compute_argument_addresses, store_one_arg): Use
least_bit_hwi.
* cfgexpand.c (expand_one_stack_var_at): Use least_bit_hwi.
* combine.c (force_to_mode): Use least_bit_hwi.
* emit-rtl.c (set_mem_attributes_minus_bitpos, adjust_address_1):
Use least_bit_hwi.
* expmed.c (synth_mult, expand_divmod): Use ctz_or_zero, ctz_hwi.
(init_expmed_one_conv): Use pow2p_hwi.
* fold-const.c (round_up_loc, round_down_loc): Use pow2_or_zerop.
(fold_binary_loc): Use pow2p_hwi.
* function.c (assign_parm_find_stack_rtl): Use least_bit_hwi.
* gimple-fold.c (gimple_fold_builtin_memory_op): Use pow2p_hwi.
* gimple-ssa-strength-reduction.c (replace_ref): Use least_bit_hwi.
* hsa-gen.c (gen_hsa_addr_with_align, hsa_bitmemref_alignment):
Use least_bit_hwi.
* ipa-cp.c (ipcp_alignment_lattice::meet_with_1): Use least_bit_hwi.
* ipa-prop.c (ipa_modify_call_arguments): Use least_bit_hwi.
* omp-low.c (oacc_loop_fixed_partitions)
(oacc_loop_auto_partitions): Use least_bit_hwi.
* rtlanal.c (nonzero_bits1): Use ctz_or_zero.
* stor-layout.c (place_field): Use least_bit_hwi.
* tree-pretty-print.c (dump_generic_node): Use pow2p_hwi.
* tree-sra.c (build_ref_for_offset): Use least_bit_hwi.
* tree-ssa-ccp.c (ccp_finalize): Use least_bit_hwi.
* tree-ssa-math-opts.c (bswap_replace): Use least_bit_hwi.
* tree-ssa-strlen.c (handle_builtin_memcmp): Use pow2p_hwi.
* tree-vect-data-refs.c (vect_analyze_group_access_1)
(vect_grouped_store_supported, vect_grouped_load_supported)
(vect_permute_load_chain, vect_shift_permute_load_chain)
(vect_transform_grouped_load): Use pow2p_hwi.
* tree-vect-generic.c (expand_vector_divmod): Use ctz_or_zero.
* tree-vect-patterns.c (vect_recog_divmod_pattern): Use ctz_or_zero.
* tree-vect-stmts.c (vectorizable_mask_load_store): Use
least_bit_hwi.
* tsan.c (instrument_expr): Use least_bit_hwi.
* var-tracking.c (negative_power_of_two_p): Use pow2_or_zerop.
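Sketches of the new hwint.h helpers, reconstructed from their names and the uses listed above (the committed definitions may differ slightly; ctz_or_zero is omitted):
  /* The least significant set bit of X, or zero if X is zero.  */
  static inline unsigned HOST_WIDE_INT
  least_bit_hwi (unsigned HOST_WIDE_INT x)
  {
    return x & -x;
  }

  /* True if X is zero or a power of two.  */
  static inline bool
  pow2_or_zerop (unsigned HOST_WIDE_INT x)
  {
    return least_bit_hwi (x) == x;
  }

  /* True if X is a power of two.  */
  static inline bool
  pow2p_hwi (unsigned HOST_WIDE_INT x)
  {
    return x != 0 && pow2_or_zerop (x);
  }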
From-SVN: r240194
|
|
PR tree-optimization/72866
* tree-vect-patterns.c (search_type_for_mask): Turn into
a small wrapper, move all code to ...
(search_type_for_mask_1): ... this new function. Add caching
and adjust recursive calls.
* gcc.dg/vect/pr72866.c: New test.
From-SVN: r239856
|
|
* cse.c: Use HOST_WIDE_INT_M1 instead of ~(HOST_WIDE_INT) 0.
* combine.c: Use HOST_WIDE_INT_M1U instead of
~(unsigned HOST_WIDE_INT) 0.
* double-int.h: Ditto.
* dse.c: Ditto.
* dwarf2asm.c: Ditto.
* expmed.c: Ditto.
* genmodes.c: Ditto.
* match.pd: Ditto.
* read-rtl.c: Ditto.
* tree-ssa-loop-ivopts.c: Ditto.
* tree-ssa-loop-prefetch.c: Ditto.
* tree-vect-generic.c: Ditto.
* tree-vect-patterns.c: Ditto.
* tree.c: Ditto.
From-SVN: r238529
|
|
* builtins.c: Use HOST_WIDE_INT_1 instead of (HOST_WIDE_INT) 1,
HOST_WIDE_INT_1U instead of (unsigned HOST_WIDE_INT) 1,
HOST_WIDE_INT_M1 instead of (HOST_WIDE_INT) -1 and
HOST_WIDE_INT_M1U instead of (unsigned HOST_WIDE_INT) -1.
* combine.c: Ditto.
* cse.c: Ditto.
* dojump.c: Ditto.
* double-int.c: Ditto.
* dse.c: Ditto.
* dwarf2out.c: Ditto.
* expmed.c: Ditto.
* expr.c: Ditto.
* fold-const.c: Ditto.
* function.c: Ditto.
* fwprop.c: Ditto.
* genmodes.c: Ditto.
* hwint.c: Ditto.
* hwint.h: Ditto.
* ifcvt.c: Ditto.
* loop-doloop.c: Ditto.
* loop-invariant.c: Ditto.
* loop-iv.c: Ditto.
* match.pd: Ditto.
* optabs.c: Ditto.
* real.c: Ditto.
* reload.c: Ditto.
* rtlanal.c: Ditto.
* simplify-rtx.c: Ditto.
* stor-layout.c: Ditto.
* toplev.c: Ditto.
* tree-ssa-loop-ivopts.c: Ditto.
* tree-vect-generic.c: Ditto.
* tree-vect-patterns.c: Ditto.
* tree.c: Ditto.
* tree.h: Ditto.
* ubsan.c: Ditto.
* varasm.c: Ditto.
* wide-int-print.cc: Ditto.
* wide-int.cc: Ditto.
* wide-int.h: Ditto.
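A usage illustration of the replacement performed by this entry and the previous one; the macros themselves come from hwint.h:
  unsigned HOST_WIDE_INT all_ones_old = ~(unsigned HOST_WIDE_INT) 0;  /* before */
  unsigned HOST_WIDE_INT all_ones_new = HOST_WIDE_INT_M1U;            /* after  */
  HOST_WIDE_INT bit5_old = (HOST_WIDE_INT) 1 << 5;                    /* before */
  HOST_WIDE_INT bit5_new = HOST_WIDE_INT_1 << 5;                      /* after  */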
From-SVN: r238481
|
|
mult-by-constant
PR target/65951
PR tree-optimization/70923
* tree-vect-patterns.c: Include mult-synthesis.h.
(target_supports_mult_synth_alg): New function.
(synth_lshift_by_additions): Likewise.
(apply_binop_and_append_stmt): Likewise.
(vect_synth_mult_by_constant): Likewise.
(target_has_vecop_for_code): Likewise.
(vect_recog_mult_pattern): Use above functions to synthesize vector
multiplication by integer constants.
* gcc.dg/vect/vect-mult-const-pattern-1.c: New test.
* gcc.dg/vect/vect-mult-const-pattern-2.c: Likewise.
* gcc.dg/vect/pr65951.c: Likewise.
* gcc.dg/vect/vect-iv-9.c: Remove ! vect_int_mult-specific scan.
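Illustration only, not the patch's code: the kind of decomposition vect_synth_mult_by_constant emits, shown on a scalar. The same shift/add sequence is generated as vector statements when the target lacks a vector multiply or the synthesized form is cheaper.
  static inline int
  mult_by_5 (int x)
  {
    return (x << 2) + x;   /* x * 5, synthesized from a shift and an add.  */
  }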
From-SVN: r238340
|
|
ivybridge and westmere targets)
gcc/
PR middle-end/71488
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern): Support
comparison of boolean vectors.
* tree-vect-stmts.c (vectorizable_comparison): Vectorize comparison
of boolean vectors using bitwise operations.
gcc/testsuite/
PR middle-end/71488
* g++.dg/pr71488.C: New test.
* gcc.dg/vect/vect-bool-cmp.c: New test.
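Illustration only: for boolean (mask) vectors, equality and inequality reduce per element to bitwise operations on the masks, which is the lowering this entry describes.
  static inline unsigned
  mask_ne (unsigned a, unsigned b)   /* bitwise model of a != b on masks */
  {
    return a ^ b;
  }

  static inline unsigned
  mask_eq (unsigned a, unsigned b)   /* bitwise model of a == b on masks */
  {
    return ~(a ^ b);
  }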
From-SVN: r237706
|
|
2016-06-08 Alan Hayward <alan.hayward@arm.com>
gcc/
* tree-vect-data-refs.c (vect_analyze_data_refs): Remove debug newline.
* tree-vect-loop-manip.c (slpeel_make_loop_iterate_ntimes): Likewise.
(vect_can_advance_ivs_p): Likewise.
(vect_update_ivs_after_vectorizer): Likewise.
* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
(vect_analyze_scalar_cycles_1): Likewise.
(vect_analyze_loop_operations): Likewise.
(report_vect_op): Likewise.
(vect_is_slp_reduction): Likewise.
(vect_is_simple_reduction): Likewise.
(get_initial_def_for_induction): Likewise.
(vect_transform_loop): Likewise.
* tree-vect-patterns.c (vect_recog_dot_prod_pattern): Likewise.
(vect_recog_sad_pattern): Likewise.
(vect_recog_widen_sum_pattern): Likewise.
(vect_recog_widening_pattern): Likewise.
(vect_recog_divmod_pattern): Likewise.
* tree-vect-slp.c (vect_build_slp_tree_1): Likewise.
(vect_analyze_slp_instance): Likewise.
(vect_transform_slp_perm_load): Likewise.
(vect_schedule_slp_instance): Likewise.
From-SVN: r237198
|
|
2016-06-01 Richard Biener <rguenther@suse.de>
PR tree-optimization/71261
* tree-vect-patterns.c (check_bool_pattern): Gather a hash-set
of stmts successfully put in the bool pattern. Remove
single-use restriction.
(adjust_bool_pattern_cast): Add cast at the use site via the
pattern def sequence.
(adjust_bool_pattern): Remove recursion, maintain a hash-map
of patterned defs. Use the pattern def sequence instead of
multiple independent patterns.
(sort_after_uid): New qsort compare function.
(adjust_bool_stmts): New function to process stmts in the bool
pattern in IL order.
(vect_recog_bool_pattern): Adjust.
* tree-if-conv.c (ifcvt_split_def_stmt): Remove.
(ifcvt_walk_pattern_tree): Likewise.
(stmt_is_root_of_bool_pattern): Likewise.
(ifcvt_repair_bool_pattern): Likewise.
(tree_if_conversion): Do not call ifcvt_repair_bool_pattern.
* gcc.dg/torture/vect-bool-1.c: New testcase.
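A sketch of the qsort comparator this entry adds, assumed from its name and purpose (order statements by gimple UID so the bool-pattern statements are processed in IL order):
  static int
  sort_after_uid (const void *p1, const void *p2)
  {
    const gimple *stmt1 = *(const gimple * const *) p1;
    const gimple *stmt2 = *(const gimple * const *) p2;
    return gimple_uid (stmt1) - gimple_uid (stmt2);
  }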
From-SVN: r236989
|
|
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern):
Initialize a variable with default value.
From-SVN: r236203
|
|
x86_64-linux-gnu in "tree_operand_check")
PR tree-optimization/70916
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern): Give up
if COND_EXPR rhs1 is neither SSA_NAME nor COMPARISON_CLASS_P.
* gcc.c-torture/compile/pr70916.c: New test.
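A sketch of the added guard, assuming rhs1 holds the COND_EXPR condition inside vect_recog_mask_conversion_pattern:
  /* Bail out unless the condition is something the pattern can handle.  */
  if (TREE_CODE (rhs1) != SSA_NAME && !COMPARISON_CLASS_P (rhs1))
    return NULL;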
From-SVN: r235814
|
|
-march=skylake-avx512.)
PR tree-optimization/70354
* tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern): If
oprnd0 is wider than oprnd1 and there is a cast from the wider
type to oprnd1, mask it with the mask of the narrower type.
* gcc.dg/vect/pr70354-1.c: New test.
* gcc.dg/vect/pr70354-2.c: New test.
* gcc.target/i386/avx2-pr70354-1.c: New test.
* gcc.target/i386/avx2-pr70354-2.c: New test.
From-SVN: r234417
|