This patch adds a vec_info replacement for vinfo_for_stmt. The main
difference is that the new routine can cope with arbitrary statements,
so there's no need to call vect_stmt_in_region_p first.
The patch only converts calls that are still needed at the end of the
series. Later patches get rid of most other calls to vinfo_for_stmt.
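A minimal usage sketch (the new routine is declared in tree-vectorizer.h;
the surrounding caller code is hypothetical):
  stmt_vec_info stmt_info = vinfo->lookup_stmt (stmt);
  if (!stmt_info)
    /* STMT is not part of the region being vectorized.  */
    return;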
2018-07-31 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vec_info::lookup_stmt): Declare.
* tree-vectorizer.c (vec_info::lookup_stmt): New function.
* tree-vect-loop.c (vect_determine_vf_for_stmt): Use it instead
of vinfo_for_stmt.
(vect_determine_vectorization_factor, vect_analyze_scalar_cycles_1)
(vect_compute_single_scalar_iteration_cost, vect_analyze_loop_form)
(vect_update_vf_for_slp, vect_analyze_loop_operations)
(vect_is_slp_reduction, vectorizable_induction)
(vect_transform_loop_stmt, vect_transform_loop): Likewise.
* tree-vect-patterns.c (vect_init_pattern_stmt)
(vect_determine_min_output_precision_1, vect_determine_precisions)
(vect_pattern_recog): Likewise.
* tree-vect-stmts.c (vect_analyze_stmt, vect_transform_stmt): Likewise.
* config/powerpcspe/powerpcspe.c (rs6000_density_test): Likewise.
* config/rs6000/rs6000.c (rs6000_density_test): Likewise.
* tree-vect-slp.c (vect_detect_hybrid_slp_stmts): Likewise.
(vect_detect_hybrid_slp_1, vect_detect_hybrid_slp_2)
(vect_detect_hybrid_slp): Likewise. Change the walk_stmt_info
info field from a loop to a loop_vec_info.
From-SVN: r263122
This patch adds a vec_info function for allocating and setting
stmt_vec_infos. It's the start of a long process of removing
the global stmt_vec_info array.
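A sketch of the new interface (the caller code here is hypothetical):
  gassign *new_stmt = gimple_build_assign (lhs, rhs);
  stmt_vec_info new_stmt_info = vinfo->add_stmt (new_stmt);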
2018-07-31 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (stmt_vec_info): Move typedef earlier in file.
(vec_info::add_stmt): Declare.
* tree-vectorizer.c (vec_info::add_stmt): New function.
* tree-vect-data-refs.c (vect_create_data_ref_ptr): Use it.
* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Likewise.
(vect_create_epilog_for_reduction, vectorizable_reduction): Likewise.
(vectorizable_induction): Likewise.
* tree-vect-slp.c (_bb_vec_info::_bb_vec_info): Likewise.
* tree-vect-stmts.c (vect_finish_stmt_generation_1): Likewise.
(vectorizable_simd_clone_call, vectorizable_store): Likewise.
(vectorizable_load): Likewise.
* tree-vect-patterns.c (vect_init_pattern_stmt): Likewise.
(vect_recog_bool_pattern, vect_recog_mask_conversion_pattern)
(vect_recog_gather_scatter_pattern): Likewise.
(append_pattern_def_seq): Likewise. Remove a check that is
performed by add_stmt itself.
From-SVN: r263121
2018-07-18 Richard Biener <rguenther@suse.de>
PR tree-optimization/86557
* tree-vect-patterns.c (vect_recog_divmod_pattern): Also handle
EXACT_DIV_EXPR.
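For reference, one common way EXACT_DIV_EXPR arises in GIMPLE is pointer
subtraction, where the byte difference is divided exactly by the element
size (a hypothetical example; 4 == sizeof (int) on common targets):
  long
  f (int *a, int *b)
  {
    return b - a;   /* gimplified as a byte difference /[ex] 4 */
  }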
From-SVN: r262854
This patch uses IFN_COND_* to vectorise conditionally-executed,
potentially-trapping arithmetic, such as most floating-point
ops with -ftrapping-math. E.g.:
  if (cond) { ... x = a + b; ... }
becomes:
  ...
  x = .COND_ADD (cond, a, b, else_value);
  ...
When this transformation is done on its own, the value of x for
!cond isn't important, so else_value is simply the target's
preferred_else_value (i.e. the value it can handle the most
efficiently).
However, the patch also looks for the equivalent of:
  y = cond ? x : c;
in which the "then" value is the result of the conditionally-executed
operation and the "else" value "c" is some value that is available at x.
In that case we can instead use:
  x = .COND_ADD (cond, a, b, c);
and replace uses of y with uses of x.
The patch also looks for:
  y = !cond ? c : x;
which can be transformed in the same way. This involved adding a new
utility function inverse_conditions_p, which was already open-coded
in a more limited way in match.pd.
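A C-level sketch of a loop this enables with -ftrapping-math (hypothetical
example, not one of the new tests):
  void
  f (double *res, double *a, double *b, double *c, int *cond, int n)
  {
    for (int i = 0; i < n; ++i)
      if (cond[i])
        res[i] = a[i] + b[i];   /* potentially-trapping FP add */
      else
        res[i] = c[i];
  }
After if-conversion the stores collapse and the add becomes
  res[i] = .COND_ADD (cond[i] != 0, a[i], b[i], c[i]);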
2018-07-12 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* fold-const.h (inverse_conditions_p): Declare.
* fold-const.c (inverse_conditions_p): New function.
* match.pd: Use inverse_conditions_p. Add folds of view_converts
that test the inverse condition of a conditional internal function.
* internal-fn.h (vectorized_internal_fn_supported_p): Declare.
* internal-fn.c (internal_fn_mask_index): Handle conditional
internal functions.
(vectorized_internal_fn_supported_p): New function.
* tree-if-conv.c: Include internal-fn.h and fold-const.h.
(any_pred_load_store): Replace with...
(need_to_predicate): ...this new variable.
(redundant_ssa_names): New variable.
(ifcvt_can_use_mask_load_store): Move initial checks to...
(ifcvt_can_predicate): ...this new function. Handle tree codes
for which a conditional internal function exists.
(if_convertible_gimple_assign_stmt_p): Use ifcvt_can_predicate
instead of ifcvt_can_use_mask_load_store. Update after variable
name change.
(predicate_load_or_store): New function, split out from
predicate_mem_writes.
(check_redundant_cond_expr): New function.
(value_available_p): Likewise.
(predicate_rhs_code): Likewise.
(predicate_mem_writes): Rename to...
(predicate_statements): ...this. Use predicate_load_or_store
and predicate_rhs_code.
(combine_blocks, tree_if_conversion): Update after above name changes.
(ifcvt_local_dce): Handle redundant_ssa_names.
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern): Handle
general conditional functions.
* tree-vect-stmts.c (vectorizable_call): Likewise.
gcc/testsuite/
* gcc.dg/vect/vect-cond-arith-4.c: New test.
* gcc.dg/vect/vect-cond-arith-5.c: Likewise.
* gcc.target/aarch64/sve/cond_arith_1.c: Likewise.
* gcc.target/aarch64/sve/cond_arith_1_run.c: Likewise.
* gcc.target/aarch64/sve/cond_arith_2.c: Likewise.
* gcc.target/aarch64/sve/cond_arith_2_run.c: Likewise.
* gcc.target/aarch64/sve/cond_arith_3.c: Likewise.
* gcc.target/aarch64/sve/cond_arith_3_run.c: Likewise.
From-SVN: r262589
The PR85694 series added a vectype argument to append_pattern_def_seq.
This patch makes more callers use it.
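A sketch of the updated call form (the optional vectype argument is
described in the r262276 entry further down; surrounding names are
hypothetical):
  append_pattern_def_seq (stmt_vinfo, def_stmt, vectype);
instead of creating a stmt_vec_info for def_stmt and setting its
STMT_VINFO_VECTYPE by hand.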
2018-07-03 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_recog_rotate_pattern)
(vect_recog_vector_vector_shift_pattern, vect_recog_divmod_pattern)
(vect_recog_mixed_size_cond_pattern, adjust_bool_pattern_cast)
(adjust_bool_pattern, vect_recog_bool_pattern): Pass the vector
type to append_pattern_def_seq instead of creating a stmt_vec_info
directly.
(build_mask_conversion): Likewise. Remove vinfo argument.
(vect_add_conversion_to_patterm): Likewise, renaming to...
(vect_add_conversion_to_pattern): ...this.
(vect_recog_mask_conversion_pattern): Update call to
build_mask_conversion. Pass the vector type to
append_pattern_def_seq here too.
(vect_recog_gather_scatter_pattern): Update call to
vect_add_conversion_to_pattern.
From-SVN: r262338
Various recognisers set PATTERN_DEF_SEQ to null before adding
statements to it, but it should always be null at that point anyway.
This patch asserts for that in vect_pattern_recog_1 and removes
the redundant code.
2018-07-03 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (new_pattern_def_seq): Delete.
(vect_recog_dot_prod_pattern, vect_recog_sad_pattern)
(vect_recog_widen_op_pattern, vect_recog_over_widening_pattern)
(vect_recog_rotate_pattern, vect_synth_mult_by_constant): Don't set
STMT_VINFO_PATTERN_DEF_SEQ to null here.
(vect_recog_pow_pattern, vect_recog_vector_vector_shift_pattern)
(vect_recog_mixed_size_cond_pattern, vect_recog_bool_pattern): Use
append_pattern_def_seq instead of new_pattern_def_seq.
(vect_recog_divmod_pattern): Do both of the above.
(vect_pattern_recog_1): Assert that STMT_VINFO_PATTERN_DEF_SEQ
is null.
From-SVN: r262337
The PR85694 series removed the only cases in which a pattern recogniser
could attach patterns to more than one statement. I think it would be
better to avoid adding any new instances of that, since it interferes
with the normal matching order.
This patch therefore switches the interface back to passing a single
statement instead of a vector. It also gets rid of the clearing of
STMT_VINFO_RELATED_STMT on failure, since no recognisers use it now.
2018-07-03 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_recog_dot_prod_pattern)
(vect_recog_sad_pattern, vect_recog_widen_op_pattern)
(vect_recog_widen_mult_pattern, vect_recog_pow_pattern)
(vect_recog_widen_sum_pattern, vect_recog_over_widening_pattern)
(vect_recog_average_pattern, vect_recog_cast_forwprop_pattern)
(vect_recog_widen_shift_pattern, vect_recog_rotate_pattern)
(vect_recog_vector_vector_shift_pattern, vect_synth_mult_by_constant)
(vect_recog_mult_pattern, vect_recog_divmod_pattern)
(vect_recog_mixed_size_cond_pattern, vect_recog_bool_pattern)
(vect_recog_mask_conversion_pattern): Replace vec<gimple *>
parameter with a single stmt_vec_info.
(vect_recog_func_ptr): Likewise.
(vect_recog_gather_scatter_pattern): Likewise, folding in...
(vect_try_gather_scatter_pattern): ...this.
(vect_pattern_recog_1): Remove stmts_to_replace and just pass
the stmt_vec_info of the statement to be matched. Don't clear
STMT_VINFO_RELATED_STMT.
(vect_pattern_recog): Update call accordingly.
From-SVN: r262336
This patch adds detection of average instructions:
  a = (((wide) b + (wide) c) >> 1);
  --> a = (wide) .AVG_FLOOR (b, c);
  a = (((wide) b + (wide) c + 1) >> 1);
  --> a = (wide) .AVG_CEIL (b, c);
in cases where users of "a" need only the low half of the result,
making the cast to (wide) redundant. The heavy lifting was done by
earlier patches.
This showed up another problem in vectorizable_call: if the call is a
pattern definition statement rather than the main pattern statement,
the type of the vectorised call might be different from the type of the
original statement.
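A C-level sketch of the kind of loop that now matches (along the lines of
the new vect-avg tests; details hypothetical):
  void
  avg_floor (unsigned char *restrict c, unsigned char *restrict a,
             unsigned char *restrict b, int n)
  {
    for (int i = 0; i < n; ++i)
      c[i] = (a[i] + b[i]) >> 1;   /* int arithmetic, but only the low
                                      8 bits are stored -> .AVG_FLOOR */
  }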
2018-07-03 Richard Sandiford <richard.sandiford@arm.com>
gcc/
PR tree-optimization/85694
* doc/md.texi (avgM3_floor, uavgM3_floor, avgM3_ceil)
(uavgM3_ceil): Document new optabs.
* doc/sourcebuild.texi (vect_avg_qi): Document new target selector.
* internal-fn.def (IFN_AVG_FLOOR, IFN_AVG_CEIL): New internal
functions.
* optabs.def (savg_floor_optab, uavg_floor_optab, savg_ceil_optab)
(savg_ceil_optab): New optabs.
* tree-vect-patterns.c (vect_recog_average_pattern): New function.
(vect_vect_recog_func_ptrs): Add it.
* tree-vect-stmts.c (vectorizable_call): Get the type of the zero
constant directly from the associated lhs.
gcc/testsuite/
PR tree-optimization/85694
* lib/target-supports.exp (check_effective_target_vect_avg_qi): New
proc.
* gcc.dg/vect/vect-avg-1.c: New test.
* gcc.dg/vect/vect-avg-2.c: Likewise.
* gcc.dg/vect/vect-avg-3.c: Likewise.
* gcc.dg/vect/vect-avg-4.c: Likewise.
* gcc.dg/vect/vect-avg-5.c: Likewise.
* gcc.dg/vect/vect-avg-6.c: Likewise.
* gcc.dg/vect/vect-avg-7.c: Likewise.
* gcc.dg/vect/vect-avg-8.c: Likewise.
* gcc.dg/vect/vect-avg-9.c: Likewise.
* gcc.dg/vect/vect-avg-10.c: Likewise.
* gcc.dg/vect/vect-avg-11.c: Likewise.
* gcc.dg/vect/vect-avg-12.c: Likewise.
* gcc.dg/vect/vect-avg-13.c: Likewise.
* gcc.dg/vect/vect-avg-14.c: Likewise.
From-SVN: r262335
The main over-widening patch can introduce quite a few extra casts,
and in many cases those casts simply "tap into" an intermediate
point in an existing extension. E.g. if we have:
  unsigned char a;
  int ax = (int) a;
and a later operation using ax is shortened to "unsigned short",
we would need:
  unsigned short ax' = (unsigned short) a;
The a->ax extension requires one set of unpacks to get to unsigned
short and another set of unpacks to get to int. The first set are
then duplicated for ax'. If both ax and ax' are needed, the a->ax'
extension would end up counting twice during cost calculations.
This patch rewrites the original:
  int ax = (int) a;
into a pattern:
  unsigned short ax' = (unsigned short) a;
  int ax = (int) ax';
so that each extension only counts once.
2018-07-03 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_split_statement): New function.
(vect_convert_input): Use it to try to split an existing cast.
gcc/testsuite/
* gcc.dg/vect/vect-over-widen-5.c: Test that the extensions
get split into two for use by the over-widening pattern.
* gcc.dg/vect/vect-over-widen-6.c: Likewise.
* gcc.dg/vect/vect-over-widen-7.c: Likewise.
* gcc.dg/vect/vect-over-widen-8.c: Likewise.
* gcc.dg/vect/vect-over-widen-9.c: Likewise.
* gcc.dg/vect/vect-over-widen-10.c: Likewise.
* gcc.dg/vect/vect-over-widen-11.c: Likewise.
* gcc.dg/vect/vect-over-widen-12.c: Likewise.
* gcc.dg/vect/vect-over-widen-13.c: Likewise.
* gcc.dg/vect/vect-over-widen-14.c: Likewise.
* gcc.dg/vect/vect-over-widen-15.c: Likewise.
* gcc.dg/vect/vect-over-widen-16.c: Likewise.
* gcc.dg/vect/vect-over-widen-22.c: New test.
From-SVN: r262334
This patch is the main part of PR85694. The aim is to recognise at least:
  signed char *a, *b, *c;
  ...
  for (int i = 0; i < 2048; i++)
    c[i] = (a[i] + b[i]) >> 1;
as an over-widening pattern, since the addition and shift can be done
on shorts rather than ints. However, it ended up being a lot more
general than that.
The current over-widening pattern detection is limited to a few simple
cases: logical ops with immediate second operands, and shifts by a
constant. These cases are enough for common pixel-format conversion
and can be detected in a peephole way.
The loop above requires two generalisations of the current code: support
for addition as well as logical ops, and support for non-constant second
operands. These are harder to detect in the same peephole way, so the
patch tries to take a more global approach.
The idea is to get information about the minimum operation width
in two ways:
(1) by using the range information attached to the SSA_NAMEs
(effectively a forward walk, since the range info is
context-independent).
(2) by back-propagating the number of output bits required by
users of the result.
As explained in the comments, there's a balance to be struck between
narrowing an individual operation and fitting in with the surrounding
code. The approach is pretty conservative: if we could narrow an
operation to N bits without changing its semantics, it's OK to do that if:
- no operations later in the chain require more than N bits; or
- all internally-defined inputs are extended from N bits or fewer,
and at least one of them is single-use.
See the comments for the rationale.
I didn't bother adding STMT_VINFO_* wrappers for the new fields
since the code seemed more readable without.
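A hypothetical illustration of the two sources of precision information:
  unsigned char a, b;
  int t = ((int) a + (int) b) >> 1;   /* (1) range info: a + b is in
                                         [0, 510], so 9 bits suffice  */
  unsigned char res = t;              /* (2) back-propagation: users only
                                         need the low 8 bits of t     */
Either observation lets the addition and shift be done on a type narrower
than int.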
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* poly-int.h (print_hex): New function.
* dumpfile.h (dump_dec, dump_hex): Declare.
* dumpfile.c (dump_dec, dump_hex): New poly_wide_int functions.
* tree-vectorizer.h (_stmt_vec_info): Add min_output_precision,
min_input_precision, operation_precision and operation_sign.
* tree-vect-patterns.c (vect_get_range_info): New function.
(vect_same_loop_or_bb_p, vect_single_imm_use)
(vect_operation_fits_smaller_type): Delete.
(vect_look_through_possible_promotion): Add an optional
single_use_p parameter.
(vect_recog_over_widening_pattern): Rewrite to use new
stmt_vec_info information. Handle one operation at a time.
(vect_recog_cast_forwprop_pattern, vect_narrowable_type_p)
(vect_truncatable_operation_p, vect_set_operation_type)
(vect_set_min_input_precision): New functions.
(vect_determine_min_output_precision_1): Likewise.
(vect_determine_min_output_precision): Likewise.
(vect_determine_precisions_from_range): Likewise.
(vect_determine_precisions_from_users): Likewise.
(vect_determine_stmt_precisions, vect_determine_precisions): Likewise.
(vect_vect_recog_func_ptrs): Put over_widening first.
Add cast_forwprop.
(vect_pattern_recog): Call vect_determine_precisions.
gcc/testsuite/
* gcc.dg/vect/vect-widen-mult-u8-u32.c: Check specifically for a
widen_mult pattern.
* gcc.dg/vect/vect-over-widen-1.c: Update the scan tests for new
over-widening messages.
* gcc.dg/vect/vect-over-widen-1-big-array.c: Likewise.
* gcc.dg/vect/vect-over-widen-2.c: Likewise.
* gcc.dg/vect/vect-over-widen-2-big-array.c: Likewise.
* gcc.dg/vect/vect-over-widen-3.c: Likewise.
* gcc.dg/vect/vect-over-widen-3-big-array.c: Likewise.
* gcc.dg/vect/vect-over-widen-4.c: Likewise.
* gcc.dg/vect/vect-over-widen-4-big-array.c: Likewise.
* gcc.dg/vect/bb-slp-over-widen-1.c: New test.
* gcc.dg/vect/bb-slp-over-widen-2.c: Likewise.
* gcc.dg/vect/vect-over-widen-5.c: Likewise.
* gcc.dg/vect/vect-over-widen-6.c: Likewise.
* gcc.dg/vect/vect-over-widen-7.c: Likewise.
* gcc.dg/vect/vect-over-widen-8.c: Likewise.
* gcc.dg/vect/vect-over-widen-9.c: Likewise.
* gcc.dg/vect/vect-over-widen-10.c: Likewise.
* gcc.dg/vect/vect-over-widen-11.c: Likewise.
* gcc.dg/vect/vect-over-widen-12.c: Likewise.
* gcc.dg/vect/vect-over-widen-13.c: Likewise.
* gcc.dg/vect/vect-over-widen-14.c: Likewise.
* gcc.dg/vect/vect-over-widen-15.c: Likewise.
* gcc.dg/vect/vect-over-widen-16.c: Likewise.
* gcc.dg/vect/vect-over-widen-17.c: Likewise.
* gcc.dg/vect/vect-over-widen-18.c: Likewise.
* gcc.dg/vect/vect-over-widen-19.c: Likewise.
* gcc.dg/vect/vect-over-widen-20.c: Likewise.
* gcc.dg/vect/vect-over-widen-21.c: Likewise.
From-SVN: r262333
r262275 allowed pattern matching on pattern statements. Testing for
SVE on more benchmarks showed a case where this interacted badly
with 14/n.
The new over-widening detection could narrow a COND_EXPR A to another
COND_EXPR B, which mixed_size_cond could then match. This was working
as expected. However, we left B (now dead) in the pattern definition
sequence with a non-null PATTERN_DEF_SEQ. mask_conversion also
matched B, and unlike most recognisers, didn't clear PATTERN_DEF_SEQ
before adding statements to it. This meant that the statements
created by mixed_size_cond appeared in two supposedly separate
sequences, causing much confusion.
This patch removes pattern statements that are replaced by further
pattern statements. As a belt-and-braces fix, it also nullifies
PATTERN_DEF_SEQ on failure, in the same way Richard B. did recently
for RELATED_STMT.
I have patches to clean up the PATTERN_DEF_SEQ handling, but they
only apply after the complete PR85694 sequence, whereas this needs
to go in before 14/n.
2018-07-03 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_mark_pattern_stmts): Remove pattern
statements that have been replaced by further pattern statements.
(vect_pattern_recog_1): Clear STMT_VINFO_PATTERN_DEF_SEQ on failure.
gcc/testsuite/
* gcc.dg/vect/vect-mixed-size-cond-1.c: New test.
From-SVN: r262332
Noticed by Christophe on arm-none-linux-gnueabihf.
2018-07-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_recog_widen_shift_pattern): Fix typo
in dump string.
From-SVN: r262308
vect_recog_rotate_pattern had code to prevent operations
on invariants being vectorised unnecessarily:
  if (dt == vect_external_def
      && TREE_CODE (oprnd1) == SSA_NAME
      && is_a <loop_vec_info> (vinfo))
    {
      struct loop *loop = as_a <loop_vec_info> (vinfo)->loop;
      ext_def = loop_preheader_edge (loop);
      if (!SSA_NAME_IS_DEFAULT_DEF (oprnd1))
        {
          basic_block bb = gimple_bb (SSA_NAME_DEF_STMT (oprnd1));
          if (bb == NULL
              || !dominated_by_p (CDI_DOMINATORS, ext_def->dest, bb))
            ext_def = NULL;
        }
    }
  [..]
  if (ext_def)
    {
      basic_block new_bb
        = gsi_insert_on_edge_immediate (ext_def, def_stmt);
      gcc_assert (!new_bb);
    }
This patch reuses the same idea for casts of invariants created
during widening optimisations.
One hitch was that vect_loop_versioning asserted that the vector loop
preheader was still empty, although the cfg transformation it's doing
should be correct either way.
2018-06-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_get_external_def_edge): New function,
split out from...
(vect_recog_rotate_pattern): ...here.
(vect_convert_input): Try to insert casts of invariants in the
preheader.
* tree-vect-loop-manip.c (vect_loop_versioning): Don't require the
preheader to be empty.
gcc/testsuite/
* gcc.dg/vect/vect-widen-mult-extern-1.c: New test.
From-SVN: r262277
This patch adds helper functions for detecting widened operations and
generalises the existing code to handle more cases.
One of the main changes is to recognise multi-stage type conversions,
which are possible even in the original IR and can also occur as a
result of earlier pattern matching (especially after the main
over-widening patch). E.g. for:
  unsigned int res = 0;
  for (__INTPTR_TYPE__ i = 0; i < N; ++i)
    {
      int av = a[i];
      int bv = b[i];
      short diff = av - bv;
      unsigned short abs = diff < 0 ? -diff : diff;
      res += abs;
    }
we have:
  _9 = _7 - _8;
  diff_20 = (short int) _9;
  _10 = (int) diff_20;
  _11 = ABS_EXPR <_10>;
where the first cast establishes the sign of the promotion done
by the second cast.
vect_recog_sad_pattern didn't handle this kind of intermediate promotion
between the MINUS_EXPR and the ABS_EXPR. Sign extensions and casts from
unsigned to signed are both OK there. Unsigned promotions aren't, and
need to be rejected, but should have been folded away earlier anyway.
Also, the dot_prod and widen_sum patterns both required the promotions
to be from one signedness to the same signedness, rather than say signed
char to unsigned int. That shouldn't be necessary, since it's only the
sign of the input to the promotion that matters. Nothing requires the
narrow and wide types in a DOT_PROD_EXPR or WIDEN_SUM_EXPR to have the
same sign (and IMO that's a good thing).
Fixing these fixed an XFAIL in gcc.dg/vect/vect-widen-mult-sum.c.
vect_widened_op_tree is a bit more general than the current patch needs,
since it copes with a tree of operations rather than a single statement.
This is used by the later average-detection patch.
The patch also uses a common routine to handle both the WIDEN_MULT_EXPR
and WIDEN_LSHIFT_EXPR patterns. I hope this could be reused for other
similar operations in future.
Also, the patch means we recognise the index calculations in
vect-mult-const-pattern*.c as widening multiplications, whereas the
scan test was expecting them to be recognised as mult patterns instead.
The patch makes the tests check specifically for the multiplication we
care about.
2018-06-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (append_pattern_def_seq): Take an optional
vector type. If given, install it in the new statement's
STMT_VINFO_VECTYPE.
(vect_element_precision): New function.
(vect_unpromoted_value): New struct.
(vect_unpromoted_value::vect_unpromoted_value): New function.
(vect_unpromoted_value::set_op): Likewise.
(vect_look_through_possible_promotion): Likewise.
(vect_joust_widened_integer, vect_joust_widened_type): Likewise.
(vect_widened_op_tree, vect_convert_input): Likewise.
(vect_convert_inputs, vect_convert_output): Likewise.
(vect_recog_dot_prod_pattern): Use vect_look_through_possible_promotion
to handle the optional cast of the multiplication result and
vect_widened_op_tree to detect the widened multiplication itself.
Do not require the input and output of promotion casts to have
the same sign, but base the signedness of the operation on the
input rather than the result. If the pattern includes two
promotions, check that those promotions have the same sign.
Do not restrict the MULT_EXPR handling to a double-width result;
handle quadruple-width results and wider. Use vect_convert_inputs
to convert the inputs to the common type.
(vect_recog_sad_pattern): Use vect_look_through_possible_promotion
to handle the optional cast of the ABS result. Also allow a sign
change or a sign extension between the ABS and MINUS.
Use vect_widened_op_tree to detect the widened subtraction and use
vect_convert_inputs to convert the inputs to the common type.
(vect_handle_widen_op_by_const): Delete.
(vect_recog_widen_op_pattern): New function.
(vect_recog_widen_mult_pattern): Use it.
(vect_recog_widen_shift_pattern): Likewise.
(vect_recog_widen_sum_pattern): Use
vect_look_through_possible_promotion to handle the promoted
PLUS_EXPR operand.
gcc/testsuite/
* gcc.dg/vect/vect-widen-mult-sum.c: Remove xfail.
* gcc.dg/vect/no-scevccp-outer-6.c: Don't match widened multiplications
by 4 in the computation of a[i].
* gcc.dg/vect/vect-mult-const-pattern-1.c: Test specifically for the
main multiplication constant.
* gcc.dg/vect/vect-mult-const-pattern-2.c: Likewise.
* gcc.dg/vect/vect-widen-mult-const-s16.c: Likewise.
* gcc.dg/vect/vect-widen-mult-const-u16.c: Likewise. Expect the
pattern to cast the result to int.
* gcc.dg/vect/vect-reduc-dot-1.c: New test.
* gcc.dg/vect/vect-reduc-dot-2.c: Likewise.
* gcc.dg/vect/vect-reduc-dot-3.c: Likewise.
* gcc.dg/vect/vect-reduc-dot-4.c: Likewise.
* gcc.dg/vect/vect-reduc-dot-5.c: Likewise.
* gcc.dg/vect/vect-reduc-dot-6.c: Likewise.
* gcc.dg/vect/vect-reduc-dot-7.c: Likewise.
* gcc.dg/vect/vect-reduc-dot-8.c: Likewise.
* gcc.dg/vect/vect-reduc-sad-1.c: Likewise.
* gcc.dg/vect/vect-reduc-sad-2.c: Likewise.
* gcc.dg/vect/vect-reduc-sad-3.c: Likewise.
* gcc.dg/vect/vect-reduc-sad-4.c: Likewise.
* gcc.dg/vect/vect-reduc-sad-5.c: Likewise.
* gcc.dg/vect/vect-reduc-sad-6.c: Likewise.
* gcc.dg/vect/vect-reduc-sad-7.c: Likewise.
* gcc.dg/vect/vect-reduc-sad-8.c: Likewise.
* gcc.dg/vect/vect-widen-mult-1.c: Likewise.
* gcc.dg/vect/vect-widen-mult-2.c: Likewise.
* gcc.dg/vect/vect-widen-mult-3.c: Likewise.
* gcc.dg/vect/vect-widen-mult-4.c: Likewise.
From-SVN: r262276
Although the first pattern match wins in the sense that no later
function can match the *old* gimple statement, it still seems worth
letting them match the *new* gimple statements, just like we would if
the original IR had included that sequence from the outset.
This is mostly true after the later patch for PR85694, where e.g. we
could recognise:
  signed char a;
  int ap = (int) a;
  int res = ap * 3;
as the pattern:
  short ap' = (short) a;
  short res = ap' * 3;   // S1: definition statement
  int res = (int) res;   // S2: pattern statement
and then apply the mult pattern to "ap' * 3". The patch needs to
come first (without its own test cases) so that the main over-widening
patch doesn't regress anything.
2018-06-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* gimple-iterator.c (gsi_for_stmt): Add a new overload that takes
the containing gimple_seq *.
* gimple-iterator.h (gsi_for_stmt): Declare it.
* tree-vect-patterns.c (vect_recog_dot_prod_pattern)
(vect_recog_sad_pattern, vect_recog_widen_sum_pattern)
(vect_recog_widen_shift_pattern, vect_recog_rotate_pattern)
(vect_recog_vector_vector_shift_pattern, vect_recog_divmod_pattern)
(vect_recog_mask_conversion_pattern): Remove STMT_VINFO_IN_PATTERN_P
checks.
(vect_init_pattern_stmt, vect_set_pattern_stmt): New functions,
split out from...
(vect_mark_pattern_stmts): ...here. Handle cases in which the
statement being replaced is part of an existing pattern
definition sequence, inserting the new pattern statements before
the original one.
(vect_pattern_recog_1): Don't return a bool. If the statement
is already part of a pattern, instead apply pattern matching
to the pattern definition statements. Don't clear the
STMT_VINFO_RELATED_STMT if is_pattern_stmt_p.
(vect_pattern_recog): Don't break after the first match;
continue processing the pattern definition statements instead.
Don't bail out for STMT_VINFO_IN_PATTERN_P here.
From-SVN: r262275
This patch adds an overload of vect_reassociating_reduction_p
that checks for a vectorizable associative reduction,
since the check was duplicated in three functions.
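For reference, the kind of reduction all three recognisers guard with this
check (hypothetical example):
  int
  dot (signed char *a, signed char *b, int n)
  {
    int res = 0;
    for (int i = 0; i < n; ++i)
      res += a[i] * b[i];   /* reassociating reduction; DOT_PROD candidate */
    return res;
  }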
2018-06-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_reassociating_reduction_p): New function.
(vect_recog_dot_prod_pattern, vect_recog_sad_pattern)
(vect_recog_widen_sum_pattern): Use it.
From-SVN: r262274
As suggested by Richard B., this patch makes vect_is_simple_use check
whether a defining statement has been replaced by a pattern statement,
and if so returns the pattern statement instead.
The reason for doing this is that the main patch for PR85694
makes over_widening handle more general cases. These over-widened
patterns can still be useful when matching later statements;
e.g. an overwidened MULT_EXPR could be the input to a DOT_PROD_EXPR.
The patch doesn't do anything with the STMT_VINFO_IN_PATTERN_P checks
in vect_recog_over_widening_pattern or vect_recog_widen_shift_pattern
since later patches rewrite them anyway.
Doing this fixed an XFAIL in vect-reduc-dot-u16b.c.
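Roughly, the new behaviour amounts to the following inside
vect_is_simple_use (a sketch, not the exact code):
  if (*def_stmt && vect_stmt_in_region_p (vinfo, *def_stmt))
    {
      stmt_vec_info def_vinfo = vinfo_for_stmt (*def_stmt);
      if (STMT_VINFO_IN_PATTERN_P (def_vinfo))
        *def_stmt = STMT_VINFO_RELATED_STMT (def_vinfo);
    }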
2018-06-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-loop.c (vectorizable_reduction): Assert that the
phi is not a pattern statement and has not been replaced by
a pattern statement.
* tree-vect-patterns.c (type_conversion_p): Don't check
STMT_VINFO_IN_PATTERN_P.
(vect_recog_vector_vector_shift_pattern): Likewise.
(vect_recog_dot_prod_pattern): Expect vect_is_simple_use to return
the pattern statement rather than the original statement; check
directly for a WIDEN_MULT_EXPR here.
* tree-vect-slp.c (vect_get_and_check_slp_defs): Expect
vect_is_simple_use to return the pattern statement rather
than the original statement; use is_pattern_stmt_p to check
for such a pattern statement.
* tree-vect-stmts.c (process_use): Expect vect_is_simple_use
to return the pattern statement rather than the original statement;
don't do the same transformation here.
(vect_is_simple_use): If the defining statement has been replaced
by a pattern statement, return the pattern statement instead.
Remove the corresponding (local) transformation from the vectype
overload.
gcc/testsuite/
* gcc.dg/vect/vect-reduc-dot-u16b.c: Remove xfail and update the
test for vectorization along the lines described in the comment.
From-SVN: r262273
As suggested by Richard B., this patch reorders the arguments to
vect_is_simple_use so that def_stmt comes last and is optional.
Many callers can then drop it, making it more obvious which of
the remaining calls would be affected by the next patch.
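In concrete terms, callers now look like one of (a sketch):
  vect_is_simple_use (op, vinfo, &dt);             /* def_stmt not needed */
  vect_is_simple_use (op, vinfo, &dt, &def_stmt);  /* def_stmt still used */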
2018-06-30 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_is_simple_use): Move the gimple ** to the
end and default to null.
* tree-vect-loop.c (vect_create_epilog_for_reduction)
(vectorizable_reduction): Update calls accordingly, dropping the
gimple ** argument if the passed-back statement isn't needed.
* tree-vect-patterns.c (vect_get_internal_def, type_conversion_p)
(vect_recog_rotate_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
* tree-vect-slp.c (vect_get_and_check_slp_defs): Likewise.
(vect_mask_constant_operand_p): Likewise.
* tree-vect-stmts.c (is_simple_and_all_uses_invariant, process_use)
(vect_model_simple_cost, vect_get_vec_def_for_operand): Likewise.
(get_group_load_store_type, get_load_store_type): Likewise.
(vect_check_load_store_mask, vect_check_store_rhs): Likewise.
(vectorizable_call, vectorizable_simd_clone_call): Likewise.
(vectorizable_conversion, vectorizable_assignment): Likewise.
(vectorizable_shift, vectorizable_operation): Likewise.
(vectorizable_store, vect_is_simple_cond): Likewise.
(vectorizable_condition, vectorizable_comparison): Likewise.
(get_same_sized_vectype, vect_get_mask_type_for_stmt): Likewise.
(vect_is_simple_use): Rename the def_stmt argument to def_stmt_out
and move it to the end. Cope with null def_stmt_outs.
From-SVN: r262272
2018-06-21 Richard Biener <rguenther@suse.de>
* tree-data-ref.c (dr_step_indicator): Handle NULL DR_STEP.
* tree-vect-data-refs.c (vect_analyze_possibly_independent_ddr):
Avoid calling vect_mark_for_runtime_alias_test with gathers or scatters.
(vect_analyze_data_ref_dependence): Re-order checks to deal with
NULL DR_STEP.
(vect_record_base_alignments): Do not record base alignment
for gathers or scatters.
(vect_compute_data_ref_alignment): Drop return value that is always
true. Bail out early for gathers or scatters.
(vect_enhance_data_refs_alignment): Bail out early for gathers
or scatters.
(vect_find_same_alignment_drs): Likewise.
(vect_analyze_data_refs_alignment): Remove dead code.
(vect_slp_analyze_and_verify_node_alignment): Likewise.
(vect_analyze_data_refs): For possible gathers or scatters do
not create an alternate DR, just check their possible validity
and mark them. Adjust DECL_NONALIASED handling to not rely
on DR_BASE_ADDRESS.
* tree-vect-loop-manip.c (vect_update_inits_of_drs): Do not
update inits of gathers or scatters.
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern):
Also copy gather/scatter flag to pattern vinfo.
From-SVN: r261834
This patch makes pattern recognisers do their own checking for vector
types and target support. Previously some recognisers did this
themselves and some left it to vect_pattern_recog_1.
Doing this means we can get rid of the type_in argument, which was
ignored if the recogniser did its own checking. It also means
we create fewer junk statements.
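A sketch of the new helper (argument order and the optional output
parameter are assumed from the uses listed below):
  static bool
  vect_supportable_direct_optab_p (tree otype, tree_code code, tree itype,
                                   tree *vecotype_out,
                                   tree *vecitype_out = NULL)
  {
    tree vecitype = get_vectype_for_scalar_type (itype);
    tree vecotype = get_vectype_for_scalar_type (otype);
    if (!vecitype || !vecotype)
      return false;

    optab optab = optab_for_tree_code (code, vecitype, optab_default);
    if (!optab)
      return false;

    insn_code icode = optab_handler (optab, TYPE_MODE (vecitype));
    if (icode == CODE_FOR_nothing
        || insn_data[icode].operand[0].mode != TYPE_MODE (vecotype))
      return false;

    *vecotype_out = vecotype;
    if (vecitype_out)
      *vecitype_out = vecitype;
    return true;
  }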
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (NUM_PATTERNS, vect_recog_func_ptr): Move to
tree-vect-patterns.c.
* tree-vect-patterns.c (vect_supportable_direct_optab_p): New function.
(vect_recog_dot_prod_pattern): Use it. Remove the type_in argument.
(vect_recog_sad_pattern): Likewise.
(vect_recog_widen_sum_pattern): Likewise.
(vect_recog_pow_pattern): Likewise. Check for a null vectype.
(vect_recog_widen_shift_pattern): Remove the type_in argument.
(vect_recog_rotate_pattern): Likewise.
(vect_recog_mult_pattern): Likewise.
(vect_recog_vector_vector_shift_pattern): Likewise.
(vect_recog_divmod_pattern): Likewise.
(vect_recog_mixed_size_cond_pattern): Likewise.
(vect_recog_bool_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
(vect_try_gather_scatter_pattern): Likewise.
(vect_recog_widen_mult_pattern): Likewise. Check for a null vectype.
(vect_recog_over_widening_pattern): Likewise.
(vect_recog_gather_scatter_pattern): Likewise.
(vect_recog_func_ptr): Move from tree-vectorizer.h
(vect_vect_recog_func_ptrs): Move further down the file.
(vect_recog_func): Likewise. Remove the third argument.
(NUM_PATTERNS): Define based on vect_vect_recog_func_ptrs.
(vect_pattern_recog_1): Expect the pattern function to do any
necessary target tests. Also expect it to provide a vector type.
Remove the type_in handling.
From-SVN: r261791
This message is a long write-up for a patch that simply adds a common
routine for printing the "vector_foo_pattern: detected:" messages.
The reason for doing this is that some routines check for target support
themselves and some leave it to vect_pattern_recog_1. Those that leave
it to vect_pattern_recog_1 currently print these "detected:" messages if
the statements have the right form, even if the pattern is eventually
discarded. IMO that's useful, and a lot of existing scan tests rely on it.
However, a later patch makes patterns do their own testing, and stops
them creating pattern statements until the tests have passed. This means
(a) they need to print the "detected:" message earlier and (b) the pattern
statement won't be around to print.
The patch therefore makes all routines print the original statement
rather than the pattern one. That information isn't obvious otherwise,
whereas vect_pattern_recog_1 already prints the pattern statement
in the case of a successful match. This also avoids the previous
situation in which a routine could print "detected:" and then
silently bail out before saying what had been detected.
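The helper is essentially (a sketch of the assumed form):
  static void
  vect_pattern_detected (const char *name, gimple *stmt)
  {
    if (dump_enabled_p ())
      {
        dump_printf_loc (MSG_NOTE, vect_location, "%s: detected:\n", name);
        dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
      }
  }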
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_pattern_detected): New function.
(vect_recog_dot_prod_pattern, vect_recog_sad_pattern)
(vect_recog_widen_mult_pattern, vect_recog_widen_sum_pattern)
(vect_recog_over_widening_pattern, vect_recog_widen_shift_pattern)
(vect_recog_rotate_pattern, vect_recog_vector_vector_shift_pattern)
(vect_recog_mult_pattern, vect_recog_divmod_pattern)
(vect_recog_mixed_size_cond_pattern, vect_recog_bool_pattern)
(vect_recog_mask_conversion_pattern)
(vect_try_gather_scatter_pattern): Likewise.
From-SVN: r261790
This patch adds a helper for pattern code that wants to find an
internal (vectorisable) definition of an SSA name.
A later patch will make more use of this, and alter the definition.
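A sketch of the helper (assumed form, using the vect_is_simple_use
interface as it stood before the r262272 reordering in the entry above):
  static gimple *
  vect_get_internal_def (vec_info *vinfo, tree op)
  {
    vect_def_type dt;
    gimple *def_stmt;
    if (TREE_CODE (op) == SSA_NAME
        && vect_is_simple_use (op, vinfo, &def_stmt, &dt)
        && dt == vect_internal_def)
      return def_stmt;
    return NULL;
  }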
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_get_internal_def): New function.
(vect_recog_dot_prod_pattern, vect_recog_sad_pattern)
(vect_recog_vector_vector_shift_pattern, check_bool_pattern)
(search_type_for_mask_1): Use it.
From-SVN: r261789
vect_recog_dot_prod_pattern and vect_recog_sad_pattern both checked
whether the statement passed in had already been recognised as a
WIDEN_SUM_EXPR pattern. That isn't possible (any more?), since the
first recognised pattern wins, and since vect_recog_widen_sum_pattern
never matches a later statement than the one it's given.
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_recog_dot_prod_pattern): Remove
redundant WIDEN_SUM_EXPR handling.
(vect_recog_sad_pattern): Likewise.
From-SVN: r261788
tree-vect-patterns.c checked that operands to primitive arithmetic ops
are compatible with each other and with the result. The checks date
back years and have long been redundant with verify_gimple_stmt.
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_recog_dot_prod_pattern): Remove
redundant check that the types of a PLUS_EXPR or MULT_EXPR agree.
(vect_recog_sad_pattern): Likewise PLUS_EXPR, ABS_EXPR and MINUS_EXPR.
(vect_recog_widen_mult_pattern): Likewise MULT_EXPR.
(vect_recog_widen_sum_pattern): Likewise PLUS_EXPR.
From-SVN: r261787
A pattern's PATTERN_DEF_SEQ was attached to both the original statement
and the main pattern statement, which made it harder to update later.
This patch attaches it to just the original statement. In practice,
anything that cared had ready access to both.
2018-06-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (_stmt_vec_info): Note above pattern_def_seq
that the sequence is attached to the original statement rather
than the pattern statement.
* tree-vect-loop.c (vect_determine_vf_for_stmt): Take the
PATTERN_DEF_SEQ from the original statement rather than
the main pattern statement.
* tree-vect-stmts.c (free_stmt_vec_info): Likewise.
* tree-vect-patterns.c (vect_recog_dot_prod_pattern): Likewise.
(vect_mark_pattern_stmts): Don't copy the PATTERN_DEF_SEQ.
From-SVN: r261785
2018-06-19 Richard Biener <rguenther@suse.de>
PR tree-optimization/86179
* tree-vect-patterns.c (vect_pattern_recog_1): Clean up
after failed recognition.
* gcc.dg/pr86179.c: New testcase.
From-SVN: r261731
gcc/ChangeLog:
* tree-vect-data-refs.c (vect_analyze_data_ref_dependences):
Replace dump_printf_loc call with DUMP_VECT_SCOPE.
(vect_slp_analyze_instance_dependence): Likewise.
(vect_enhance_data_refs_alignment): Likewise.
(vect_analyze_data_refs_alignment): Likewise.
(vect_slp_analyze_and_verify_instance_alignment): Likewise.
(vect_analyze_data_ref_accesses): Likewise.
(vect_prune_runtime_alias_test_list): Likewise.
(vect_analyze_data_refs): Likewise.
* tree-vect-loop-manip.c (vect_update_inits_of_drs): Likewise.
* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
(vect_analyze_scalar_cycles_1): Likewise.
(vect_get_loop_niters): Likewise.
(vect_analyze_loop_form_1): Likewise.
(vect_update_vf_for_slp): Likewise.
(vect_analyze_loop_operations): Likewise.
(vect_analyze_loop): Likewise.
(vectorizable_induction): Likewise.
(vect_transform_loop): Likewise.
* tree-vect-patterns.c (vect_pattern_recog): Likewise.
* tree-vect-slp.c (vect_analyze_slp): Likewise.
(vect_make_slp_decision): Likewise.
(vect_detect_hybrid_slp): Likewise.
(vect_slp_analyze_operations): Likewise.
(vect_slp_bb): Likewise.
* tree-vect-stmts.c (vect_mark_stmts_to_be_vectorized): Likewise.
(vectorizable_bswap): Likewise.
(vectorizable_call): Likewise.
(vectorizable_simd_clone_call): Likewise.
(vectorizable_conversion): Likewise.
(vectorizable_assignment): Likewise.
(vectorizable_shift): Likewise.
(vectorizable_operation): Likewise.
* tree-vectorizer.h (DUMP_VECT_SCOPE): New macro.
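For reference, the macro is along these lines (a sketch of the assumed
initial form; it replaces the open-coded "=== ... ===" notes):
  #define DUMP_VECT_SCOPE(MSG) \
    do { \
      if (dump_enabled_p ()) \
        dump_printf_loc (MSG_NOTE, vect_location, "=== " MSG " ===\n"); \
    } while (0)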
From-SVN: r261710
PR middle-end/64946 (gcc.target/aarch64/vect-abs-compile.c - "abs" vectorization fails for char/short types)
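ABSU_EXPR computes the absolute value of a signed operand but yields the
corresponding unsigned type, so even ABSU (INT_MIN) is well defined. In C
terms (a hypothetical illustration):
  unsigned int
  absu (int x)
  {
    return x < 0 ? -(unsigned int) x : (unsigned int) x;
  }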
gcc/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
PR middle-end/64946
* cfgexpand.c (expand_debug_expr): Handle ABSU_EXPR.
* config/i386/i386.c (ix86_add_stmt_cost): Likewise.
* dojump.c (do_jump): Likewise.
* expr.c (expand_expr_real_2): Check operand type's sign.
* fold-const.c (const_unop): Handle ABSU_EXPR.
(fold_abs_const): Likewise.
* gimple-pretty-print.c (dump_unary_rhs): Likewise.
* gimple-ssa-backprop.c (backprop::process_assign_use): Likewise.
(strip_sign_op_1): Likewise.
* match.pd: Add new pattern to generate ABSU_EXPR.
* optabs-tree.c (optab_for_tree_code): Handle ABSU_EXPR.
* tree-cfg.c (verify_gimple_assign_unary): Likewise.
* tree-eh.c (operation_could_trap_helper_p): Likewise.
* tree-inline.c (estimate_operator_cost): Likewise.
* tree-pretty-print.c (dump_generic_node): Likewise.
* tree-vect-patterns.c (vect_recog_sad_pattern): Likewise.
* tree.def (ABSU_EXPR): New.
gcc/c-family/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
* c-common.c (c_common_truthvalue_conversion): Handle ABSU_EXPR.
gcc/c/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
* c-typeck.c (build_unary_op): Handle ABSU_EXPR.
* gimple-parser.c (c_parser_gimple_statement): Likewise.
(c_parser_gimple_unary_expression): Likewise.
gcc/cp/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
* constexpr.c (potential_constant_expression_1): Handle ABSU_EXPR.
* cp-gimplify.c (cp_fold): Likewise.
gcc/testsuite/ChangeLog:
2018-06-16 Kugan Vivekanandarajah <kuganv@linaro.org>
PR middle-end/64946
* gcc.dg/absu.c: New test.
* gcc.dg/gimplefe-29.c: New test.
* gcc.target/aarch64/pr64946.c: New test.
From-SVN: r261681
2018-06-13 Richard Biener <rguenther@suse.de>
* tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern):
Properly set vector type of the intermediate stmt.
* tree-vect-stmts.c (vectorizable_operation): The destination
var always has vectype_out type.
From-SVN: r261553
2018-06-01 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vect_dr_stmt): New function.
(vect_get_load_cost): Adjust.
(vect_get_store_cost): Likewise.
* tree-vect-data-refs.c (vect_analyze_data_ref_dependence):
Use vect_dr_stmt instead of DR_STMT.
(vect_record_base_alignments): Likewise.
(vect_calculate_target_alignment): Likewise.
(vect_compute_data_ref_alignment): Likewise and make static.
(vect_update_misalignment_for_peel): Likewise.
(vect_verify_datarefs_alignment): Likewise.
(vector_alignment_reachable_p): Likewise.
(vect_get_data_access_cost): Likewise. Pass down
vinfo to vect_get_load_cost/vect_get_store_cost instead of DR.
(vect_get_peeling_costs_all_drs): Likewise.
(vect_peeling_hash_get_lowest_cost): Likewise.
(vect_enhance_data_refs_alignment): Likewise.
(vect_find_same_alignment_drs): Likewise.
(vect_analyze_data_refs_alignment): Likewise.
(vect_analyze_group_access_1): Likewise.
(vect_analyze_group_access): Likewise.
(vect_analyze_data_ref_access): Likewise.
(vect_analyze_data_ref_accesses): Likewise.
(vect_vfa_segment_size): Likewise.
(vect_small_gap_p): Likewise.
(vectorizable_with_step_bound_p): Likewise.
(vect_prune_runtime_alias_test_list): Likewise.
(vect_analyze_data_refs): Likewise.
(vect_supportable_dr_alignment): Likewise.
* tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
(vect_gen_prolog_loop_niters): Likewise.
* tree-vect-loop.c (vect_analyze_loop_2): Likewise.
* tree-vect-patterns.c (vect_recog_bool_pattern): Do not
modify DR_STMT.
(vect_recog_mask_conversion_pattern): Likewise.
(vect_try_gather_scatter_pattern): Likewise.
* tree-vect-stmts.c (vect_model_store_cost): Pass stmt_info
to vect_get_store_cost.
(vect_get_store_cost): Get stmt_info instead of DR.
(vect_model_load_cost): Pass stmt_info to vect_get_load_cost.
(vect_get_load_cost): Get stmt_info instead of DR.
From-SVN: r261062
vect_recog_divmod_pattern currently bails out if the target has
native support for integer division, but I think in practice
it's always going to be better to open-code it anyway, just as
we usually open-code scalar divisions by constants.
I think the only currently affected targets are MIPS MSA and
powerpcspe (which is currently marked obsolete). For:
  void
  foo (int *x)
  {
    for (int i = 0; i < 100; ++i)
      x[i] /= 2;
  }
the MSA port previously preferred to use division for powers of 2:
  .set noreorder
  bnz.w $w1,1f
  div_s.w $w0,$w0,$w1
  break 7
  .set reorder
1:
(or just the div_s.w for -mno-check-zero-division), but after the patch
it open-codes them using shifts:
  clt_s.w $w1,$w0,$w2
  subv.w $w0,$w0,$w1
  srai.w $w0,$w0,1
MSA doesn't define a high-part pattern, so it still uses a division
instruction for the non-power-of-2 case.
Richard B pointed out that this would disable SLP of division by
different amounts, but I think in practice that's a price worth paying,
since the current cost model can't really tell whether using a general
vector division is better than using open-coded scalar divisions.
The fix would be either to support SLP of mixed open-coded divisions
or to improve the cost model and try SLP again without the patterns.
The patch adds an XFAILed test for this.
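In C terms, the open-coded signed power-of-2 case above is (a sketch,
assuming arithmetic right shifts for signed types):
  int
  div2 (int x)
  {
    return (x + (x < 0)) >> 1;   /* == x / 2, rounding towards zero */
  }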
2018-05-23 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* tree-vect-patterns.c: Include predict.h.
(vect_recog_divmod_pattern): Restrict check for division support
to when optimizing for size.
gcc/testsuite/
* gcc.dg/vect/bb-slp-div-1.c: New XFAILed test.
From-SVN: r260711
2018-05-25 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (STMT_VINFO_GROUP_*, GROUP_*): Remove.
(DR_GROUP_*): New, assert we have non-NULL ->data_ref_info.
(REDUC_GROUP_*): New, assert we have NULL ->data_ref_info.
(STMT_VINFO_GROUPED_ACCESS): Adjust.
* tree-vect-data-refs.c (everywhere): Adjust users.
* tree-vect-loop.c (everywhere): Likewise.
* tree-vect-slp.c (everywhere): Likewise.
* tree-vect-stmts.c (everywhere): Likewise.
* tree-vect-patterns.c (vect_reassociating_reduction_p): Likewise.
From-SVN: r260709
2018-05-01 Tom de Vries <tom@codesourcery.com>
PR other/83786
* vec.h (VEC_ORDERED_REMOVE_IF, VEC_ORDERED_REMOVE_IF_FROM_TO): Define.
* vec.c (test_ordered_remove_if): New function.
(vec_c_tests): Call test_ordered_remove_if.
* dwarf2cfi.c (connect_traces): Use VEC_ORDERED_REMOVE_IF_FROM_TO.
* lto-streamer-out.c (prune_offload_funcs): Use VEC_ORDERED_REMOVE_IF.
* tree-vect-patterns.c (vect_pattern_recog_1): Use
VEC_ORDERED_REMOVE_IF.
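A usage sketch (argument order taken from vec.h: vector, read index,
write index, element pointer, condition; the vector and predicate here
are hypothetical):
  unsigned int read_i, write_i;
  tree *elem_ptr;
  /* Remove all zero constants from the ordered vec<tree> V in one pass.  */
  VEC_ORDERED_REMOVE_IF (v, read_i, write_i, elem_ptr,
                         integer_zerop (*elem_ptr));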
From-SVN: r259808
This patch prevents pattern-matching of fold-left SLP reduction chains,
which the previous patch for 83965 didn't handle properly. It only
stops the last statement in the group from being matched, but that's
enough to cause the group to be dissolved later.
A better fix would be to put all the information about the reduction
on the first statement in the reduction chain, so that every
statement in the group can tell what the group is doing. That doesn't
seem like stage 4 material though.
2018-02-26 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
PR tree-optimization/83965
* tree-vect-patterns.c (vect_reassociating_reduction_p): Assume
that grouped statements are part of a reduction chain. Return
true if the statement is not marked as a reduction itself but
is part of a group.
(vect_recog_dot_prod_pattern): Don't check whether the statement
is part of a group here.
(vect_recog_sad_pattern): Likewise.
(vect_recog_widen_sum_pattern): Likewise.
gcc/testsuite/
PR tree-optimization/83965
* gcc.dg/vect/pr83965-2.c: New test.
From-SVN: r257995
PR tree-optimization/84452 (ICE at gcc/omp-simd-clone.c:1612)
PR tree-optimization/84452
* tree-vect-patterns.c (vect_recog_pow_pattern): Don't call
expand_simd_clones if targetm.simd_clone.compute_vecsize_and_simdlen
is NULL.
* gcc.dg/pr84452.c: New test.
From-SVN: r257819
PR middle-end/84309
* match.pd (pow(C,x) -> exp(log(C)*x)): Optimize instead into
exp2(log2(C)*x) if C is a power of 2 and c99 runtime is available.
* generic-match-head.c (canonicalize_math_after_vectorization_p): New
inline function.
* gimple-match-head.c (canonicalize_math_after_vectorization_p): New
inline function.
* omp-simd-clone.h: New file.
* omp-simd-clone.c: Include omp-simd-clone.h.
(expand_simd_clones): No longer static.
* tree-vect-patterns.c: Include fold-const-call.h, attribs.h,
cgraph.h and omp-simd-clone.h.
(vect_recog_pow_pattern): Optimize pow(C,x) to exp(log(C)*x).
(vect_recog_widen_shift_pattern): Formatting fix.
(vect_pattern_recog_1): Don't check optab for calls.
* gcc.dg/pr84309.c: New test.
* gcc.target/i386/pr84309.c: New test.
From-SVN: r257617
In this PR we recognised a PLUS_EXPR as a fold-left reduction,
then applied pattern matching to convert it to a WIDEN_SUM_EXPR.
We need to keep the original code in this case since we implement
the reduction using scalar rather than vector operations.
2018-01-23 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
PR tree-optimization/83965
* tree-vect-patterns.c (vect_reassociating_reduction_p): New function.
(vect_recog_dot_prod_pattern, vect_recog_sad_pattern): Use it
instead of checking only for a reduction.
(vect_recog_widen_sum_pattern): Likewise.
gcc/testsuite/
PR tree-optimization/83965
* gcc.dg/vect/pr83965.c: New test.
From-SVN: r256976
This is mostly a mechanical extension of the previous gather load
support to scatter stores. The internal functions in this case are:
  IFN_SCATTER_STORE (base, offsets, scale, values)
  IFN_MASK_SCATTER_STORE (base, offsets, scale, values, mask)
However, one nonobvious change is to vect_analyze_data_ref_access.
If we're treating an access as a gather load or scatter store
(i.e. if STMT_VINFO_GATHER_SCATTER_P is true), the existing code
would create a dummy data_reference whose step is 0. There's not
really much else it could do, since the whole point is that the
step isn't predictable from iteration to iteration. We then
went into this code in vect_analyze_data_ref_access:
  /* Allow loads with zero step in inner-loop vectorization.  */
  if (loop_vinfo && integer_zerop (step))
    {
      GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL;
      if (!nested_in_vect_loop_p (loop, stmt))
        return DR_IS_READ (dr);
I.e. we'd take the step literally and assume that this is a load
or store to an invariant address. Loads from invariant addresses
are supported but stores to them aren't.
The code therefore had the effect of disabling all scatter stores.
AFAICT this is true of AVX too: although tests like avx512f-scatter-1.c
test for the correctness of a scatter-like loop, they don't seem to
check whether a scatter instruction is actually used.
The patch therefore makes vect_analyze_data_ref_access return true
for scatters. We do seem to handle the aliasing correctly;
that's tested by other functions, and is symmetrical to the
already-working gather case.
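A C-level sketch of a loop that can now use scatter stores (hypothetical,
along the lines of the new sve/scatter_store tests):
  void
  f (double *dst, double *src, int *index, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[index[i]] = src[i];   /* -> .SCATTER_STORE / .MASK_SCATTER_STORE */
  }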
2018-01-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* doc/sourcebuild.texi (vect_scatter_store): Document.
* optabs.def (scatter_store_optab, mask_scatter_store_optab): New
optabs.
* doc/md.texi (scatter_store@var{m}, mask_scatter_store@var{m}):
Document.
* genopinit.c (main): Add supports_vec_scatter_store and
supports_vec_scatter_store_cached to target_optabs.
* gimple.h (gimple_expr_type): Handle IFN_SCATTER_STORE and
IFN_MASK_SCATTER_STORE.
* internal-fn.def (SCATTER_STORE, MASK_SCATTER_STORE): New internal
functions.
* internal-fn.h (internal_store_fn_p): Declare.
(internal_fn_stored_value_index): Likewise.
* internal-fn.c (scatter_store_direct): New macro.
(expand_scatter_store_optab_fn): New function.
(direct_scatter_store_optab_supported_p): New macro.
(internal_store_fn_p): New function.
(internal_gather_scatter_fn_p): Handle IFN_SCATTER_STORE and
IFN_MASK_SCATTER_STORE.
(internal_fn_mask_index): Likewise.
(internal_fn_stored_value_index): New function.
(internal_gather_scatter_fn_supported_p): Adjust operand numbers
for scatter stores.
* optabs-query.h (supports_vec_scatter_store_p): Declare.
* optabs-query.c (supports_vec_scatter_store_p): New function.
* tree-vectorizer.h (vect_get_store_rhs): Declare.
* tree-vect-data-refs.c (vect_analyze_data_ref_access): Return
true for scatter stores.
(vect_gather_scatter_fn_p): Handle scatter stores too.
(vect_check_gather_scatter): Consider using scatter stores if
supports_vec_scatter_store_p.
* tree-vect-patterns.c (vect_try_gather_scatter_pattern): Handle
scatter stores too.
* tree-vect-stmts.c (exist_non_indexing_operands_for_use_p): Use
internal_fn_stored_value_index.
(check_load_store_masking): Handle scatter stores too.
(vect_get_store_rhs): Make public.
(vectorizable_call): Use internal_store_fn_p.
(vectorizable_store): Handle scatter store internal functions.
(vect_transform_stmt): Compare GROUP_STORE_COUNT with GROUP_SIZE
when deciding whether the end of the group has been reached.
* config/aarch64/aarch64.md (UNSPEC_ST1_SCATTER): New unspec.
* config/aarch64/aarch64-sve.md (scatter_store<mode>): New expander.
(mask_scatter_store<mode>): New insns.
gcc/testsuite/
* lib/target-supports.exp (check_effective_target_vect_scatter_store):
New proc.
* gcc.dg/vect/pr25413a.c: Expect both loops to be optimized on
targets with scatter stores.
* gcc.dg/vect/vect-71.c: Restrict XFAIL to targets without scatter
stores.
* gcc.target/aarch64/sve/mask_scatter_store_1.c: New test.
* gcc.target/aarch64/sve/mask_scatter_store_2.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_1.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_2.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_3.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_4.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_5.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_6.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_7.c: Likewise.
* gcc.target/aarch64/sve/strided_store_1.c: Likewise.
* gcc.target/aarch64/sve/strided_store_2.c: Likewise.
* gcc.target/aarch64/sve/strided_store_3.c: Likewise.
* gcc.target/aarch64/sve/strided_store_4.c: Likewise.
* gcc.target/aarch64/sve/strided_store_5.c: Likewise.
* gcc.target/aarch64/sve/strided_store_6.c: Likewise.
* gcc.target/aarch64/sve/strided_store_7.c: Likewise.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256643
This patch adds support for SVE gather loads. It uses basically
the same analysis code as the AVX gather support, but after that
there are two major differences:
- It uses new internal functions rather than target built-ins.
The interface is:
  IFN_GATHER_LOAD (base, offsets, scale)
  IFN_MASK_GATHER_LOAD (base, offsets, scale, mask)
which should be reasonably generic. One of the advantages of
using internal functions is that other passes can understand what
the functions do, but a more immediate advantage is that we can
query the underlying target pattern to see which scales it supports.
- It uses pattern recognition to convert the offset to the right width,
if it was originally narrower than that. This avoids having to do
a widening operation as part of the gather expansion itself.
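A C-level sketch (hypothetical, along the lines of the new sve/gather_load
tests); the int offsets are narrower than the 64-bit addresses, which is
where the new offset-widening pattern applies:
  void
  f (double *dst, double *src, int *index, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[i] = src[index[i]];   /* -> .GATHER_LOAD / .MASK_GATHER_LOAD */
  }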
2018-01-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* doc/md.texi (gather_load@var{m}): Document.
(mask_gather_load@var{m}): Likewise.
* genopinit.c (main): Add supports_vec_gather_load and
supports_vec_gather_load_cached to target_optabs.
* optabs-tree.c (init_tree_optimization_optabs): Use
ggc_cleared_alloc to allocate target_optabs.
* optabs.def (gather_load_optab, mask_gather_load_optab): New optabs.
* internal-fn.def (GATHER_LOAD, MASK_GATHER_LOAD): New internal
functions.
* internal-fn.h (internal_load_fn_p): Declare.
(internal_gather_scatter_fn_p): Likewise.
(internal_fn_mask_index): Likewise.
(internal_gather_scatter_fn_supported_p): Likewise.
* internal-fn.c (gather_load_direct): New macro.
(expand_gather_load_optab_fn): New function.
(direct_gather_load_optab_supported_p): New macro.
(direct_internal_fn_optab): New function.
(internal_load_fn_p): Likewise.
(internal_gather_scatter_fn_p): Likewise.
(internal_fn_mask_index): Likewise.
(internal_gather_scatter_fn_supported_p): Likewise.
* optabs-query.c (supports_at_least_one_mode_p): New function.
(supports_vec_gather_load_p): Likewise.
* optabs-query.h (supports_vec_gather_load_p): Declare.
* tree-vectorizer.h (gather_scatter_info): Add ifn, element_type
and memory_type field.
(NUM_PATTERNS): Bump to 15.
* tree-vect-data-refs.c: Include internal-fn.h.
(vect_gather_scatter_fn_p): New function.
(vect_describe_gather_scatter_call): Likewise.
(vect_check_gather_scatter): Try using internal functions for
gather loads. Recognize existing calls to a gather load function.
(vect_analyze_data_refs): Consider using gather loads if
supports_vec_gather_load_p.
* tree-vect-patterns.c (vect_get_load_store_mask): New function.
(vect_get_gather_scatter_offset_type): Likewise.
(vect_convert_mask_for_vectype): Likewise.
(vect_add_conversion_to_patterm): Likewise.
(vect_try_gather_scatter_pattern): Likewise.
(vect_recog_gather_scatter_pattern): New pattern recognizer.
(vect_vect_recog_func_ptrs): Add it.
* tree-vect-stmts.c (exist_non_indexing_operands_for_use_p): Use
internal_fn_mask_index and internal_gather_scatter_fn_p.
(check_load_store_masking): Take the gather_scatter_info as an
argument and handle gather loads.
(vect_get_gather_scatter_ops): New function.
(vectorizable_call): Check internal_load_fn_p.
(vectorizable_load): Likewise. Handle gather load internal
functions.
(vectorizable_store): Update call to check_load_store_masking.
* config/aarch64/aarch64.md (UNSPEC_LD1_GATHER): New unspec.
* config/aarch64/iterators.md (SVE_S, SVE_D): New mode iterators.
* config/aarch64/predicates.md (aarch64_gather_scale_operand_w)
(aarch64_gather_scale_operand_d): New predicates.
* config/aarch64/aarch64-sve.md (gather_load<mode>): New expander.
(mask_gather_load<mode>): New insns.
gcc/testsuite/
* gcc.target/aarch64/sve/gather_load_1.c: New test.
* gcc.target/aarch64/sve/gather_load_2.c: Likewise.
* gcc.target/aarch64/sve/gather_load_3.c: Likewise.
* gcc.target/aarch64/sve/gather_load_4.c: Likewise.
* gcc.target/aarch64/sve/gather_load_5.c: Likewise.
* gcc.target/aarch64/sve/gather_load_6.c: Likewise.
* gcc.target/aarch64/sve/gather_load_7.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_1.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_2.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_3.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_4.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_5.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_6.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_7.c: Likewise.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256640
|
|
This patch allows us to recognise:
... = bool1 != bool2 ? x : y
as equivalent to:
bool tmp = bool1 ^ bool2;
... = tmp ? x : y
For the latter we were already able to find the natural number
of vector units for tmp based on the types that feed bool1 and
bool2, whereas with the former we would simply treat bool1 and
bool2 as vectorised 8-bit values, possibly requiring them to
be packed and unpacked from their natural width.
This is used by a later SVE patch.
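For example (illustrative source; the two comparisons play the roles
of bool1 and bool2 above):
  void
  f (int *restrict out, int *restrict x, int *restrict y,
     int *restrict a, int *restrict b, int n)
  {
    for (int i = 0; i < n; ++i)
      out[i] = ((a[i] < 0) != (b[i] < 0)) ? x[i] : y[i];
  }
Treating the != as an XOR of the two masks lets the mask type be
derived from the 32-bit inputs a and b rather than from 8-bit
boolean values.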
2018-01-03 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern): When
handling COND_EXPRs with boolean comparisons, try to find a better
basis for the mask type than the boolean itself.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256207
|
|
This patch changes GET_MODE_BITSIZE from an unsigned short
to a poly_uint16.
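A sketch of the resulting caller idiom (the is_constant accessor is
part of the poly_int API; the call site shown is illustrative):
  poly_uint16 size = GET_MODE_BITSIZE (mode);
  unsigned short const_size;
  if (size.is_constant (&const_size))
    /* Fixed-width handling, as before.  */;
  else
    /* Variable-width (e.g. SVE) handling.  */;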
2018-01-03 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (mode_to_bits): Return a poly_uint16 rather than an
unsigned short.
(GET_MODE_BITSIZE): Return a constant if ONLY_FIXED_SIZE_MODES,
or if measurement_type is not polynomial.
* calls.c (shift_return_value): Treat GET_MODE_BITSIZE as polynomial.
* combine.c (make_extraction): Likewise.
* dse.c (find_shift_sequence): Likewise.
* dwarf2out.c (mem_loc_descriptor): Likewise.
* expmed.c (store_integral_bit_field, extract_bit_field_1): Likewise.
(extract_bit_field, extract_low_bits): Likewise.
* expr.c (convert_move, convert_modes, emit_move_insn_1): Likewise.
(optimize_bitfield_assignment_op, expand_assignment): Likewise.
(store_expr_with_bounds, store_field, expand_expr_real_1): Likewise.
* fold-const.c (optimize_bit_field_compare, merge_ranges): Likewise.
* gimple-fold.c (optimize_atomic_compare_exchange_p): Likewise.
* reload.c (find_reloads): Likewise.
* reload1.c (alter_reg): Likewise.
* stor-layout.c (bitwise_mode_for_mode, compute_record_mode): Likewise.
* targhooks.c (default_secondary_memory_needed_mode): Likewise.
* tree-if-conv.c (predicate_mem_writes): Likewise.
* tree-ssa-strlen.c (handle_builtin_memcmp): Likewise.
* tree-vect-patterns.c (adjust_bool_pattern): Likewise.
* tree-vect-stmts.c (vectorizable_simd_clone_call): Likewise.
* valtrack.c (dead_debug_insert_temp): Likewise.
* varasm.c (mergeable_constant_section): Likewise.
* config/sh/sh.h (LOCAL_ALIGNMENT): Use as_a <fixed_size_mode>.
gcc/ada/
* gcc-interface/misc.c (enumerate_modes): Treat GET_MODE_BITSIZE
as polynomial.
gcc/c-family/
* c-ubsan.c (ubsan_instrument_shift): Treat GET_MODE_BITSIZE
as polynomial.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256200
|
|
This patch changes TYPE_VECTOR_SUBPARTS to a poly_uint64. The value is
encoded in the 10-bit precision field and was previously always stored
as a simple log2 value.  The challenge was to use these 10 bits to
encode the number of elements in variable-length vectors, so that
we didn't need to increase the size of the tree.
In practice the number of vector elements should always have the form
N + N * X (where X is the runtime value), and as for constant-length
vectors, N must be a power of 2 (even though X itself might not be).
The patch therefore uses the low 8 bits to encode log2(N) and bit
8 to select between constant-length and variable-length vectors.
Targets without variable-length vectors continue to use the old scheme.
A new valid_vector_subparts_p function tests whether a given number
of elements can be encoded. This is false for the vector modes that
represent an LD3 or ST3 vector triple (which we want to treat as arrays
of vectors rather than single vectors).
Most of the patch is mechanical; previous patches handled the changes
that weren't entirely straightforward.
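A self-contained sketch of the encoding described above (the field
layout is illustrative, not the actual tree.h representation):
  #include <assert.h>

  /* NUNITS = N + N*X with N a power of 2: bits 0-7 hold log2(N),
     bit 8 distinguishes variable-length from constant-length.  */
  static unsigned int
  encode_subparts (unsigned int n, int variable_p)
  {
    assert (n != 0 && (n & (n - 1)) == 0);  /* N must be a power of 2.  */
    return (unsigned int) __builtin_ctz (n) | (variable_p ? 0x100 : 0);
  }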
2018-01-03 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (TYPE_VECTOR_SUBPARTS): Turn into a function and handle
polynomial numbers of units.
(SET_TYPE_VECTOR_SUBPARTS): Likewise.
(valid_vector_subparts_p): New function.
(build_vector_type): Remove temporary shim and take the number
of units as a poly_uint64 rather than an int.
(build_opaque_vector_type): Take the number of units as a
poly_uint64 rather than an int.
* tree.c (build_vector_from_ctor): Handle polynomial
TYPE_VECTOR_SUBPARTS.
(type_hash_canon_hash, type_cache_hasher::equal): Likewise.
(uniform_vector_p, vector_type_mode, build_vector): Likewise.
(build_vector_from_val): If the number of units is variable,
use build_vec_duplicate_cst for constant operands and
VEC_DUPLICATE_EXPR otherwise.
(make_vector_type): Remove temporary is_constant ().
(build_vector_type, build_opaque_vector_type): Take the number of
units as a poly_uint64 rather than an int.
(check_vector_cst): Handle polynomial TYPE_VECTOR_SUBPARTS and
VECTOR_CST_NELTS.
* cfgexpand.c (expand_debug_expr): Likewise.
* expr.c (count_type_elements, categorize_ctor_elements_1): Likewise.
(store_constructor, expand_expr_real_1): Likewise.
(const_scalar_mask_from_tree): Likewise.
* fold-const-call.c (fold_const_reduction): Likewise.
* fold-const.c (const_binop, const_unop, fold_convert_const): Likewise.
(operand_equal_p, fold_vec_perm, fold_ternary_loc): Likewise.
(native_encode_vector, vec_cst_ctor_to_array): Likewise.
(fold_relational_const): Likewise.
(native_interpret_vector): Likewise. Change the size from an
int to an unsigned int.
* gimple-fold.c (gimple_fold_stmt_to_constant_1): Handle polynomial
TYPE_VECTOR_SUBPARTS.
(gimple_fold_indirect_ref, gimple_build_vector): Likewise.
(gimple_build_vector_from_val): Use VEC_DUPLICATE_EXPR when
duplicating a non-constant operand into a variable-length vector.
* hsa-brig.c (hsa_op_immed::emit_to_buffer): Handle polynomial
TYPE_VECTOR_SUBPARTS and VECTOR_CST_NELTS.
* ipa-icf.c (sem_variable::equals): Likewise.
* match.pd: Likewise.
* omp-simd-clone.c (simd_clone_subparts): Likewise.
* print-tree.c (print_node): Likewise.
* stor-layout.c (layout_type): Likewise.
* targhooks.c (default_builtin_vectorization_cost): Likewise.
* tree-cfg.c (verify_gimple_comparison): Likewise.
(verify_gimple_assign_binary): Likewise.
(verify_gimple_assign_ternary): Likewise.
(verify_gimple_assign_single): Likewise.
* tree-pretty-print.c (dump_generic_node): Likewise.
* tree-ssa-forwprop.c (simplify_vector_constructor): Likewise.
(simplify_bitfield_ref, is_combined_permutation_identity): Likewise.
* tree-vect-data-refs.c (vect_permute_store_chain): Likewise.
(vect_grouped_load_supported, vect_permute_load_chain): Likewise.
(vect_shift_permute_load_chain): Likewise.
* tree-vect-generic.c (nunits_for_known_piecewise_op): Likewise.
(expand_vector_condition, optimize_vector_constructor): Likewise.
(lower_vec_perm, get_compute_type): Likewise.
* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
(get_initial_defs_for_reduction, vect_transform_loop): Likewise.
* tree-vect-patterns.c (vect_recog_bool_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
* tree-vect-slp.c (vect_supported_load_permutation_p): Likewise.
(vect_get_constant_vectors, vect_transform_slp_perm_load): Likewise.
* tree-vect-stmts.c (perm_mask_for_reverse): Likewise.
(get_group_load_store_type, vectorizable_mask_load_store): Likewise.
(vectorizable_bswap, simd_clone_subparts, vectorizable_assignment)
(vectorizable_shift, vectorizable_operation, vectorizable_store)
(vectorizable_load, vect_is_simple_cond, vectorizable_comparison)
(supportable_widening_operation): Likewise.
(supportable_narrowing_operation): Likewise.
* tree-vector-builder.c (tree_vector_builder::binary_encoded_nelts):
Likewise.
* varasm.c (output_constant): Likewise.
gcc/ada/
* gcc-interface/utils.c (gnat_types_compatible_p): Handle
polynomial TYPE_VECTOR_SUBPARTS.
gcc/brig/
* brigfrontend/brig-to-generic.cc (get_unsigned_int_type): Handle
polynomial TYPE_VECTOR_SUBPARTS.
* brigfrontend/brig-util.h (gccbrig_type_vector_subparts): Likewise.
gcc/c-family/
* c-common.c (vector_types_convertible_p, c_build_vec_perm_expr)
(convert_vector_to_array_for_subscript): Handle polynomial
TYPE_VECTOR_SUBPARTS.
(c_common_type_for_mode): Check valid_vector_subparts_p.
* c-pretty-print.c (pp_c_initializer_list): Handle polynomial
VECTOR_CST_NELTS.
gcc/c/
* c-typeck.c (comptypes_internal, build_binary_op): Handle polynomial
TYPE_VECTOR_SUBPARTS.
gcc/cp/
* constexpr.c (cxx_eval_array_reference): Handle polynomial
VECTOR_CST_NELTS.
(cxx_fold_indirect_ref): Handle polynomial TYPE_VECTOR_SUBPARTS.
* call.c (build_conditional_expr_1): Likewise.
* decl.c (cp_finish_decomp): Likewise.
* mangle.c (write_type): Likewise.
* typeck.c (structural_comptypes): Likewise.
(cp_build_binary_op): Likewise.
* typeck2.c (process_init_constructor_array): Likewise.
gcc/fortran/
* trans-types.c (gfc_type_for_mode): Check valid_vector_subparts_p.
gcc/lto/
* lto-lang.c (lto_type_for_mode): Check valid_vector_subparts_p.
* lto.c (hash_canonical_type): Handle polynomial TYPE_VECTOR_SUBPARTS.
gcc/go/
* go-lang.c (go_langhook_type_for_mode): Check valid_vector_subparts_p.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256197
|
|
From-SVN: r256169
|
|
2017-12-08 Richard Biener <rguenther@suse.de>
PR tree-optimization/81303
* tree-vect-stmts.c (vect_is_simple_cond): For invariant
conditions try to create a comparison vector type matching
the data vector type.
(vectorizable_condition): Adjust.
* tree-vect-patterns.c (vect_recog_mask_conversion_pattern):
Leave invariant conditions alone in case we can vectorize those.
* gcc.target/i386/vectorize9.c: New testcase.
* gcc.target/i386/vectorize10.c: New testcase.
From-SVN: r255497
|
|
The wide_int routines allow things like:
wi::add (t, 1)
to add 1 to an INTEGER_CST T in its native precision. But we also have:
wi::to_offset (t) // Treat T as an offset_int
wi::to_widest (t) // Treat T as a widest_int
Recently we also gained:
wi::to_wide (t, prec) // Treat T as a wide_int in precision PREC
This patch therefore requires:
wi::to_wide (t)
when operating on INTEGER_CSTs in their native precision. This is
just as efficient, and makes it clearer that a deliberate choice is
being made to treat the tree as a wide_int in its native precision.
This also removes the inconsistency that
a) INTEGER_CSTs in their native precision can be used without an accessor
but must use wi:: functions instead of C++ operators
b) the other forms need an explicit accessor but the result can be used
with C++ operators.
It also helps with SVE, where there's the additional possibility
that the tree could be a runtime value.
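For example, given an INTEGER_CST t (a sketch; like to_offset and
to_widest, the accessor's result can be mixed with C++ operators):
  /* Before: t's precision used implicitly.  */
  wide_int w1 = wi::add (t, 1);
  /* After: the choice of view is spelled out...  */
  wide_int w2 = wi::add (wi::to_wide (t), 1);
  /* ...and works with operators like the other accessors.  */
  wide_int w3 = wi::to_wide (t) + 1;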
2017-10-10 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* wide-int.h (wide_int_ref_storage): Make host_dependent_precision
a template parameter.
(WIDE_INT_REF_FOR): Update accordingly.
* tree.h (wi::int_traits <const_tree>): Delete.
(wi::tree_to_widest_ref, wi::tree_to_offset_ref): New typedefs.
(wi::to_widest, wi::to_offset): Use them. Expand commentary.
(wi::tree_to_wide_ref): New typedef.
(wi::to_wide): New function.
* calls.c (get_size_range): Use wi::to_wide when operating on
trees as wide_ints.
* cgraph.c (cgraph_node::create_thunk): Likewise.
* config/i386/i386.c (ix86_data_alignment): Likewise.
(ix86_local_alignment): Likewise.
* dbxout.c (stabstr_O): Likewise.
* dwarf2out.c (add_scalar_info, gen_enumeration_type_die): Likewise.
* expr.c (const_vector_from_tree): Likewise.
* fold-const-call.c (host_size_t_cst_p, fold_const_call_1): Likewise.
* fold-const.c (may_negate_without_overflow_p, negate_expr_p)
(fold_negate_expr_1, int_const_binop_1, const_binop)
(fold_convert_const_int_from_real, optimize_bit_field_compare)
(all_ones_mask_p, sign_bit_p, unextend, extract_muldiv_1)
(fold_div_compare, fold_single_bit_test, fold_plusminus_mult_expr)
(pointer_may_wrap_p, expr_not_equal_to, fold_binary_loc)
(fold_ternary_loc, multiple_of_p, fold_negate_const, fold_abs_const)
(fold_not_const, round_up_loc): Likewise.
* gimple-fold.c (gimple_fold_indirect_ref): Likewise.
* gimple-ssa-warn-alloca.c (alloca_call_type_by_arg): Likewise.
(alloca_call_type): Likewise.
* gimple.c (preprocess_case_label_vec_for_gimple): Likewise.
* godump.c (go_output_typedef): Likewise.
* graphite-sese-to-poly.c (tree_int_to_gmp): Likewise.
* internal-fn.c (get_min_precision): Likewise.
* ipa-cp.c (ipcp_store_vr_results): Likewise.
* ipa-polymorphic-call.c
(ipa_polymorphic_call_context::ipa_polymorphic_call_context): Likewise.
* ipa-prop.c (ipa_print_node_jump_functions_for_edge): Likewise.
(ipa_modify_call_arguments): Likewise.
* match.pd: Likewise.
* omp-low.c (scan_omp_1_op, lower_omp_ordered_clauses): Likewise.
* print-tree.c (print_node_brief, print_node): Likewise.
* stmt.c (expand_case): Likewise.
* stor-layout.c (layout_type): Likewise.
* tree-affine.c (tree_to_aff_combination): Likewise.
* tree-cfg.c (group_case_labels_stmt): Likewise.
* tree-data-ref.c (dr_analyze_indices): Likewise.
(prune_runtime_alias_test_list): Likewise.
* tree-dump.c (dequeue_and_dump): Likewise.
* tree-inline.c (remap_gimple_op_r, copy_tree_body_r): Likewise.
* tree-predcom.c (is_inv_store_elimination_chain): Likewise.
* tree-pretty-print.c (dump_generic_node): Likewise.
* tree-scalar-evolution.c (iv_can_overflow_p): Likewise.
(simple_iv_with_niters): Likewise.
* tree-ssa-address.c (addr_for_mem_ref): Likewise.
* tree-ssa-ccp.c (ccp_finalize, evaluate_stmt): Likewise.
* tree-ssa-loop-ivopts.c (constant_multiple_of): Likewise.
* tree-ssa-loop-niter.c (split_to_var_and_offset)
(refine_value_range_using_guard, number_of_iterations_ne_max)
(number_of_iterations_lt_to_ne, number_of_iterations_lt)
(get_cst_init_from_scev, record_nonwrapping_iv)
(scev_var_range_cant_overflow): Likewise.
* tree-ssa-phiopt.c (minmax_replacement): Likewise.
* tree-ssa-pre.c (compute_avail): Likewise.
* tree-ssa-sccvn.c (vn_reference_fold_indirect): Likewise.
(vn_reference_maybe_forwprop_address, valueized_wider_op): Likewise.
* tree-ssa-structalias.c (get_constraint_for_ptr_offset): Likewise.
* tree-ssa-uninit.c (is_pred_expr_subset_of): Likewise.
* tree-ssanames.c (set_nonzero_bits, get_nonzero_bits): Likewise.
* tree-switch-conversion.c (collect_switch_conv_info, array_value_type)
(dump_case_nodes, try_switch_expansion): Likewise.
* tree-vect-loop-manip.c (vect_gen_vector_loop_niters): Likewise.
(vect_do_peeling): Likewise.
* tree-vect-patterns.c (vect_recog_bool_pattern): Likewise.
* tree-vect-stmts.c (vectorizable_load): Likewise.
* tree-vrp.c (compare_values_warnv, vrp_int_const_binop): Likewise.
(zero_nonzero_bits_from_vr, ranges_from_anti_range): Likewise.
(extract_range_from_binary_expr_1, adjust_range_with_scev): Likewise.
(overflow_comparison_p_1, register_edge_assert_for_2): Likewise.
(is_masked_range_test, find_switch_asserts, maybe_set_nonzero_bits)
(vrp_evaluate_conditional_warnv_with_ops, intersect_ranges): Likewise.
(range_fits_type_p, two_valued_val_range_p, vrp_finalize): Likewise.
(evrp_dom_walker::before_dom_children): Likewise.
* tree.c (cache_integer_cst, real_value_from_int_cst, integer_zerop)
(integer_all_onesp, integer_pow2p, integer_nonzerop, tree_log2)
(tree_floor_log2, tree_ctz, mem_ref_offset, tree_int_cst_sign_bit)
(tree_int_cst_sgn, get_unwidened, int_fits_type_p): Likewise.
(get_type_static_bounds, num_ending_zeros, drop_tree_overflow)
(get_range_pos_neg): Likewise.
* ubsan.c (ubsan_expand_ptr_ifn): Likewise.
* config/darwin.c (darwin_mergeable_constant_section): Likewise.
* config/aarch64/aarch64.c (aapcs_vfp_sub_candidate): Likewise.
* config/arm/arm.c (aapcs_vfp_sub_candidate): Likewise.
* config/avr/avr.c (avr_fold_builtin): Likewise.
* config/bfin/bfin.c (bfin_local_alignment): Likewise.
* config/msp430/msp430.c (msp430_attr): Likewise.
* config/nds32/nds32.c (nds32_insert_attributes): Likewise.
* config/powerpcspe/powerpcspe-c.c
(altivec_resolve_overloaded_builtin): Likewise.
* config/powerpcspe/powerpcspe.c (rs6000_aggregate_candidate)
(rs6000_expand_ternop_builtin): Likewise.
* config/rs6000/rs6000-c.c
(altivec_resolve_overloaded_builtin): Likewise.
* config/rs6000/rs6000.c (rs6000_aggregate_candidate): Likewise.
(rs6000_expand_ternop_builtin): Likewise.
* config/s390/s390.c (s390_handle_hotpatch_attribute): Likewise.
gcc/ada/
* gcc-interface/decl.c (annotate_value): Use wi::to_wide when
operating on trees as wide_ints.
gcc/c/
* c-parser.c (c_parser_cilk_clause_vectorlength): Use wi::to_wide when
operating on trees as wide_ints.
* c-typeck.c (build_c_cast, c_finish_omp_clauses): Likewise.
(c_tree_equal): Likewise.
gcc/c-family/
* c-ada-spec.c (dump_generic_ada_node): Use wi::to_wide when
operating on trees as wide_ints.
* c-common.c (pointer_int_sum): Likewise.
* c-pretty-print.c (pp_c_integer_constant): Likewise.
* c-warn.c (match_case_to_enum_1): Likewise.
(c_do_switch_warnings): Likewise.
(maybe_warn_shift_overflow): Likewise.
gcc/cp/
* cvt.c (ignore_overflows): Use wi::to_wide when
operating on trees as wide_ints.
* decl.c (check_array_designated_initializer): Likewise.
* mangle.c (write_integer_cst): Likewise.
* semantics.c (cp_finish_omp_clause_depend_sink): Likewise.
gcc/fortran/
* target-memory.c (gfc_interpret_logical): Use wi::to_wide when
operating on trees as wide_ints.
* trans-const.c (gfc_conv_tree_to_mpz): Likewise.
* trans-expr.c (gfc_conv_cst_int_power): Likewise.
* trans-intrinsic.c (trans_this_image): Likewise.
(gfc_conv_intrinsic_bound): Likewise.
(conv_intrinsic_cobound): Likewise.
gcc/lto/
* lto.c (compare_tree_sccs_1): Use wi::to_wide when
operating on trees as wide_ints.
gcc/objc/
* objc-act.c (objc_decl_method_attributes): Use wi::to_wide when
operating on trees as wide_ints.
From-SVN: r253595
|
|
2017-09-25 Richard Biener <rguenther@suse.de>
PR tree-optimization/82285
* tree-vect-patterns.c (vect_recog_bool_pattern): Also handle
enumeral types.
* gcc.dg/torture/pr82285.c: New testcase.
From-SVN: r253146
|
|
VECTOR_MODE_P check.
* tree-vect-patterns.c (vect_pattern_recog_1): Use VECTOR_TYPE_P instead
of VECTOR_MODE_P check.
* tree-vect-stmts.c (get_vectype_for_scalar_type_and_size): Allow single
element vector types.
Co-Authored-By: Richard Biener <rguenther@suse.de>
From-SVN: r251538
|
|
This patch adds a SCALAR_TYPE_MODE macro, along the same lines as
SCALAR_INT_TYPE_MODE and SCALAR_FLOAT_TYPE_MODE. It also adds
two instances of as_a <scalar_mode> to c_common_type, when converting
an unsigned fixed-point SCALAR_TYPE_MODE to the equivalent signed mode.
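A sketch of the two idioms (illustrative, not the exact
c_common_type source; some_machine_mode is a placeholder):
  /* Like TYPE_MODE, but asserts that the mode is scalar.  */
  scalar_mode m1 = SCALAR_TYPE_MODE (type1);
  /* Narrowing a general machine_mode back to scalar_mode takes an
     explicit checked conversion, e.g. after looking up the signed
     equivalent of an unsigned fixed-point mode.  */
  scalar_mode m2 = as_a <scalar_mode> (some_machine_mode);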
2017-08-30 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (SCALAR_TYPE_MODE): New macro.
* expr.c (expand_expr_addr_expr_1): Use it.
(expand_expr_real_2): Likewise.
* fold-const.c (fold_convert_const_fixed_from_fixed): Likewise.
(fold_convert_const_fixed_from_int): Likewise.
(fold_convert_const_fixed_from_real): Likewise.
(native_encode_fixed): Likewise.
(native_encode_complex): Likewise.
(native_encode_vector): Likewise.
(native_interpret_fixed): Likewise.
(native_interpret_real): Likewise.
(native_interpret_complex): Likewise.
(native_interpret_vector): Likewise.
* omp-simd-clone.c (simd_clone_adjust_return_type): Likewise.
(simd_clone_adjust_argument_types): Likewise.
(simd_clone_init_simd_arrays): Likewise.
(simd_clone_adjust): Likewise.
* stor-layout.c (layout_type): Likewise.
* tree.c (build_minus_one_cst): Likewise.
* tree-cfg.c (verify_gimple_assign_ternary): Likewise.
* tree-inline.c (estimate_move_cost): Likewise.
* tree-ssa-math-opts.c (convert_plusminus_to_widen): Likewise.
* tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise.
(vectorizable_reduction): Likewise.
* tree-vect-patterns.c (vect_recog_widen_mult_pattern): Likewise.
(vect_recog_mixed_size_cond_pattern): Likewise.
(check_bool_pattern): Likewise.
(adjust_bool_pattern): Likewise.
(search_type_for_mask_1): Likewise.
* tree-vect-slp.c (vect_schedule_slp_instance): Likewise.
* tree-vect-stmts.c (vectorizable_conversion): Likewise.
(vectorizable_load): Likewise.
(vectorizable_store): Likewise.
* ubsan.c (ubsan_encode_value): Likewise.
* varasm.c (output_constant): Likewise.
gcc/c-family/
* c-lex.c (interpret_fixed): Use SCALAR_TYPE_MODE.
* c-common.c (c_build_vec_perm_expr): Likewise.
gcc/c/
* c-typeck.c (build_binary_op): Use SCALAR_TYPE_MODE.
(c_common_type): Likewise. Use as_a <scalar_mode> when setting
m1 and m2 to the signed equivalent of a fixed-point
SCALAR_TYPE_MODE.
gcc/cp/
* typeck.c (cp_build_binary_op): Use SCALAR_TYPE_MODE.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r251516
|
|
This patch adds a SCALAR_INT_TYPE_MODE macro that asserts
that the type has a scalar integer mode and returns it as
a scalar_int_mode.
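As a sketch (assumed usage; mirrors the SCALAR_TYPE_MODE example
above):
  /* Asserts that TYPE's mode is a scalar integer mode and narrows
     the static type accordingly.  */
  scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
  unsigned int prec = GET_MODE_PRECISION (mode);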
2017-08-30 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (SCALAR_INT_TYPE_MODE): New macro.
* builtins.c (expand_builtin_signbit): Use it.
* cfgexpand.c (expand_debug_expr): Likewise.
* dojump.c (do_jump): Likewise.
(do_compare_and_jump): Likewise.
* dwarf2cfi.c (expand_builtin_init_dwarf_reg_sizes): Likewise.
* expmed.c (make_tree): Likewise.
* expr.c (expand_expr_real_2): Likewise.
(expand_expr_real_1): Likewise.
(try_casesi): Likewise.
* fold-const-call.c (fold_const_call_ss): Likewise.
* fold-const.c (unextend): Likewise.
(extract_muldiv_1): Likewise.
(fold_single_bit_test): Likewise.
(native_encode_int): Likewise.
(native_encode_string): Likewise.
(native_interpret_int): Likewise.
* gimple-fold.c (gimple_fold_builtin_memset): Likewise.
* internal-fn.c (expand_addsub_overflow): Likewise.
(expand_neg_overflow): Likewise.
(expand_mul_overflow): Likewise.
(expand_arith_overflow): Likewise.
* match.pd: Likewise.
* stor-layout.c (layout_type): Likewise.
* tree-cfg.c (verify_gimple_assign_ternary): Likewise.
* tree-ssa-math-opts.c (convert_mult_to_widen): Likewise.
* tree-ssanames.c (get_range_info): Likewise.
* tree-switch-conversion.c (array_value_type): Likewise.
* tree-vect-patterns.c (vect_recog_rotate_pattern): Likewise.
(vect_recog_divmod_pattern): Likewise.
(vect_recog_mixed_size_cond_pattern): Likewise.
* tree-vrp.c (extract_range_basic): Likewise.
(simplify_float_conversion_using_ranges): Likewise.
* tree.c (int_fits_type_p): Likewise.
* ubsan.c (instrument_bool_enum_load): Likewise.
* varasm.c (mergeable_string_section): Likewise.
(narrowing_initializer_constant_valid_p): Likewise.
(output_constant): Likewise.
gcc/cp/
* cvt.c (cp_convert_to_pointer): Use SCALAR_INT_TYPE_MODE.
gcc/fortran/
* target-memory.c (size_integer): Use SCALAR_INT_TYPE_MODE.
(size_logical): Likewise.
gcc/objc/
* objc-encoding.c (encode_type): Use SCALAR_INT_TYPE_MODE.
Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r251486
|
|
This patch sets the nothrow flag for various calls to internal functions
that are not inherently NOTHROW (and so can't be declared that way in
internal-fn.def) but that are used in contexts that can guarantee
NOTHROWness.
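A sketch of the idiom (gimple_build_call_internal,
gimple_call_set_lhs, gimple_call_set_nothrow and gsi_replace are
existing gimple APIs; the specific call is illustrative):
  gcall *call = gimple_build_call_internal (IFN_SQRT, 1, arg);
  gimple_call_set_lhs (call, lhs);
  gimple_call_set_nothrow (call, true);
  gsi_replace (&gsi, call, false);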
2017-08-29 Richard Sandiford <richard.sandiford@linaro.org>
gcc/
* gimplify.c (gimplify_call_expr): Copy the nothrow flag to
calls to internal functions.
(gimplify_modify_expr): Likewise.
* tree-call-cdce.c (use_internal_fn): Likewise.
* tree-ssa-math-opts.c (pass_cse_reciprocals::execute): Likewise.
(convert_to_divmod): Set the nothrow flag.
* tree-if-conv.c (predicate_mem_writes): Likewise.
* tree-vect-stmts.c (vectorizable_mask_load_store): Likewise.
(vectorizable_call): Likewise.
(vectorizable_store): Likewise.
(vectorizable_load): Likewise.
* tree-vect-patterns.c (vect_recog_pow_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
From-SVN: r251401
|