|
With the bit_cast changes, I have added support for bitfields which don't
have scalar representatives. For bit_cast it works fine, as when mask
is non-NULL, off is asserted to be 0. But when native_encode_initializer
is called e.g. from sccvn with off > 0 (i.e. we are interested in encoding
just a few bytes out of it somewhere from the middle or at the end), the
following computations are incorrect.
pos is a byte position from the start of the constructor, repr_size is the
size in bytes of the bit-field representative and len is the length
of the buffer. If the buffer is offset by a positive off, those numbers
are not directly comparable, though; we need to add off to len so that both
count bytes from the start of the constructor, and o is a utility temporary
set to off != -1 ? off : 0 (because an off of -1 also means start at offset 0
and just forces special behavior).
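For illustration, here is a minimal standalone sketch of the corrected bounds
check, using plain integers rather than the actual fold-const.c variables (the
helper name is made up):
static bool
repr_intersects_buffer (int pos, int repr_size, int off, int len)
{
  /* pos and repr_size count bytes from the start of the constructor; len
     counts bytes of the buffer, which itself starts at offset off within
     the constructor (off == -1 meaning offset 0 plus special behavior).  */
  int o = off == -1 ? 0 : off;
  return pos < o + len && pos + repr_size > o;
}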
2020-12-09 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/98199
* fold-const.c (native_encode_initializer): Fix handling bit-fields
when off > 0.
* gcc.c-torture/compile/pr98199.c: New test.
|
|
When native_encode_initializer is called with non-NULL mask (i.e. ATM
bit_cast only), it checks if the current index in the CONSTRUCTOR (if any)
is the next initializable FIELD_DECL, and if not, decrements cnt and
performs the iteration with that FIELD_DECL as field and val of zero
(so that it computes mask properly). As the testcase shows, I forgot to
set pos to the byte position of the field though (as is done e.g. for
index-referenced FIELD_DECLs in the constructor).
2020-12-09 Jakub Jelinek <jakub@redhat.com>
PR c++/98193
* fold-const.c (native_encode_initializer): Set pos to field's
byte position if iterating over a field with missing initializer.
* g++.dg/cpp2a/bit-cast7.C: New test.
|
|
native_encode_initializer [PR93121]
The following testcase is rejected, because when trying to encode a zeroing
CONSTRUCTOR, the code was using build_constructor to build initializers for
the elements but when recursing the function handles CONSTRUCTOR only for
aggregate types.
The following patch fixes that by using build_zero_cst instead for
non-aggregates. Another option would be to add handling of CONSTRUCTOR for
non-aggregates in native_encode_initializer. Or we can do both; I guess
the middle-end generally doesn't like CONSTRUCTORs for scalar variables, but
I am not 100% sure whether the FE produces those sometimes.
2020-12-04 Jakub Jelinek <jakub@redhat.com>
PR libstdc++/93121
* fold-const.c (native_encode_initializer): Use build_zero_cst
instead of build_constructor.
* g++.dg/cpp2a/bit-cast6.C: New test.
|
|
The following patch adds the __builtin_bit_cast builtin, similarly to
clang or MSVC, which implement std::bit_cast using such a builtin too.
It checks the various std::bit_cast requirements; when not constexpr
evaluated it acts pretty much like a VIEW_CONVERT_EXPR of the source argument
to the destination type, and the hardest part is obviously the constexpr
evaluation.
I've left out PDP11 handling of those; I couldn't figure out how exactly
bitfields are laid out there.
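For reference, a small usage example (hypothetical, not one of the new tests),
showing the builtin libstdc++'s std::bit_cast is built on:
// Reinterpret the object representation of a float as an unsigned int;
// unlike union or memcpy based punning this works in constant expressions.
constexpr float f = 1.0f;
constexpr unsigned int u = __builtin_bit_cast (unsigned int, f);
static_assert (u == 0x3f800000);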
2020-12-03 Jakub Jelinek <jakub@redhat.com>
PR libstdc++/93121
* fold-const.h (native_encode_initializer): Add mask argument
defaulted to nullptr.
(find_bitfield_repr_type): Declare.
(native_interpret_aggregate): Declare.
* fold-const.c (find_bitfield_repr_type): New function.
(native_encode_initializer): Add mask argument and support for
filling it. Handle also some bitfields without integral
DECL_BIT_FIELD_REPRESENTATIVE.
(native_interpret_aggregate): New function.
* gimple-fold.h (clear_type_padding_in_mask): Declare.
* gimple-fold.c (struct clear_padding_struct): Add clear_in_mask
member.
(clear_padding_flush): Handle buf->clear_in_mask.
(clear_padding_union): Copy clear_in_mask. Don't error if
buf->clear_in_mask is set.
(clear_padding_type): Don't error if buf->clear_in_mask is set.
(clear_type_padding_in_mask): New function.
(gimple_fold_builtin_clear_padding): Set buf.clear_in_mask to false.
* doc/extend.texi (__builtin_bit_cast): Document.
* c-common.h (enum rid): Add RID_BUILTIN_BIT_CAST.
* c-common.c (c_common_reswords): Add __builtin_bit_cast.
* cp-tree.h (cp_build_bit_cast): Declare.
* cp-tree.def (BIT_CAST_EXPR): New tree code.
* cp-objcp-common.c (names_builtin_p): Handle RID_BUILTIN_BIT_CAST.
(cp_common_init_ts): Handle BIT_CAST_EXPR.
* cxx-pretty-print.c (cxx_pretty_printer::postfix_expression):
Likewise.
* parser.c (cp_parser_postfix_expression): Handle
RID_BUILTIN_BIT_CAST.
* semantics.c (cp_build_bit_cast): New function.
* tree.c (cp_tree_equal): Handle BIT_CAST_EXPR.
(cp_walk_subtrees): Likewise.
* pt.c (tsubst_copy): Likewise.
* constexpr.c (check_bit_cast_type, cxx_eval_bit_cast): New functions.
(cxx_eval_constant_expression): Handle BIT_CAST_EXPR.
(potential_constant_expression_1): Likewise.
* cp-gimplify.c (cp_genericize_r): Likewise.
* g++.dg/cpp2a/bit-cast1.C: New test.
* g++.dg/cpp2a/bit-cast2.C: New test.
* g++.dg/cpp2a/bit-cast3.C: New test.
* g++.dg/cpp2a/bit-cast4.C: New test.
* g++.dg/cpp2a/bit-cast5.C: New test.
|
|
The PR38359 change made the -1 >> x to -1 optimization less useful by
requiring that x must be non-negative.
Shifts by a negative amount are UB, but for historic reasons we had, in some
(but not all) places, a hack to treat shifts by a negative value as shifts
in the other direction by the negated amount.
The following patch just removes that special handling; instead we punt on
optimizing those (and ideally path isolation should catch that up and turn
those into __builtin_unreachable, perhaps with __builtin_warning next to
it). Folding the shifts in some places as if they were rotates and in others
as if they were saturating just leads to inconsistencies.
For C++ constexpr diagnostics and -fpermissive, I've added code to pretend
fold-const.c has not changed; without -fpermissive it will be an error
anyway, and I think it is better not to change all the diagnostics.
During x86_64-linux and i686-linux bootstrap/regtest, my statistics
gathering patch noted 185 unique -m32/-m64 x TU x function_name x shift_kind
x fold-const/tree-ssa-ccp cases. I have investigated the
64 ../../gcc/config/i386/i386.c x86_output_aligned_bss LSHIFT_EXPR wide_int_bitop
64 ../../gcc/config/i386/i386-expand.c emit_memmov LSHIFT_EXPR wide_int_bitop
64 ../../gcc/config/i386/i386-expand.c ix86_expand_carry_flag_compare LSHIFT_EXPR wide_int_bitop
64 ../../gcc/expmed.c expand_divmod LSHIFT_EXPR wide_int_bitop
64 ../../gcc/lra-lives.c process_bb_lives LSHIFT_EXPR wide_int_bitop
64 ../../gcc/rtlanal.c nonzero_bits1 LSHIFT_EXPR wide_int_bitop
64 ../../gcc/varasm.c optimize_constant_pool.isra LSHIFT_EXPR wide_int_bitop
cases and all of them are either during jump threading (dom) or during PRE.
For jump threading, the most common case is 1 << floor_log2 (whatever), where
floor_log2 is return HOST_BITS_PER_WIDE_INT - 1 - clz_hwi (x);
and clz_hwi is if (x == 0) return HOST_BITS_PER_WIDE_INT; return __builtin_clz* (x);
so floor_log2 has range [-1, 63] and contains a comparison against == 0, which
makes the threader think it might be nice to jump thread the case leading to
1 << -1.
I think it is better to keep the 1 << -1s in the IL for this and let path
isolation turn them into __builtin_unreachable () if the user wishes so.
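A self-contained illustration of that pattern (simplified, assuming 64-bit
long; the helpers mirror the hwint.h ones but are renamed here):
static inline int my_clz_hwi (unsigned long x)
{ return x == 0 ? 64 : __builtin_clzl (x); }
static inline int my_floor_log2 (unsigned long x)
{ return 64 - 1 - my_clz_hwi (x); }           /* range [-1, 63] */

unsigned long mask (unsigned long n)
{
  /* The x == 0 check inside my_clz_hwi tempts the threader into threading
     the path that ends in 1 << -1.  */
  return 1UL << my_floor_log2 (n);
}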
2020-11-24 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/96929
* fold-const.c (wide_int_binop) <case LSHIFT_EXPR, case RSHIFT_EXPR>:
Return false on negative second argument rather than trying to handle
it as shift in the other direction.
* tree-ssa-ccp.c (bit_value_binop) <case LSHIFT_EXPR,
case RSHIFT_EXPR>: Punt on negative shift count rather than trying
to handle it as shift in the other direction.
* match.pd (-1 >> x to -1): Remove tree_expr_nonnegative_p check.
* constexpr.c (cxx_eval_binary_expression): For shifts by constant
with MSB set, emulate older wide_int_binop behavior to preserve
diagnostics and -fpermissive behavior.
* gcc.dg/tree-ssa/pr96929.c: New test.
|
|
* fold-const.c (operand_compare::operand_equal_p): Fix thinko in
COMPONENT_REF handling and guard types_same_for_odr by
virtual_method_call_p.
(operand_compare::hash_operand): Likewise.
|
|
This fixes a typo in the TREE_CODE compare which should
compare against TYPE_DECL, not TYPE_NAME.
2020-11-19 Richard Biener <rguenther@suse.de>
* fold-const.c (operand_compare::hash_operand): Fix typo.
|
|
* fold-const.c (operand_compare::operand_equal_p): Move OBJ_TYPE_REF
matching to correct place; drop OEP_ADDRESS_OF for TOKEN, OBJECT and
class.
(operand_compare::hash_operand): Hash ODR type for OBJ_TYPE_REF.
|
|
The motivation for this patch is PR middle-end/85811, a wrong-code
regression entitled "Invalid optimization with fmax, fabs and nan".
The optimization involves assuming max(x,y) is non-negative if (say)
y is non-negative, i.e. max(x,2.0). Unfortunately, this is an invalid
assumption in the presence of NaNs. Hence max(x,+qNaN) with IEEE fmax
semantics will always return x, even though the qNaN is non-negative.
Worse, max(x,2.0) may return a negative value if x is -sNaN.
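A small example of the invalid assumption (illustrative only, not the PR
testcase):
int example (void)
{
  double x = -1.0;
  double q = __builtin_nan ("");     /* a qNaN, which counts as non-negative */
  double r = __builtin_fmax (x, q);  /* IEEE fmax returns the non-NaN operand */
  return r < 0.0;                    /* true: the "max" with a non-negative
                                        argument is nevertheless negative */
}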
I'll quote Joseph Myers (many thanks) who describes things clearly as:
> (a) When both arguments are NaNs, the return value should be a qNaN,
> but sometimes it is an sNaN if at least one argument is an sNaN.
> (b) Under TS 18661-1 semantics, if either argument is an sNaN then the
> result should be a qNaN (whereas if one argument is a qNaN and the
> other is not a NaN, the result should be the non-NaN argument).
> Various implementations treat sNaNs like qNaNs here.
Under this logic, the tree_expr_nonnegative_p for IEEE fmax should be:
CASE_CFN_FMAX:
CASE_CFN_FMAX_FN:
/* Usually RECURSE (arg0) || RECURSE (arg1) but NaNs complicate
things. In the presence of sNaNs, we're only guaranteed to be
non-negative if both operands are non-negative. In the presence
of qNaNs, we're non-negative if either operand is non-negative
and can't be a qNaN, or if both operands are non-negative. */
if (tree_expr_maybe_signaling_nan_p (arg0) ||
tree_expr_maybe_signaling_nan_p (arg1))
return RECURSE (arg0) && RECURSE (arg1);
return RECURSE (arg0) ? (!tree_expr_maybe_nan_p (arg0)
|| RECURSE (arg1))
: (RECURSE (arg1)
&& !tree_expr_maybe_nan_p (arg1));
Which indeed resolves the wrong code in the PR. The infrastructure that
makes this possible are the two new functions tree_expr_maybe_nan_p and
tree_expr_maybe_signaling_nan_p which test whether a value may potentially
be a NaN or a signaling NaN respectively. In fact, this patch adds seven
new predicates to the middle-end:
bool tree_expr_finite_p (const_tree);
bool tree_expr_infinite_p (const_tree);
bool tree_expr_maybe_infinite_p (const_tree);
bool tree_expr_signaling_nan_p (const_tree);
bool tree_expr_maybe_signaling_nan_p (const_tree);
bool tree_expr_nan_p (const_tree);
bool tree_expr_maybe_nan_p (const_tree);
These functions correspond to the "must" and "may" operators in modal logic,
and allow us to triage expressions in the middle-end; definitely a NaN,
definitely not a NaN, and unknown at compile-time, etc. A prime example of
the utility of these functions is that an IEEE floating point value promoted
from an integer type can't be a NaN or infinite. Hence (double)i+0.0 where
i is an integer can be simplified to (double)i even with -fsignaling-nans.
Currently in GCC optimizations are enabled/disabled based on whether the
expression's type supports NaNs or sNaNs; with these new predicates they
can be controlled by whether the actual operands may or may not be NaNs.
Having added these extremely useful helper functions to the middle-end,
I couldn't help but use them in a few places in fold-const.c, builtins.c
and match.pd. In the near term, these can/should be used in places
where the tree optimizers test for HONOR_NANS, HONOR_INFINITIES or
HONOR_SNANS, or explicitly test whether a REAL_CST is a NaN or Inf.
In the longer term (I'm not volunteering) these predicates could perhaps
be hooked into the middle-end's SSA chaining and/or VRP machinery,
allowing finiteness to be propagated around the CFG, much like we
currently propagate value ranges.
This patch has been tested on x86_64-pc-linux-gnu with a "make bootstrap"
and "make -k check".
Ok for mainline?
2020-08-15 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR middle-end/85811
* fold-const.c (tree_expr_finite_p): New function to test whether
a tree expression must be finite, i.e. not a FP NaN or infinity.
(tree_expr_infinite_p): New function to test whether a tree
expression must be infinite, i.e. a FP infinity.
(tree_expr_maybe_infinite_p): New function to test whether a tree
expression may be infinite, i.e. a FP infinity.
(tree_expr_signaling_nan_p): New function to test whether a tree
expression must evaluate to a signaling NaN (sNaN).
(tree_expr_maybe_signaling_nan_p): New function to test whether a
tree expression may be a signaling NaN (sNaN).
(tree_expr_nan_p): New function to test whether a tree expression
must evaluate to a (quiet or signaling) NaN.
(tree_expr_maybe_nan_p): New function to test whether a tree
expression may be a (quiet or signaling) NaN.
(tree_binary_nonnegative_warnv_p) [MAX_EXPR]: In the presence
of NaNs, MAX_EXPR is only guaranteed to be non-negative, if both
operands are non-negative.
(tree_call_nonnegative_warnv_p) [CASE_CFN_FMAX,CASE_CFN_FMAX_FN]:
In the presence of signaling NaNs, fmax is only guaranteed to be
non-negative if both operands are non-negative. In the presence of
quiet NaNs, fmax is non-negative if either operand is non-negative
and not a qNaN, or both operands are non-negative.
* fold-const.h (tree_expr_finite_p, tree_expr_infinite_p,
tree_expr_maybe_infinite_p, tree_expr_signaling_nan_p,
tree_expr_maybe_signaling_nan_p, tree_expr_nan_p,
tree_expr_maybe_nan_p): Prototype new functions here.
* builtins.c (fold_builtin_classify) [BUILT_IN_ISINF]: Fold to
a constant if argument is known to be (or not to be) an Infinity.
[BUILT_IN_ISFINITE]: Fold to a constant if argument is known to
be (or not to be) finite.
[BUILT_IN_ISNAN]: Fold to a constant if argument is known to be
(or not to be) a NaN.
(fold_builtin_fpclassify): Check tree_expr_maybe_infinite_p and
tree_expr_maybe_nan_p instead of HONOR_INFINITIES and HONOR_NANS
respectively.
(fold_builtin_unordered_cmp): Fold UNORDERED_EXPR to a constant
when its arguments are known to be (or not be) NaNs. Check
tree_expr_maybe_nan_p instead of HONOR_NANS when choosing between
unordered and regular forms of comparison operators.
* match.pd (ordered(x,y)->true/false): Constant fold ORDERED_EXPR
if its operands are known to be (or not to be) NaNs.
(unordered(x,y)->true/false): Constant fold UNORDERED_EXPR if its
operands are known to be (or not to be) NaNs.
(sqrt(x)*sqrt(x)->x): Check tree_expr_maybe_signaling_nan_p instead
of HONOR_SNANS.
gcc/testsuite/ChangeLog
PR middle-end/85811
* gcc.dg/pr85811.c: New test.
* gcc.dg/fold-isfinite-1.c: New test.
* gcc.dg/fold-isfinite-2.c: New test.
* gcc.dg/fold-isinf-1.c: New test.
* gcc.dg/fold-isinf-2.c: New test.
* gcc.dg/fold-isnan-1.c: New test.
* gcc.dg/fold-isnan-2.c: New test.
|
|
* fold-const.c (operand_compare::operand_equal_p): Compare field
offsets in operand_equal_p and OEP_ADDRESS_OF.
(operand_compare::hash_operand): Update.
|
|
This removes a duplicated statement.
It was apparently introduced due to a merge mistake.
2020-11-03 Bernd Edlinger <bernd.edlinger@hotmail.de>
* fold-const.c (getbyterep): Remove duplicated statement.
|
|
split_constant_offset is confused about a nop-conversion from
unsigned long to sizetype and tries to prove non-overflowing
of the inner operation. Obviously the conversion could have been
elided so make sure split_constant_offset handles this properly.
It also makes sure that convert_to_ptrofftype does not introduce
conversions that are not necessary, which in this case is the source of
the unnecessary conversion.
2020-10-15 Richard Biener <rguenther@suse.de>
PR tree-optimization/97482
* tree-data-ref.c (split_constant_offset_1): Handle
trivial conversions better.
* fold-const.c (convert_to_ptrofftype_loc): Elide conversion
if the offset is already ptrofftype_p.
* gcc.dg/vect/pr97428.c: New testcase.
|
|
This improves the situation somewhat when vector lowering tries
to access vector bools as seen in PR96814.
2020-09-03 Richard Biener <rguenther@suse.de>
* tree-vect-generic.c (tree_vec_extract): Remove odd
special-casing of boolean vectors.
* fold-const.c (fold_ternary_loc): Handle boolean vector
type BIT_FIELD_REFs.
|
|
PR tree-optimization/21137 is now an old enhancement request pointing out
that an optimization I added back in 2006, to optimize "((x>>31)&64) != 0"
as "x < 0", doesn't fire in the presence of unanticipated type conversions.
The fix is to call STRIP_NOPS at the appropriate point.
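An illustration of the kind of intervening conversion that used to block the
fold (a hypothetical example assuming 32-bit int, not the new testcase):
int f (int x)
{
  /* With the cast stripped via STRIP_NOPS this again folds to x < 0.  */
  return ((long) (x >> 31) & 64) != 0;
}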
2020-08-25 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR tree-optimization/21137
* fold-const.c (fold_binary_loc) [NE_EXPR/EQ_EXPR]: Call
STRIP_NOPS when checking whether to simplify ((x>>C1)&C2) != 0.
gcc/testsuite/ChangeLog
PR tree-optimization/21137
* gcc.dg/pr21137.c: New test.
|
|
My patch to introduce native_encode_initializer to fold_ctor_reference
apparently broke gnulib/m4 on powerpc64.
There it uses a const union with two doubles and the corresponding IBM double
double long double, which actually is the largest normalizable long double
value (1 ulp higher than __LDBL_MAX__). The reason our __LDBL_MAX__ is
smaller is that we internally treat the double double type as one having
106-bit precision, but it actually has a variable precision from 53 bits up to
2000-ish bits,
and for the
0x1.fffffffffffff7ffffffffffffc000p+1023L
value gnulib uses we need 107-bit precision, therefore for GCC __LDBL_MAX__
is
0x1.fffffffffffff7ffffffffffff8000p+1023L
Before my changes, we wouldn't be able to fold_ctor_reference it and it
worked fine at runtime, but with the change we are able to do that, and
because it is larger than anything we can handle internally, we treat it
weirdly. A similar problem would arise if somebody created this way a valid
value with much more than 106-bit precision, e.g. 1.0 + 1.0e-768.
Now, I think a similar problem could happen e.g. on i?86/x86_64 with long
double; the format there also has some weird values, e.g. the
unnormals, pseudo infinities and various other magic values.
This patch for floating point types (including vector and complex types
with such elements) will try to encode the returned value again and punt
if it has different memory representation from the original. Note, this
is only done in the path where native_encode_initializer was used, in order
not to affect e.g. just reading an unpunned long double value; the value
should be compiler generated in that case and thus should be properly
representable. It will punt also if e.g. the padding bits are initialized
to non-zero values.
I think the verification that what we encode can be interpreted back
would only be an internal consistency check (so perhaps for ENABLE_CHECKING
if flag_checking only, but if both directions perform it, then we need
to avoid mutual recursion).
While for the other direction (interpretation), at least for the broken by
design long doubles we just know we can't represent in GCC all valid values.
The other floating point formats are just a theoretical case; perhaps we would
canonicalize something to a value that wouldn't trigger an invalid exception
when without canonicalization it would trigger it at runtime, so let's just
ignore those.
Adjusted (so far untested) patch to do it in native_interpret_real instead
and limit it to the MODE_COMPOSITE_P cases, for which e.g.
fold-const.c/simplify-rtx.c punts in several other places too because we just
know we can't represent everything.
E.g.
/* Don't constant fold this floating point operation if the
result may dependent upon the run-time rounding mode and
flag_rounding_math is set, or if GCC's software emulation
is unable to accurately represent the result. */
if ((flag_rounding_math
|| (MODE_COMPOSITE_P (mode) && !flag_unsafe_math_optimizations))
&& (inexact || !real_identical (&result, &value)))
return NULL_TREE;
Or perhaps guard it with MODE_COMPOSITE_P (mode) && !flag_unsafe_math_optimizations
too, thus breaking what gnulib / m4 does with -ffast-math, but not normally?
2020-08-25 Jakub Jelinek <jakub@redhat.com>
PR target/95450
* fold-const.c (native_interpret_real): For MODE_COMPOSITE_P modes
punt if the to be returned REAL_CST does not encode to the bitwise
same representation.
* gcc.target/powerpc/pr95450.c: New test.
|
|
gcc/ChangeLog:
* fold-const.c (native_encode_expr): Update comment.
|
|
gcc/ChangeLog:
PR middle-end/78257
* builtins.c (expand_builtin_memory_copy_args): Rename called function.
(expand_builtin_stpcpy_1): Remove argument from call.
(expand_builtin_memcmp): Rename called function.
(inline_expand_builtin_bytecmp): Same.
* expr.c (convert_to_bytes): New function.
(constant_byte_string): New function (formerly string_constant).
(string_constant): Call constant_byte_string.
(byte_representation): New function.
* expr.h (byte_representation): Declare.
* fold-const-call.c (fold_const_call): Rename called function.
* fold-const.c (c_getstr): Remove an argument.
(getbyterep): Define a new function.
* fold-const.h (c_getstr): Remove an argument.
(getbyterep): Declare a new function.
* gimple-fold.c (gimple_fold_builtin_memory_op): Rename callee.
(gimple_fold_builtin_string_compare): Same.
(gimple_fold_builtin_memchr): Same.
gcc/testsuite/ChangeLog:
PR middle-end/78257
* gcc.dg/memchr.c: New test.
* gcc.dg/memcmp-2.c: New test.
* gcc.dg/memcmp-3.c: New test.
* gcc.dg/memcmp-4.c: New test.
|
|
gcc/ChangeLog:
* fold-const.c (expr_not_equal_to): Adjust for irange API.
|
|
This makes the special case of a constant-evaluated LHS of a
short-circuiting or/and explicit, rather than doing range
merging and eventually exposing a side effect that shouldn't be
evaluated.
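Illustrative of the class of problem (not the committed testcase): with a
constant LHS of || or && the RHS must not be evaluated at all, so merging
range tests across the two operands can wrongly expose its side effects.
int g (int *p)
{
  /* The dereference of p must never be executed.  */
  return 1 || (*p != 0 && *p != 5);
}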
2020-07-31 Richard Biener <rguenther@suse.de>
PR middle-end/96369
* fold-const.c (fold_range_test): Special-case constant
LHS for short-circuiting operations.
* c-c++-common/pr96369.c: New testcase.
|
|
Resolves:
PR middle-end/95189 - memcmp being wrongly stripped like strcmp
PR middle-end/95886 - suboptimal memcpy with embedded zero bytes
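As an illustration of the difference (not the committed testcases): unlike
strcmp, memcmp must look past an embedded nul byte.
const char a[3] = { 'x', 0, '1' };
const char b[3] = { 'x', 0, '2' };
int f (void)
{
  /* Must be nonzero; a strcmp-style fold would stop at the embedded nul
     and claim the arrays are equal.  */
  return __builtin_memcmp (a, b, 3);
}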
gcc/ChangeLog:
PR middle-end/95189
PR middle-end/95886
* builtins.c (inline_expand_builtin_string_cmp): Rename...
(inline_expand_builtin_bytecmp): ...to this.
(builtin_memcpy_read_str): Don't expect data to be nul-terminated.
(expand_builtin_memory_copy_args): Handle object representations
with embedded nul bytes.
(expand_builtin_memcmp): Same.
(expand_builtin_strcmp): Adjust call to naming change.
(expand_builtin_strncmp): Same.
* expr.c (string_constant): Create empty strings with nonzero size.
* fold-const.c (c_getstr): Rename locals and update comments.
* tree.c (build_string): Accept null pointer argument.
(build_string_literal): Same.
* tree.h (build_string): Provide a default.
(build_string_literal): Same.
gcc/testsuite/ChangeLog:
PR middle-end/95189
PR middle-end/95886
* gcc.dg/memcmp-pr95189.c: New test.
* gcc.dg/strncmp-3.c: New test.
* gcc.target/i386/memcpy-pr95886.c: New test.
|
|
When working on __builtin_bit_cast, which needs to handle bitfields too,
I've made the following change to handle at least some bitfields in
native_encode_initializer (those that have an integral representative).
2020-07-20 Jakub Jelinek <jakub@redhat.com>
PR libstdc++/93121
* fold-const.c (native_encode_initializer): Handle bit-fields.
* gcc.dg/tree-ssa/pr93121-1.c: New test.
|
|
We folded A <= 0 ? A : -A into -ABS (A), which is incorrect for signed
integral types - it can invoke UB on INT_MIN twice, once on the ABS and once
on its negation.
The following patch fixes it by instead folding it to (type)-ABSU (A).
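Illustration, assuming 32-bit int and GCC's modulo semantics for the
unsigned-to-signed conversion (a hypothetical example, not the new test):
int f (int x)
{
  /* Source form; this used to be folded to -abs (x), which overflows twice
     for x == INT_MIN.  */
  return x <= 0 ? x : -x;
}

int g (int x)
{
  /* Shape of the new fold: compute the absolute value and its negation in
     the unsigned type, then convert back, so x == INT_MIN stays defined.  */
  return (int) -(unsigned) (x < 0 ? -(unsigned) x : (unsigned) x);
}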
2020-06-24 Jakub Jelinek <jakub@redhat.com>
PR middle-end/95810
* fold-const.c (fold_cond_expr_with_comparison): Optimize
A <= 0 ? A : -A into (type)-absu(A) rather than -abs(A).
* gcc.dg/ubsan/pr95810.c: New test.
|
|
This patch introduces a new builtin named __builtin_bswap128 on targets
where TImode is supported, i.e. 64-bit targets only in practice. The
implementation simply reuses the existing double word path in optab, so
no routine is added to libgcc (which means that you get two calls to
_bswapdi2 in the worst case).
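Usage is analogous to the existing bswap builtins (only on targets where
__int128 is available):
unsigned __int128 swap128 (unsigned __int128 x)
{
  /* Goes through the double-word optab path, so no libgcc call is needed;
     at worst it decomposes into two 64-bit byte swaps plus a word swap.  */
  return __builtin_bswap128 (x);
}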
gcc/ChangeLog:
* builtin-types.def (BT_UINT128): New primitive type.
(BT_FN_UINT128_UINT128): New function type.
* builtins.def (BUILT_IN_BSWAP128): New GCC builtin.
* doc/extend.texi (__builtin_bswap128): Document it.
* builtins.c (expand_builtin): Deal with BUILT_IN_BSWAP128.
(is_inexpensive_builtin): Likewise.
* fold-const-call.c (fold_const_call_ss): Likewise.
* fold-const.c (tree_call_nonnegative_warnv_p): Likewise.
* tree-ssa-ccp.c (evaluate_stmt): Likewise.
* tree-vect-stmts.c (vect_get_data_ptr_increment): Likewise.
(vectorizable_call): Likewise.
* optabs.c (expand_unop): Always use the double word path for it.
* tree-core.h (enum tree_index): Add TI_UINT128_TYPE.
* tree.h (uint128_type_node): New global type.
* tree.c (build_common_tree_nodes): Build it if TImode is supported.
gcc/testsuite/ChangeLog:
* gcc.dg/builtin-bswap-10.c: New test.
* gcc.dg/builtin-bswap-11.c: Likewise.
* gcc.dg/builtin-bswap-12.c: Likewise.
* gcc.target/i386/builtin-bswap-5.c: Likewise.
|
|
[PR94718]
This patch moves this optimization from fold-const.c to match.pd, where it
is actually much shorter to do and lets us optimize even code not seen together
in a single expression in the source, as the first step towards fixing the
PR.
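The transformation in question, as a user-level example:
int f (int x, int y)
{
  /* (x & 4) == (y & 4) becomes ((x ^ y) & 4) == 0; in match.pd this also
     triggers when the two masked tests come from separate statements.  */
  return (x & 4) == (y & 4);
}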
2020-05-04 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/94718
* fold-const.c (fold_binary_loc): Move (X & C) eqne (Y & C)
-> (X ^ Y) & C eqne 0 optimization to ...
* match.pd ((X & C) op (Y & C) into (X ^ Y) & C op 0): ... here.
* gcc.dg/tree-ssa/pr94718-1.c: New test.
* gcc.dg/tree-ssa/pr94718-2.c: New test.
|
|
The following testcase has been miscompiled since 4.9; we treat unsigned
vector types as if they were signed and "optimize" negations across them.
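An illustration of the class of bug (a hypothetical example, not the committed
testcase): for unsigned element types negations must not be cancelled across a
truncating division.
typedef unsigned int v4si __attribute__ ((vector_size (16)));

v4si f (v4si x, v4si y)
{
  /* Not equivalent to x / y for unsigned elements: with x = {1,...} and
     y = {2,...}, 0xffffffff / 0xfffffffe is 1, while 1 / 2 is 0.  */
  return (-x) / (-y);
}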
2020-03-31 Marc Glisse <marc.glisse@inria.fr>
Jakub Jelinek <jakub@redhat.com>
PR middle-end/94412
* fold-const.c (fold_binary_loc) <case TRUNC_DIV_EXPR>: Use
ANY_INTEGRAL_TYPE_P instead of INTEGRAL_TYPE_P.
* gcc.c-torture/execute/pr94412.c: New test.
Co-authored-by: Marc Glisse <marc.glisse@inria.fr>
|
|
2020-03-19 Richard Biener <rguenther@suse.de>
PR middle-end/94216
* fold-const.c (fold_binary_loc): Avoid using
build_fold_addr_expr when we really want an ADDR_EXPR.
* g++.dg/torture/pr94216.C: New testcase.
|
|
This adds a missing type conversion to build_fold_addr_expr and adjusts
fallout - build_fold_addr_expr was used as a convenience to build an
ADDR_EXPR but some callers do not expect the result to be simplified
to something else.
2020-03-18 Richard Biener <rguenther@suse.de>
PR middle-end/94188
* fold-const.c (build_fold_addr_expr): Convert address to
correct type.
* asan.c (maybe_create_ssa_name): Strip useless type conversions.
* gimple-fold.c (gimple_fold_stmt_to_constant_1): Use build1
to build the ADDR_EXPR which we don't really want to simplify.
* tree-ssa-dom.c (record_equivalences_from_stmt): Likewise.
* tree-ssa-loop-im.c (gather_mem_refs_stmt): Likewise.
* tree-ssa-forwprop.c (forward_propagate_addr_expr_1): Likewise.
(simplify_builtin_call): Strip useless type conversions.
* tree-ssa-strlen.c (new_strinfo): Likewise.
* gcc.dg/pr94188.c: New testcase.
|
|
The following patch is a first step towards fixing PR93582.
vn_reference_lookup_3 right now punts on anything that isn't byte aligned,
so to be able to look up a constant bitfield store, one needs to use
the exact same COMPONENT_REF, otherwise it isn't found.
This patch lifts that restriction if the bits to be loaded are
covered by a single store of a constant (it keeps the restriction so far
for the multiple store case, which can be tweaked incrementally, but I think
for bisection etc. it is worth doing it one step at a time).
2020-02-13 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/93582
* fold-const.h (shift_bytes_in_array_left,
shift_bytes_in_array_right): Declare.
* fold-const.c (shift_bytes_in_array_left,
shift_bytes_in_array_right): New function, moved from
gimple-ssa-store-merging.c, no longer static.
* gimple-ssa-store-merging.c (shift_bytes_in_array): Move
to fold-const.c and rename to shift_bytes_in_array_left.
(shift_bytes_in_array_right): Move to fold-const.c.
(encode_tree_to_bitpos): Use shift_bytes_in_array_left instead of
shift_bytes_in_array.
(verify_shift_bytes_in_array): Rename to ...
(verify_shift_bytes_in_array_left): ... this. Use
shift_bytes_in_array_left instead of shift_bytes_in_array.
(store_merging_c_tests): Call verify_shift_bytes_in_array_left
instead of verify_shift_bytes_in_array.
* tree-ssa-sccvn.c (vn_reference_lookup_3): For native_encode_expr
/ native_interpret_expr where the store covers all needed bits,
punt on PDP-endian, otherwise allow all involved offsets and sizes
not to be byte-aligned.
* gcc.dg/tree-ssa/pr93582-1.c: New test.
* gcc.dg/tree-ssa/pr93582-2.c: New test.
* gcc.dg/tree-ssa/pr93582-3.c: New test.
|
|
struct/compound constexpr (gcc vs. clang))
PR tree-optimization/93210
* fold-const.h (native_encode_initializer,
can_native_interpret_type_p): Declare.
* fold-const.c (native_encode_string): Fix up handling with off != -1,
simplify.
(native_encode_initializer): New function, moved from dwarf2out.c.
Adjust to native_encode_expr compatible arguments, including dry-run
and partial extraction modes. Don't handle STRING_CST.
(can_native_interpret_type_p): No longer static.
* gimple-fold.c (fold_ctor_reference): For native_encode_expr, verify
offset / BITS_PER_UNIT fits into int and don't call it if
can_native_interpret_type_p fails. If suboff is NULL and for
CONSTRUCTOR fold_{,non}array_ctor_reference returns NULL, retry with
native_encode_initializer.
(fold_const_aggregate_ref_1): Formatting fix.
* dwarf2out.c (native_encode_initializer): Moved to fold-const.c.
(tree_add_const_value_attribute): Adjust caller.
* gcc.dg/pr93210.c: New test.
* g++.dg/opt/pr93210.C: New test.
From-SVN: r280141
|
|
From-SVN: r279813
|
|
Compiling this testcase results in a bogus "invalid cast" error; this occurs
since the introduction of location wrappers in finish_id_expression.
Here we are parsing the decltype expression via cp_parser_decltype_expr which
can lead to calling various fold_* and c-family routines. They use
non_lvalue_loc, but that won't create a NON_LVALUE_EXPR wrapper around a location
wrapper.
So before the location wrappers addition cp_parser_decltype_expr would return
NON_LVALUE_EXPR <c>. Now it returns VIEW_CONVERT_EXPR<float *>(c), but the
STRIP_ANY_LOCATION_WRAPPER immediately following it strips the location wrapper,
and suddenly we don't know whether we have an lvalue anymore. And that's sad
because then decltype produces the wrong type, causing nonsense errors.
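Roughly the situation (illustrative; the float * and c come from the
VIEW_CONVERT_EXPR mentioned above, the rest is made up):
// decltype of a parenthesized lvalue must yield an lvalue reference type.
// Once the lvalueness is lost we get float * instead of float *&, and the
// subsequent use of the wrong type leads to the bogus "invalid cast" error.
float *c;
decltype ((c)) d = c;   // d must have type float *&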
* fold-const.c (maybe_lvalue_p): Handle VIEW_CONVERT_EXPR.
* g++.dg/cpp0x/decltype73.C: New test.
From-SVN: r279077
|
|
This PR shows that we weren't checking for bitwise-identical values
when trying to encode a VECTOR_CST, so -0.0 was treated the same as
0.0 for -fno-signed-zeros. The patch adds a new OEP flag to select
that behaviour.
2019-12-05 Richard Sandiford <richard.sandiford@arm.com>
gcc/
PR middle-end/92768
* tree-core.h (OEP_BITWISE): New flag.
* fold-const.c (operand_compare::operand_equal_p): Handle it.
* tree-vector-builder.h (tree_vector_builder::equal_p): Pass it.
gcc/testsuite/
PR middle-end/92768
* gcc.dg/pr92768.c: New test.
From-SVN: r279002
|
|
In r278410 I added code to handle VIEW_CONVERT_EXPRs between
variable-length vectors. This included support for decoding
a VECTOR_BOOLEAN_TYPE_P with subbyte elements.
However, it turns out that we were already mishandling such bool vectors
for fixed-length vectors: we treated each element as a stand-alone byte
instead of putting multiple elements into the same byte. I think in
principle this could have been an issue for AVX512 as well.
This patch adds encoding support for boolean vectors and reuses
a version of the new decode support for fixed-length vectors.
2019-12-04 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* fold-const.c (native_encode_vector_part): Handle
VECTOR_BOOLEAN_TYPE_Ps that have subbyte precision.
(native_decode_vector_tree): Delete, moving the bulk of the code to...
(native_interpret_vector_part): ...this new function. Use a pointer
and length instead of a vec<> and start index.
(native_interpret_vector): Use native_interpret_vector_part.
(fold_view_convert_vector_encoding): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/whilelt_5.c: New test.
From-SVN: r278964
|
|
In this PR, IPA-CP was misled into using NOP_EXPR rather than
VIEW_CONVERT_EXPR to reinterpret a vector of 4 shorts as a vector
of 2 ints. This tripped the tree-cfg.c assert I'd added in r278245.
2019-12-02 Richard Sandiford <richard.sandiford@arm.com>
gcc/
PR middle-end/92741
* fold-const.c (fold_convertible_p): Check vector types more
thoroughly.
gcc/testsuite/
PR middle-end/92741
* gcc.dg/pr92741.c: New test.
From-SVN: r278910
|
|
This patch handles VIEW_CONVERT_EXPRs of variable-length VECTOR_CSTs
by adding tree-level versions of native_decode_vector_rtx and
simplify_const_vector_subreg. It uses the same code for fixed-length
vectors, both to get more coverage and because operating directly on
the compressed encoding should be more efficient for longer vectors
with a regular pattern.
The structure and comments are very similar between the tree and
rtx routines.
2019-11-18 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* fold-const.c (native_encode_vector): Turn into a wrapper function,
splitting the main code out into...
(native_encode_vector_part): ...this new function.
(native_decode_vector_tree): New function.
(fold_view_convert_vector_encoding): Likewise.
(fold_view_convert_expr): Use it for converting VECTOR_CSTs
to VECTOR_TYPEs.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/temporaries_1.c: New test.
From-SVN: r278410
|
|
2019-11-12 Martin Liska <mliska@suse.cz>
* Makefile.in: Remove PARAMS_H and params.list
and params.options.
* params-enum.h: Remove.
* params-list.h: Remove.
* params-options.h: Remove.
* params.c: Remove.
* params.def: Remove.
* params.h: Remove.
* asan.c: Do not include params.h.
* auto-profile.c: Likewise.
* bb-reorder.c: Likewise.
* builtins.c: Likewise.
* cfgcleanup.c: Likewise.
* cfgexpand.c: Likewise.
* cfgloopanal.c: Likewise.
* cgraph.c: Likewise.
* combine.c: Likewise.
* common/config/aarch64/aarch64-common.c: Likewise.
* common/config/gcn/gcn-common.c: Likewise.
* common/config/ia64/ia64-common.c: Likewise.
* common/config/powerpcspe/powerpcspe-common.c: Likewise.
* common/config/rs6000/rs6000-common.c: Likewise.
* common/config/sh/sh-common.c: Likewise.
* config/aarch64/aarch64.c: Likewise.
* config/alpha/alpha.c: Likewise.
* config/arm/arm.c: Likewise.
* config/avr/avr.c: Likewise.
* config/csky/csky.c: Likewise.
* config/i386/i386-builtins.c: Likewise.
* config/i386/i386-expand.c: Likewise.
* config/i386/i386-features.c: Likewise.
* config/i386/i386-options.c: Likewise.
* config/i386/i386.c: Likewise.
* config/ia64/ia64.c: Likewise.
* config/rs6000/rs6000-logue.c: Likewise.
* config/rs6000/rs6000.c: Likewise.
* config/s390/s390.c: Likewise.
* config/sparc/sparc.c: Likewise.
* config/visium/visium.c: Likewise.
* coverage.c: Likewise.
* cprop.c: Likewise.
* cse.c: Likewise.
* cselib.c: Likewise.
* dse.c: Likewise.
* emit-rtl.c: Likewise.
* explow.c: Likewise.
* final.c: Likewise.
* fold-const.c: Likewise.
* gcc.c: Likewise.
* gcse.c: Likewise.
* ggc-common.c: Likewise.
* ggc-page.c: Likewise.
* gimple-loop-interchange.cc: Likewise.
* gimple-loop-jam.c: Likewise.
* gimple-loop-versioning.cc: Likewise.
* gimple-ssa-split-paths.c: Likewise.
* gimple-ssa-sprintf.c: Likewise.
* gimple-ssa-store-merging.c: Likewise.
* gimple-ssa-strength-reduction.c: Likewise.
* gimple-ssa-warn-alloca.c: Likewise.
* gimple-ssa-warn-restrict.c: Likewise.
* graphite-isl-ast-to-gimple.c: Likewise.
* graphite-optimize-isl.c: Likewise.
* graphite-scop-detection.c: Likewise.
* graphite-sese-to-poly.c: Likewise.
* graphite.c: Likewise.
* haifa-sched.c: Likewise.
* hsa-gen.c: Likewise.
* ifcvt.c: Likewise.
* ipa-cp.c: Likewise.
* ipa-fnsummary.c: Likewise.
* ipa-inline-analysis.c: Likewise.
* ipa-inline.c: Likewise.
* ipa-polymorphic-call.c: Likewise.
* ipa-profile.c: Likewise.
* ipa-prop.c: Likewise.
* ipa-split.c: Likewise.
* ipa-sra.c: Likewise.
* ira-build.c: Likewise.
* ira-conflicts.c: Likewise.
* loop-doloop.c: Likewise.
* loop-invariant.c: Likewise.
* loop-unroll.c: Likewise.
* lra-assigns.c: Likewise.
* lra-constraints.c: Likewise.
* modulo-sched.c: Likewise.
* opt-suggestions.c: Likewise.
* opts.c: Likewise.
* postreload-gcse.c: Likewise.
* predict.c: Likewise.
* reload.c: Likewise.
* reorg.c: Likewise.
* resource.c: Likewise.
* sanopt.c: Likewise.
* sched-deps.c: Likewise.
* sched-ebb.c: Likewise.
* sched-rgn.c: Likewise.
* sel-sched-ir.c: Likewise.
* sel-sched.c: Likewise.
* shrink-wrap.c: Likewise.
* stmt.c: Likewise.
* targhooks.c: Likewise.
* toplev.c: Likewise.
* tracer.c: Likewise.
* trans-mem.c: Likewise.
* tree-chrec.c: Likewise.
* tree-data-ref.c: Likewise.
* tree-if-conv.c: Likewise.
* tree-inline.c: Likewise.
* tree-loop-distribution.c: Likewise.
* tree-parloops.c: Likewise.
* tree-predcom.c: Likewise.
* tree-profile.c: Likewise.
* tree-scalar-evolution.c: Likewise.
* tree-sra.c: Likewise.
* tree-ssa-ccp.c: Likewise.
* tree-ssa-dom.c: Likewise.
* tree-ssa-dse.c: Likewise.
* tree-ssa-ifcombine.c: Likewise.
* tree-ssa-loop-ch.c: Likewise.
* tree-ssa-loop-im.c: Likewise.
* tree-ssa-loop-ivcanon.c: Likewise.
* tree-ssa-loop-ivopts.c: Likewise.
* tree-ssa-loop-manip.c: Likewise.
* tree-ssa-loop-niter.c: Likewise.
* tree-ssa-loop-prefetch.c: Likewise.
* tree-ssa-loop-unswitch.c: Likewise.
* tree-ssa-math-opts.c: Likewise.
* tree-ssa-phiopt.c: Likewise.
* tree-ssa-pre.c: Likewise.
* tree-ssa-reassoc.c: Likewise.
* tree-ssa-sccvn.c: Likewise.
* tree-ssa-scopedtables.c: Likewise.
* tree-ssa-sink.c: Likewise.
* tree-ssa-strlen.c: Likewise.
* tree-ssa-structalias.c: Likewise.
* tree-ssa-tail-merge.c: Likewise.
* tree-ssa-threadbackward.c: Likewise.
* tree-ssa-threadedge.c: Likewise.
* tree-ssa-uninit.c: Likewise.
* tree-switch-conversion.c: Likewise.
* tree-vect-data-refs.c: Likewise.
* tree-vect-loop.c: Likewise.
* tree-vect-slp.c: Likewise.
* tree-vrp.c: Likewise.
* tree.c: Likewise.
* value-prof.c: Likewise.
* var-tracking.c: Likewise.
2019-11-12 Martin Liska <mliska@suse.cz>
* gimple-parser.c: Do not include params.h.
2019-11-12 Martin Liska <mliska@suse.cz>
* name-lookup.c: Do not include params.h.
* typeck.c: Likewise.
2019-11-12 Martin Liska <mliska@suse.cz>
* lto-common.c: Do not include params.h.
* lto-partition.c: Likewise.
* lto.c: Likewise.
From-SVN: r278086
|
|
2019-11-12 Martin Liska <mliska@suse.cz>
* asan.c (asan_sanitize_stack_p): Replace old parameter syntax
with the new one, include opts.h if needed. Use SET_OPTION_IF_UNSET
macro.
(asan_sanitize_allocas_p): Likewise.
(asan_emit_stack_protection): Likewise.
(asan_protect_global): Likewise.
(instrument_derefs): Likewise.
(instrument_builtin_call): Likewise.
(asan_expand_mark_ifn): Likewise.
* auto-profile.c (auto_profile): Likewise.
* bb-reorder.c (copy_bb_p): Likewise.
(duplicate_computed_gotos): Likewise.
* builtins.c (inline_expand_builtin_string_cmp): Likewise.
* cfgcleanup.c (try_crossjump_to_edge): Likewise.
(try_crossjump_bb): Likewise.
* cfgexpand.c (defer_stack_allocation): Likewise.
(stack_protect_classify_type): Likewise.
(pass_expand::execute): Likewise.
* cfgloopanal.c (expected_loop_iterations_unbounded): Likewise.
(estimate_reg_pressure_cost): Likewise.
* cgraph.c (cgraph_edge::maybe_hot_p): Likewise.
* combine.c (combine_instructions): Likewise.
(record_value_for_reg): Likewise.
* common/config/aarch64/aarch64-common.c (aarch64_option_validate_param): Likewise.
(aarch64_option_default_params): Likewise.
* common/config/ia64/ia64-common.c (ia64_option_default_params): Likewise.
* common/config/powerpcspe/powerpcspe-common.c (rs6000_option_default_params): Likewise.
* common/config/rs6000/rs6000-common.c (rs6000_option_default_params): Likewise.
* common/config/sh/sh-common.c (sh_option_default_params): Likewise.
* config/aarch64/aarch64.c (aarch64_output_probe_stack_range): Likewise.
(aarch64_allocate_and_probe_stack_space): Likewise.
(aarch64_expand_epilogue): Likewise.
(aarch64_override_options_internal): Likewise.
* config/alpha/alpha.c (alpha_option_override): Likewise.
* config/arm/arm.c (arm_option_override): Likewise.
(arm_valid_target_attribute_p): Likewise.
* config/i386/i386-options.c (ix86_option_override_internal): Likewise.
* config/i386/i386.c (get_probe_interval): Likewise.
(ix86_adjust_stack_and_probe_stack_clash): Likewise.
(ix86_max_noce_ifcvt_seq_cost): Likewise.
* config/ia64/ia64.c (ia64_adjust_cost): Likewise.
* config/rs6000/rs6000-logue.c (get_stack_clash_protection_probe_interval): Likewise.
(get_stack_clash_protection_guard_size): Likewise.
* config/rs6000/rs6000.c (rs6000_option_override_internal): Likewise.
* config/s390/s390.c (allocate_stack_space): Likewise.
(s390_emit_prologue): Likewise.
(s390_option_override_internal): Likewise.
* config/sparc/sparc.c (sparc_option_override): Likewise.
* config/visium/visium.c (visium_option_override): Likewise.
* coverage.c (get_coverage_counts): Likewise.
(coverage_compute_profile_id): Likewise.
(coverage_begin_function): Likewise.
(coverage_end_function): Likewise.
* cse.c (cse_find_path): Likewise.
(cse_extended_basic_block): Likewise.
(cse_main): Likewise.
* cselib.c (cselib_invalidate_mem): Likewise.
* dse.c (dse_step1): Likewise.
* emit-rtl.c (set_new_first_and_last_insn): Likewise.
(get_max_insn_count): Likewise.
(make_debug_insn_raw): Likewise.
(init_emit): Likewise.
* explow.c (compute_stack_clash_protection_loop_data): Likewise.
* final.c (compute_alignments): Likewise.
* fold-const.c (fold_range_test): Likewise.
(fold_truth_andor): Likewise.
(tree_single_nonnegative_warnv_p): Likewise.
(integer_valued_real_single_p): Likewise.
* gcse.c (want_to_gcse_p): Likewise.
(prune_insertions_deletions): Likewise.
(hoist_code): Likewise.
(gcse_or_cprop_is_too_expensive): Likewise.
* ggc-common.c: Likewise.
* ggc-page.c (ggc_collect): Likewise.
* gimple-loop-interchange.cc (MAX_NUM_STMT): Likewise.
(MAX_DATAREFS): Likewise.
(OUTER_STRIDE_RATIO): Likewise.
* gimple-loop-jam.c (tree_loop_unroll_and_jam): Likewise.
* gimple-loop-versioning.cc (loop_versioning::max_insns_for_loop): Likewise.
* gimple-ssa-split-paths.c (is_feasible_trace): Likewise.
* gimple-ssa-store-merging.c (imm_store_chain_info::try_coalesce_bswap): Likewise.
(imm_store_chain_info::coalesce_immediate_stores): Likewise.
(imm_store_chain_info::output_merged_store): Likewise.
(pass_store_merging::process_store): Likewise.
* gimple-ssa-strength-reduction.c (find_basis_for_base_expr): Likewise.
* graphite-isl-ast-to-gimple.c (class translate_isl_ast_to_gimple): Likewise.
(scop_to_isl_ast): Likewise.
* graphite-optimize-isl.c (get_schedule_for_node_st): Likewise.
(optimize_isl): Likewise.
* graphite-scop-detection.c (build_scops): Likewise.
* haifa-sched.c (set_modulo_params): Likewise.
(rank_for_schedule): Likewise.
(model_add_to_worklist): Likewise.
(model_promote_insn): Likewise.
(model_choose_insn): Likewise.
(queue_to_ready): Likewise.
(autopref_multipass_dfa_lookahead_guard): Likewise.
(schedule_block): Likewise.
(sched_init): Likewise.
* hsa-gen.c (init_prologue): Likewise.
* ifcvt.c (bb_ok_for_noce_convert_multiple_sets): Likewise.
(cond_move_process_if_block): Likewise.
* ipa-cp.c (ipcp_lattice::add_value): Likewise.
(merge_agg_lats_step): Likewise.
(devirtualization_time_bonus): Likewise.
(hint_time_bonus): Likewise.
(incorporate_penalties): Likewise.
(good_cloning_opportunity_p): Likewise.
(ipcp_propagate_stage): Likewise.
* ipa-fnsummary.c (decompose_param_expr): Likewise.
(set_switch_stmt_execution_predicate): Likewise.
(analyze_function_body): Likewise.
(compute_fn_summary): Likewise.
* ipa-inline-analysis.c (estimate_growth): Likewise.
* ipa-inline.c (caller_growth_limits): Likewise.
(inline_insns_single): Likewise.
(inline_insns_auto): Likewise.
(can_inline_edge_by_limits_p): Likewise.
(want_early_inline_function_p): Likewise.
(big_speedup_p): Likewise.
(want_inline_small_function_p): Likewise.
(want_inline_self_recursive_call_p): Likewise.
(edge_badness): Likewise.
(recursive_inlining): Likewise.
(compute_max_insns): Likewise.
(early_inliner): Likewise.
* ipa-polymorphic-call.c (csftc_abort_walking_p): Likewise.
* ipa-profile.c (ipa_profile): Likewise.
* ipa-prop.c (determine_known_aggregate_parts): Likewise.
(ipa_analyze_node): Likewise.
(ipcp_transform_function): Likewise.
* ipa-split.c (consider_split): Likewise.
* ipa-sra.c (allocate_access): Likewise.
(process_scan_results): Likewise.
(ipa_sra_summarize_function): Likewise.
(pull_accesses_from_callee): Likewise.
* ira-build.c (loop_compare_func): Likewise.
(mark_loops_for_removal): Likewise.
* ira-conflicts.c (build_conflict_bit_table): Likewise.
* loop-doloop.c (doloop_optimize): Likewise.
* loop-invariant.c (gain_for_invariant): Likewise.
(move_loop_invariants): Likewise.
* loop-unroll.c (decide_unroll_constant_iterations): Likewise.
(decide_unroll_runtime_iterations): Likewise.
(decide_unroll_stupid): Likewise.
(expand_var_during_unrolling): Likewise.
* lra-assigns.c (spill_for): Likewise.
* lra-constraints.c (EBB_PROBABILITY_CUTOFF): Likewise.
* modulo-sched.c (sms_schedule): Likewise.
(DFA_HISTORY): Likewise.
* opts.c (default_options_optimization): Likewise.
(finish_options): Likewise.
(common_handle_option): Likewise.
* postreload-gcse.c (eliminate_partially_redundant_load): Likewise.
(if): Likewise.
* predict.c (get_hot_bb_threshold): Likewise.
(maybe_hot_count_p): Likewise.
(probably_never_executed): Likewise.
(predictable_edge_p): Likewise.
(predict_loops): Likewise.
(expr_expected_value_1): Likewise.
(tree_predict_by_opcode): Likewise.
(handle_missing_profiles): Likewise.
* reload.c (find_equiv_reg): Likewise.
* reorg.c (redundant_insn): Likewise.
* resource.c (mark_target_live_regs): Likewise.
(incr_ticks_for_insn): Likewise.
* sanopt.c (pass_sanopt::execute): Likewise.
* sched-deps.c (sched_analyze_1): Likewise.
(sched_analyze_2): Likewise.
(sched_analyze_insn): Likewise.
(deps_analyze_insn): Likewise.
* sched-ebb.c (schedule_ebbs): Likewise.
* sched-rgn.c (find_single_block_region): Likewise.
(too_large): Likewise.
(haifa_find_rgns): Likewise.
(extend_rgns): Likewise.
(new_ready): Likewise.
(schedule_region): Likewise.
(sched_rgn_init): Likewise.
* sel-sched-ir.c (make_region_from_loop): Likewise.
* sel-sched-ir.h (MAX_WS): Likewise.
* sel-sched.c (process_pipelined_exprs): Likewise.
(sel_setup_region_sched_flags): Likewise.
* shrink-wrap.c (try_shrink_wrapping): Likewise.
* targhooks.c (default_max_noce_ifcvt_seq_cost): Likewise.
* toplev.c (print_version): Likewise.
(process_options): Likewise.
* tracer.c (tail_duplicate): Likewise.
* trans-mem.c (tm_log_add): Likewise.
* tree-chrec.c (chrec_fold_plus_1): Likewise.
* tree-data-ref.c (split_constant_offset): Likewise.
(compute_all_dependences): Likewise.
* tree-if-conv.c (MAX_PHI_ARG_NUM): Likewise.
* tree-inline.c (remap_gimple_stmt): Likewise.
* tree-loop-distribution.c (MAX_DATAREFS_NUM): Likewise.
* tree-parloops.c (MIN_PER_THREAD): Likewise.
(create_parallel_loop): Likewise.
* tree-predcom.c (determine_unroll_factor): Likewise.
* tree-scalar-evolution.c (instantiate_scev_r): Likewise.
* tree-sra.c (analyze_all_variable_accesses): Likewise.
* tree-ssa-ccp.c (fold_builtin_alloca_with_align): Likewise.
* tree-ssa-dse.c (setup_live_bytes_from_ref): Likewise.
(dse_optimize_redundant_stores): Likewise.
(dse_classify_store): Likewise.
* tree-ssa-ifcombine.c (ifcombine_ifandif): Likewise.
* tree-ssa-loop-ch.c (ch_base::copy_headers): Likewise.
* tree-ssa-loop-im.c (LIM_EXPENSIVE): Likewise.
* tree-ssa-loop-ivcanon.c (try_unroll_loop_completely): Likewise.
(try_peel_loop): Likewise.
(tree_unroll_loops_completely): Likewise.
* tree-ssa-loop-ivopts.c (avg_loop_niter): Likewise.
(CONSIDER_ALL_CANDIDATES_BOUND): Likewise.
(MAX_CONSIDERED_GROUPS): Likewise.
(ALWAYS_PRUNE_CAND_SET_BOUND): Likewise.
* tree-ssa-loop-manip.c (can_unroll_loop_p): Likewise.
* tree-ssa-loop-niter.c (MAX_ITERATIONS_TO_TRACK): Likewise.
* tree-ssa-loop-prefetch.c (PREFETCH_BLOCK): Likewise.
(L1_CACHE_SIZE_BYTES): Likewise.
(L2_CACHE_SIZE_BYTES): Likewise.
(should_issue_prefetch_p): Likewise.
(schedule_prefetches): Likewise.
(determine_unroll_factor): Likewise.
(volume_of_references): Likewise.
(add_subscript_strides): Likewise.
(self_reuse_distance): Likewise.
(mem_ref_count_reasonable_p): Likewise.
(insn_to_prefetch_ratio_too_small_p): Likewise.
(loop_prefetch_arrays): Likewise.
(tree_ssa_prefetch_arrays): Likewise.
* tree-ssa-loop-unswitch.c (tree_unswitch_single_loop): Likewise.
* tree-ssa-math-opts.c (gimple_expand_builtin_pow): Likewise.
(convert_mult_to_fma): Likewise.
(math_opts_dom_walker::after_dom_children): Likewise.
* tree-ssa-phiopt.c (cond_if_else_store_replacement): Likewise.
(hoist_adjacent_loads): Likewise.
(gate_hoist_loads): Likewise.
* tree-ssa-pre.c (translate_vuse_through_block): Likewise.
(compute_partial_antic_aux): Likewise.
* tree-ssa-reassoc.c (get_reassociation_width): Likewise.
* tree-ssa-sccvn.c (vn_reference_lookup_pieces): Likewise.
(vn_reference_lookup): Likewise.
(do_rpo_vn): Likewise.
* tree-ssa-scopedtables.c (avail_exprs_stack::lookup_avail_expr): Likewise.
* tree-ssa-sink.c (select_best_block): Likewise.
* tree-ssa-strlen.c (new_stridx): Likewise.
(new_addr_stridx): Likewise.
(get_range_strlen_dynamic): Likewise.
(class ssa_name_limit_t): Likewise.
* tree-ssa-structalias.c (push_fields_onto_fieldstack): Likewise.
(create_variable_info_for_1): Likewise.
(init_alias_vars): Likewise.
* tree-ssa-tail-merge.c (find_clusters_1): Likewise.
(tail_merge_optimize): Likewise.
* tree-ssa-threadbackward.c (thread_jumps::profitable_jump_thread_path): Likewise.
(thread_jumps::fsm_find_control_statement_thread_paths): Likewise.
(thread_jumps::find_jump_threads_backwards): Likewise.
* tree-ssa-threadedge.c (record_temporary_equivalences_from_stmts_at_dest): Likewise.
* tree-ssa-uninit.c (compute_control_dep_chain): Likewise.
* tree-switch-conversion.c (switch_conversion::check_range): Likewise.
(jump_table_cluster::can_be_handled): Likewise.
* tree-switch-conversion.h (jump_table_cluster::case_values_threshold): Likewise.
(SWITCH_CONVERSION_BRANCH_RATIO): Likewise.
(param_switch_conversion_branch_ratio): Likewise.
* tree-vect-data-refs.c (vect_mark_for_runtime_alias_test): Likewise.
(vect_enhance_data_refs_alignment): Likewise.
(vect_prune_runtime_alias_test_list): Likewise.
* tree-vect-loop.c (vect_analyze_loop_costing): Likewise.
(vect_get_datarefs_in_loop): Likewise.
(vect_analyze_loop): Likewise.
* tree-vect-slp.c (vect_slp_bb): Likewise.
* tree-vectorizer.h: Likewise.
* tree-vrp.c (find_switch_asserts): Likewise.
(vrp_prop::check_mem_ref): Likewise.
* tree.c (wide_int_to_tree_1): Likewise.
(cache_integer_cst): Likewise.
* var-tracking.c (EXPR_USE_DEPTH): Likewise.
(reverse_op): Likewise.
(vt_find_locations): Likewise.
2019-11-12 Martin Liska <mliska@suse.cz>
* gimple-parser.c (c_parser_parse_gimple_body): Replace old parameter syntax
with the new one, include opts.h if needed. Use SET_OPTION_IF_UNSET
macro.
2019-11-12 Martin Liska <mliska@suse.cz>
* name-lookup.c (namespace_hints::namespace_hints): Replace old parameter syntax
with the new one, include opts.h if needed. Use SET_OPTION_IF_UNSET
macro.
* typeck.c (comptypes): Likewise.
2019-11-12 Martin Liska <mliska@suse.cz>
* lto-partition.c (lto_balanced_map): Replace old parameter syntax
with the new one, include opts.h if needed. Use SET_OPTION_IF_UNSET
macro.
* lto.c (do_whole_program_analysis): Likewise.
From-SVN: r278085
|
|
2019-11-07 Martin Liska <mliska@suse.cz>
* fold-const.c (operand_compare::operand_equal_p): Add comparison
of CONSTRUCTOR_NO_CLEARING.
(operand_compare::hash_operand): Likewise.
From-SVN: r277912
|
|
2019-11-05 Martin Liska <mliska@suse.cz>
PR c++/92339
* fold-const.c (operand_compare::hash_operand): Remove
FIELD_DECL handling.
2019-11-05 Martin Liska <mliska@suse.cz>
PR c++/92339
* g++.dg/pr92339.C: New test.
From-SVN: r277816
|
|
2019-11-04 Martin Liska <mliska@suse.cz>
PR ipa/92304
* fold-const.c (operand_compare::hash_operand): Fix field
hashing of CONSTRUCTOR.
From-SVN: r277768
|
|
2019-10-30 Martin Liska <mliska@suse.cz>
* fold-const.c (operand_equal_p): Move to ...
(operand_compare::operand_equal_p): ... here.
(operand_compare::verify_hash_value): New.
(add_expr): Move to ...
(operand_compare::hash_operand): ... here.
* fold-const.h (operand_equal_p): Move to the class.
(class operand_compare): New.
* tree.c (add_expr): Remove.
From-SVN: r277614
|
|
2019-10-30 Martin Liska <mliska@suse.cz>
* fold-const.c (operand_equal_p): Support OBJ_TYPE_REF.
* tree.c (add_expr): Hash parts of OBJ_TYPE_REF.
From-SVN: r277612
|
|
when compiling Python's Python/_warnings.c)
PR middle-end/92063
* tree-eh.c (operation_could_trap_helper_p) <case COND_EXPR>
<case VEC_COND_EXPR>: Return false with *handled = false.
(tree_could_trap_p): For {,VEC_}COND_EXPR return false instead of
recursing on the first operand.
* fold-const.c (simple_operand_p_2): Use generic_expr_could_trap_p
instead of tree_could_trap_p.
* tree-ssa-sccvn.c (vn_nary_may_trap): Formatting fixes.
* gcc.c-torture/compile/pr92063.c: New test.
From-SVN: r276915
|
|
PR go/91617
* fold-const.c (range_check_type): For enumeral and boolean
type, pass 1 to type_for_size langhook instead of
TYPE_UNSIGNED (etype). Return unsigned_type_for result whenever
etype isn't TYPE_UNSIGNED INTEGER_TYPE.
(build_range_check): Don't call unsigned_type_for for pointer types.
* match.pd (X / C1 op C2): Don't call unsigned_type_for on
range_check_type result.
From-SVN: r275299
|
|
2019-08-26 Tejas Joshi <tejasjoshi9673@gmail.com>
* builtins.c (mathfn_built_in_2): Added CASE_MATHFN_FLOATN
for ROUNDEVEN.
* builtins.def: Added function definitions for roundeven function
variants.
* fold-const-call.c (fold_const_call_ss): Added case for roundeven
function call. Adjust condition for floor, ceil, trunc and round.
* fold-const.c (negate_mathfn_p): Added case for roundeven function.
(tree_call_nonnegative_warnv_p): Added case for roundeven function.
(integer_valued_real_call_p): Added case for roundeven function.
* real.c (is_even): New function. Returns true if real number is even,
otherwise returns false.
(is_halfway_below): New function. Returns true if real number is
halfway between two integers, else return false.
(real_roundeven): New function. Round real number to nearest integer,
rounding halfway cases towards even.
* real.h (real_value): Added descriptive comments. Added function
declaration for roundeven function.
* doc/extend.texi (Other Builtins): List roundeven variants among
functions which can be handled as builtins.
gcc/testsuite/ChangeLog:
2019-08-26 Tejas Joshi <tejasjoshi9673@gmail.com>
* gcc.dg/torture/builtin-round-roundeven.c: New test.
* gcc.dg/torture/builtin-round-roundevenf128.c: New test.
From-SVN: r274927
|
|
We were shoe-horning all built-in enumerations (including frontend
and target-specific ones) into a field of type built_in_function. This
was accessed as either an lvalue or an rvalue using DECL_FUNCTION_CODE.
The obvious danger with this (as was noted by several ??? comments)
is that the ranges have nothing to do with each other, and targets can
easily have more built-in functions than generic code. But my patch to
make the field bigger was the straw that finally made the problem visible.
This patch therefore:
- replaces the field with a plain unsigned int
- turns DECL_FUNCTION_CODE into an rvalue-only accessor that checks
that the function really is BUILT_IN_NORMAL
- adds corresponding DECL_MD_FUNCTION_CODE and DECL_FE_FUNCTION_CODE
accessors for BUILT_IN_MD and BUILT_IN_FRONTEND respectively
- adds DECL_UNCHECKED_FUNCTION_CODE for places that need to access the
underlying field (should be low-level code only)
- adds new helpers for setting the built-in class and function code
- makes DECL_BUILT_IN_CLASS an rvalue-only accessor too, since all
assignments should go through the new helpers
2019-08-13 Richard Sandiford <richard.sandiford@arm.com>
gcc/
PR middle-end/91421
* tree-core.h (function_decl::function_code): Change type to
unsigned int.
* tree.h (DECL_FUNCTION_CODE): Rename old definition to...
(DECL_UNCHECKED_FUNCTION_CODE): ...this.
(DECL_BUILT_IN_CLASS): Make an rvalue macro only.
(DECL_FUNCTION_CODE): New function. Assert that the built-in class
is BUILT_IN_NORMAL.
(DECL_MD_FUNCTION_CODE, DECL_FE_FUNCTION_CODE): New functions.
(set_decl_built_in_function, copy_decl_built_in_function): Likewise.
(fndecl_built_in_p): Change the type of the "name" argument to
unsigned int.
* builtins.c (expand_builtin): Move DECL_FUNCTION_CODE use
after check for DECL_BUILT_IN_CLASS.
* cgraphclones.c (build_function_decl_skip_args): Use
set_decl_built_in_function.
* ipa-param-manipulation.c (ipa_modify_formal_parameters): Likewise.
* ipa-split.c (split_function): Likewise.
* langhooks.c (add_builtin_function_common): Likewise.
* omp-simd-clone.c (simd_clone_create): Likewise.
* tree-streamer-in.c (unpack_ts_function_decl_value_fields): Likewise.
* config/darwin.c (darwin_init_cfstring_builtins): Likewise.
(darwin_fold_builtin): Use DECL_MD_FUNCTION_CODE instead of
DECL_FUNCTION_CODE.
* fold-const.c (operand_equal_p): Compare DECL_UNCHECKED_FUNCTION_CODE
instead of DECL_FUNCTION_CODE.
* lto-streamer-out.c (hash_tree): Use DECL_UNCHECKED_FUNCTION_CODE
instead of DECL_FUNCTION_CODE.
* tree-streamer-out.c (pack_ts_function_decl_value_fields): Likewise.
* print-tree.c (print_node): Use DECL_MD_FUNCTION_CODE when
printing BUILT_IN_MD built-ins. Handle BUILT_IN_FRONTEND built-ins.
* config/aarch64/aarch64-builtins.c (aarch64_expand_builtin)
(aarch64_fold_builtin, aarch64_gimple_fold_builtin): Use
DECL_MD_FUNCTION_CODE instead of DECL_FUNCTION_CODE.
* config/aarch64/aarch64.c (aarch64_builtin_reciprocal): Likewise.
* config/alpha/alpha.c (alpha_expand_builtin, alpha_fold_builtin)
(alpha_gimple_fold_builtin): Likewise.
* config/arc/arc.c (arc_expand_builtin): Likewise.
* config/arm/arm-builtins.c (arm_expand_builtin): Likewise.
* config/avr/avr-c.c (avr_resolve_overloaded_builtin): Likewise.
* config/avr/avr.c (avr_expand_builtin, avr_fold_builtin): Likewise.
* config/bfin/bfin.c (bfin_expand_builtin): Likewise.
* config/c6x/c6x.c (c6x_expand_builtin): Likewise.
* config/frv/frv.c (frv_expand_builtin): Likewise.
* config/gcn/gcn.c (gcn_expand_builtin_1): Likewise.
(gcn_expand_builtin): Likewise.
* config/i386/i386-builtins.c (ix86_builtin_reciprocal): Likewise.
(fold_builtin_cpu): Likewise.
* config/i386/i386-expand.c (ix86_expand_builtin): Likewise.
* config/i386/i386.c (ix86_fold_builtin): Likewise.
(ix86_gimple_fold_builtin): Likewise.
* config/ia64/ia64.c (ia64_fold_builtin): Likewise.
(ia64_expand_builtin): Likewise.
* config/iq2000/iq2000.c (iq2000_expand_builtin): Likewise.
* config/mips/mips.c (mips_expand_builtin): Likewise.
* config/msp430/msp430.c (msp430_expand_builtin): Likewise.
* config/nds32/nds32-intrinsic.c (nds32_expand_builtin_impl): Likewise.
* config/nios2/nios2.c (nios2_expand_builtin): Likewise.
* config/nvptx/nvptx.c (nvptx_expand_builtin): Likewise.
* config/pa/pa.c (pa_expand_builtin): Likewise.
* config/pru/pru.c (pru_expand_builtin): Likewise.
* config/riscv/riscv-builtins.c (riscv_expand_builtin): Likewise.
* config/rs6000/rs6000-c.c (altivec_resolve_overloaded_builtin):
Likewise.
* config/rs6000/rs6000-call.c (htm_expand_builtin): Likewise.
(altivec_expand_dst_builtin, altivec_expand_builtin): Likewise.
(rs6000_gimple_fold_builtin, rs6000_expand_builtin): Likewise.
* config/rs6000/rs6000.c (rs6000_builtin_md_vectorized_function)
(rs6000_builtin_reciprocal): Likewise.
* config/rx/rx.c (rx_expand_builtin): Likewise.
* config/s390/s390-c.c (s390_resolve_overloaded_builtin): Likewise.
* config/s390/s390.c (s390_expand_builtin): Likewise.
* config/sh/sh.c (sh_expand_builtin): Likewise.
* config/sparc/sparc.c (sparc_expand_builtin): Likewise.
(sparc_fold_builtin): Likewise.
* config/spu/spu-c.c (spu_resolve_overloaded_builtin): Likewise.
* config/spu/spu.c (spu_expand_builtin): Likewise.
* config/stormy16/stormy16.c (xstormy16_expand_builtin): Likewise.
* config/tilegx/tilegx.c (tilegx_expand_builtin): Likewise.
* config/tilepro/tilepro.c (tilepro_expand_builtin): Likewise.
* config/xtensa/xtensa.c (xtensa_fold_builtin): Likewise.
(xtensa_expand_builtin): Likewise.
gcc/ada/
PR middle-end/91421
* gcc-interface/trans.c (gigi): Call set_decl_built_in_function.
(Call_to_gnu): Use DECL_FE_FUNCTION_CODE instead of DECL_FUNCTION_CODE.
gcc/c/
PR middle-end/91421
* c-decl.c (merge_decls): Use copy_decl_built_in_function.
gcc/c-family/
PR middle-end/91421
* c-common.c (resolve_overloaded_builtin): Use
copy_decl_built_in_function.
gcc/cp/
PR middle-end/91421
* decl.c (duplicate_decls): Use copy_decl_built_in_function.
* pt.c (declare_integer_pack): Use set_decl_built_in_function.
gcc/d/
PR middle-end/91421
* intrinsics.cc (maybe_set_intrinsic): Use set_decl_built_in_function.
gcc/jit/
PR middle-end/91421
* jit-playback.c (new_function): Use set_decl_built_in_function.
gcc/lto/
PR middle-end/91421
* lto-common.c (compare_tree_sccs_1): Use DECL_UNCHECKED_FUNCTION_CODE
instead of DECL_FUNCTION_CODE.
* lto-symtab.c (lto_symtab_merge_p): Likewise.
From-SVN: r274404
|
|
2019-08-07 Martin Liska <mliska@suse.cz>
* fold-const.c (twoval_comparison_p): Replace int
with bool as a return type.
(simple_operand_p): Likewise.
(operand_equal_p): Likewise.
* fold-const.h (operand_equal_p): Likewise.
From-SVN: r274161
|
|
2019-08-05 Richard Biener <rguenther@suse.de>
PR middle-end/91169
* fold-const.c (get_array_ctor_element_at_index): Create
offset_ints according to the sign of the index type, and treat
the index as signed if it is obviously so.
* gnat.dg/array37.adb: New testcase.
From-SVN: r274114
|
|
2019-07-25 Martin Liska <mliska@suse.cz>
* calls.c (maybe_warn_alloc_args_overflow): Use new macros
(e.g. DECL_SET_LAMBDA_FUNCTION and DECL_LAMBDA_FUNCTION_P).
* coverage.c (coverage_begin_function): Likewise.
* fold-const.c (tree_expr_nonzero_warnv_p): Likewise.
* gimple.c (gimple_call_nonnull_result_p): Likewise.
* ipa-icf.c (sem_item::compare_referenced_symbol_properties): Likewise.
(sem_item::hash_referenced_symbol_properties): Likewise.
* lto-streamer-out.c (hash_tree): Likewise.
* predict.c (expr_expected_value_1): Likewise.
* tree-inline.c (expand_call_inline): Likewise.
* tree-streamer-in.c (unpack_ts_function_decl_value_fields): Likewise.
* tree-streamer-out.c (pack_ts_function_decl_value_fields): Likewise.
* tree-core.h (enum function_decl_type): New enum; a short illustrative
sketch of the scheme follows at the end of this entry.
(struct tree_function_decl): Remove operator_new_flag and lambda_function.
* tree.h (FUNCTION_DECL_DECL_TYPE): New.
(set_function_decl_type): Likewise.
(DECL_IS_OPERATOR_NEW_P): New.
(DECL_SET_IS_OPERATOR_NEW): Likewise.
(DECL_SET_LAMBDA_FUNCTION): Likewise.
(DECL_LAMBDA_FUNCTION_P): Likewise.
(DECL_IS_OPERATOR_NEW): Remove.
(DECL_LAMBDA_FUNCTION): Likewise.
2019-07-25 Martin Liska <mliska@suse.cz>
* c-decl.c (merge_decls): Use new macros
(e.g. DECL_SET_LAMBDA_FUNCTION and DECL_LAMBDA_FUNCTION_P).
2019-07-25 Martin Liska <mliska@suse.cz>
* decl.c (duplicate_decls): Use new macros
(e.g. DECL_SET_LAMBDA_FUNCTION and DECL_LAMBDA_FUNCTION_P).
(cxx_init_decl_processing): Likewise.
(grok_op_properties): Likewise.
* parser.c (cp_parser_lambda_declarator_opt): Likewise.
2019-07-25 Martin Liska <mliska@suse.cz>
* lto-common.c (compare_tree_sccs_1): Use new macros
(e.g. DECL_SET_LAMBDA_FUNCTION and DECL_LAMBDA_FUNCTION_P).
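As promised above, here is a self-contained toy model (illustrative only,
not GCC's actual code; the TOY_* names merely mirror the macros in the
ChangeLog) of the new scheme: the two old flag bits become one enum-typed
field, read through generic and per-kind predicates and written only
through a checked setter.

  #include <cassert>

  /* One enum field replaces the old operator_new_flag and lambda_function bits.  */
  enum toy_function_decl_type { NONE, OPERATOR_NEW, LAMBDA_FUNCTION };

  struct toy_function_decl
  {
    toy_function_decl_type decl_type;
  };

  #define TOY_FUNCTION_DECL_DECL_TYPE(D)  ((D).decl_type)
  #define TOY_DECL_IS_OPERATOR_NEW_P(D)   (TOY_FUNCTION_DECL_DECL_TYPE (D) == OPERATOR_NEW)
  #define TOY_DECL_LAMBDA_FUNCTION_P(D)   (TOY_FUNCTION_DECL_DECL_TYPE (D) == LAMBDA_FUNCTION)

  /* A decl is at most one of these kinds, so the setter checks the field
     has not already been claimed by a different kind.  */
  inline void
  toy_set_function_decl_type (toy_function_decl &d, toy_function_decl_type t)
  {
    assert (d.decl_type == NONE || d.decl_type == t);
    d.decl_type = t;
  }

  #define TOY_DECL_SET_IS_OPERATOR_NEW(D)  toy_set_function_decl_type ((D), OPERATOR_NEW)
  #define TOY_DECL_SET_LAMBDA_FUNCTION(D)  toy_set_function_decl_type ((D), LAMBDA_FUNCTION)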
From-SVN: r273790
|
|
2019-07-12 Richard Biener <rguenther@suse.de>
* fold-const.h (get_array_ctor_element_at_index): Adjust.
* fold-const.c (get_array_ctor_element_at_index): Add
ctor_idx output parameter informing the caller where in
the constructor the element was (not) found. Add early exit
for when the ctor is sorted.
* gimple-fold.c (fold_array_ctor_reference): Support constant
folding across multiple array elements.
* gcc.dg/tree-ssa/vector-7.c: New testcase.
From-SVN: r273435
|