|
|
|
libsanitizer/ChangeLog:
* sanitizer_common/Makefile.am: Remove sanitizer_openbsd.
* sanitizer_common/Makefile.in: Regenerate.
|
|
|
|
The following removes duplicate dumping and makes the predicate
dumping more readable. That makes the GENERIC predicate build
routines unused which is also nice.
* gimple-predicate-analysis.cc (dump_pred_chain): Fix
parenthesizing and AND prepending.
(predicate::dump): Do not dump the GENERIC expanded
predicate, properly parenthesize and prepend ORs to the
piecewise predicate dump.
(build_pred_expr): Remove.
|
|
This is the implementation of the relational range operators for
frange. These are the core operations that require specific FP domain
knowledge.
gcc/ChangeLog:
* range-op-float.cc (finite_operand_p): New.
(build_le): New.
(build_lt): New.
(build_ge): New.
(build_gt): New.
(foperator_equal::fold_range): New implementation with endpoints.
(foperator_equal::op1_range): Same.
(foperator_not_equal::fold_range): Same.
(foperator_not_equal::op1_range): Same.
(foperator_lt::fold_range): Same.
(foperator_lt::op1_range): Same.
(foperator_lt::op2_range): Same.
(foperator_le::fold_range): Same.
(foperator_le::op1_range): Same.
(foperator_le::op2_range): Same.
(foperator_gt::fold_range): Same.
(foperator_gt::op1_range): Same.
(foperator_gt::op2_range): Same.
(foperator_ge::fold_range): Same.
(foperator_ge::op1_range): Same.
(foperator_ge::op2_range): Same.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/recip-3.c: Avoid premature optimization so test
has a chance to succeed.
|
|
The current implementation of frange is just a type with some bits to
represent NAN and INF. We can do better and represent endpoints to
ultimately solve longstanding PRs such as PR24021. This patch adds
these endpoints. In follow-up patches I will add support for a bare
bones PLUS_EXPR range-op-float entry to solve the PR.
I have chosen to use REAL_VALUE_TYPEs for the endpoints, since that's
what we use underneath the trees. This will be somewhat analogous to
our eventual use of wide-ints in the irange. No sense going through
added levels of indirection if we can avoid it. That, plus real.*
already has a nice API for dealing with floats.
With this patch, ranges will be closed floating point intervals, which
makes the implementation simpler, since we don't have to keep track of
open/closed intervals. This is conservative enough for use in the
ranger world, as we'd rather err on the side of more elements in a
range than fewer.
For example, even though we cannot precisely represent the open
interval (3.0, 5.0) with this approach, it is perfectly reasonable
to represent it as [3.0, 5.0], since the closed interval is a superset
of the open one. In the VRP/ranger world, it is always better to
err on the side of more information in a range, than not. After all,
when we don't know anything about a range, we just use VARYING which
is a fancy term for a range spanning the entire domain.
Since REAL_VALUE_TYPEs have properly defined infinity and NAN
semantics, all the math can be made to work:
[-INF, 3.0] !NAN => Numbers <= 3.0 (NAN cannot happen)
[3.0, 3.0] => 3.0 or NAN.
[3.0, +INF] => Numbers >= 3.0 (NAN is possible)
[-INF, +INF] => VARYING (NAN is possible)
[-INF, +INF] !NAN => Entire domain. NAN cannot happen.
Also, since REAL_VALUE_TYPEs can represent the minimum and maximum
representable values of a TYPE_MODE, we can disambiguate between them
and negative and positive infinity (see get_max_float in real.cc).
This also makes the math all work. For example, suppose we know
nothing about x and y (VARYING). On the TRUE side of x > y, we can
deduce that:
(a) x cannot be NAN
(b) y cannot be NAN
(c) y cannot be +INF.
(c) means that we can drop the upper bound of "y" from +INF to the
maximum representable value for its type.
Having endpoints with different representations for infinity and the
maximum representable values means we can drop the +-INF properties
we currently have in the frange.
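As a rough sketch of the representation described above (a toy type, not
GCC's actual frange; toy_frange and narrow_gt_true are invented names for
illustration), the closed interval plus a NAN flag and the x > y narrowing
could look like this:
  #include <cassert>
  #include <cmath>
  #include <limits>
  struct toy_frange
  {
    double lower, upper;   // closed interval [lower, upper]
    bool maybe_nan;        // whether NAN may occur in addition to the interval
  };
  // TRUE side of x > y with both operands VARYING: neither operand can be
  // NAN, and y cannot be +INF, so y's upper bound drops from +INF to the
  // maximum representable value.
  void
  narrow_gt_true (toy_frange &x, toy_frange &y)
  {
    x.maybe_nan = false;                                // (a)
    y.maybe_nan = false;                                // (b)
    if (std::isinf (y.upper) && y.upper > 0)            // (c)
      y.upper = std::numeric_limits<double>::max ();
  }
  int
  main ()
  {
    const double inf = std::numeric_limits<double>::infinity ();
    toy_frange x = { -inf, inf, true };   // VARYING
    toy_frange y = { -inf, inf, true };   // VARYING
    narrow_gt_true (x, y);
    assert (!x.maybe_nan && !y.maybe_nan);
    assert (y.upper == std::numeric_limits<double>::max ());
  }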
gcc/ChangeLog:
* range-op-float.cc (frange_set_nan): New.
(frange_drop_inf): New.
(frange_drop_ninf): New.
(foperator_equal::op1_range): Adjust for endpoints.
(foperator_lt::op1_range): Same.
(foperator_lt::op2_range): Same.
(foperator_gt::op1_range): Same.
(foperator_gt::op2_range): Same.
(foperator_unordered::op1_range): Same.
* value-query.cc (range_query::get_tree_range): Same.
* value-range-pretty-print.cc (vrange_printer::visit): Same.
* value-range-storage.cc (frange_storage_slot::get_frange): Same.
* value-range.cc (frange::set): Same.
(frange::normalize_kind): Same.
(frange::union_): Same.
(frange::intersect): Same.
(frange::operator=): Same.
(early_nan_resolve): New.
(frange::contains_p): New.
(frange::singleton_p): New.
(frange::set_nonzero): New.
(frange::nonzero_p): New.
(frange::set_zero): New.
(frange::zero_p): New.
(frange::set_nonnegative): New.
(frange_float): New.
(frange_nan): New.
(range_tests_nan): New.
(range_tests_signed_zeros): New.
(range_tests_floats): New.
(range_tests): New.
* value-range.h (frange::lower_bound): New.
(frange::upper_bound): New.
(vrp_val_min): Use real_inf with a sign instead of negating inf.
(frange::frange): New.
(frange::set_varying): Adjust for endpoints.
(real_max_representable): New.
(real_min_representable): New.
|
|
The upcoming work for frange triggers a regression in
gcc.dg/tree-ssa/phi-opt-24.c.
For -O2 -fno-signed-zeros, we fail to transform the following into -A:
float f0(float A)
{
// A == 0? A : -A same as -A
if (A == 0) return A;
return -A;
}
This is because the abs/negative match.pd pattern here:
/* abs/negative simplifications moved from fold_cond_expr_with_comparison,
Need to handle (A - B) case as fold_cond_expr_with_comparison does.
Need to handle UN* comparisons.
...
...
Sees IL that has the 0.0 propagated.
Instead of:
<bb 2> [local count: 1073741824]:
if (A_2(D) == 0.0)
goto <bb 4>; [34.00%]
else
goto <bb 3>; [66.00%]
<bb 3> [local count: 708669601]:
_3 = -A_2(D);
<bb 4> [local count: 1073741824]:
# _1 = PHI <A_2(D)(2), _3(3)>
It now sees:
<bb 4> [local count: 1073741824]:
# _1 = PHI <0.0(2), _3(3)>
which it leaves untouched, causing the if conditional to survive.
Changing integer_zerop to zerop fixes the problem.
I did not include a testcase, as it's just phi-opt-24.c which will get
triggered when I commit the frange with endpoints work.
gcc/ChangeLog:
* match.pd ((cmp @0 zerop) real_zerop (negate@1 @0)): Add variant
for real zero.
|
|
Fixes build on i686:
gcc/config/s390/s390.cc: In function 'bool s390_rtx_costs(rtx, machine_mode, int, int, int*, bool)':
gcc/config/s390/s390.cc:3728:63: error: cannot convert 'long int*' to 'long long int*'
gcc/ChangeLog:
* config/s390/s390.cc (s390_rtx_costs): Use proper type as
argument.
|
|
This patch does what the comment in uninit diagnostic suggests.
When the value-numbering run done without optimizing figures there's
a fallthru path, consider blocks on it as always executed.
* tree-ssa-uninit.cc (warn_uninitialized_vars): Pre-compute
the set of fallthru reachable blocks from function entry
and use that to determine wlims.always_executed.
|
|
This adds a testcase for the PR which was fixed with r13-2155-gbaa3ffb19c54fa
PR tree-optimization/63660
* gcc.dg/uninit-pr63660.c: New testcase.
|
|
The following sorts the immediate uses of a possibly uninitialized
SSA variable according to their RPO order so we prefer warning for an
earlier occurring use rather than issuing the diagnostic for the
first uninitialized immediate use.
The sorting will inevitably be imperfect but it also allows us to
optimize the expensive predicate check for the case where there
are multiple uses in the same basic-block which is a nice side-effect.
PR tree-optimization/56654
* tree-ssa-uninit.cc (cand_cmp): New.
(find_uninit_use): First process all PHIs and collect candidate
stmts, then sort those after RPO.
(warn_uninitialized_phi): Pass on bb_to_rpo.
(execute_late_warn_uninitialized): Compute and pass on
reverse lookup of RPO number from basic block index.
|
|
Currently the main work of the maybe-uninit pass is to scan over
all PHIs with possibly undefined arguments, diagnosing whether there's
a direct unguarded use. Unguarded uses in PHIs are queued for
later processing and, to make the uninit analysis PHI def handling work,
the PHI def is marked as possibly uninitialized. But this happens only
for those PHI uses that happen to be seen before a direct unguarded
use, and whether all arguments of a PHI node which are defined by a PHI
are properly marked as maybe uninitialized depends on the processing
order.
The following changes the uninit pass to perform an RPO walk over
the function, ensuring that PHI argument defs are visited before
the PHI node (besides backedge uses which we ignore already),
getting rid of the worklist. It also makes sure to process all
PHI uses, but recording those that are properly guarded so they
are not treated as maybe undefined when processing the PHI use
later.
Overall this should make behavior more consistent, avoid some
false negatives because of the previous early out and ordering issue,
and avoid some false positives because of the missed recording
of guarded PHI uses.
The patch correctly diagnoses an uninitialized use of 'regnum'
in store_bit_field_1 and also diagnoses an uninitialized use of
best_match::m_best_candidate_len in c-decl.cc which I've chosen to
silence by initializing m_best_candidate_len. The warning is
a false positive but GCC cannot see that m_best_candidate_len is
initialized when m_best_candidate is not NULL so from this
perspective this was a false negative. I've added
g++.dg/uninit-pred-5.C with a reduced testcase that nicely shows
how the previous behavior missed the diagnostic because the
worklist ended up visiting the PHI with the dependent uninit
value before visiting the PHIs producing it.
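The general shape of that false positive, reduced to an illustrative
snippet (this is not the actual best_match code nor the committed testcase;
the names are invented):
  struct matcher
  {
    const char *best = nullptr;
    unsigned len;                 // intentionally left uninitialized
    void consider (const char *cand, unsigned cand_len)
    {
      if (!best || cand_len < len)
        {
          best = cand;
          len = cand_len;         // len is written whenever best becomes non-null
        }
    }
    // The read of len is guarded by best != nullptr, but proving that requires
    // correlating the two members across the PHIs, which the pass cannot do.
    unsigned best_len () const { return best ? len : 0; }
  };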
* gimple-predicate-analysis.h (uninit_analysis::operator()):
Remove.
* gimple-predicate-analysis.cc
(uninit_analysis::collect_phi_def_edges): Use phi_arg_set,
simplify a bit.
* tree-ssa-uninit.cc (defined_args): New global.
(compute_uninit_opnds_pos): Mask with the recorded set
of guarded maybe-uninitialized uses.
(uninit_undef_val_t::operator()): Remove.
(find_uninit_use): Process all PHI uses, recording the
guarded ones and marking the PHI result as uninitialized
consistently.
(warn_uninitialized_phi): Adjust.
(execute_late_warn_uninitialized): Get rid of the PHI worklist
and instead walk the function in RPO order.
* spellcheck.h (best_match::m_best_candidate_len): Initialize.
* g++.dg/uninit-pred-5.C: New testcase.
|
|
This corrects the argument usage so that the arguments are used in the
order in which they occur in the comparisons in gimple.
gcc/ChangeLog:
PR tree-optimization/106744
* tree-ssa-phiopt.cc (minmax_replacement): Correct arguments.
gcc/testsuite/ChangeLog:
PR tree-optimization/106744
* gcc.dg/tree-ssa/minmax-10.c: Make runtime test.
* gcc.dg/tree-ssa/minmax-11.c: Likewise.
* gcc.dg/tree-ssa/minmax-12.c: Likewise.
* gcc.dg/tree-ssa/minmax-13.c: Likewise.
* gcc.dg/tree-ssa/minmax-14.c: Likewise.
* gcc.dg/tree-ssa/minmax-15.c: Likewise.
* gcc.dg/tree-ssa/minmax-16.c: Likewise.
* gcc.dg/tree-ssa/minmax-3.c: Likewise.
* gcc.dg/tree-ssa/minmax-4.c: Likewise.
* gcc.dg/tree-ssa/minmax-5.c: Likewise.
* gcc.dg/tree-ssa/minmax-6.c: Likewise.
* gcc.dg/tree-ssa/minmax-7.c: Likewise.
* gcc.dg/tree-ssa/minmax-8.c: Likewise.
* gcc.dg/tree-ssa/minmax-9.c: Likewise.
|
|
This initializes regnum to 0 for when undefined_p.
0 is the right default as it's supposed to get the lowpart
when undefined.
gcc/ChangeLog:
* expmed.cc (store_bit_field_1): Initialize regnum to 0.
|
|
|
|
When we have
[[noreturn]] int fn1 [[nodiscard]](), fn2();
"noreturn" should apply to both fn1 and fn2 but "nodiscard" only to fn1:
[dcl.pre]/3: "The attribute-specifier-seq appertains to each of
the entities declared by the declarators of the init-declarator-list."
[dcl.spec.general]: "The attribute-specifier-seq affects the type
only for the declaration it appears in, not other declarations involving
the same type."
As Ed Catmur correctly analyzed, this is because, for the test above,
we call start_decl with prefix_attributes=noreturn, but this line:
attributes = attr_chainon (attributes, prefix_attributes);
results in attributes == prefix_attributes, because chainon sees
that attributes is null so it just returns prefix_attributes. Then
in grokdeclarator we reach
*attrlist = attr_chainon (*attrlist, declarator->std_attributes);
which modifies prefix_attributes so now it's "noreturn, nodiscard"
and so fn2 is wrongly marked nodiscard as well. Fixed by reversing
the order of arguments to attr_chainon. That way, we tack the prefix
attributes onto ->std_attributes, avoiding modifying prefix_attributes.
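A minimal illustration of the intended behavior (this is a sketch of the
expected diagnostics, not the committed gen-attrs-77.C testcase):
  [[noreturn]] int fn1 [[nodiscard]](), fn2();
  void
  use ()
  {
    fn1 ();   // expected: -Wunused-result, nodiscard appertains to fn1
    fn2 ();   // expected: no warning, nodiscard must not leak onto fn2
  }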
PR c++/106712
gcc/cp/ChangeLog:
* decl.cc (grokdeclarator): Reverse the order of arguments to
attr_chainon.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/gen-attrs-77.C: New test.
|
|
The old method for computing a member index for a CO-RE relocation
relied on a name comparison, which could SEGV if the member in question
is itself part of an anonymous inner struct or union.
This patch changes the index computation to not rely on a name, while
maintaining the ability to account for other sibling fields which may
not have a representation in BTF.
gcc/ChangeLog:
PR target/106745
* config/bpf/coreout.cc (bpf_core_get_sou_member_index): Fix
computation of index for anonymous members.
gcc/testsuite/ChangeLog:
PR target/106745
* gcc.target/bpf/core-pr106745.c: New test.
|
|
LLVM defines both __bpf__ and __BPF__ as target macros.
GCC was defining only __BPF__.
This patch defines __bpf__ as a target macro for BPF.
Tested in bpf-unknown-none.
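As a sketch of what this enables, code written against clang's spelling can
now detect the target the same way under GCC (TARGET_IS_BPF below is just an
illustrative name):
  #if defined (__bpf__) || defined (__BPF__)
  # define TARGET_IS_BPF 1
  #else
  # define TARGET_IS_BPF 0
  #endif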
gcc/ChangeLog:
* config/bpf/bpf.cc (bpf_target_macros): Define __bpf__ as a
target macro.
|
|
Handle E_V16BFmode in ix86_avx256_split_vector_move_misalign and add
V16BF to V_256H iterator.
gcc/
PR target/106748
* config/i386/i386-expand.cc
(ix86_avx256_split_vector_move_misalign): Handle E_V16BFmode.
* config/i386/sse.md (V_256H): Add V16BF.
gcc/testsuite/
PR target/106748
* gcc.target/i386/pr106748.c: New test.
|
|
If GCC is not built with a working linker for the target (developers
occasionally build such a "minimal" GCC for testing and debugging),
TLS will be emulated and __tls_get_addr won't be used. Refine those
tests depending on __tls_get_addr with tls_native to avoid test
failures.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/func-call-medium-1.c: Refine test
depending on __tls_get_addr with { target tls_native }.
* gcc.target/loongarch/func-call-medium-2.c: Likewise.
* gcc.target/loongarch/func-call-medium-3.c: Likewise.
* gcc.target/loongarch/func-call-medium-4.c: Likewise.
* gcc.target/loongarch/func-call-medium-5.c: Likewise.
* gcc.target/loongarch/func-call-medium-6.c: Likewise.
* gcc.target/loongarch/func-call-medium-7.c: Likewise.
* gcc.target/loongarch/func-call-medium-8.c: Likewise.
* gcc.target/loongarch/tls-gd-noplt.c: Likewise.
|
|
The IF_THEN_ELSE detection currently prevents us from properly costing
register-register moves which causes the lower-subreg pass to assume that
a VR-VR move is as expensive as two GPR-GPR moves.
This patch adds handling for SETs containing REGs as well as MEMs and is
inspired by the aarch64 implementation.
gcc/ChangeLog:
* config/s390/s390.cc (s390_address_cost): Declare.
(s390_hard_regno_nregs): Declare.
(s390_rtx_costs): Add handling for REG and MEM in SET.
gcc/testsuite/ChangeLog:
* gcc.target/s390/vector/vec-sum-across-no-lower-subreg-1.c: New test.
|
|
This adds functions to recognize reverse/element swap permute patterns
for vler, vster as well as vpdi and rotate.
gcc/ChangeLog:
* config/s390/s390.cc (expand_perm_with_vpdi): Recognize swap pattern.
(is_reverse_perm_mask): New function.
(expand_perm_with_rot): Recognize reverse pattern.
(expand_perm_with_vstbrq): New function.
(expand_perm_with_vster): Use vler/vster for element reversal on z15.
(vectorize_vec_perm_const_1): Use.
(s390_vectorize_vec_perm_const): Add expand functions.
* config/s390/vx-builtins.md: Prefer vster over vler.
gcc/testsuite/ChangeLog:
* gcc.target/s390/vector/vperm-rev-z14.c: New test.
* gcc.target/s390/vector/vperm-rev-z15.c: New test.
* gcc.target/s390/zvector/vec-reve-store-byte.c: Adjust test
expectation.
|
|
vec_select can handle dynamic/runtime masks nowadays. Therefore we can
get rid of the UNSPEC_VEC_EXTRACT that was preventing further
optimizations like combining instructions with vec_extract patterns.
gcc/ChangeLog:
* config/s390/s390.md: Remove UNSPEC_VEC_EXTRACT.
* config/s390/vector.md: Rewrite patterns to use vec_select.
* config/s390/vx-builtins.md (vec_scatter_element<V_HW_2:mode>_SI):
Likewise.
|
|
Swapping the two elements of a V2DImode or V2DFmode vector can be done
with vpdi instead of using the generic way of loading a permutation mask
from the literal pool and vperm.
Analogous to the V2DI/V2DF case, reversing the elements of a four-element
vector can be done by first swapping the elements of the first
doubleword as well as the ones of the second one and subsequently
rotating the doublewords by 32 bits.
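A scalar illustration of why the two steps compose to a full reversal
(plain C++, not s390 intrinsics; the two steps commute, so the order shown
need not match the emitted instruction sequence):
  #include <cassert>
  #include <cstdint>
  #include <utility>
  int
  main ()
  {
    uint32_t e[4] = { 1, 2, 3, 4 };   // dw0 = {e0,e1}, dw1 = {e2,e3}
    std::swap (e[0], e[1]);           // swap within the first doubleword
    std::swap (e[2], e[3]);           // swap within the second doubleword
    std::swap (e[0], e[2]);           // swap the two doublewords
    std::swap (e[1], e[3]);
    assert (e[0] == 4 && e[1] == 3 && e[2] == 2 && e[3] == 1);
  }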
gcc/ChangeLog:
PR target/100869
* config/s390/vector.md (@vpdi4_2<mode>): New pattern.
(rotl<mode>3_di): New pattern.
* config/s390/vx-builtins.md: Use vpdi and verll for reversing
elements.
gcc/testsuite/ChangeLog:
* gcc.target/s390/zvector/vec-reve-int-long.c: New test.
|
|
Be more explicit by mentioning z15 in s390_issue_rate.
gcc/ChangeLog:
* config/s390/s390.cc (s390_issue_rate): Add z15.
|
|
Inspired by Power we also introduce -munroll-only-small-loops. This
implies activating -funroll-loops and -munroll-only-small-loops at -O2 and
above.
gcc/ChangeLog:
* common/config/s390/s390-common.cc: Enable -funroll-loops and
-munroll-only-small-loops for OPT_LEVELS_2_PLUS_SPEED_ONLY.
* config/s390/s390.cc (s390_loop_unroll_adjust): Do not unroll
loops larger than 12 instructions.
(s390_override_options_after_change): Set unroll options.
(s390_option_override_internal): Likewise.
* config/s390/s390.opt: Document munroll-only-small-loops.
gcc/testsuite/ChangeLog:
* gcc.target/s390/vector/vec-copysign.c: Do not unroll.
* gcc.target/s390/zvector/autovec-double-quiet-uneq.c: Ditto.
* gcc.target/s390/zvector/autovec-double-signaling-ltgt.c: Ditto.
* gcc.target/s390/zvector/autovec-float-quiet-uneq.c: Ditto.
* gcc.target/s390/zvector/autovec-float-signaling-ltgt.c: Ditto.
|
|
The following inlines find_control_equiv_block and is_loop_exit
into init_use_preds and refactors that for better readability and
similarity with the post-dominator walk in compute_control_dep_chain.
* gimple-predicate-analysis.cc (is_loop_exit,
find_control_equiv_block): Inline into single caller ...
(uninit_analysis::init_use_preds): ... here and refactor.
|
|
The following refactors compute_control_dep_chain slightly by
inlining is_loop_exit and factoring the check on the loop
invariant condition. It also adds a comment on how I
understand the code and its current problem.
* gimple-predicate-analysis.cc (compute_control_dep_chain):
Inline is_loop_exit and refactor, add comment about
loop exits.
|
|
poly_int64 is a non-trivial type, so we need to clean up manually instead
of using memset to prevent this warning.
../../gcc/gcc/config/riscv/riscv.cc: In function 'void riscv_compute_frame_info()':
../../gcc/gcc/config/riscv/riscv.cc:4113:10: error: 'void* memset(void*, int, size_t)' clearing an object of non-trivial type 'struct riscv_frame_info'; use assignment or value-initialization instead [-Werror=class-memaccess]
4113 | memset (frame, 0, sizeof (*frame));
| ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
../../gcc/gcc/config/riscv/riscv.cc:101:17: note: 'struct riscv_frame_info' declared here
101 | struct GTY(()) riscv_frame_info {
| ^~~~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors
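The fix follows this pattern (a simplified sketch, not the actual
riscv_frame_info; poly_like stands in for poly_int64):
  struct poly_like                  // user-provided ctor => non-trivial type
  {
    long coeffs[2];
    poly_like () : coeffs { 0, 0 } {}
  };
  struct frame_info_sketch
  {
    poly_like total_size;
    long mask = 0;
    void reset () { *this = frame_info_sketch (); }  // value-initialize in place
  };
  void
  compute_frame (frame_info_sketch *frame)
  {
    // memset (frame, 0, sizeof (*frame));  // -Wclass-memaccess on non-trivial type
    frame->reset ();
  }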
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_frame_info): Introduce `reset(void)`;
(riscv_frame_info::reset(void)): New.
(riscv_compute_frame_info): Use riscv_frame_info::reset instead
of memset when cleaning the frame.
|
|
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_v_ext_vector_mode_p): New function.
(riscv_classify_address): Disallow PLUS/LO_SUM/CONST_INT address types for RVV.
(riscv_address_insns): Add RVV modes condition.
(riscv_binary_cost): Ditto.
(riscv_rtx_costs): Adjust cost for RVV.
(riscv_secondary_memory_needed): Add RVV modes condition.
(riscv_hard_regno_nregs): Add RVV register allocation.
(riscv_hard_regno_mode_ok): Add RVV register allocation.
(riscv_class_max_nregs): Add RVV register allocation.
* config/riscv/riscv.h (DWARF_FRAME_REGNUM): Add VL/VTYPE and vector registers in Dwarf.
(UNITS_PER_V_REG): New macro.
(FIRST_PSEUDO_REGISTER): Adjust first pseudo num for RVV.
(V_REG_FIRST): New macro.
(V_REG_LAST): Ditto.
(V_REG_NUM): Ditto.
(V_REG_P): Ditto.
(VL_REG_P): Ditto.
(VTYPE_REG_P): Ditto.
(RISCV_DWARF_VL): Ditto.
(RISCV_DWARF_VTYPE): Ditto.
(enum reg_class): Add RVV register types.
(REG_CLASS_CONTENTS): Add RVV register types.
* config/riscv/riscv.md: Add VL/VTYPE register number constants.
|
|
gcc/ChangeLog:
* config/riscv/riscv.md: Add new type for vector instructions.
|
|
|
|
GCC incorrectly disables conversions between MMA pointer types, which
are allowed with clang. The original intent was to disable conversions
between MMA types and other types, but pointer conversions should
have been allowed. The fix is to just remove the MMA pointer conversion
handling code altogether.
gcc/
PR target/106017
* config/rs6000/rs6000.cc (rs6000_invalid_conversion): Remove handling
of MMA pointer conversions.
gcc/testsuite/
PR target/106017
* gcc.target/powerpc/pr106017.c: New test.
|
|
|
|
D front-end changes:
- Import latest bug fixes to mainline.
Phobos changes:
- Import latest bug fixes to mainline.
- std.logger module has been moved out of experimental.
- Removed std.experimental.typecons module.
gcc/d/ChangeLog:
* dmd/MERGE: Merge upstream dmd 817610b16d.
* d-ctfloat.cc (CTFloat::parse): Update for new front-end interface.
* d-lang.cc (d_parse_file): Likewise.
* expr.cc (ExprVisitor::visit (AssignExp *)): Remove handling of array
assignments to non-trivial static and dynamic arrays.
* runtime.def (ARRAYASSIGN): Remove.
(ARRAYASSIGN_L): Remove.
(ARRAYASSIGN_R): Remove.
libphobos/ChangeLog:
* libdruntime/MERGE: Merge upstream druntime 817610b16d.
* libdruntime/Makefile.am (DRUNTIME_DSOURCES): Add
core/internal/array/arrayassign.d.
* libdruntime/Makefile.in: Regenerate.
* src/MERGE: Merge upstream phobos b578dfad9.
* src/Makefile.am (PHOBOS_DSOURCES): Remove
std/experimental/typecons.d. Add std/logger package.
* src/Makefile.in: Regenerate.
|
|
libstdc++-v3/ChangeLog:
* testsuite/20_util/logical_traits/requirements/base_classes.cc: New test.
|
|
The test uses -floop-parallelize-all which emits a sorry when graphite
isn't configured in.
2022-08-27 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/106737
* gcc.dg/autopar/pr106737.c: Require fgraphite effective target.
|
|
Python 2 has been EOL'ed for two years. egrep has been deprecated
for many years and the next grep release will start to print a warning if
it is used.
-E option may be unsupported by some non-POSIX grep implementations, but
gcc-auto-profile won't work on non-Linux systems anyway.
contrib/ChangeLog:
* gen_autofdo_event.py: Port to Python 3, and use grep -E
instead of egrep.
gcc/ChangeLog:
* config/i386/gcc-auto-profile: Regenerate.
|
|
|
|
libstdc++-v3/ChangeLog:
* include/std/ranges (zip_view::_Iterator::operator<): Remove
as per LWG 3692.
(zip_view::_Iterator::operator>): Likewise.
(zip_view::_Iterator::operator<=): Likewise.
(zip_view::_Iterator::operator>=): Likewise.
(zip_view::_Iterator::operator<=>): Remove three_way_comparable
constraint as per LWG 3692.
(zip_transform_view::_Iterator): Ditto as per LWG 3702.
|
|
libstdc++-v3/ChangeLog:
* include/std/ranges (zip_view::_Iterator): Befriend
zip_transform_view.
(__detail::__range_iter_cat): Define.
(zip_transform_view): Define.
(zip_transform_view::_Iterator): Define.
(zip_transform_view::_Sentinel): Define.
(views::__detail::__can_zip_transform_view): Define.
(views::_ZipTransform): Define.
(views::zip_transform): Define.
* testsuite/std/ranges/zip_transform/1.cc: New test.
|
|
The internal type-level logical operator traits __and_ and __or_ seem to
have high overhead for a couple of reasons:
1. They are drop-in replacements for std::con/disjunction, which
are rigidly specified to form a type that derives from the first
type argument that caused the overall computation to short-circuit.
In practice this inheritance property seems to be rarely needed;
usually all we care about is the value of the overall result.
2. Their recursive implementations instantiate O(N) class templates
and form an inheritance chain of depth O(N).
This patch gets rid of this inheritance property of __and_ and __or_
(which seems to be unneeded in the library except indirectly by
std::con/disjunction) which allows us to redefine them non-recursively
as alias templates that yield either false_type or true_type via
enable_if_t and partial ordering of a pair of function templates
(alternatively we could use an equivalent partially specialized class
template, but using function templates appears to be slightly more
efficient).
As for std::con/disjunction, it seems we need to keep implementing them
via a recursive class template for sake of the inheritance property.
But instead of using inheritance recursion, use a recursive member
typedef that gets immediately flattened, so that specializations thereof
now have O(1) instead of O(N) inheritance depth.
In passing, redefine __not_ as an alias template for consistency with
__and_ and __or_, and to remove a layer of indirection.
Together these changes have a substantial effect on compile time and
memory usage for code that heavily uses these internal type traits.
For the following example (which tests constructibility between two
compatible 257-element tuple types):
#include <tuple>
#define M(x) x, x
using ty1 = std::tuple<M(M(M(M(M(M(M(M(int)))))))), int>;
using ty2 = std::tuple<M(M(M(M(M(M(M(M(int)))))))), long>;
static_assert(std::is_constructible_v<ty2, ty1>);
memory usage improves ~27% from 440MB to 320MB and compile time improves
~20% from ~2s to ~1.6s (with -std=c++23).
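A simplified sketch of the non-recursive scheme (inspired by the description
above, not the exact libstdc++ code; the detail:: names are illustrative):
  #include <type_traits>
  namespace detail
  {
    // first_t<T, Ts...> is always T; the trailing pack exists only so that
    // forming it can fail in a SFINAE context.
    template<typename T, typename...> using first_t = T;
    // Selected when every B::value is false: the disjunction is false_type.
    template<typename... Bs>
    first_t<std::false_type, std::enable_if_t<!bool (Bs::value)>...> or_fn (int);
    // Fallback when some B::value is true.
    template<typename...> std::true_type or_fn (...);
    // Selected when every B::value is true: the conjunction is true_type.
    template<typename... Bs>
    first_t<std::true_type, std::enable_if_t<bool (Bs::value)>...> and_fn (int);
    // Fallback when some B::value is false.
    template<typename...> std::false_type and_fn (...);
  }
  // Non-recursive alias templates: O(1) instantiation depth, no inheritance.
  template<typename... Bs> using or_  = decltype (detail::or_fn<Bs...> (0));
  template<typename... Bs> using and_ = decltype (detail::and_fn<Bs...> (0));
  template<typename B>     using not_ = std::bool_constant<!bool (B::value)>;
  static_assert (and_<std::true_type, std::true_type>::value, "");
  static_assert (!and_<std::true_type, std::false_type>::value, "");
  static_assert (or_<std::false_type, std::true_type>::value, "");
  static_assert (!or_<std::false_type, std::false_type>::value, "");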
libstdc++-v3/ChangeLog:
* include/std/type_traits (enable_if, __enable_if_t): Define them
earlier.
(__detail::__first_t): Define.
(__detail::__or_fn, __detail::__and_fn): Declare.
(__or_, __and_): Redefine as alias templates in terms of __or_fn
and __and_fn.
(__not_): Redefine as an alias template.
(__detail::__disjunction_impl, __detail::__conjunction_impl):
Define.
(conjunction, disjunction): Redefine in terms of __disjunction_impl
and __conjunction_impl.
|
|
We have real_isnegzero but no real_iszero. We could memcmp with 0,
but that's just ugly.
gcc/ChangeLog:
* real.cc (real_iszero): New.
* real.h (real_iszero): New.
|
|
For the frange implementation with endpoints I'm about to contribute,
we need to set REAL_VALUE_TYPEs with negative infinity. The support
is already there in real.cc, but it is awkward to get at. One could
call real_inf() and then negate the value, but I've added the ability
to pass the sign argument like many of the existing real.* functions.
I've declared the functions in such a way to avoid changes to the
existing code base:
// Unchanged function returning true for either +-INF.
bool real_isinf (const REAL_VALUE_TYPE *r);
// New overload to be able to specify the sign.
bool real_isinf (const REAL_VALUE_TYPE *r, int sign);
// Replacement function for setting INF, defaults to +INF.
void real_inf (REAL_VALUE_TYPE *, int sign = 0);
gcc/ChangeLog:
* real.cc (real_isinf): New overload.
(real_inf): Add sign argument.
* real.h (real_isinf): New overload.
(real_inf): Add sign argument.
|
|
About 5 years ago we got a request to implement -Wself-move, which
warns about useless moves like this:
int x;
x = std::move (x);
This patch implements that warning.
PR c++/81159
gcc/c-family/ChangeLog:
* c.opt (Wself-move): New option.
gcc/cp/ChangeLog:
* typeck.cc (maybe_warn_self_move): New.
(cp_build_modify_expr): Call maybe_warn_self_move.
gcc/ChangeLog:
* doc/invoke.texi: Document -Wself-move.
gcc/testsuite/ChangeLog:
* g++.dg/warn/Wself-move1.C: New test.
|
|
frange is using some of the default vrange setters, some of which are
leaving the range in an undefined state. We hadn't noticed this
because neither frange nor unsupported_range, both of which use some of
the default implementation, was being used much.
We can never go wrong with setting VARYING ;-).
gcc/ChangeLog:
* value-range.cc (vrange::set): Set varying.
(vrange::set_nonzero): Same.
(vrange::set_zero): Same.
(vrange::set_nonnegative): Same.
|
|
On the true side of x == -0.0, we can't just blindly value propagate
the -0.0 into every use of x because x could be +0.0.
With this change, we only allow the transformation if
!HONOR_SIGNED_ZEROS or if the range is known not to contain 0.
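A small example of why the propagation is unsafe when signed zeros are
honored (plain C++, not ranger internals):
  #include <cmath>
  #include <cstdio>
  int
  main ()
  {
    double x = 0.0;                                   // +0.0
    if (x == -0.0)                                    // true: +0.0 == -0.0
      std::printf ("%g\n", std::copysign (1.0, x));   // prints 1
    // Blindly replacing x with -0.0 inside the branch would print -1 instead.
  }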
gcc/ChangeLog:
* range-op-float.cc (foperator_equal::op1_range): Do not blindly
copy op2 range when honoring signed zeros.
|
|
It looks like we're missing a newline for cases where we don't print
anything.
gcc/ChangeLog:
* tree-ssa-threadbackward.cc (possibly_profitable_path_p): Always
add newline.
(profitable_path_p): Same.
|
|
This removes the redundant operator=(E) from std::error_code and
std::error_condition. Without that overload, assignment from a custom
type will use the templated constructor to create a temporary and then
use the trivial copy assignment operator. With the overloaded
assignment, we have to check the constraints twice as often, because
that overload and its constraints are checked for simple copy
assignments (including the one in the overloaded assignment operator
itself!)
Also add tests that ADL is used as per LWG 3629.
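A sketch of the user-side pattern this keeps working (parse_errc and its
make_error_code are invented names; only the std customization points are
real):
  #include <system_error>
  enum class parse_errc { bad_header = 1, bad_body };
  namespace std
  {
    template<> struct is_error_code_enum<parse_errc> : true_type { };
  }
  std::error_code
  make_error_code (parse_errc e)        // found by ADL, see LWG 3629
  {
    return { static_cast<int> (e), std::generic_category () };
  }
  int
  main ()
  {
    std::error_code ec;
    // With operator=(ErrorCodeEnum) gone, this builds a temporary error_code
    // via the constrained converting constructor and then copy-assigns it.
    ec = parse_errc::bad_header;
    return ec.value () == 1 ? 0 : 1;
  }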
libstdc++-v3/ChangeLog:
* include/std/system_error (error_code::_Check): New alias
template for constructor SFINAE constraint.
(error_code::error_code(ErrorCodeEnum)): Use it.
(error_code::operator=(ErrorCodeEnum)): Remove.
(error_condition::_Check): New alias template for constraint.
(error_condition::error_condition(ErrorConditionEnum)): Use it.
(error_condition::operator=(ErrorConditionEnum)): Remove.
* testsuite/19_diagnostics/error_code/cons/1.cc: Check
constructor taking user-defined error enum.
* testsuite/19_diagnostics/error_condition/cons/1.cc: Likewise.
|
|
Ideally this wouldn't be needed, because eventually these pointers all
get passed to either the basic_string_view(const CharT*) constructor, or
to basic_string_view::find(const CharT*), both of which already have the
attribute. But for that to work requires optimization, so that the null
value gets propagated through the call chain.
Adding it explicitly to each member that requires a non-null pointer
makes the diagnostics more reliable even without optimization. It's
better to give a diagnostic earlier anyway, at the actual problematic
call in the user's code.
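For example, a call like the following should now be flagged even at -O0
with -Wnonnull (the diagnostic text in the comment is only a sketch):
  #include <string_view>
  bool
  f (std::string_view sv)
  {
    return sv.starts_with (nullptr);  // warning: null argument where non-null expected
  }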
libstdc++-v3/ChangeLog:
* include/bits/basic_string.h (starts_with, ends_with, contains):
Add nonnull attribute.
* include/bits/cow_string.h (starts_with, ends_with, contains):
Likewise.
* include/std/string_view (starts_with, ends_with, contains):
Likewise.
* testsuite/21_strings/basic_string/operations/contains/nonnull.cc
* testsuite/21_strings/basic_string/operations/ends_with/nonnull.cc
* testsuite/21_strings/basic_string/operations/starts_with/nonnull.cc
* testsuite/21_strings/basic_string_view/operations/contains/nonnull.cc
* testsuite/21_strings/basic_string_view/operations/ends_with/nonnull.cc
* testsuite/21_strings/basic_string_view/operations/starts_with/nonnull.cc
|