|
gcc/fortran/ChangeLog:
PR fortran/104332
* resolve.cc (resolve_symbol): Avoid NULL pointer dereference while
checking a symbol with the BIND(C) attribute.
gcc/testsuite/ChangeLog:
PR fortran/104332
* gfortran.dg/bind_c_usage_34.f90: New test.
|
|
After r6-2044-g98e30e515f184b, code like "((x & 0xff00ff00U) >> 8)"
would be optimized to "(x >> 8) & 0xff00ffU", which is normally better,
except on aarch64, where the shift right could be combined with another
operation in some cases. So we need to add a few define_splits
to the aarch64 backend that match "((x >> shift) & CST0) OP Y"
and split it to:
TMP = X & CST1
(TMP >> shift) OP Y
Note this also gets us back to matching rev16, so I added a
testcase to make sure we don't lose that matching again.
Note that when the generic patch to recognize those as bswap ROT 16
goes in, we might regress again and need to add a few more patterns to
the aarch64 backend, but we will deal with that once it happens.
Committed as approved after a bootstrap/test on aarch64-linux-gnu with no regressions.
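As an illustrative sketch only (the function names below are made up, not
the committed testcases), this is the kind of source affected: the first
function is a rev16-style halfword byte swap, the second has the
"((x >> shift) & CST0) OP Y" shape the new split rewrites so combine can
fold the shift into the OR:

  unsigned int
  swap_bytes_in_halfwords (unsigned int x)
  {
    return ((x & 0xff00ff00U) >> 8) | ((x & 0x00ff00ffU) << 8);
  }

  unsigned int
  shifted_mask_or (unsigned int x, unsigned int y)
  {
    return ((x >> 8) & 0xff00ffU) | y;
  }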
gcc/ChangeLog:
* config/aarch64/aarch64.md: Add a new define_split
to help combine.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/rev16_2.c: New test.
* gcc.target/aarch64/shift_and_operator-1.c: New test.
|
|
If a bound region gets overwritten with UNKNOWN due to being
possibly-aliased during a write, that could have been the only
region keeping its value live, in which case we could falsely report
a leak. This is hidden somewhat by the "uncertainty" mechanism for
cases where the write happens in the same stmt as the last reference
to the value goes away, but not in the general case, which occurs
in PR analyzer/109059, which falsely complains about a leak whilst
haproxy updates a doubly-linked list.
The whole "uncertainty_t" class seems broken to me now; I think we need
to track (in the store) what values could have escaped to the external
part of the program. We do this to some extent for pointers by tracking
the region as escaped, though we're failing to do this for this case:
even though there could still be other pointers to the region,
eventually they go away; we want to capture the fact that the external
part of the state is still keeping it live. Also, this doesn't work for
non-pointer svalues, such as for detecting file-descriptor leaks.
As both a workaround and a step towards eventually removing
"class uncertainty_t", this patch updates the "mark_region_as_unknown"
code called by possibly-aliased set_value so that when old values are
removed, any base region pointed to by them is marked as escaped, fixing
the leak false positive.
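A minimal sketch (not the actual haproxy code) of the kind of pattern that
used to be flagged: the only analyzer-visible copy of a heap pointer ends
up stored through possibly-aliased regions, which get clobbered to UNKNOWN
on a later write, so the allocation looked leaked even though the list
still references it.  'head' is assumed to be a circular-list sentinel.

  #include <stdlib.h>

  struct node { struct node *prev; struct node *next; };

  void
  append (struct node *head)
  {
    struct node *n = (struct node *) malloc (sizeof (struct node));
    if (!n)
      return;
    n->next = head;
    n->prev = head->prev;
    head->prev->next = n;  /* store through a possibly-aliased region...  */
    head->prev = n;        /* ...which this write may clobber to UNKNOWN.  */
  }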
The patch has this effect on my integration tests of -fanalyzer:
Comparison:
GOOD: 129 (19.20% -> 20.22%)
BAD: 543 -> 509 (-34)
where there's a big improvement in -Wanalyzer-malloc-leak:
-Wanalyzer-malloc-leak:
GOOD: 61 (45.19% -> 54.95%)
BAD: 74 -> 50 (-24)
Known false positives: 25 -> 2 (-23)
haproxy-2.7.1: 24 -> 1 (-23)
Suspected false positives: 49 -> 48 (-1)
coreutils-9.1: 32 -> 31 (-1)
and some churn in the other warnings:
-Wanalyzer-use-of-uninitialized-value:
GOOD: 0
BAD: 81 -> 80 (-1)
-Wanalyzer-file-leak:
GOOD: 0
BAD: 10 -> 11 (+1)
-Wanalyzer-out-of-bounds:
GOOD: 0
BAD: 24 -> 22 (-2)
gcc/analyzer/ChangeLog:
PR analyzer/109059
* region-model.cc (region_model::mark_region_as_unknown): Gather a
set of maybe-live svalues and call on_maybe_live_values with it.
* store.cc (binding_map::remove_overlapping_bindings): Add new
"maybe_live_values" param; add any removed svalues to it.
(binding_cluster::clobber_region): Add NULL as new param of
remove_overlapping_bindings.
(binding_cluster::mark_region_as_unknown): Add "maybe_live_values"
param and pass it to remove_overlapping_bindings.
(binding_cluster::maybe_get_compound_binding): Add NULL for new
param of binding_map::remove_overlapping_bindings.
(binding_cluster::remove_overlapping_bindings): Add
"maybe_live_values" param and pass to
binding_map::remove_overlapping_bindings.
(store::set_value): Capture a set of maybe-live svalues, and call
on_maybe_live_values with it.
(store::on_maybe_live_values): New.
(store::mark_region_as_unknown): Add "maybe_live_values" param
and pass it to binding_cluster::mark_region_as_unknown.
(store::remove_overlapping_bindings): Pass NULL for new param of
binding_cluster::remove_overlapping_bindings.
* store.h (binding_map::remove_overlapping_bindings): Add
"maybe_live_values" param.
(binding_cluster::mark_region_as_unknown): Likewise.
(binding_cluster::remove_overlapping_bindings): Likewise.
(store::mark_region_as_unknown): Likewise.
(store::on_maybe_live_values): New decl.
gcc/testsuite/ChangeLog:
PR analyzer/109059
* gcc.dg/analyzer/flex-with-call-summaries.c: Remove xfail.
* gcc.dg/analyzer/leak-pr109059-1.c: New test.
* gcc.dg/analyzer/leak-pr109059-2.c: New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
|
|
With calls we now often get constraints like
callarg = *callarg + UNKNOWN
or similar cases. The important thing to note is that this
complex constraint changes the node's own solution, so when it is
solved the node is immediately marked as changed again. When
that happens it's profitable to iterate that self-cycle immediately
so we maximize cache reuse and build up the successor graph quickly
to get better topological ordering and reduce the number of
iterations of the solver.
For a testcase derived from ceph this reduces the time spent in
PTA solving from 453s to 92s which is quite significant.
* tree-ssa-structalias.cc (solve_graph): Immediately
iterate self-cycles.
|
|
We were failing to come up with the name for the anonymous union. It seems
like unfortunate redundancy, but the ABI does say that the name of an
anonymous union is its first named member.
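A hedged illustration of the ABI rule only (not the PR's testcase): when a
mangled name needs to refer to the anonymous union itself, the name used
for it is derived from the union's first named member, "a" below.

  struct S {
    union { int a; long b; };  // anonymous union; its mangled "name" comes from 'a'
  };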
PR c++/108566
gcc/cp/ChangeLog:
* mangle.cc (anon_aggr_naming_decl): New.
(write_unqualified_name): Use it.
gcc/testsuite/ChangeLog:
* g++.dg/abi/anon6.C: New test.
|
|
Integration testing showed various false positives from
-Wanalyzer-deref-before-check where the expression that's dereferenced
is different from the one that's checked, but the diagnostic is emitted
because they both evaluate to the same symbolic value.
This patch rejects such warnings unless we have tree expressions for
both and both tree expressions are "spelled the same way", i.e. they
would be printed as the same user-facing string.
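A hedged sketch of the shape of such a false positive (not one of the
reduced testcases): 'q' and 'p->field' evaluate to the same symbolic
value, so dereferencing one and later NULL-checking the other used to
trigger the warning even though the two expressions are spelled
differently.

  struct ctx { int *field; };

  int
  use (struct ctx *p)
  {
    int *q = p->field;
    int val = *q;           /* dereference of 'q'...  */
    if (p->field == 0)      /* ...later NULL check of 'p->field'  */
      return -1;
    return val;
  }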
gcc/analyzer/ChangeLog:
PR analyzer/108475
PR analyzer/109060
* sm-malloc.cc (deref_before_check::deref_before_check):
Initialize new field m_deref_expr. Assert that arg is non-NULL.
(deref_before_check::emit): Reject cases where the spelling of the
thing that was dereferenced differs from that of what is checked,
or if the dereference expression was not found. Remove code to
handle NULL m_arg.
(deref_before_check::describe_state_change): Remove code to handle
NULL m_arg.
(deref_before_check::describe_final_event): Likewise.
(deref_before_check::sufficiently_similar_p): New.
(deref_before_check::m_deref_expr): New field.
(malloc_state_machine::maybe_complain_about_deref_before_check):
Don't warn if the diag_ptr is NULL.
gcc/testsuite/ChangeLog:
PR analyzer/108475
PR analyzer/109060
* gcc.dg/analyzer/deref-before-check-pr108475-1.c: New test.
* gcc.dg/analyzer/deref-before-check-pr108475-haproxy-tcpcheck.c:
New test.
* gcc.dg/analyzer/deref-before-check-pr109060-haproxy-cfgparse.c:
New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
|
|
[PR109008]
This patch, incremental to the just posted one, improves the reverse
operation ranges significantly by widening just by 0.5ulp in each
direction rather than 1ulp. Again, REAL_VALUE_TYPE has both a wider
exponent range and wider mantissa precision (160 bits) than any
supported type; this patch uses the latter property.
The patch doesn't do it for -frounding-math, because then the rounding
can be +-1ulp in each direction depending on the rounding mode, which
we don't know, nor for IBM double double, because that type is just weird
and we can't trust it to have sane properties.
I've performed testing of these 2 patches on 300000 random tests as with
yesterday's patch; exact numbers are in the PR, but I see a very significant
improvement in the precision of the ranges while keeping them conservatively
correct.
2023-03-10 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109008
* range-op-float.cc (float_widen_lhs_range): If not
-frounding-math and not IBM double double format, extend lhs
range just by 0.5ulp rather than 1ulp in each direction.
|
|
As discussed in the PR, the t-cygwin-w64 file was introduced in 2013
and has one important problem: two different multilib options, -m64 and -m32,
but MULTILIB_DIRNAMES with just one word in it.
Before the genmultilib sanity checking was added, my understanding is that
this essentially resulted in effective --disable-multilib,
$ gcc -print-multi-lib
.;
;@m32
$ gcc -print-multi-directory
.
$ gcc -print-multi-directory -m64
.
$ gcc -print-multi-directory -m32
$ gcc -print-multi-os-directory
../lib
$ gcc -print-multi-os-directory -m64
../lib
$ gcc -print-multi-os-directory -m32
../lib32
and because of the way e.g. config-ml.in operates
multidirs=
for i in `${CC-gcc} --print-multi-lib 2>/dev/null`; do
  dir=`echo $i | sed -e 's/;.*$//'`
  if [ "${dir}" = "." ]; then
    true
  else
    if [ -z "${multidirs}" ]; then
      multidirs="${dir}"
    else
      multidirs="${multidirs} ${dir}"
    fi
  fi
done
dir was "." the first time (and so nothing was done) and empty
the second time, with multidirs empty too, so multidirs was set to empty
just like it would be with --disable-multilib.
With the added sanity checking the build fails unless --disable-multilib
is used in configure (dunno whether people usually configure that way
on cygwin).
From what has been said in the PR, multilibs were not meant to be supported
and e.g. cygwin headers probably aren't ready for it.
So the following patch just removes the file with the (incorrect) multilib
stuff instead of fixing it (say by setting MULTILIB_DIRNAMES to 64 32).
I have no way to test this though (no Windows around); can anyone please
test this? I'd just like to get some progress on the P1s we have...
2023-02-22 Jakub Jelinek <jakub@redhat.com>
gcc/ChangeLog:
PR target/107998
* config.gcc (x86_64-*-cygwin*): Don't add i386/t-cygwin-w64 into
$tmake_file.
* config/i386/t-cygwin-w64: Remove.
Signed-off-by: Jonathan Yong <10walls@gmail.com>
|
|
The recent change to undo the tree_code_type/tree_code_length
excessive duplication apparently broke building the Linux kernel
plugin. While it is certainly desirable that GCC plugins are built
with the same compiler as GCC has been built and with the same options
(at least the important ones), it might be hard to arrange that,
e.g. if gcc is built using a cross-compiler but the plugin then built
natively, or GCC isn't bootstrapped for other reasons, or just as in
the kernel case they were building the plugin with -std=gnu++11 while
the bootstrapped GCC has been built without any such option and so with
whatever the compiler defaulted to.
For C++17 and later tree_code_{type,length} are UNIQUE symbols with
those assembler names, while for C++11/14 they were
_ZL14tree_code_type and _ZL16tree_code_length.
The following patch uses a comdat var for those even for C++11/14
as suggested by Maciej Cencora. Relying on weak attribute is not an
option because not all hosts support it and there are non-GNU system
compilers. While we could use it unconditionally,
I think defining a template just to make it comdat is weird, and
the compiler itself is always built with the same compiler.
Plugins, being separate shared libraries, will have a separate copy of
the arrays if they are ODR-used in the plugin, so it is not a big
deal if e.g. cc1plus uses tree_code_type while a plugin uses
_ZN19tree_code_type_tmplILi0EE14tree_code_typeE or vice versa.
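A minimal sketch of the comdat-via-class-template trick (the array
contents here are placeholders; the real member holds the tree code
classes): a static constexpr data member of a class template gets vague
(comdat) linkage in every C++ dialect, so the compiler and a plugin built
with a different -std= setting still share one definition.

  template <int N>
  struct tree_code_type_tmpl {
    static constexpr int tree_code_type[] = { 0, 1, 2 };  // placeholder data
  };

  // C++11/14 still require an out-of-class definition for odr-use.
  template <int N>
  constexpr int tree_code_type_tmpl<N>::tree_code_type[];

  // tree.h then refers to tree_code_type_tmpl <0>::tree_code_type in the
  // TREE_CODE_CLASS macro instead of plain tree_code_type.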
2023-03-10 Jakub Jelinek <jakub@redhat.com>
PR plugins/108634
* tree-core.h (tree_code_type, tree_code_length): For C++11 or
C++14, don't declare as extern const arrays.
(tree_code_type_tmpl, tree_code_length_tmpl): New types with
static constexpr member arrays for C++11 or C++14.
* tree.h (TREE_CODE_CLASS): For C++11 or C++14 use
tree_code_type_tmpl <0>::tree_code_type instead of tree_code_type.
(TREE_CODE_LENGTH): For C++11 or C++14 use
tree_code_length_tmpl <0>::tree_code_length instead of
tree_code_length.
* tree.cc (tree_code_type, tree_code_length): Remove.
|
|
On Tue, Nov 01, 2022 at 01:46:20PM -0600, Jeff Law via Gcc-patches wrote:
> > This does cause a change of behaviour if users were previously relying upon
> > symlinks or absolute paths not being resolved.
>
> I'm not too worried about this scenario.
As mentioned in the PR, this patch breaks e.g. ccache testsuite.
I strongly doubt most of the users want such a behavior, because it
makes all filenames absolute when -f*-prefix-map= options remap one
absolute path to another one.
Say I'm in /tmp, /tmp is the canonical path, and there is a
src/test.c file; with -fdebug-prefix-map=/tmp=/blah
previously there would be DW_AT_comp_dir "/blah" and it is still there,
but DW_AT_name, which was previously "src/test.c" (relative to
DW_AT_comp_dir), is now "/blah/src/test.c" instead.
Even worse, the canonicalization is only done on the remap_filename
argument, but not on the old_prefix side. That is e.g. what breaks
ccache. If there is a
/tmp/foobar1 directory and
ln -sf foobar1 /tmp/foobar2
cd /tmp/foobar2
then -fdebug-prefix-map=`pwd`=/blah will just not work, because
src/test.c will be canonicalized to /tmp/foobar1/src/test.c while
old_prefix is still what the user provided, which is /tmp/foobar2.
Users would need to change their invocations to -fdebug-prefix-map=`realpath $(pwd)`=/blah
I've created 3 patches for this.
The first patch just reverts the patch (and its follow-up patch).
The second introduces a new option, -f{,no-}canon-prefix-map, which affects
the behavior of -f{file,macro,debug,profile}-prefix-map=: if on, it
canonicalizes the old path of the prefix map option and compares that
against the canonicalized filename, for absolute paths but not relative ones.
And the last is like the second, but does that also for relative paths, except
for filenames with no / (or / or \ on DOS-based filesystems). So, the third patch
makes the behavior that has been on the trunk lately optional, with the
difference that the old_prefix is canonicalized by the compiler.
Initially I thought I'd just add some magic syntax to the OLD=NEW
argument of those options (because there are 4 of them), but as noted
in the comments, = is a valid char in OLD (just not in NEW), so it would
be hard to come up with a syntax. So instead there is a new option, which
one can turn on and off for different -f*-prefix-map= options if needed.
-fdebug-prefix-map=/path1=/mypath1 -fcanon-prefix-map \
-fdebug-prefix-map=/path2=/mypath2 -fno-canon-prefix-map \
-fdebug-prefix-map=/path3=/mypath3
will use the old behavior for the /path1 and /path3 handling and
the new one only for /path2 handling.
This commit is the third patch described above.
2023-03-10 Jakub Jelinek <jakub@redhat.com>
PR other/108464
* common.opt (fcanon-prefix-map): New option.
* opts.cc: Include file-prefix-map.h.
(flag_canon_prefix_map): New variable.
(common_handle_option): Handle OPT_fcanon_prefix_map.
(gen_command_line_string): Ignore OPT_fcanon_prefix_map.
* file-prefix-map.h (flag_canon_prefix_map): Declare.
* file-prefix-map.cc (struct file_prefix_map): Add canonicalize
member.
(add_prefix_map): Initialize canonicalize member from
flag_canon_prefix_map, and if true canonicalize it using lrealpath.
(remap_filename): Revert 2022-11-01 and 2022-11-07 changes,
use lrealpath result only for map->canonicalize map entries.
* lto-opts.cc (lto_write_options): Ignore OPT_fcanon_prefix_map.
* opts-global.cc (handle_common_deferred_options): Clear
flag_canon_prefix_map at the start and handle OPT_fcanon_prefix_map.
* doc/invoke.texi (-fcanon-prefix-map): Document.
(-ffile-prefix-map, -fdebug-prefix-map, -fprofile-prefix-map): Add
see also for -fcanon-prefix-map.
* doc/cppopts.texi (-fmacro-prefix-map): Likewise.
|
|
On the following testcase, we warn with -Wunused-variable twice, once
in the FEs and later again in cgraphunit with slightly different
wording.
The following patch fixes that by registering a warning suppression in the
FEs when we warn, and not warning in cgraphunit anymore if that happened.
2023-03-10 Jakub Jelinek <jakub@redhat.com>
PR c/108079
gcc/
* cgraphunit.cc (check_global_declaration): Don't warn for unused
variables which have OPT_Wunused_variable warning suppressed.
gcc/c/
* c-decl.cc (pop_scope): Suppress OPT_Wunused_variable warning
after diagnosing it.
gcc/cp/
* decl.cc (poplevel): Suppress OPT_Wunused_variable warning
after diagnosing it.
gcc/testsuite/
* c-c++-common/Wunused-var-18.c: New test.
|
|
into infinities [PR109008]
The following patch does two things (both related to range extension
around the boundaries).
The first part (in the 2 real_isfinite blocks) is to make the ranges
narrower when the old boundaries are the minimum and/or maximum representable
finite numbers. In that case frange_nextafter gives -Inf or +Inf,
but then the resulting computed reverse range is very far from the actually
needed range, usually extending up to infinity, or could even result in NaNs.
While infinities are really the next representable numbers in the
corresponding mode, REAL_VALUE_TYPE is actually a type with a wider exponent
range and 160 bit precision, so the patch instead uses the
nextafter value in a hypothetical floating point format with the same
mantissa precision but a wider range of exponents. This significantly
improves the actual ranges of the reverse operations, while still keeping
them conservatively correct.
The second part is a fix for miscompilation of the new testcase below.
For -ffinite-math-only, without this patch we extend the minimum and/or
maximum representable finite number to -Inf or +Inf, and with the patch to
some number outside of the normal exponent range of the mode; but then
we use set, which canonicalizes it and turns the boundaries back into
the minimum and/or maximum representable finite numbers. However, because
in say [__DBL_MAX__, __DBL_MAX__] = op1 + [__DBL_MAX__, __DBL_MAX__]
op1 can be larger than 0, up to the largest number which rounds to even
down back to __DBL_MAX__, and there are still no infinities involved,
it needs to work even with -ffinite-math-only. So, we really need to
widen the lhs range a little bit even in that case. The patch does
that by temporarily clearing -ffinite-math-only, so that the
value with infinities or the out-of-bounds values passes the
setting and verification (the VR_VARYING case is needed because
we get ICEs otherwise, but when lhs is VR_VARYING in -ffast-math,
i.e. minimum to maximum representable finite and both signs of NaN,
then set does all we need, we don't need to or in a NaN range).
We don't really use the range later in a way where it being wider than
varying would become a problem; we actually just perform maths on the
two boundaries.
As I said in the PR, this doesn't fix the !MODE_HAS_INFINITIES case;
I believe we actually need to treat the boundary values as infinities
in that case because they (probably) work like that, but it is unclear
whether it is just the reverse operation lhs widening that is a problem there,
or whether it is a general problem. I have zero experience with
floating point without infinities (PDP11, some ARM half type?,
what else?).
2023-03-10 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109008
* range-op-float.cc (float_widen_lhs_range): If lb is
minimum representable finite number or ub is maximum
representable finite number, instead of widening it to
-inf or inf widen it to negative or positive 0x0.8p+(EMAX+1).
Temporarily clear flag_finite_math_only when canonicalizing
the widened range.
* gcc.dg/pr109008.c: New test.
|
|
gcc/ChangeLog:
* config/riscv/riscv-builtins.cc (riscv_gimple_fold_builtin): New function.
* config/riscv/riscv-protos.h (riscv_gimple_fold_builtin): Ditto.
(gimple_fold_builtin): Ditto.
* config/riscv/riscv-vector-builtins-bases.cc (class read_vl): New class.
(class vleff): Ditto.
(BASE): Ditto.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-functions.def (read_vl): Ditto.
(vleff): Ditto.
* config/riscv/riscv-vector-builtins-shapes.cc (struct read_vl_def): Ditto.
(struct fault_load_def): Ditto.
(SHAPE): Ditto.
* config/riscv/riscv-vector-builtins-shapes.h: Ditto.
* config/riscv/riscv-vector-builtins.cc
(rvv_arg_type_info::get_tree_type): Add size_ptr.
(gimple_folder::gimple_folder): New class.
(gimple_folder::fold): Ditto.
(gimple_fold_builtin): New function.
(get_read_vl_instance): Ditto.
(get_read_vl_decl): Ditto.
* config/riscv/riscv-vector-builtins.def (size_ptr): Add size_ptr.
* config/riscv/riscv-vector-builtins.h (class gimple_folder): New class.
(get_read_vl_instance): New function.
(get_read_vl_decl): Ditto.
* config/riscv/riscv-vsetvl.cc (fault_first_load_p): Ditto.
(read_vl_insn_p): Ditto.
(available_occurrence_p): Ditto.
(backward_propagate_worthwhile_p): Ditto.
(gen_vsetvl_pat): Adapt for vleff support.
(get_forward_read_vl_insn): New function.
(get_backward_fault_first_load_insn): Ditto.
(source_equal_p): Adapt for vleff support.
(first_ratio_invalid_for_second_sew_p): Remove.
(first_ratio_invalid_for_second_lmul_p): Ditto.
(first_lmul_less_than_second_lmul_p): Ditto.
(first_ratio_less_than_second_ratio_p): Ditto.
(support_relaxed_compatible_p): New function.
(vector_insn_info::operator>): Remove.
(vector_insn_info::operator>=): Refine.
(vector_insn_info::parse_insn): Adapt for vleff support.
(vector_insn_info::compatible_p): Ditto.
(vector_insn_info::update_fault_first_load_avl): New function.
(pass_vsetvl::transfer_after): Adapt for vleff support.
(pass_vsetvl::demand_fusion): Ditto.
(pass_vsetvl::cleanup_insns): Ditto.
* config/riscv/riscv-vsetvl.def (DEF_INCOMPATIBLE_COND): Remove
redundant conditions.
* config/riscv/riscv-vsetvl.h (struct demands_cond): New function.
* config/riscv/riscv.cc (TARGET_GIMPLE_FOLD_BUILTIN): New target hook.
* config/riscv/riscv.md: Adapt for vleff support.
* config/riscv/t-riscv: Ditto.
* config/riscv/vector-iterators.md: New iterator.
* config/riscv/vector.md (read_vlsi): New pattern.
(read_vldi_zero_extend): Ditto.
(@pred_fault_load<mode>): Ditto.
|
|
Hi, currently maybe_gen_insn can only expand up to 9 operands (nops).
For RVV intrinsics, I need to extend it to 10; otherwise I would have to use GEN_FCN.
This patch is a quite obvious change; OK for trunk?
Thanks.
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins.cc
(function_expander::use_ternop_insn): Use maybe_gen_insn instead.
(function_expander::use_widen_ternop_insn): Ditto.
* optabs.cc (maybe_gen_insn): Extend nops handling.
|
|
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc: Split indexed load
patterns according to RVV ISA.
* config/riscv/vector-iterators.md: New iterators.
* config/riscv/vector.md
(@pred_indexed_<order>load<VNX1_QHSD:mode><VNX1_QHSDI:mode>): Remove.
(@pred_indexed_<order>load<mode>_same_eew): New pattern.
(@pred_indexed_<order>load<mode>_x2_greater_eew): Ditto.
(@pred_indexed_<order>load<mode>_x4_greater_eew): Ditto.
(@pred_indexed_<order>load<mode>_x8_greater_eew): Ditto.
(@pred_indexed_<order>load<mode>_x2_smaller_eew): Ditto.
(@pred_indexed_<order>load<mode>_x4_smaller_eew): Ditto.
(@pred_indexed_<order>load<mode>_x8_smaller_eew): Ditto.
(@pred_indexed_<order>load<VNX2_QHSD:mode><VNX2_QHSDI:mode>): Remove.
(@pred_indexed_<order>load<VNX4_QHSD:mode><VNX4_QHSDI:mode>): Ditto.
(@pred_indexed_<order>load<VNX8_QHSD:mode><VNX8_QHSDI:mode>): Ditto.
(@pred_indexed_<order>load<VNX16_QHS:mode><VNX16_QHSI:mode>): Ditto.
(@pred_indexed_<order>load<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
(@pred_indexed_<order>load<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/merge_constraint-1.c: New test.
|
|
* tree-vect-loop-manip.cc (vect_do_peeling): Use
result of constant_lower_bound instead of vf for the lower
bound of the epilog loop trip count.
|
|
The code for handling signed + typedef was breaking on __int128_t, because
it isn't a proper typedef: it doesn't have DECL_ORIGINAL_TYPE.
PR c++/108099
gcc/cp/ChangeLog:
* decl.cc (grokdeclarator): Handle non-typedef typedef_decl.
gcc/testsuite/ChangeLog:
* g++.dg/ext/int128-7.C: New test.
|
|
PR c++/108542
gcc/cp/ChangeLog:
* class.cc (instantiate_type): Strip location wrapper.
gcc/testsuite/ChangeLog:
* g++.dg/contracts/contracts-err1.C: New test.
|
|
|
|
The optimization to reuse the same allocator temporary for all string
constructor calls was breaking on this testcase, because the temps were
already in the argument to build_vec_init, and replacing them with
references to one slot got confused with calls at multiple levels (for the
initializer_list backing array, and then again for the array member of the
std::array). Fixed by reusing the whole TARGET_EXPR instead of pulling out
the slot; gimplification ensures that it's only initialized once.
I also moved the check for initializing a std:: class down into the tree
walk, and handle multiple temps within a single array element
initialization.
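A hedged guess at the shape of such a testcase (not the actual
initlist-array18.C): string constructors run at two levels, once for the
initializer_list backing array and again for the array member of the
std::array, and each call takes an allocator temporary.

  #include <array>
  #include <string>
  #include <vector>

  std::vector<std::array<std::string, 2>> v = { { "a", "b" }, { "c", "d" } };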
PR c++/108773
gcc/cp/ChangeLog:
* init.cc (find_allocator_temps_r): New.
(combine_allocator_temps): Replace find_allocator_temp.
(build_vec_init): Adjust.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/initlist-array18.C: New test.
* g++.dg/cpp0x/initlist-array19.C: New test.
|
|
There are various -Wanalyzer-null-dereference false +ves in bugzilla
that I've been attempting to fix. Unfortunately I haven't made much
progress, but it seems worth at least capturing the reduced
reproducers as test cases, to make it easier to spot changes in
behavior.
gcc/testsuite/ChangeLog:
PR analyzer/102671
PR analyzer/105755
PR analyzer/108251
PR analyzer/108400
* gcc.dg/analyzer/null-deref-pr102671-1.c: New test, reduced
from Emacs.
* gcc.dg/analyzer/null-deref-pr102671-2.c: Likewise.
* gcc.dg/analyzer/null-deref-pr105755.c: Likewise.
* gcc.dg/analyzer/null-deref-pr108251-smp_fetch_ssl_fc_has_early-O2.c:
New test, reduced from haproxy's src/ssl_sample.c.
* gcc.dg/analyzer/null-deref-pr108251-smp_fetch_ssl_fc_has_early.c:
Likewise.
* gcc.dg/analyzer/null-deref-pr108400-SoftEtherVPN-WebUi.c: New
test, reduced from SoftEtherVPN's src/Cedar/WebUI.c.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
|
|
When doing an emergency dump the cfg output dumps are corrupted because the
ending "}" is missing.
Normally when the pass manager finishes it would call finish_graph_dump_file to
produce this; it is done there because each pass can dump multiple digraphs.
However during an emergency dump we only dump the current function, and after
that is done we never go back to the pass manager.
As such, we need to manually call finish_graph_dump_file in order to properly
finish off graph generation.
With this, -fdump-tree-*-graph works properly during a crash dump.
gcc/ChangeLog:
* passes.cc (emergency_dump_function): Finish graph generation.
|
|
We were analyzing code quality after recent changes and noticed that the
tbz support somehow managed to increase the number of branches overall rather
than decrease them.
While investigating this we figured out that the problem is that when an
existing & <constant> exists in gimple and the instruction is generated because
of the range information gotten from the ANDed constant, we end up with the
situation that you get a NOP AND in the RTL expansion.
This is not a problem as CSE will take care of it normally. The issue is when
this original AND was done in a location where PRE or FRE "lift" the AND to a
different basic block. This triggers a problem when the resulting value is not
single use. Instead of having an AND and tbz, we end up generating an
AND + TST + BR if the mode is HI or QI.
This CSE across BB was a problem before but this change made it worse. Our
branch patterns rely on combine being able to fold AND or zero_extends into the
instructions.
To work around this (since a proper fix is outside of the scope of stage-4) we
are limiting the new tbranch optab to only HI and QI mode values. This isn't a
problem because these two modes are modes for which we don't have CBZ support,
so they are the problematic cases to begin with. Additionally booleans are QI.
The second thing we're doing is limiting the only legal bitpos to position 0, i.e.
only the bottom bit. This is such that we prevent the double ANDs as much as
possible.
Now most other cases, i.e. where we had an explicit & in the source code, are
still handled correctly by the anonymous (*tb<optab><ALLI:mode><GPI:mode>1)
pattern that was added along with tbranch support.
This means we don't expand the superfluous AND here, and while it doesn't fix the
problem that in the cross-BB case we lose tbz, it also doesn't make things worse.
With these tweaks we've now reduced the number of insns uniformly, as was originally
expected.
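A hedged sketch (not the committed tbz_2.c/tbz_3.c testcases) of the case the
restricted tbranch optab still covers: a bottom-bit test on a QImode value,
which can expand directly to tbz/tbnz without a separate AND + TST.

  void sideeffect (void);

  void
  f (unsigned char x)
  {
    if (x & 1)
      sideeffect ();
  }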
gcc/ChangeLog:
* config/aarch64/aarch64.md (tbranch_<code><mode>3): Restrict to SHORT
and bottom bit only.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/tbz_2.c: New test.
* gcc.target/aarch64/tbz_3.c: New test.
|
|
The problem here is that after r13-4748-g2a27ae32fabf85, in some
cases we were calling inform without a corresponding warning.
This changes the logic such that we only cause that to happen
if a warning happened beforehand.
Changes since v1:
* Fix formatting and dump message as suggested by Jakub.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
PR tree-optimization/108980
* gimple-array-bounds.cc (array_bounds_checker::check_array_ref):
Reorganize the call to warning for non-strict flexible arrays
to be before the check of warned.
|
|
The standard was unclear what happens with the transformation of a deduction
guide if the initial template argument deduction fails for a reason other
than not deducing all the arguments; my implementation assumed that the
right thing was to give up on the deduction guide. But in consideration of
CWG2664 this week I realized that we get a better result by just continuing
with an empty set of deductions, so the alias deduction guide is the same as
the original deduction guide plus the deducible constraint.
DR 2664
PR c++/102529
gcc/cp/ChangeLog:
* pt.cc (alias_ctad_tweaks): Continue after deduction failure.
gcc/testsuite/ChangeLog:
* g++.dg/DRs/dr2664.C: New test.
* g++.dg/cpp2a/class-deduction-alias15.C: New test.
|
|
In my initial implementation of alias CTAD, I described a couple of
differences from the specification that I thought would not have a practical
effect; this testcase demonstrates that I was wrong. One difference is
resolved by the CPTK_IS_DEDUCIBLE commit; the other (adding too many of the
alias template parameters to the new deduction guide) is fixed by this
patch.
PR c++/105841
gcc/cp/ChangeLog:
* pt.cc (corresponding_template_parameter_list): Split out...
(corresponding_template_parameter): ...from here.
(find_template_parameters): Factor out...
(find_template_parameter_info::find_in): ...this function.
(find_template_parameter_info::find_in_recursive): New.
(find_template_parameter_info::found): New.
(alias_ctad_tweaks): Only add parms used in the deduced args.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/class-deduction-alias14.C: New test.
Co-authored-by: Michael Spertus <mike@spertus.com>
|
|
I want to have more discussion about the interface before claiming the
__is_deducible name, so for GCC 13 make it internal-only.
gcc/ChangeLog:
* doc/extend.texi: Comment out __is_deducible docs.
gcc/cp/ChangeLog:
* cp-trait.def (IS_DEDUCIBLE): Add space to name.
gcc/testsuite/ChangeLog:
* g++.dg/ext/is_deducible1.C: Guard with
__has_builtin (__is_deducible).
|
|
C++20 class template argument deduction for an alias template involves
adding a constraint that the template arguments for the alias template can
be deduced from the return type of the deduction guide for the underlying
class template. In the standard, this is modeled as defining a class
template with a partial specialization, but it's much more efficient to
implement with a trait that directly tries to perform the deduction.
The first argument to the trait is a template rather than a type, so various
places needed to be adjusted to accommodate that.
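A hedged illustration of C++20 alias CTAD itself (not the PR's testcase):
the deduction guides of the underlying class template are transformed for
the alias, and the new trait directly checks that the alias's template
arguments are deducible from each guide's return type.

  template <class T> struct A { A(T); };
  template <class U> using B = A<U>;

  B b(42);   // alias CTAD deduces B<int>, i.e. A<int>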
PR c++/105841
gcc/ChangeLog:
* doc/extend.texi (Type Traits): Document __is_deducible.
gcc/cp/ChangeLog:
* cp-trait.def (IS_DEDUCIBLE): New.
* cxx-pretty-print.cc (pp_cxx_trait): Handle non-type.
* parser.cc (cp_parser_trait): Likewise.
* tree.cc (cp_tree_equal): Likewise.
* pt.cc (tsubst_copy_and_build): Likewise.
(type_targs_deducible_from): New.
(alias_ctad_tweaks): Use it.
* semantics.cc (trait_expr_value): Handle CPTK_IS_DEDUCIBLE.
(finish_trait_expr): Likewise.
* constraint.cc (diagnose_trait_expr): Likewise.
* cp-tree.h (type_targs_deducible_from): Declare.
gcc/testsuite/ChangeLog:
* g++.dg/ext/is_deducible1.C: New test.
|
|
Compile a resource object that contains the utf8 manifest.
Then link that object into the driver and compiler proper.
For the compiler proper the link has to be forced because the
resource object file goes into a static library (libbackend.a)
and eventually gets dropped because it has no symbols of
its own and nothing references it inside the library.
Therefore, an artificial symbol is planted to force the link.
gcc/ChangeLog:
PR driver/108865
* config.host: add object for x86_64-*-mingw*.
* config/i386/sym-mingw32.cc: dummy file to attach
symbol.
* config/i386/utf8-mingw32.rc: windres resource file.
* config/i386/winnt-utf8.manifest: XML manifest to
enable UTF-8.
* config/i386/x-mingw32: reference to x-mingw32-utf8.
* config/i386/x-mingw32-utf8: Makefile fragment to
embed UTF-8 manifest.
Signed-off-by: Jonathan Yong <10walls@gmail.com>
|
|
LRA is too conservative in its calculation of conflicts with clobbered regs,
using the biggest access mode. This results in failure of possible reg
coalescing and worse code. This patch solves the problem.
PR rtl-optimization/108999
gcc/ChangeLog:
* lra-constraints.cc (process_alt_operands): Use operand modes for
clobbered regs instead of the biggest access mode.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/pr108999.c: New.
|
|
The following plugs one place in extract_muldiv where it should avoid
folding when sanitizing overflow.
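A hedged sketch of the kind of expression involved (not the committed
testcase): folding (b * 8) / 4 into b * 2 relies on signed overflow being
undefined in b * 8, and with -fsanitize=signed-integer-overflow that
folding would hide the very overflow the sanitizer is supposed to report.

  int
  f (int b)
  {
    return (b * 8) / 4;
  }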
PR middle-end/108995
* fold-const.cc (extract_muldiv_1): Avoid folding
(CST * b) / CST2 when sanitizing overflow and we rely on
overflow being undefined.
* gcc.dg/ubsan/pr108995.c: New testcase.
|
|
The following testcase is reduced from a miscompilation of the scipy package.
If we have say lhs = [1., 1.] - [1., 1.] and want to compute the range
of lhs from it, we correctly determine it is [0., 0.] (if computations
are exact, we generally don't try to round them further in
frange_arithmetic). In the testcase it is about a reverse operation,
[1., 1.] = op1 + [1., 1.], and we want to compute the range of op1 from that.
Right now we just perform the inverse operation (there are some corner
cases about NaN and infinities handling) and so arrive at the range
[0., 0.] as well, and because it is a singleton, optimize return eps;
to return 0. That is incorrect though; for the reverse ops we also need to
take rounding into account, and the right exact range is
[-0x1.0p-54, 0x1.0p-53] in this case when rounding to nearest, i.e.
all numbers which added to 1. with round to nearest still produce 1.
The problem isn't limited to singleton ranges, nor to
results around zero. We basically also need to consider values
where the result is up to 0.5ulp away from the lhs range boundaries
in each direction.
The following patch fixes it by extending the lhs range for the
reverse operations by 1ulp in each direction. The PR contains
a pseudo-random test generator I've used to generate 300000 tests
of + and - (and then used the same test with * and / instead of + and -),
together with a hack to print the ranges discovered by the patch in
a form that another test could then verify the range is conservatively
correct and how far it is from a minimal range.
I believe the results are good enough for now, though I plan to look
incrementally into trying to do something better on the -XXX_MAX or
XXX_MAX boundaries (where I think frange_nextafter will use -inf or +inf)
and also into trying to increase the range just by 0.5ulp rather than 1ulp
if !flag_rounding_math. But dunno whether either of those will be doable
and will pass the testing, so I think it is worth committing this fix
first.
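A hedged sketch of the miscompiled pattern (not the committed
pr109008.c testcase): from "one == 1.0" it does not follow that eps is 0.0;
with round-to-nearest any eps roughly in [-0x1.0p-54, 0x1.0p-53] still
gives 1.0 + eps == 1.0, so folding "return eps;" to "return 0.0;" is wrong.

  double
  f (double eps)
  {
    double one = 1.0 + eps;
    if (one == 1.0)
      return eps;   /* must not be folded to "return 0.0;"  */
    return -1.0;
  }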
2023-03-09 Jakub Jelinek <jakub@redhat.com>
Richard Biener <rguenther@suse.de>
PR tree-optimization/109008
* range-op-float.cc (float_widen_lhs_range): New function.
(foperator_plus::op1_range, foperator_minus::op1_range,
foperator_minus::op2_range, foperator_mult::op1_range,
foperator_div::op1_range, foperator_div::op2_range): Use it.
* gcc.c-torture/execute/ieee/pr109008.c: New test.
|
|
|
|
According to Haochen's finding in [1], currently ppc-fortran.exp
doesn't support Fortran-specific warning or error messages well.
Looking into it, that's because gfortran uses different
warning/error prefixes, as follows:
set gcc_warning_prefix "\[Ww\]arning:"
set gcc_error_prefix "(Fatal )?\[Ee\]rror:"
compared to:
set gcc_warning_prefix "warning:"
set gcc_error_prefix "(fatal )?error:"
So this patch overrides these two prefixes and makes it support
dg-{warning,error} checks.
[1] https://gcc.gnu.org/pipermail/gcc-patches/2023-March/613302.html
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/ppc-fortran/ppc-fortran.exp: Override
gcc_{warning,error}_prefix with Fortran specific one used in
gfortran_init.
|
|
Test cases scalar-test-data-class-1[45].c use the type __int128,
which requires checking the int128 effective target, otherwise
testing them fails at -m32. This patch adds the int128
effective target requirement.
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/bfp/scalar-test-data-class-14.c: Adjust with
int128 effective target requirement.
* gcc.target/powerpc/bfp/scalar-test-data-class-15.c: Likewise.
|
|
Two test cases, scalar-test-data-class-12.c and vec-test-data-class-9.c,
fail in Power9 BE testing at -m32; they use the built-in function
scalar_insert_exp, which requires powerpc64 support. This patch
makes them check the has_arch_ppc64 effective target requirement.
PR testsuite/108729
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/bfp/scalar-test-data-class-12.c: Adjust with
has_arch_ppc64 effective target.
* gcc.target/powerpc/bfp/vec-test-data-class-9.c: Likewise.
|
|
The built-in function scalar_test_neg_qp is under the stanza
ieee128-hw, that is TARGET_FLOAT128_HW. We don't
have float128 hardware support on 32-bit, as the following shows:
  if (TARGET_FLOAT128_HW && !TARGET_64BIT)
    {
      if ((rs6000_isa_flags_explicit & OPTION_MASK_FLOAT128_HW) != 0)
        error ("%qs requires %qs", "%<-mfloat128-hardware%>", "-m64");
      rs6000_isa_flags &= ~OPTION_MASK_FLOAT128_HW;
    }
So adjust the case with the lp64 effective target requirement accordingly.
PR testsuite/108730
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/bfp/scalar-test-neg-8.c: Adjust with lp64
effective target requirement.
|
|
When compiled with CPU type Power9 or later, GCC generates
xxspltib rather than vspltis*, so adjust the test
case's scan patterns accordingly.
PR testsuite/108813
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/pr101384-2.c: Adjust with xxspltib.
|
|
On BE, the extracted index for the leftmost element is 0
rather than 1; adjust the test case accordingly.
PR testsuite/108810
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/fold-vec-extract-double.p9.c (testd_cst): Adjust
the extracted index for BE.
|
|
The mips msa-ds.c test is trying to ensure that MSA branches can have their
delay slots filled. The regexp it used looked for the function name, a nop,
then the function name again. If it found that sequence, then the test failed.
The problem is that with Vlad's recent IRA work there's simply less code in the
test (good), and as a result one of the *other* branches in the test had an
unfilled delay slot -- the delay slot for the MSA branch was still being
filled.
This patch tightens up the regexp. In particular it looks for the MSA branch
and a nop on the next line (avoiding the over-eager .* construct). That
indicates that the MSA branch did not have its delay slot filled. When that
sequence is found, then the test fails.
This fixes the recent regressions for mips64 and mips64el in the tester.
Installing on the trunk,
gcc/testsuite:
* gcc.target/mips/msa-ds.c: Fix over eager pattern matching.
|
|
The recently added tests missed checking for "fopenmp" (see
other tests where "-fopenmp" is passed), which makes them
fail on non-openmp systems.
* gcc.dg/analyzer/omp-parallel-for-get-min.c,
gcc.dg/analyzer/omp-parallel-for-1.c: Require effective target fopenmp.
|
|
|
|
gcc/ChangeLog
PR sanitizer/81649
* doc/invoke.texi (Instrumentation Options): Clarify
LeakSanitizer behavior.
|
|
gcc/ChangeLog
* doc/install.texi (Prerequisites): Add link to gmplib.org.
|
|
A missed piece of the patch for static operator(): in tsubst_function_decl,
we don't want to replace the first parameter with a new closure pointer if
operator() is static.
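A hedged sketch of the construct involved (not the static-operator-call5.C
testcase): a static lambda in a template; since its operator() is static,
there is no closure pointer parameter for tsubst_function_decl to replace.

  template <class T>
  int call ()
  {
    auto l = [] (T v) static { return v; };   // C++23 static operator()
    return l (42);
  }

  int x = call<int> ();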
PR c++/108526
PR c++/106651
gcc/cp/ChangeLog:
* pt.cc (tsubst_function_decl): Don't replace the closure
parameter if DECL_STATIC_FUNCTION_P.
gcc/testsuite/ChangeLog:
* g++.dg/cpp23/static-operator-call5.C: Pass -g.
|
|
Here, -Wdangling-reference triggers where it probably shouldn't, causing
some grief. The code in question uses a reference wrapper with a member
function returning a reference to a subobject of a non-temporary object:
const Plane & meta = fm.planes().inner();
I've tried a few approaches, e.g., checking that the member function's
return type is the same as the type of the enclosing class (which is
the case for member functions returning *this), but that then breaks
Wdangling-reference4.C with std::optional<std::string>.
This patch adjusts do_warn_dangling_reference so that we look through
reference wrapper classes (meaning a class that has a reference member and a
constructor taking the same reference type, or is std::reference_wrapper
or std::ranges::ref_view) and don't warn for them, supposing that the
member function returns a reference to a non-temporary object.
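A hedged sketch of the pattern that should no longer warn (FrameMeta and
PlaneRef are made-up names; the quoted code above uses others): a
reference-wrapper-like class whose member function returns a reference to
a non-temporary object.

  struct Plane { int id; };

  struct PlaneRef {
    const Plane &ref;
    PlaneRef (const Plane &p) : ref (p) {}
    const Plane &inner () const { return ref; }
  };

  struct FrameMeta {
    Plane plane;
    PlaneRef planes () const { return PlaneRef (plane); }
  };

  void
  use (const FrameMeta &fm)
  {
    const Plane &meta = fm.planes ().inner ();   // previously warned; OK
    (void) meta;
  }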
PR c++/107532
gcc/cp/ChangeLog:
* call.cc (reference_like_class_p): New.
(do_warn_dangling_reference): Add new bool parameter. See through
reference_like_class_p.
gcc/testsuite/ChangeLog:
* g++.dg/warn/Wdangling-reference8.C: New test.
* g++.dg/warn/Wdangling-reference9.C: New test.
|
|
This fixes another syntax error in slp-3.c. I missed a '{ ... }' in
order to properly exclude s390_vx.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/slp-3.c: Add '{ ... }'.
|
|
In my recent rtti.cc change I assumed when emitting the support tinfos
that the tinfos for the fundamental types haven't been created yet.
Normally (in libsupc++.a (fundamental_type_info.o)) that is the case,
but as can be seen in the testcase, one can violate it by using typeid
etc. in the same TU, doing so before the ~__fundamental_type_info ()
definition.
The following patch fixes that by popping from unemitted_tinfo_decls
only in the normal case when it is there, and treating non-NULL
DECL_INITIAL on a tinfo node as indication that emit_tinfo_decl has
processed it already.
2023-03-07 Jakub Jelinek <jakub@redhat.com>
PR c++/109042
* rtti.cc (emit_support_tinfo_1): Don't assert that last
unemitted_tinfo_decls element is tinfo, instead pop from it only in
that case.
* decl2.cc (c_parse_final_cleanups): Don't call emit_tinfo_decl
for unemitted_tinfo_decls which already have non-NULL DECL_INITIAL.
* g++.dg/rtti/pr109042.C: New test.
|
|
When processing a noexcept, constructors aren't elided: build_over_call
has
/* It's unsafe to elide the constructor when handling
a noexcept-expression, it may evaluate to the wrong
value (c++/53025). */
&& (force_elide || cp_noexcept_operand == 0))
so the assert I added recently needs to be relaxed a little bit.
PR c++/109030
gcc/cp/ChangeLog:
* constexpr.cc (cxx_eval_call_expression): Relax assert.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/noexcept77.C: New test.
|
|
Similarly to PR107938, this also started with r11-557, whereby cp_finish_decl
can call check_initializer even in a template for a constexpr initializer.
Here we are rejecting
extern const Q q;
template<int>
constexpr auto p = q(0);
even though q has a constexpr operator(). It's deemed non-const by
decl_maybe_constant_var_p because even though 'q' is const it is not
of integral/enum type.
If fun is not a function pointer, we don't know if we're using it as an
lvalue or rvalue, so with this patch we pass 'any' for want_rval. With
that, p_c_e/VAR_DECL doesn't flat out reject the underlying VAR_DECL.
PR c++/107939
gcc/cp/ChangeLog:
* constexpr.cc (potential_constant_expression_1) <case CALL_EXPR>: Pass
'any' when recursing on a VAR_DECL and not a pointer to function.
gcc/testsuite/ChangeLog:
* g++.dg/cpp1y/var-templ74.C: Remove dg-error.
* g++.dg/cpp1y/var-templ77.C: New test.
|