|
When inside a method we know the this pointer points to
an object of at least the size of the method's base type.  We
can use this to classify more references as not trapping, which
enables invariant motion and in turn vectorization, as for a
slightly modified version of the testcase in the PR.
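For illustration, a minimal sketch of the kind of member function this helps (hypothetical code, not the testcase from the PR):
```
struct X {
  int scale;
  void apply (int *p, int n)
  {
    // The load of this->scale is conditional, so hoisting it out of the loop
    // previously required proving it cannot trap.  Knowing *this covers at
    // least sizeof(X) bytes makes the load non-trapping, enabling invariant
    // motion and, in turn, vectorization of the loop.
    for (int i = 0; i < n; ++i)
      if (p[i] > 0)
        p[i] *= this->scale;
  }
};
```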
PR tree-optimization/121685
* tree-eh.cc (ref_outside_object_p): Split out from ...
(tree_could_trap_p): ... here.  Assume the this pointer
of a method refers to an object of at least the size of its
base type.
* g++.dg/vect/pr121685-1.cc: New testcase.
|
|
Currently the code rejects:
```
tmp = *a;
*b = tmp;
```
(unless *a == *b).  This can be improved: if a and b are known to
share the same base, reject the transformation only if the accesses may
overlap, that is, if the difference of their offsets (from the base) may be
less than the access size.
This fixes the testcase in comment #0 of PR 107051.
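For illustration, a hedged sketch of the kind of aggregate copy this now allows (hypothetical code, not the new testcase):
```
struct vec4 { int v[4]; };
struct pair { struct vec4 src; struct vec4 dst; };

void f (struct pair *p)
{
  // src and dst share the same base object p but their byte ranges cannot
  // overlap, so the copy through tmp can be propagated to p->dst = p->src.
  struct vec4 tmp = p->src;   /* tmp = *a */
  p->dst = tmp;               /* *b = tmp */
}
```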
Changes since v1:
* v2: Use ranges_maybe_overlap_p instead of manually checking the overlap.
Allow for the case where the alignment is known to be greater than
the size.
PR tree-optimization/107051
gcc/ChangeLog:
* tree-ssa-forwprop.cc (optimize_agr_copyprop_1): Allow memory
references sharing the same base if they are known not to overlap
within the access size.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/copy-prop-aggregate-union-1.c: New test.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
|
|
Previously, vector built-in functions were not properly registered during
the LTO pipeline, causing link failures when vector intrinsics were used
in LTO builds with mixed architecture options. This patch ensures all
vector built-in functions are always registered during LTO compilation.
The key changes include:
- Moving pragma intrinsic flag manipulation from riscv-c.cc to
riscv-vector-builtins.cc for better encapsulation
- Registering all vector built-in functions regardless of current ISA
extensions, deferring the actual extension checking to expansion time
- Adding proper support for built-in type registration during LTO
This approach is safe because we already perform extension requirement
checking at expansion time. The trade-off is a slight increase in
bootstrap time for LTO builds due to registering more built-in functions.
PR target/110812
gcc/ChangeLog:
* config/riscv/riscv-c.cc (pragma_intrinsic_flags): Remove struct.
(riscv_pragma_intrinsic_flags_pollute): Remove function.
(riscv_pragma_intrinsic_flags_restore): Remove function.
(riscv_pragma_intrinsic): Simplify to only call handle_pragma_vector.
* config/riscv/riscv-vector-builtins.cc (pragma_intrinsic_flags):
Move struct definition here from riscv-c.cc.
(riscv_pragma_intrinsic_flags_pollute): Move and adapt from
riscv-c.cc, add zvfbfmin, zvfhmin and vector_elen_bf_16 support.
(riscv_pragma_intrinsic_flags_restore): Move from riscv-c.cc.
(rvv_switcher::rvv_switcher): Add pollute_flags parameter to
control flag manipulation.
(rvv_switcher::~rvv_switcher): Restore flags conditionally.
(register_builtin_types): Use rvv_switcher without polluting flags.
(get_required_extensions): Remove function.
(check_required_extensions): Simplify to only check type validity.
(function_instance::function_returns_void_p): Move implementation
from header.
(function_builder::add_function): Register placeholder for LTO.
(init_builtins): Simplify and handle LTO case.
(reinit_builtins): Remove function.
(handle_pragma_vector): Remove extension checking.
* config/riscv/riscv-vector-builtins.h
(function_instance::function_returns_void_p): Add declaration.
(function_call_info::function_returns_void_p): Remove inline
implementation.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/lto/pr110812_0.c: New test.
* gcc.target/riscv/lto/pr110812_1.c: New test.
* gcc.target/riscv/lto/riscv-lto.exp: New test driver.
* gcc.target/riscv/lto/riscv_vector.h: New header wrapper.
|
|
The extension subset check logic in riscv_ext_is_subset was incorrectly
inverted, causing functions with more extensions to be incorrectly
rejected from being inlined into functions with fewer extensions.
This patch fixes the logic to correctly check if the callee's required
extensions are a subset of the caller's extensions. The corrected logic
now properly allows inlining when the caller has all the extensions that
the callee requires.
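For reference, a hedged sketch of the subset relation the corrected check is meant to express (not the actual riscv-common.cc implementation):
```
#include <cstdint>

// Inlining is allowed only when every extension the callee requires is also
// enabled in the caller, i.e. the callee's mask is a subset of the caller's.
static bool ext_is_subset (uint64_t callee_ext, uint64_t caller_ext)
{
  return (callee_ext & ~caller_ext) == 0;
}
```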
gcc/
* common/config/riscv/riscv-common.cc (riscv_ext_is_subset): Fix
inverted logic in extension subset check.
gcc/testsuite/
* gcc.target/riscv/can_inline_p_test-01.c: New test.
* gcc.target/riscv/can_inline_p_test-02.c: New test.
* gcc.target/riscv/can_inline_p_test-03.c: New test.
* gcc.target/riscv/can_inline_p_test-04.c: New test.
* gcc.target/riscv/riscv_vector.h: New header wrapper for vector
tests.
|
|
This patch fixes regressions of the gcc.dg/torture/bitint-* tests
caused by r16-3036-ga76a032354ee48 with --enable-checking=all.
The errors are similar to the following:
../../gcc/testsuite/gcc.dg/torture/bitint-14.c:54:1: error: type mismatch in 'array_ref'
<unnamed-signed:63>
unsigned long
_42 = VIEW_CONVERT_EXPR<unsigned long[10]>(r575[i_10])[8];
during GIMPLE pass: bitintlower0
../../gcc/testsuite/gcc.dg/torture/bitint-14.c:54:1: internal compiler error: verify_gimple failed
The first two hunks aren't strictly necessary, I'm just trying to
avoid calling build_qualified_type when it won't be needed.
At least on s390x-linux (tried cross) bitint-14.c doesn't ICE with it
anymore.
Though, I must say the more I look at the limb_access changes, the less
I like the abi_load_p stuff, so I think what we eventually should do instead
is return values with m_limb_type always.
For bitint_extended case (but only if we can prove that the extension there
is for the right precision and right sign) or !write_p just return it,
otherwise cast to lower precision and back to m_limb_type.
And on the other side on stores, for !bitint_extended happily store whatever
the whole m_limb_type value contains, for bitint_extended do the cast to
smaller precision and back on the writes.
2025-09-04 Jakub Jelinek <jakub@redhat.com>
PR target/117599
* gimple-lower-bitint.cc (bitint_large_huge::limb_access): Move
build_qualified_type calls into the if/else if/else bodies, for
the last one set ltype to m_limb_type first, drop limb_type_a
and use ltype instead.
|
|
The following handles SCEV analysis of a peeled converted IV if
that IV is known to not overflow. For
# _15 = PHI <_4(6), 0(5)>
# i_18 = PHI <i_11(6), 0(5)>
i_11 = i_18 + 1;
_4 = (long unsigned int) i_11;
we cannot analyze _15 directly since the SCC has a widening
conversion. But we can analyze _4 to (long unsigned int) {1, +, 1}_1
which is "peeled" (it's from after the first iteration of _15).
If the un-peeled IV {0, +, 1}_1 has the same initial value as _15
and it does not overflow then _15 can be analyzed as
{0ul, +, 1ul}_1.
The following implements this in simplify_peeled_chrec.
PR tree-optimization/61247
* tree-scalar-evolution.cc (simplify_peeled_chrec):
Handle the case of a converted peeled chrec.
* gcc.dg/vect/vect-pr61247.c: New testcase.
|
|
The following makes value-numbering handle a situation like
D.58046 = {};
SR.83_44->i = {};
pretmp_41 = MEM[(struct _Optional_payload_base &)&D.58046 + 8]._M_engaged;
where the intermediate may-def SR.83_44->i = {} prevents CSE of the
load to zero. The problem is two-fold here, one is that the code
skipping may-defs does not handle zeroing via a CTOR, the other is that
(partial) must-defs can be better handled by later code as otherwise
we may not find an appropriate definition to CSE to.
I've noticed we fail to guard against storage-order issues, so fixed
that on the fly.
PR tree-optimization/121740
* tree-ssa-sccvn.cc (vn_reference_lookup_3): Allow skipping
may-defs from CTORs. Do not skip may-defs with storage-order
issues or (partial) must-defs.
* gcc.dg/tree-ssa/ssa-fre-104.c: Un-XFAIL.
* gcc.dg/tree-ssa/ssa-fre-110.c: New testcase.
|
|
On looking again at [basic.lookup.argdep] p4, I believe GCC hasn't fully
implemented the wording here for ADL. This patch fixes two issues.
First, 4.3 indicates that a function exported from a named module should
be visible to ADL regardless of whether it's visible to normal name
lookup, as long as some restrictions are followed.
This patch implements this; for skipping declarations that "do not
appear in the TU containing the point of lookup" I don't think there's
anything special we need to do, as any declarations before the point of
lookup will be found in other ways anyway, and any remaining
declarations from the current TU cannot be seen regardless.
Secondly, currently we only add the exported functions along the
instantiation path of a lookup. But I don't think this is intended by
the current wording, so this patch adjusts that. I also clean up the
logic to do all different module processing in adl_namespace_fns so that
we don't duplicate work in traversing the module binding list
unnecessarily.
This new handling means we need to do some extra work to properly error
on overload sets containing TU-local entities (as this might actually
come up now!) but I'm leaving that for a later patch.
As a drive-by fix this also fixes an ICE for C++26 expansion statements
with finding the instantiation path.
PR c++/117658
gcc/cp/ChangeLog:
* cp-tree.h (get_originating_module): Adjust parameter names.
* module.cc (path_of_instantiation): Handle C++26 expansion
statements.
* name-lookup.cc (name_lookup::adl_namespace_fns): Handle
exported declarations attached to the same module as an
associated entity with the same innermost non-inline namespace,
and non-exported functions on the instantiation path.
(name_lookup::search_adl): Build mapping of namespace to modules
that associated entities are attached to; remove now-unneeded
instantiation path handling.
gcc/testsuite/ChangeLog:
* g++.dg/modules/adl-4_a.C: Test should pass.
* g++.dg/modules/adl-4_b.C: Test should pass.
* g++.dg/modules/adl-6_a.C: New test.
* g++.dg/modules/adl-6_b.C: New test.
* g++.dg/modules/adl-6_c.C: New test.
* g++.dg/modules/adl-7_a.C: New test.
* g++.dg/modules/adl-7_b.C: New test.
* g++.dg/modules/adl-7_c.C: New test.
* g++.dg/modules/adl-8_a.C: New test.
* g++.dg/modules/adl-8_b.C: New test.
* g++.dg/modules/adl-8_c.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
Reviewed-by: Jason Merrill <jason@redhat.com>
|
|
When we push an existing namespace within the module purview for the
first time, we also need to mark any parent inline namespaces as purview
to not confuse the streaming logic.
PR c++/121724
gcc/cp/ChangeLog:
* name-lookup.cc (push_namespace): Mark inline namespace
contexts as purview if needed.
gcc/testsuite/ChangeLog:
* g++.dg/modules/namespace-12_a.C: New test.
* g++.dg/modules/namespace-12_b.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
|
|
Currently, on Darwin, unwind and EH frames are emitted without the use
of .cfi_xxx directives; the emitted frames also contain the
string 'ascii'.  For the purposes of this test, omit them.
PR testsuite/112728
gcc/testsuite/ChangeLog:
* gcc.dg/scantest-lto.c: Omit unwind frames.
Signed-off-by: Iain Sandoe <iain@sandoe.co.uk>
|
|
|
|
This extension defines instructions to perform scalar floating-point
conversion between BFLOAT16 floating-point data and IEEE-754
32-bit single-precision floating-point (SP) data in a scalar
floating-point register.
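A hedged sketch of user code that exercises the new patterns (hypothetical, not one of the new tests):
```
// Plain __bf16 <-> float conversions expand through the truncsfbf2 and
// extendbfsf2 patterns, which now have XAndesBFHCVT alternatives.
__bf16 to_bf16 (float x)  { return (__bf16) x; }
float  to_float (__bf16 x) { return (float) x; }
```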
gcc/ChangeLog:
* config/riscv/andes.def: Add nds_fcvt_s_bf16 and nds_fcvt_bf16_s.
* config/riscv/riscv.md (truncsfbf2): Add TARGET_XANDESBFHCVT support.
(extendbfsf2): Ditto.
* config/riscv/riscv-builtins.cc: New AVAIL andesbfhcvt.
Add new define RISCV_ATYPE_BF and RISCV_ATYPE_SF.
* config/riscv/riscv-ftypes.def: New DEF_RISCV_FTYPE.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/xandes/xandesbfhcvt-1.c: New test.
* gcc.target/riscv/xandes/xandesbfhcvt-2.c: New test.
|
|
This patch adds support for the XAndesperf ISA extension.
The 32-bit AndeStar V5 extension includes branch instructions,
load effective address instructions, and string processing
instructions for performance improvement.
New insn patterns are added in the new file andes.md
as a separate vendor extension.
gcc/ChangeLog:
* config/riscv/constraints.md (Ou07): New constraint.
(ads_Bext): New constraint.
* config/riscv/iterators.md (ANYLE32): New iterator.
(sizen): New iterator.
(sh_limit): New iterator.
(sh_bit): New iterator.
(cs): New iterator.
* config/riscv/predicates.md (ads_branch_bbcs_operand): New predicate.
(ads_branch_bimm_operand): New predicate.
(ads_imm_extract_operand): New predicate.
(ads_extract_size_imm_si): New predicate.
(ads_extract_size_imm_di): New predicate.
(const_int5_operand): New predicate.
* config/riscv/riscv-builtins.cc:
Add new AVAIL andesperf32 and andesperf64.
Add new define RISCV_ATYPE_DI.
* config/riscv/riscv-ftypes.def: New DEF_RISCV_FTYPE.
* config/riscv/riscv.cc
(riscv_extend_cost): Cost for pattern 'bfo'.
(riscv_rtx_costs): Cost for XAndesperf extension.
* config/riscv/riscv.md: Add support for XAndesperf to patterns
zero_extendsidi2_internal, zero_extendhi2, extendsidi2_internal,
extend<SHORT:mode><SUPERQI:mode>2, <any_extract:optab><GPR:mode>3
and branch_on_bit.
* config/riscv/vector-iterators.md
(sz): Add sign_extract and zero_extract.
* config/riscv/andes.def: New file for vendor Andes.
* config/riscv/andes.md: New file for vendor Andes.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/riscv.exp: Add runtest for subdir xandes.
* gcc.target/riscv/xandes/xandesperf-1.c: New test.
* gcc.target/riscv/xandes/xandesperf-10.c: New test.
* gcc.target/riscv/xandes/xandesperf-2.c: New test.
* gcc.target/riscv/xandes/xandesperf-3.c: New test.
* gcc.target/riscv/xandes/xandesperf-4.c: New test.
* gcc.target/riscv/xandes/xandesperf-5.c: New test.
* gcc.target/riscv/xandes/xandesperf-6.c: New test.
* gcc.target/riscv/xandes/xandesperf-7.c: New test.
* gcc.target/riscv/xandes/xandesperf-8.c: New test.
* gcc.target/riscv/xandes/xandesperf-9.c: New test.
|
|
This patch adds basic support for the following XAndes ISA extensions:
XANDESPERF
XANDESBFHCVT
XANDESVBFHCVT
XANDESVSINTLOAD
XANDESVPACKFPH
XANDESVDOT
gcc/ChangeLog:
* config/riscv/riscv-ext.def: Include riscv-ext-andes.def.
* config/riscv/riscv-ext.opt (riscv_xandes_subext): New variable.
(XANDESPERF) : New mask.
(XANDESBFHCVT): Ditto.
(XANDESVBFHCVT): Ditto.
(XANDESVSINTLOAD): Ditto.
(XANDESVPACKFPH): Ditto.
(XANDESVDOT): Ditto.
* config/riscv/t-riscv: Add riscv-ext-andes.def.
* doc/riscv-ext.texi: Regenerated.
* config/riscv/riscv-ext-andes.def: New file.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/xandes/xandes-predef-1.c: New test.
* gcc.target/riscv/xandes/xandes-predef-2.c: New test.
* gcc.target/riscv/xandes/xandes-predef-3.c: New test.
* gcc.target/riscv/xandes/xandes-predef-4.c: New test.
* gcc.target/riscv/xandes/xandes-predef-5.c: New test.
* gcc.target/riscv/xandes/xandes-predef-6.c: New test.
Co-author: Lino Hsing-Yu Peng (linopeng@andestech.com)
Co-author: Kai Kai-Yi Weng (kaiweng@andestech.com).
|
|
This pattern enables the combine pass (or late-combine, depending on the case)
to merge a vec_duplicate into an smax RTL instruction.
Before this patch, we have two instructions, e.g.:
vfmv.v.f v2,fa0
vfmax.vv v1,v1,v2
After, we get only one:
vfmax.vf v1,v1,fa0
In some cases, it also shaves off one vsetvli.
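For illustration, a hedged example of source where this applies (hypothetical, not one of the new run tests), assuming suitable -march and fast-math options:
```
// The broadcast of x (vfmv.v.f) feeding the vector max can now be folded
// into a single vfmax.vf by combine.
void fmax_loop (float *a, float x, int n)
{
  for (int i = 0; i < n; ++i)
    a[i] = __builtin_fmaxf (a[i], x);
}
```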
gcc/ChangeLog:
* config/riscv/autovec-opt.md (*vfmax_vf_<mode>): Rename into...
(*vf<optab>_vf_<mode>): New pattern to combine vec_duplicate +
vf{min,max}.vv into vf{max,min}.vf.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vls/floating-point-max-2.c: Adjust scan
dump.
* gcc.target/riscv/rvv/autovec/vls/floating-point-max-4.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-1-f16.c: Add vfmax. Also add
missing scan-dump for vfmul.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-1-f32.c: Add vfmax.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-1-f64.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-2-f16.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-2-f32.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-2-f64.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-3-f16.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-3-f32.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-3-f64.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-4-f16.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-4-f32.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-4-f64.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_binop.h: Add max functions.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_binop_data.h: Add data for
vfmax.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_vfmax-run-1-f16.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_vfmax-run-1-f32.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_vfmax-run-1-f64.c: New test.
|
|
PR fortran/121263
gcc/fortran/ChangeLog:
* trans-intrinsic.cc (gfc_conv_intrinsic_transfer): For an
unlimited polymorphic SOURCE to TRANSFER use saved descriptor
if possible.
gcc/testsuite/ChangeLog:
* gfortran.dg/transfer_class_5.f90: New test.
|
|
This is Austin's work to remove the redundant sign extension seen in pr121213.
--
The .w form of amoswap will sign extend its result from 32 to 64 bits, thus any
explicit sign extension insn doing the same is redundant.
This uses Jivan's approach of allocating a DI temporary for an extended result
and using a promoted subreg extraction to get that result into the final
destination.
Tested with no regressions on riscv32-elf and riscv64-elf and bootstrapped on
the BPI and pioneer systems.
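A hedged illustration of the kind of code affected (hypothetical, not the PR testcase):
```
// On rv64 the 32-bit amoswap.w already sign-extends its result, so the
// widening of the returned int to long below needs no extra sign-extension
// instruction.
long swap_and_widen (int *p, int v)
{
  return __atomic_exchange_n (p, v, __ATOMIC_SEQ_CST);
}
```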
PR target/121213
gcc/
* config/riscv/sync.md (amo_atomic_exchange_extended<mode>):
Separate insn with sign extension for 64 bit targets.
gcc/testsuite
* gcc.target/riscv/amo/pr121213.c: Remove xfail.
|
|
WPA currently does not print profile_info, which might have been modified
by the profile merging logic.  This patch adds dumping logic to the
ipa-profile pass.
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
* ipa-profile.cc (ipa_profile): Dump profile_info.
|
|
With -O2, -fprofile-use automatically enables several loop optimizations.
The rationale is that those optimizations are otherwise enabled only at -O3,
mainly since they may hurt performance or not pay back in code size when used
blindly on all loops.  Profile feedback gives us data on the number of
iterations, which is used by the heuristics controlling those optimizations.
Currently auto-FDO is not that good at determining the number of iterations,
so I think we do not want to enable them until we can prove that they are
useful.  This primarily affects -O2 codegen.
Theoretically auto-FDO with LBR can be pretty good at estimating the number of
iterations, but to make it useful we will need to implement multiplicity for
discriminators at least.
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
* opts.cc (enable_fdo_optimizations): Do not auto-enable loop
optimizations with AutoFDO.
|
|
Committing as obvious.
Signed-off-by: Kyrylo Tkachov <ktkachov@nvidia.com>
gcc/testsuite/
PR target/121749
* gcc.target/aarch64/simd/pr121749.c: Use dg-assemble directive.
|
|
The number of LTO partitions should exceed the number of cores (or
hyper-threads) of commonly used CPUs.  I think it is time to increase it again
and, as discussed in the LTO and toplevel asm thread, doing so scales quite
well.  Tmp file usage grows from 2.7 to 2.9MB, which seems acceptable.  Overall
build time on a machine with 256 hyperthreads is comparable.
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
* params.opt (-param=lto-partitions=): Increase default value from 128 to 512.
|
|
With g:d20b2ad845876eec0ee80a3933ad49f9f6c4ee30 the narrowing shift instructions
are now represented with standard RTL and more merging optimisations occur.
This exposed a wrong predicate for the shift amount operand.
The shift amount is the number of bits of the narrow destination, not the input
sources.
Correct this by using the vn_mode attribute when specifying the predicate, which
exists for this purpose.
I've spotted a few more narrowing shift patterns that need the restriction, so
they are updated as well.
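For context, a hedged intrinsics-level illustration (hypothetical, not the new testcase):
```
#include <arm_neon.h>

// For vshrn_n_s16 the result elements are 8 bits wide, so the immediate must
// lie in [1, 8]; the predicate therefore has to be keyed on the narrow
// destination mode rather than the 16-bit source element mode.
int8x8_t narrow (int16x8_t v)
{
  return vshrn_n_s16 (v, 8);
}
```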
Bootstrapped and tested on aarch64-none-linux-gnu.
Signed-off-by: Kyrylo Tkachov <ktkachov@nvidia.com>
gcc/
PR target/121749
* config/aarch64/aarch64-simd.md (aarch64_<shrn_op>shrn_n<mode>):
Use aarch64_simd_shift_imm_offset_<vn_mode> instead of
aarch64_simd_shift_imm_offset_<ve_mode> predicate.
(aarch64_<shrn_op>shrn_n<mode> VQN define_expand): Likewise.
(*aarch64_<shrn_op>rshrn_n<mode>_insn): Likewise.
(aarch64_<shrn_op>rshrn_n<mode>): Likewise.
(aarch64_<shrn_op>rshrn_n<mode> VQN define_expand): Likewise.
(aarch64_sqshrun_n<mode>_insn): Likewise.
(aarch64_sqshrun_n<mode>): Likewise.
(aarch64_sqshrun_n<mode> VQN define_expand): Likewise.
(aarch64_sqrshrun_n<mode>_insn): Likewise.
(aarch64_sqrshrun_n<mode>): Likewise.
(aarch64_sqrshrun_n<mode> VQN define_expand): Likewise.
* config/aarch64/iterators.md (vn_mode): Handle DI, SI, HI modes.
gcc/testsuite/
PR target/121749
* gcc.target/aarch64/simd/pr121749.c: New test.
|
|
Here although the local templated variables x and y have the same
reduced constant value, only x's initializer {a.get()} is well-formed
as written since A::m has private access. We correctly reject y's
initializer {&a.m} (at instantiation time), but we also reject x's
initializer because we happen to constant fold it ahead of time, which
means at instantiation time it's already represented as a COMPONENT_REF
to a FIELD_DECL, and so when substituting this COMPONENT_REF we naively
double check that the given FIELD_DECL is accessible, which fails.
This patch sidesteps around this particular issue by not checking access
when substituting a COMPONENT_REF to a FIELD_DECL. If the target of a
COMPONENT_REF is already a FIELD_DECL (i.e. before substitution), then I
think we can assume access has been already checked appropriately.
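A hedged reconstruction of the situation from the description above (hypothetical names, namespace scope used for brevity; not the new testcases):
```
struct A {
  constexpr const int *get () const { return &m; }
private:
  int m = 0;
};

constexpr A a {};

// Well-formed as written: access goes through the public accessor.  Early
// constant folding turns the initializer into a COMPONENT_REF of the private
// FIELD_DECL A::m, which previously tripped the access re-check when x<int>
// was instantiated.  (An initializer written directly as &a.m is still
// correctly rejected.)
template<class T> constexpr const int *x = a.get ();

const int *p = x<int>;
```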
PR c++/97740
gcc/cp/ChangeLog:
* pt.cc (tsubst_expr) <case COMPONENT_REF>: Don't check access
when the given member is already a FIELD_DECL.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/constexpr-97740a.C: New test.
* g++.dg/cpp0x/constexpr-97740b.C: New test.
Reviewed-by: Jason Merrill <jason@redhat.com>
|
|
The sinking code currently does not heuristically avoid placing
code into an irreducible region in the same way it avoids placing
into a deeper loop nest. Critically for the PR we may not insert
a VDEF into an irreducible region that does not contain a virtual
definition. The following adds the missing heuristic and also
a stop-gap for the VDEF issue - since we cannot determine
validity inside an irreducible region we have to reject any
VDEF movement with destination inside such region, even when
it originates there. In particular irreducible sub-cycles are
not tracked separately and can cause issues.
I chose to not complicate the already partly incomplete assert
but prune it down to essentials.
PR tree-optimization/121756
* tree-ssa-sink.cc (select_best_block): Avoid irreducible
regions in otherwise same loop depth.
(statement_sink_location): When sinking a VDEF, never place
that into an irreducible region.
* gcc.dg/torture/pr121756.c: New testcase.
|
|
This pattern doesn't do any target support check so no need to set
a vector type.
* tree-vect-patterns.cc (vect_recog_cond_expr_convert_pattern):
Do not set any vector types.
|
|
The a % b -> a - a / b * b pattern breaks reduction constraints, so disable it
for reduction stmts.
PR tree-optimization/121767
* tree-vect-patterns.cc (vect_recog_mod_var_pattern): Disable
for reductions.
* gcc.dg/vect/pr121767.c: New testcase.
|
|
The following fixes a corner case of pattern stmt STMT_VINFO_REDUC_IDX
updating which happens auto-magically. When a 2nd pattern sequence
uses defs from inside a prior pattern sequence then the first guess
for the lookfor can be off. This happens when for example widening
patterns use vect_get_internal_def, which looks into earlier patterns.
PR tree-optimization/121758
* tree-vect-patterns.cc (vect_mark_pattern_stmts): Try
harder to find a reduction continuation.
* gcc.dg/vect/pr121758.c: New testcase.
|
|
split_address_to_core_and_offset [PR121355]
Inside split_address_to_core_and_offset, this calls get_inner_reference.
Take:
```
_6 = t_3(D) + 12;
_8 = &MEM[(struct s1 *)t_3(D) + 4B].t;
_1 = _6 - _8;
```
On the assignment of _8, get_inner_reference will return `MEM[(struct s1 *)t_3(D) + 4B]`
and an offset, but that does not match up with `t_3(D)`, which is how split_address_to_core_and_offset
handles pointer plus.
So this patch unwraps the MEM_REF after the call to get_inner_reference
and has it act like a pointer plus.
Changes since v1:
* v2: Remove check on operand 1 for poly_int_tree_p, it always is.
Add before the check to see if it fits in shwi instead of after.
Bootstrapped and tested on x86_64-linux-gnu.
PR tree-optimization/121355
gcc/ChangeLog:
* fold-const.cc (split_address_to_core_and_offset): Handle a MEM_REF
after the call to get_inner_reference.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/ptrdiff-1.c: New test.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
|
|
|
|
This is a small cleanup, moving the optimization of memcmp to
memcmp_eq from the strlen pass to fab.  Since the other part of the
memcmp optimization in strlen was copied to forwprop, this was the
only memcmp handling left in strlen.
Note this move will cause memcmp_eq to be used for -Os too.
It also removes the optimization from strlen since both parts are now
handled elsewhere.
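For illustration, a hedged example of the kind of call this affects (hypothetical, not a testcase):
```
// Only the equality of the memcmp result is used, so the call can be lowered
// to the cheaper equality-only memcmp_eq form.
int same (const void *a, const void *b)
{
  return __builtin_memcmp (a, b, 16) == 0;
}
```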
Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-ccp.cc (optimize_memcmp_eq): New function.
(pass_fold_builtins::execute): Call optimize_memcmp_eq
for memcmp.
* tree-ssa-strlen.cc (strlen_pass::handle_builtin_memcmp): Remove.
(strlen_pass::check_and_optimize_call): Don't call handle_builtin_memcmp.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
|
|
Like the previous commit but for the copy in strlen, so we can backport
this commit.  The loads should have the correct alignment on them,
so we need to create new, appropriately aligned types when the alignment
of the pointer is less than the alignment of the current type.
Pushed as pre-approved by https://gcc.gnu.org/pipermail/gcc-patches/2025-September/694016.html
after a bootstrap/test on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-strlen.cc (strlen_pass::handle_builtin_memcmp): Create
unaligned types if the alignment of the pointers is less
than the alignment of the new type.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
|
|
I noticed, when looking into the g++.dg/tree-ssa/vector-compare-1.C
failure on arm, that the wrong alignment was being used for the load.
There needs to be an unaligned type here to get the correct alignment.
NOTE: this means the code in strlen is also wrong, but that is on its way
out, so I am not sure whether we should update it for backporting to the
release branches; there could be wrong code happening too.
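A hedged illustration of the kind of call where the alignment matters (hypothetical, not the failing test):
```
// p and q are only known to be 1-byte aligned, so when the memcmp is folded
// to a single 4-byte load-and-compare the loads must use a type whose
// alignment has been lowered to 1.
int eq4 (const char *p, const char *q)
{
  return __builtin_memcmp (p, q, 4) == 0;
}
```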
Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-forwprop.cc (simplify_builtin_memcmp): Create
unaligned types if the alignment of the pointers is less
than the alignment of the new type.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
|
|
2025-09-02 Paul Thomas <pault@gcc.gnu.org>
gcc/fortran
PR fortran/89707
* decl.cc (gfc_get_pdt_instance): Copy the typebound procedure
field from the PDT template. If the template interface has
kind=0, provide the new instance with an interface with a type
spec that points to that of the parameterized component.
(match_ppc_decl): When 'saved_kind_expr' is set, this is a PDT and the
expression should be copied to the component kind_expr.
* gfortran.h: Define gfc_get_tbp.
gcc/testsuite/
PR fortran/89707
* gfortran.dg/pdt_43.f03: New test.
|
|
2025-09-02 Paul Thomas <pault@gcc.gnu.org>
gcc/fortran
PR fortran/87669
* expr.cc (gfc_spec_list_type): If no LEN components are seen,
unconditionally return 'SPEC_ASSUMED'. This suppresses an
invalid error in match.cc(gfc_match_type_is).
gcc/testsuite/
PR fortran/87669
* gfortran.dg/pdt_42.f03: New test.
libgfortran/
PR fortran/87669
* intrinsics/extends_type_of.c (is_extension_of): Use the vptr
rather than the hash value to identify the types.
|
|
On arm, overriding -march can lead to warnings if the testsuite
options try to pass -mcpu. Avoid these by ensuring the -mcpu is unset
before adding the architecture.
Also, improve the compatibility of asm-hard-reg-error-3.c for
hard-float environment by allowing FP instructions in the
architecture.
gcc/testsuite:
* gcc.dg/asm-hard-reg-4.c: On Arm, unset the CPU before
setting the arch.
* gcc.dg/asm-hard-reg-error-3.c: Similarly. Also add
floating-point instructions to aid hard-float variants.
Match on arm* not just arm.
|
|
The recent change to vect_synth_mult_by_constant missed handling
the synth_shift_p case for alg_shift, so we still changed c * 4
to c + c + c + c.  The following also amends alg_add_t2_m, alg_sub_t2_m,
alg_add_factor and alg_sub_factor appropriately.
PR tree-optimization/121753
* tree-vect-patterns.cc (vect_synth_mult_by_constant): Properly
bail when synth_shift_p and an alg_shift use. Handle other
problematic cases.
|
|
This patch changes is_vlmax_len_p to handle VLS modes properly.
Before we would check if len == GET_MODE_NUNITS (mode).  This works for
VLA modes but not necessarily for VLS modes.  We regularly have e.g.
small VLS modes where LEN equals their number of units but which do not
span a full vector. Therefore now check if len * GET_MODE_UNIT_SIZE
(mode) equals BYTES_PER_RISCV_VECTOR * TARGET_MAX_LMUL.
Changing this uncovered an oversight in avlprop where we used
GET_MODE_NUNITS as AVL when GET_MODE_NUNITS / NF would be correct.
The testsuite is unchanged. I didn't bother to add a dedicated test
because we would have seen the fallout anyway once the gather patch
lands.
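A hedged numeric illustration of why the nunits comparison is insufficient for VLS modes (hypothetical values, not GCC code):
```
#include <cassert>

int main ()
{
  // With VLEN = 128 bits and LMUL = 1 a full vector is 16 bytes.  For a small
  // VLS mode such as V4QI, len == nunits (4), but 4 * 1 byte != 16 bytes, so
  // the length is not VLMAX even though the old check would say it is.
  const int vector_bytes = 16;            // BYTES_PER_RISCV_VECTOR * LMUL
  const int nunits = 4, unit_size = 1;    // V4QI
  int len = 4;
  assert (len == nunits);                    // old check: looks like VLMAX
  assert (len * unit_size != vector_bytes);  // new check: not VLMAX
  return 0;
}
```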
gcc/ChangeLog:
* config/riscv/riscv-v.cc (is_vlmax_len_p): Properly handle VLS
modes.
(imm_avl_p): Fix VLS length check.
(expand_strided_load): Use is_vlmax_len_p.
(expand_strided_store): Ditto.
* config/riscv/riscv-avlprop.cc (pass_avlprop::execute):
Use GET_MODE_NUNITS / NF as avl.
|
|
In a two-source gather we unconditionally overwrite target with the
first gather's result already. If op1 == target this clobbers the
source operand for the second gather. This patch uses a temporary in
that case.
PR target/121742
gcc/ChangeLog:
* config/riscv/riscv-v.cc (expand_vec_perm): Use temporary if
op1 and target overlap.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr121742.c: New test.
|
|
The NoOffload flag was introduced recently (commit "Don't pass vector params
through to offload targets").
gcc/ChangeLog:
* doc/options.texi: Document NoOffload.
|
|
In r16-3414 libstdc++ changed the ABI of the (still experimental) C++20
comparison category types and uses unordered value -128 instead of 2.
generation on all targets tested, see
https://gcc.gnu.org/pipermail/gcc-patches/2025-August/693534.html
for details.
In r16-3474 I've adjusted the middle-end and backends to use that value.
This apparently broke the gcc.target/s390/spaceship-fp-2.c test,
with -ffast-math the 2 value is unreachable and so the .SPACESHIP last
argument in that case is the default, which changed from 2 to -128.
But spaceship-fp-1.c test also doesn't test what libstdc++ uses anymore,
so the following patch uses -128 in all the spots.
2025-09-02 Jakub Jelinek <jakub@redhat.com>
* gcc.target/s390/spaceship-fp-1.c: Expect .SPACESHIP call with
-128 as last argument instead of 2.
(TEST): Use -128 instead of 2.
* gcc.target/s390/spaceship-fp-2.c: Expect .SPACESHIP call with
-128 as last argument instead of 2.
(TEST): Use -128 instead of 2.
|
|
We have contracts-related declarations and macros split between contracts.h
and cp-tree.h, and then contracts.h is included in the latter, which means
that it is included in all C++ front-end files.
This patch:
- moves all the contracts-related material to contracts.h.
- makes some functions that are only used in contracts.cc static.
- tries to group the external API for contracts into related topics.
- includes contracts.h in the front end sources that need it.
gcc/cp/ChangeLog:
* constexpr.cc: Include contracts.h
* coroutines.cc: Likewise.
* cp-gimplify.cc: Likewise.
* decl.cc: Likewise.
* decl2.cc: Likewise.
* mangle.cc: Likewise.
* module.cc: Likewise.
* pt.cc: Likewise.
* search.cc: Likewise.
* semantics.cc: Likewise.
* contracts.cc (validate_contract_role, setup_default_contract_role,
add_contract_role, get_concrete_axiom_semantic,
get_default_contract_role): Make static.
* cp-tree.h (make_postcondition_variable, grok_contract,
finish_contract_condition, find_contract, set_decl_contracts,
get_contract_semantic, set_contract_semantic): Move to contracts.h.
* contracts.h (get_contract_role, add_contract_role,
validate_contract_role, setup_default_contract_role,
lookup_concrete_semantic, get_default_contract_role): Remove.
Signed-off-by: Iain Sandoe <iain@sandoe.co.uk>
|
|
This patch updates RISC-V Zbb extension 'sext' instruction generation.
It adds instruction generation checks for 'sext.h' and
'sext.b'.
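Hedged examples of the kind of functions checked (not the exact new test functions); with Zbb each should be a single instruction plus return:
```
int sextb (int x) { return (signed char) x; }   // sext.b
int sexth (int x) { return (short) x; }         // sext.h
```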
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zbb-sext.c: New test.
|
|
This patch updates RISC-V Zba extension 'shNadd.uw' instruction generation.
It adds instruction generation checks for 'sh1add.uw' and
'sh3add.uw'.
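Hedged examples of the kind of functions checked (not the exact new test functions); on rv64 with Zba each should be a single instruction plus return:
```
unsigned long sh1adduw (unsigned int a, unsigned long b)
{
  return ((unsigned long) a << 1) + b;   // sh1add.uw
}

unsigned long sh3adduw (unsigned int a, unsigned long b)
{
  return ((unsigned long) a << 3) + b;   // sh3add.uw
}
```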
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zba-shadd.c: New test functions.
|
|
The new gcc.target/i386/memset-strategy-1[03].c tests FAIL on
Solaris/x86:
FAIL: gcc.target/i386/memset-strategy-10.c check-function-bodies foo
FAIL: gcc.target/i386/memset-strategy-13.c check-function-bodies foo
The issue is the same as several times previously: they need to be
compiled with -fasynchronous-unwind-tables -fdwarf2-cfi-asm, which this
patch does.
Tested on i386-pc-solaris2.11 (as and gas) and x86_64-pc-linux-gnu.
2025-09-01 Rainer Orth <ro@CeBiTec.Uni-Bielefeld.DE>
gcc/testsuite:
* gcc.target/i386/memset-strategy-10.c (dg-options): Add
-fasynchronous-unwind-tables -fdwarf2-cfi-asm.
* gcc.target/i386/memset-strategy-13.c: Likewise.
|
|
The following makes vect_analyze_stmt call vectorizable_* with all
STMT_VINFO_VECTYPE NULL_TREE but restores the value for eventual
iteration with single-lane SLP. It clears it for every stmt during
vect_transform_stmt.
* tree-vect-stmts.cc (vect_transform_stmt): Clear
STMT_VINFO_VECTYPE for all stmts.
(vect_analyze_stmt): Likewise. But restore at the end again.
|
|
The reduction guard isn't correct, STMT_VINFO_REDUC_DEF also exists
for nested cycles not part of reductions but there's no reduction
info for them.
PR tree-optimization/121754
* tree-vectorizer.h (vect_reduc_type): Simplify to not ICE
on nested cycles.
* gcc.dg/vect/pr121754.c: New testcase.
* gcc.target/aarch64/vect-pr121754.c: Likewise.
|
|
bump is always specified, so remove the STMT_VINFO_VECTYPE touching
path.
* tree-vect-data-refs.cc (bump_vector_ptr): Remove the
STMT_VINFO_VECTYPE use, bump is always specified.
|
|
The strided-store path needs to have the SLP tree's vector type, so
the following patch passes down the vector type to be used to
vect_check_gather_scatter and adjusts all other callers.  This
removes one of the last pieces requiring STMT_VINFO_VECTYPE
during SLP stmt analysis.
* tree-vectorizer.h (vect_check_gather_scatter): Add
vectype parameter.
* tree-vect-data-refs.cc (vect_check_gather_scatter): Get
vectype as parameter.
(vect_analyze_data_refs): Adjust.
* tree-vect-patterns.cc (vect_recog_gather_scatter_pattern): Likewise.
* tree-vect-slp.cc (vect_get_and_check_slp_defs): Get vectype
as parameter, pass down.
(vect_build_slp_tree_2): Adjust.
* tree-vect-stmts.cc (vect_mark_stmts_to_be_vectorized): Likewise.
(vect_use_strided_gather_scatters_p): Likewise.
|
|
gcc/ChangeLog:
* doc/extend.texi (Common Variable Attributes): Put counted_by
in alphabetical order.
|
|
As mentioned in the PR, LOCATION_LINE is represented in an int,
and while we have -pedantic diagnostics (and -pedantic-errors errors)
for too large #line, we can still overflow into negative line
numbers up to -2 and -1.  We could overflow to that even with valid
source: say it has #line 2147483640 and then just has
2G+ lines after it.
Now, the ICE is because assign_discriminator{,s} uses a hash_map
with int_hash <int64_t, -1, -2>, so values -2 and -1 are reserved
for deleted and empty entries. We just need to make sure those aren't
valid. One possible fix would be just that
- discrim_entry &e = map.get_or_insert (LOCATION_LINE (loc), &existed);
+ discrim_entry &e
+ = map.get_or_insert ((unsigned) LOCATION_LINE (loc), &existed);
by adding unsigned cast when the key is signed 64-bit, it will never
be -1 or -2.
But I think that is wasteful, discrim_entry is a struct with 2 unsigned
non-static data members, so for lines which can only be 0 to 0xffffffff
(sure, with wrap-around), I think just using a hash_map with 96-bit elements
is better than 128-bit ones.
So, the following patch just doesn't assign any discriminators for lines
-1U and -2U, I think that is fine, normal programs never do that.
Another possibility would be to handle lines -1U and -2U as if they were,
say, -3U.
2025-09-02 Jakub Jelinek <jakub@redhat.com>
PR middle-end/121663
* tree-cfg.cc (assign_discriminator): Change map argument type
from hash_map with int_hash <int64_t, -1, -2> to one with
int_hash <unsigned, -1U, -2U>. Cast LOCATION_LINE to unsigned.
Return early for (unsigned) LOCATION_LINE above -3U.
(assign_discriminators): Change map type from hash_map with
int_hash <int64_t, -1, -2> to one with int_hash <unsigned, -1U, -2U>.
* gcc.dg/pr121663.c: New test.
|