1) Fix the predicate of operands[3] in cond_<insn><mode>, since only
const_vec_dup_operand is expected for masked operations, and pass the real
shift count to ix86_vgf2p8affine_shift_matrix.
2) Pass operands[2] instead of operands[1] to
gen_vgf2p8affineqb_<mode>_mask, which expects the operand to be shifted,
whereas operands[1] is the mask operand in cond_<insn><mode> (see the
sketch below).
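A hedged reproducer sketch (an illustration only; the committed
gcc.target/i386/pr121699.c is not reproduced here): a masked byte shift of
the kind the cond_<insn><mode> expander can lower through a vgf2p8affine
matrix on GFNI-capable targets.
/* Hypothetical reproducer sketch; names and the shift count are
   assumptions, not the actual testcase.  */
void
foo (unsigned char *r, const unsigned char *a, const unsigned char *m, int n)
{
  for (int i = 0; i < n; i++)
    r[i] = m[i] ? (unsigned char) (a[i] << 3) : r[i];
}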
gcc/ChangeLog:
PR target/121699
* config/i386/predicates.md (const_vec_dup_operand): New
predicate.
* config/i386/sse.md (cond_<insn><mode>): Fix predicate of
operands[3], and fix wrong operands passed to
ix86_vgf2p8affine_shift_matrix and
gen_vgf2p8affineqb_<mode>_mask.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr121699.c: New test.
|
|
|
|
CLAMPS instruction
The CLAMPS instruction in the Xtensa ISA, available when the TARGET_CLAMPS
configuration is enabled (and which also requires TARGET_MINMAX), clamps
the number in the specified register to the range -(1<<N) to (1<<N)-1
inclusive, where N is an immediate value from 7 to 22.
Therefore, when the above configuration is met, comparing the clamped
result with the original value for equality lets us branch on whether the
value is within the range mentioned above using fewer instructions,
especially when the upper and lower bounds of the range are too large to
fit into a single immediate assignment.
/* example (TARGET_MINMAX and TARGET_CLAMPS) */
extern void foo(void);
void test0(int a) {
if (a >= -(1 << 9) && a < (1 << 9))
foo();
}
void test1(int a) {
if (a < -(1 << 20) || a >= (1 << 20))
foo();
}
;; before
test0:
entry sp, 32
addmi a2, a2, 0x200
movi a8, 0x3ff
bltu a8, a2, .L1
call8 foo
.L1:
retw.n
test1:
entry sp, 32
movi.n a9, 1
movi.n a8, -1
slli a9, a9, 20
srli a8, a8, 11
add.n a2, a2, a9
bgeu a8, a2, .L4
call8 foo
.L4:
retw.n
;; after
test0:
entry sp, 32
clamps a8, a2, 9
bne a2, a8, .L1
call8 foo
.L1:
retw.n
test1:
entry sp, 32
clamps a8, a2, 20
beq a2, a8, .L4
call8 foo
.L4:
retw.n
(Note: currently, in the RTL instruction combination pass, the possible
const_int values are fundamentally constrained by
TARGET_LEGITIMATE_CONSTANT_P() if no bare large constant assignments are
possible (i.e., neither -mconst16 nor -mauto-litpools), which limits N to
the range 7 to 10 instead of 7 to 22.  A series of forthcoming patches
will introduce an entirely new "xt_largeconst" pass that will solve
several issues, including this one.)
gcc/ChangeLog:
* config/xtensa/predicates.md (alt_ubranch_operator):
New predicate.
* config/xtensa/xtensa.md (*eqne_in_range):
New insn_and_split pattern.
|
|
2025-08-31 Paul Thomas <pault@gcc.gnu.org>
gcc/fortran
PR fortran/99709
* trans-array.cc (structure_alloc_comps): For the case
COPY_ALLOC_COMP, do a deep copy of non-allocatable PDT arrays.
Suppress the use of 'duplicate_allocatable' for PDT arrays.
* trans-expr.cc (conv_dummy_value): When passing to a PDT dummy
with the VALUE attribute, do a deep copy to ensure that
parameterized components are reallocated.
gcc/testsuite/
PR fortran/99709
* gfortran.dg/pdt_41.f03: New test.
|
|
So this is the next chunk of Shreya's work to adjust our add expanders. In this
patch we're adding support for adding a 2*s12 immediate in SI for rv64.
To recap, the basic idea is to reduce our reliance on the define_insn_and_split
that was added a year or so ago by synthesizing the more efficient sequence at
expansion time. By handling this early rather than late, the synthesized
sequence participates in the various optimizer passes in the natural way. In
contrast, using the define_insn_and_split bypasses the cost modeling in combine
and hides the synthesis until after reload has completed (which in turn leads to
the problems seen in pr120811).
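As a hedged illustration (a hypothetical example, not the new
add-synthesis-2.c test), an SImode addition whose constant does not fit a
single signed 12-bit immediate but does fit in two can now be synthesized
at expansion time:
/* Hypothetical example; the constant and the exact split are assumptions.  */
int
f (int x)
{
  return x + 4094;   /* 4094 > 2047, but 4094 == 2047 + 2047.  */
}
/* Expected shape of the synthesized rv64 sequence (assumed):
     addi   a0,a0,2047
     addiw  a0,a0,2047
*/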
This doesn't solve pr120811, but it is the last prerequisite patch before
directly tackling pr120811.
This has been bootstrapped & regression tested on the pioneer & bpi and been
through the usual testing on riscv32-elf and riscv64-elf. Waiting on
pre-commit CI before moving forward.
gcc/
* config/riscv/riscv-protos.h (synthesize_add_extended): Prototype.
* config/riscv/riscv.cc (synthesize_add_extended): New function.
* config/riscv/riscv.md (addsi3): For RV64, try synthesize_add_extended.
gcc/testsuite/
* gcc.target/riscv/add-synthesis-2.c: New test.
|
|
This has been unavailable for well over a year.
gcc:
* doc/install.texi (Binaries): Drop MinGW.
|
|
libstdc++-v3:
* doc/xml/manual/using_exceptions.xml: Update link to
Boost's "Exception-Safety".
* doc/html/manual/using_exceptions.html: Rebuild.
|
|
ptrace on Darwin requires <sys/types.h>.
The inline x86 asm doesn't work with the Solaris assembler.
libstdc++-v3/ChangeLog:
* src/c++26/debugging.cc [_GLIBCXX_HAVE_SYS_PTRACE_H]: Include
<sys/types.h>.
(breakpoint) [__i386__ || __x86_64__]: Use "int 0x03" instead of
"int3".
|
|
Form 4 of the unsigned scalar SAT_MUL is already covered by the middle-end
expansion; add test cases here to cover form 4 (a sketch of the general
shape follows below).
The below test suites are passed for this patch series.
* The rv64gcv fully regression test.
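For orientation, a hedged sketch of an unsigned saturating-multiply shape
of the kind these tests exercise (whether this matches exactly what
sat_arith.h labels "form 4" is an assumption):
/* Hedged sketch, not taken from the test suite: widen, multiply,
   then clamp to the narrower type's maximum.  */
#include <stdint.h>

uint16_t
sat_u_mul_u16_from_u32 (uint16_t a, uint16_t b)
{
  uint32_t prod = (uint32_t) a * (uint32_t) b;
  return prod > UINT16_MAX ? UINT16_MAX : (uint16_t) prod;
}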
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat/sat_arith.h: Add test helper macros.
* gcc.target/riscv/sat/sat_u_mul-5-u16-from-u128.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u16-from-u32.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u16-from-u64.rv32.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u16-from-u64.rv64.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u32-from-u128.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u32-from-u64.rv32.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u32-from-u64.rv64.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u64-from-u128.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u8-from-u128.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u8-from-u16.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u8-from-u32.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u8-from-u64.rv32.c: New test.
* gcc.target/riscv/sat/sat_u_mul-5-u8-from-u64.rv64.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u16-from-u128.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u16-from-u32.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u16-from-u64.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u32-from-u128.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u32-from-u64.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u64-from-u128.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u8-from-u128.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u8-from-u16.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u8-from-u32.c: New test.
* gcc.target/riscv/sat/sat_u_mul-run-5-u8-from-u64.c: New test.
Signed-off-by: Pan Li <pan2.li@intel.com>
|
|
|
|
recent libstdc++ changes [PR121698]
libstdc++ changed its ABI in <compare> for C++20 recently (under the
"C++20 is still experimental" rule).  In addition to the -1, 0, 1 values
for less, equal, greater, it now uses -128 for unordered instead of the
former 2 and changes some of the operators: instead of checks like
(_M_value & ~1) == _M_value, in some cases it now uses _M_reverse(),
which is negation in the unsigned char type plus conversion back to the
original type.  _M_reverse() thus turns the -1, 0, 1, -128 values into
1, 0, -1, -128.  Note libc++ uses the value -127 for unordered instead of
2/-128.
Now, the middle-end has some optimizations which rely on the particular
implementation and don't optimize otherwise.  One is optimize_spaceship,
which on some targets (currently x86, aarch64 and s390) attempts to use
better comparison instructions (ideally just one floating point comparison
to get all 4 possible outcomes plus some flag tests or magic, instead of
2 or 3 floating point comparisons).  This one can actually handle
arbitrary int non-[-1,1] values for unordered but still has a default
of 2.  The patch changes that default to -128, so that even if something
is expanded as branches and is only later, during RTL optimizations,
converted into partial_ordering, we get better code.
The other optimization (the phiopt one) is about optimizing (x <=> y) < 0
etc. into just x < y.  This one actually relies on the exact unordered
value (2) and has code to deal with the (_M_value & ~1) == _M_value
kind of tests and whatever match.pd lowers them to.  So, this patch
partially rewrites it to look for -128 instead of 2, drops those
(_M_value & ~1) == _M_value pattern recognitions and instead introduces
pattern recognition of _M_reverse(), i.e. a cast to unsigned char,
negation in that type and a cast back to the original signed type.
With all these changes we get back the desired optimizations for all
the cases we could optimize previously.  (Note, for the HONOR_NANS case
we don't try to optimize say (x <=> y) == 0, because the original will
raise an exception if either x or y is a NaN while turning it into
x == y will not; but (x <=> y) <= 0 is fine to turn into x <= y, because
that does raise those exceptions.)
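A small C sketch of the _M_reverse()-style pattern that phiopt now
recognizes (an illustration, not code from the patch):
/* Negation performed in unsigned char, then converted back to the signed
   type: maps -1, 0, 1, -128 to 1, 0, -1, -128 respectively.  */
signed char
reverse (signed char v)
{
  return (signed char) -(unsigned char) v;
}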
2025-08-30 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/121698
* tree-ssa-phiopt.cc (spaceship_replacement): Adjust
to handle spaceship unordered value -128 rather than 2 and
stmts from the new std::partial_ordering::_M_reverse() instead
of (_M_value & ~1) == _M_value etc.
* doc/md.texi (spaceship@var{m}4): Use -128 instead of 2.
* tree-ssa-math-opts.cc (optimize_spaceship): Adjust comments
that libstdc++ unordered value is -128 rather than 2 and use
that as the default unordered value.
* config/i386/i386-expand.cc (ix86_expand_fp_spaceship): Use
GEN_INT (-128) instead of const2_rtx and adjust comment accordingly.
* config/aarch64/aarch64.cc (aarch64_expand_fp_spaceship): Likewise.
* config/s390/s390.cc (s390_expand_fp_spaceship): Likewise.
* gcc.dg/pr94589-2.c: Adjust for expected unordered value -128
rather than 2 and negations in unsigned char instead of and with
~1 and comparison against original value.
* gcc.dg/pr94589-4.c: Likewise.
* gcc.dg/pr94589-5.c: Likewise.
* gcc.dg/pr94589-6.c: Likewise.
|
|
gcc:
* doc/extend.texi (Vector Extensions): Improve markup for list
of operators.
|
|
gcc:
* doc/standards.texi (Standards): Update "Object-Oriented
Programming and the Objective-C Language" reference.
|
|
Since the first operand of PLUS in the source of TLS64_COMBINE pattern:
(set (reg/f:DI 128)
(plus:DI (unspec:DI [
(symbol_ref:DI ("_TLS_MODULE_BASE_") [flags 0x10])
(reg:DI 126)
(reg/f:DI 7 sp)
] UNSPEC_TLSDESC)
(const:DI (unspec:DI [
(symbol_ref:DI ("bfd_error") [flags 0x1a] <var_decl 0x7fffe99d6e40 bfd_error>)
] UNSPEC_DTPOFF))))
is unused, use the second operand of PLUS:
(const:DI (unspec:DI [
(symbol_ref:DI ("bfd_error") [flags 0x1a] <var_decl 0x7fffe99d6e40 bfd_error>)
] UNSPEC_DTPOFF))
to check if 2 TLS64_COMBINE patterns have the same source.
gcc/
PR target/121725
* config/i386/i386-features.cc
(pass_x86_cse::candidate_gnu2_tls_p): Use the UNSPEC_DTPOFF
operand to check source operand in TLS64_COMBINE pattern.
gcc/testsuite/
PR target/121725
* gcc.target/i386/pr121725-1a.c: New test.
* gcc.target/i386/pr121725-1b.c: Likewise.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
|
|
To better optimize code dealing with `memcmp == 0` where we have
a small constant size, we can inline the memcmp in those cases.
There is code to do this in strlen but that is run too late in
the case where we can figure out the value of one of the arguments
to memcmp. So this copies the optimization to forwprop.
An example of where this helps is:
```
bool cmpvect(const std::vector<int> &a) { return a == std::vector<int>{10}; }
```
Where the above should be optimized to just `return a.size() == 1 && a[0] == 10;`.
Note the pr44130.c testcase needed to change, as the memcmp would now be
optimized away otherwise.
Note the loop in pr44130.c is also vectorized, which it was not before.
Note the optimization remains in the strlen pass, as the other part
(memcmp -> memcmp_eq) should move to either isel or fab and I didn't want
to remove it just yet.
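A hedged sketch of the kind of code the new simplify_builtin_memcmp
handles (an illustration, not one of the added testcases):
/* A small, constant-size memcmp whose result is only compared against
   zero can be lowered to direct loads and an equality compare instead
   of a library call.  */
int
starts_with_ab (const char *p)
{
  return __builtin_memcmp (p, "ab", 2) == 0;
}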
Bootstrapped and tested on x86_64-linux-gnu.
Changes since v1:
* v2: Add verification of arguments to memcmp to simplify_builtin_memcmp.
PR tree-optimization/116651
PR tree-optimization/93265
PR tree-optimization/103647
PR tree-optimization/52171
gcc/ChangeLog:
* tree-ssa-forwprop.cc (simplify_builtin_memcmp): New function.
(simplify_builtin_call): Call simplify_builtin_memcmp for memcmp
memcmp_eq builtins.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr44130.c: Add an inline-asm clobber.
* g++.dg/tree-ssa/vector-compare-1.C: New test.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
|
|
This reverts commit 50064b2898edfb83bc37f2597a35cbd3c1c853e3.
|
|
|
|
This patch is a followup to PR modula2/121629 which uses
the cpp_include_defaults array to configure the default search path
entries. In particular it creates default search paths
based on LOCAL_INCLUDE_DIR, PREFIX_INCLUDE_DIR, gcc version path
and NATIVE_SYSTEM_HEADER_DIR.
gcc/m2/ChangeLog:
PR modula2/121709
* gm2-lang.cc (concat_component): New function.
(find_cpp_entry): Ditto.
(lookup_cpp_default): Ditto.
(add_default_include_paths): Rewrite.
(m2_pathname_root): Remove.
gcc/ChangeLog:
PR modula2/121709
* doc/gm2.texi (Module Search Path): Reflect the new
search order.
Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
|
|
The following minimal reproducer is miscompiled by vanilla gcc:
extern int x[10], y[10];
bool g();
void f() { 0[g() ? x : y] = 1; }
gcc would mistakenly treat the subexpression (g() ? x : y) as a prvalue and
move that array to the stack. The following assignment would then write to the
stack instead of to the global arrays. When optimizations are enabled, this
assignment is discarded by dse and gcc generates the following code for the
f function:
"_Z1fi":
jmp "_Z1gv"
The miscompilation requires all the following conditions to be met:
- The array subscription expression is written as idx[array], instead of
the usual form array[idx];
- The "array" part must be a ternary expression (COND_EXPR in gcc tree)
and it must be an lvalue.
- The code must be compiled with -fstrong-eval-order which is the default
for -std=c++17 or later.
The cause of the issue lies in cp_build_array_ref, where it mistakenly
generates a COND_EXPR with ARRAY_TYPE in the IL when all the criteria above
are met.  This patch resolves the issue by moving the canonicalization
step that transforms idx[array] into array[idx] earlier in
cp_build_array_ref, to ensure we handle these two forms of array
subscripting consistently.
Tested on x86_64-linux.
gcc/cp/ChangeLog:
* typeck.cc (cp_build_array_ref): Handle 0[arr] earlier.
gcc/testsuite/ChangeLog:
* g++.dg/cpp1z/array-condition-expr.C: New test.
Signed-off-by: Sirui Mu <msrlancern@gmail.com>
|
|
Whilst experimenting with PR diagnostics/121039 (potentially capturing
suppressed diagnostics in SARIF output), I found it very useful to have
a text log from the diagnostic subsystem to track what it's doing and
the decisions it's making (e.g. exactly when and why a diagnostic is
being rejected).
This patch adds a simple logging mechanism to the diagnostics subsystem,
enabled by setting GCC_DIAGNOSTICS_LOG in the environment, which emits
nested text like this to stderr (or a named file):
warning (option_id: 668, gmsgid: "%<-Wformat-security%> ignored without %<-Wformat%>")
diagnostics::context::diagnostic_impl (option_id: 668, kind: warning, gmsgid: "%<-Wformat-security%> ignored without %<-Wformat%>")
diagnostics::context::report_diagnostic
rejecting: diagnostic not enabled
false <- diagnostics::context::diagnostic_impl
false <- warning
This logging mechanism doesn't use pretty_printer because it can be
helpful to use it to debug pretty_printer itself.
gcc/ChangeLog:
* Makefile.in (OBJS-libcommon): Add diagnostics/logging.o.
* diagnostic-global-context.cc: Include "diagnostics/logging.h".
(log_function_params, auto_inc_log_depth): New "using" decls.
(verbatim): Add logging.
(emit_diagnostic): Likewise.
(emit_diagnostic_valist): Likewise.
(emit_diagnostic_valist_meta): Likewise.
(inform): Likewise.
(inform_n): Likewise.
(warning): Likewise.
(warning_at): Likewise.
(warning_meta): Likewise.
(warning_n): Likewise.
(pedwarn): Likewise.
(permerror): Likewise.
(permerror_opt): Likewise.
* diagnostics/context.cc: Include "diagnostics/logging.h".
(context::initialize): Initialize m_logger. Add logging.
(context::finish): Add logging. Clean up m_logger.
(context::dump): Add indent param.
(context::set_sink): Add logging.
(context::add_sink): Add logging.
(diagnostic_kind_debug_text): New.
(get_debug_string_for_kind): New.
(context::report_diagnostic): Add logging.
(context::diagnostic_impl): Likewise.
(context::diagnostic_n_impl): Likewise.
(context::end_group): Likewise.
* diagnostics/context.h: Include "diagnostics/logging.h".
(context::dump): Add indent param.
(context::get_logger): New accessor.
(context::classify_diagnostics): Add logging.
(context::push_diagnostics): Likewise.
(context::pop_diagnostics): Likewise.
(context::m_logger): New field.
* diagnostics/html-sink.cc: Include "diagnostics/logging.h".
(html_builder::flush_to_file): Add logging.
(html_sink::on_report_diagnostic): Likewise.
* diagnostics/kinds.h (get_debug_string_for_kind): New decl.
* diagnostics/logging.cc: New file.
* diagnostics/logging.h: New file.
* diagnostics/output-file.h: Include "label-text.h".
* diagnostics/sarif-sink.cc: Include "diagnostics/logging.h".
(sarif_builder::flush_to_object): Add logging.
(sarif_builder::flush_to_file): Likewise.
(sarif_sink::on_report_diagnostic): Likewise.
* diagnostics/sink.h (sink::get_logger): New.
* diagnostics/text-sink.cc: Include "diagnostics/logging.h".
(text_sink::on_report_diagnostic): Add logging.
* doc/invoke.texi (Environment Variables): Document
GCC_DIAGNOSTICS_LOG.
* opts-diagnostic.cc: Include "diagnostics/logging.h".
(handle_OPT_fdiagnostics_add_output_): Add logging.
(handle_OPT_fdiagnostics_set_output_): Likewise.
gcc/analyzer/ChangeLog:
* pending-diagnostic.cc: Include "diagnostics/logging.h".
(diagnostic_emission_context::warn): Add logging.
(diagnostic_emission_context::inform): Likewise.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
|
|
Also, the instruction that sets the shift amount register (SAR) to 8 is
now omitted more effectively: it is omitted if there was a previous
bswapsi2 in the same BB, but not if no bswapsi2 is found or if another
insn that modifies SAR is found first (see below).
Note that the five instructions for writing to SAR are as follows, along
with the insns that use them (except for bswapsi2_internal itself):
- SSA8B
*shift_per_byte, *shlrd_per_byte
- SSA8L
*shift_per_byte, *shlrd_per_byte
- SSR
ashrsi3 (alt 1), lshrsi3 (alt 1), *shlrd_reg, rotrsi3 (alt 1)
- SSL
ashlsi3_internal (alt 1), *shlrd_reg, rotlsi3 (alt 1)
- SSAI
*shlrd_const, rotlsi3 (alt 0), rotrsi3 (alt 0)
gcc/ChangeLog:
* config/xtensa/xtensa-protos.h (xtensa_bswapsi2_output):
New function prototype.
* config/xtensa/xtensa.cc
(xtensa_bswapsi2_output_1, xtensa_bswapsi2_output):
New functions.
* config/xtensa/xtensa.md (bswapsi2_internal):
Rewrite in compact syntax and use xtensa_bswapsi2_output() as asm
output.
gcc/testsuite/ChangeLog:
* gcc.target/xtensa/bswap-SSAI8.c: New.
|
|
So the RISC-V port has attributes which indicate the index within
recog_data where certain operands will be found.
For this BZ the default value of the merge_op_idx attribute on the given insn
is "2", but the insn only has operands 0 and 1.  So we do an out-of-bounds
array access and boom, the ICE/valgrind failure.
As we discussed in the patchwork meeting, this is all a bit clunky and has been
fairly error prone. This doesn't add any massive checking, but does introduce
some asserts to help catch problems a bit earlier and clearer.
In particular in cases where we're already asserting that the returned index is
valid (!= INVALID_ATTRIBUTE) we also assert that the index is less than the
total number of operands.
In the get_vlmax_ta_preferred_avl routine it appears like we need to handle
these two cases more gracefully as we apparently legitimately query for the
merge_op_idx on a fairly arbitrary insn. We just have to make sure to not
*use* the result if it's INVALID_ATTRIBUTE. So for that code we assert that
merge_op_idx is either INVALID_ATTRIBUTE or smaller than the number of
operands.
This patch also adds overrides for 3 patterns to return INVALID_ATTRIBUTE for
merge_op_idx, similar to how they already do for mode_idx and avl_type_idx.
This has been bootstrapped and regression tested on the bpi & pioneer systems
and regression tested for riscv32-elf and riscv64-elf. Waiting on CI before
pushing.
PR target/121548
gcc/
* config/riscv/riscv-avlprop.cc (get_insn_vtype_mode): Assert
MODE_IDX is smaller than the number of operands.
(simplify_replace_vlmax_avl): Similarly.
(pass_avlprop::get_vlmax_ta_preferred_avl): Similarly.
* config/riscv/vector.md: Override merge_op_idx computation
for simple moves, just like is done for avl_type_idx and mode_idx.
|
|
PR fortran/93330
gcc/fortran/ChangeLog:
* interface.cc (get_sym_storage_size): Add argument size_known to
indicate that the storage size could be successfully determined.
(get_expr_storage_size): Likewise.
(gfc_compare_actual_formal): Use them to handle zero-sized dummy
and actual arguments.
If a character formal argument has the pointer or allocatable
attribute, or is an array that is neither assumed size nor explicit size,
we generate an error by default unless -std=legacy is specified,
which falls back to just giving a warning.
If -Wcharacter-truncation is given, warn on a character actual
argument longer than the dummy. Generate an error for too short
scalar character arguments if -std=f* is given instead of just a
warning.
gcc/testsuite/ChangeLog:
* gfortran.dg/argument_checking_15.f90: Adjust dg-pattern.
* gfortran.dg/bounds_check_strlen_7.f90: Add dg-pattern.
* gfortran.dg/char_length_3.f90: Adjust options.
* gfortran.dg/whole_file_24.f90: Add dg-pattern.
* gfortran.dg/whole_file_29.f90: Likewise.
* gfortran.dg/argument_checking_27.f90: New test.
|
|
This pattern enables the combine pass (or late-combine, depending on the case)
to merge a vec_duplicate into an unspec_vfmin RTL instruction.
Before this patch, we have two instructions, e.g.:
vfmv.v.f v2,fa0
vfmin.vv v1,v1,v2
After, we get only one:
vfmin.vf v1,v1,fa0
gcc/ChangeLog:
* config/riscv/autovec-opt.md
(*vfmin_vf_ieee_<mode>): Add new patterns to combine vec_duplicate +
vfmin.vv (unspec) into vfmin.vf.
(*vfmul_vf_<mode>, *vfrdiv_vf_<mode>, *vfmin_vf_<mode>): Fix attribute
types.
* config/riscv/vector.md (@pred_<ieee_fmaxmin_op><mode>_scalar): Allow
VLS modes.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vx_vf/vf-4-f16.c: Add vfmin.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-4-f32.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-4-f64.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-5-f16.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-5-f32.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-5-f64.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-6-f16.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-6-f32.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-6-f64.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-7-f16.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-7-f32.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-7-f64.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-8-f16.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-8-f32.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-8-f64.c: New test.
|
|
Since
commit 401199377c50045ede560daf3f6e8b51749c2a87
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Tue Jun 17 10:17:17 2025 +0800
x86: Improve vector_loop/unrolled_loop for memset/memcpy
the memcpy/memset epilogue has been expanded with move_by_pieces and
store_by_pieces for vector_loop even when
targetm.use_by_pieces_infrastructure_p returns false, which triggers
gcc_assert (targetm.use_by_pieces_infrastructure_p
(len, align,
memsetp ? SET_BY_PIECES : STORE_BY_PIECES,
optimize_insn_for_speed_p ()));
in store_by_pieces. Fix it by:
1. Add by_pieces_in_use to machine_function to indicate that by_pieces op
is currently in use.
2. Set and clear by_pieces_in_use when expanding memcpy/memset epilogue
with move_by_pieces and store_by_pieces.
3. Define TARGET_USE_BY_PIECES_INFRASTRUCTURE_P to return true if
by_pieces_in_use is true.
gcc/
PR target/121096
* config/i386/i386-expand.cc (expand_cpymem_epilogue): Set and
clear by_pieces_in_use when using by_pieces op.
(expand_setmem_epilogue): Likewise.
* config/i386/i386.cc (ix86_use_by_pieces_infrastructure_p): New.
(TARGET_USE_BY_PIECES_INFRASTRUCTURE_P): Likewise.
* config/i386/i386.h (machine_function): Add by_pieces_in_use.
gcc/testsuite/
PR target/121096
* gcc.target/i386/memcpy-strategy-14.c: New test.
* gcc.target/i386/memcpy-strategy-15.c: Likewise.
* gcc.target/i386/memset-strategy-10.c: Likewise.
* gcc.target/i386/memset-strategy-11.c: Likewise.
* gcc.target/i386/memset-strategy-12.c: Likewise.
* gcc.target/i386/memset-strategy-13.c: Likewise.
* gcc.target/i386/memset-strategy-14.c: Likewise.
* gcc.target/i386/memset-strategy-15.c: Likewise.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
|
|
Since the constant passed to setmem_epilogue_gen_val may not be in
word_mode, update setmem_epilogue_gen_val to handle any integer modes.
gcc/
PR target/121108
* config/i386/i386-expand.cc (setmem_epilogue_gen_val): Don't
assert op_mode == word_mode and handle any integer modes.
gcc/testsuite/
PR target/121108
* gcc.target/i386/memset-strategy-16.c: New test.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
|
|
Source operands of 2 TLS_CALL patterns in
(insn 10 9 11 3 (set (reg:DI 100)
(unspec:DI [
(symbol_ref:DI ("caml_state") [flags 0x10] <var_decl 0x7fe10e1d9e40 caml_state>)
] UNSPEC_TLSDESC)) "x.c":7:16 1674 {*tls_dynamic_gnu2_lea_64_di}
(nil))
(insn 11 10 12 3 (parallel [
(set (reg:DI 99)
(unspec:DI [
(symbol_ref:DI ("caml_state") [flags 0x10] <var_decl 0x7fe10e1d9e40 caml_state>)
(reg:DI 100)
(reg/f:DI 7 sp)
] UNSPEC_TLSDESC))
(clobber (reg:CC 17 flags))
]) "x.c":7:16 1676 {*tls_dynamic_gnu2_call_64_di}
(expr_list:REG_DEAD (reg:DI 100)
(expr_list:REG_UNUSED (reg:CC 17 flags)
(nil))))
and
(insn 19 17 20 4 (set (reg:DI 104)
(unspec:DI [
(symbol_ref:DI ("caml_state") [flags 0x10] <var_decl 0x7fe10e1d9e40 caml_state>)
] UNSPEC_TLSDESC)) "x.c":6:10 discrim 1 1674 {*tls_dynamic_gnu2_lea_64_di}
(nil))
(insn 20 19 21 4 (parallel [
(set (reg:DI 103)
(unspec:DI [
(symbol_ref:DI ("caml_state") [flags 0x10] <var_decl 0x7fe10e1d9e40 caml_state>)
(reg:DI 104)
(reg/f:DI 7 sp)
] UNSPEC_TLSDESC))
(clobber (reg:CC 17 flags))
]) "x.c":6:10 discrim 1 1676 {*tls_dynamic_gnu2_call_64_di}
(expr_list:REG_DEAD (reg:DI 104)
(expr_list:REG_UNUSED (reg:CC 17 flags)
(nil))))
are the same even though rtx_equal_p returns false since (reg:DI 100)
and (reg:DI 104) are set from the same symbol. Use the UNSPEC_TLSDESC
symbol
(unspec:DI [(symbol_ref:DI ("caml_state") [flags 0x10])] UNSPEC_TLSDESC))
to check if 2 TLS_CALL patterns have the same source.
For TLS64_COMBINE, use both UNSPEC_TLSDESC and UNSPEC_DTPOFF unspecs to
check if 2 TLS64_COMBINE patterns have the same source.
gcc/
PR target/121694
* config/i386/i386-features.cc (redundant_pattern): Add
tlsdesc_val.
(pass_x86_cse): Likewise.
(pass_x86_cse::tls_set_insn_from_symbol): New member function.
(pass_x86_cse::candidate_gnu2_tls_p): Set tlsdesc_val. For
TLS64_COMBINE, match both UNSPEC_TLSDESC and UNSPEC_DTPOFF
symbols. For TLS64_CALL, match the UNSPEC_TLSDESC symbol.
(pass_x86_cse::x86_cse): Initialize the tlsdesc_val field in
load. Pass the tlsdesc_val field to ix86_place_single_tls_call
for X86_CSE_TLSDESC.
gcc/testsuite/
PR target/121694
* gcc.target/i386/pr121668-1b.c: New test.
* gcc.target/i386/pr121694-1a.c: Likewise.
* gcc.target/i386/pr121694-1b.c: Likewise.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
|
|
If B::get is (implicitly or explicitly) constexpr, the individual b bindings
have constant initialization and get optimized away, so their symbols don't
appear in the assembly.
gcc/testsuite/ChangeLog:
* g++.dg/cpp26/decomp26.C: Add -fimplicit-constexpr.
|
|
GCC added generic support in r15-7406-gb5a29a93ee29a8 (Feb 2025) with an
'(experimental)' marker, also because ROCm only supported it in their
git repository and not in a released version. Since ROCm 6.4 (Apr 2025),
generic is also supported in released ROCm versions - and has meanwhile
been tested by us.
For well tested specific architectures, there is no reason that a binary
compiled for the associated generic architecture should perform any
differently from the specific version.  Hence, this commit removes the
marker for gfx9-generic (gfx900, gfx906 and gfx90c are known-to-work
specific architectures), gfx10-3-generic (likewise for gfx1030 and
gfx1036), and gfx11-generic (gfx1100 and gfx1103).
gcc/ChangeLog:
* doc/invoke.texi (AMD GCN Options: -march): Remove '(experimental)'
from gfx-{9,10-3,11}-generic.
|
|
Also remove future tense for ROCm as 6.4.0 has been released in April 2025
and it supports generic architectures.
gcc/ChangeLog:
* doc/install.texi (amdgcn): Clarify which binaries must be the
LLVM version and which must be installed. Update version data for
ROCm for generic architectures.
|
|
These 2 testcases were originally designed for the default -march= of
x86_64, so if you pass -march=native (on a target with AVX512 enabled),
they will fail.  To fix this, we add `-mno-sse3 -mtune=generic`
to the options to force a specific arch for the testcases.
Changes since v1:
* v2: Use -mtune=generic instead of -mprefer-vector-width=512.
Tested on a skylake-avx512 machine with -march=native.
PR testsuite/120643
gcc/testsuite/ChangeLog:
* gcc.target/i386/vect-pragma-target-1.c: Add `-mno-sse3 -mtune=generic`
to the options.
* gcc.target/i386/vect-pragma-target-2.c: Likewise.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
|
|
After r16-3201-gee67004474d521, this testcase started to fail: we can now
copy-prop into arguments, so the number of "after previous" checks has
doubled.
Pushed after a quick check to make sure the testcase is now passing.
PR testsuite/121713
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/vld2-1.c: Update the number of "after previous"
checks.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
|
|
gcc/ChangeLog:
* doc/invoke.texi: Document -param=ix86-vect-unroll-limit.
|
|
cost 0, 1 and 15
Add asm dump checks and run tests for the vec_duplicate + vnmsac.vv
combine to vnmsac.vx, when the GR2VR cost is 0, 2 and 15.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-u16.c: Add asm check
for vnmsac.vx.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-u32.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-u64.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-u8.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-2-u16.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-2-u32.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-2-u64.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-2-u8.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-3-u16.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-3-u32.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-3-u64.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-3-u8.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_vnmsac-run-1-u16.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_vnmsac-run-1-u32.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_vnmsac-run-1-u64.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_vnmsac-run-1-u8.c: New test.
Signed-off-by: Pan Li <pan2.li@intel.com>
|
|
cost 0, 1 and 15
Add asm dump checks and run tests for the vec_duplicate + vnmsac.vv
combine to vnmsac.vx, when the GR2VR cost is 0, 2 and 15.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-i16.c: Add asm check
for vnmsac.vx.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-i32.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-i64.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-1-i8.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-2-i16.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-2-i32.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-2-i64.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-2-i8.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-3-i16.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-3-i32.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-3-i64.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx-3-i8.c: Ditto.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_ternary.h: Add test
helper macros.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_ternary_data.h: Add test
data for run test.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_vnmsac-run-1-i16.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_vnmsac-run-1-i32.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_vnmsac-run-1-i64.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vx_vnmsac-run-1-i8.c: New test.
Signed-off-by: Pan Li <pan2.li@intel.com>
|
|
This patch would like to combine vec_duplicate + vnmsac.vv into
vnmsac.vx, as in the example code below.  The related pattern depends on
the cost of a vec_duplicate from GR2VR: late-combine will take action if
the GR2VR cost is zero, and reject the combination if the GR2VR cost is
greater than zero.
Assume we have example code like the below, with a GR2VR cost of 0:
#define DEF_VX_TERNARY_CASE_0(T, OP_1, OP_2, NAME)                   \
  void                                                               \
  test_vx_ternary_##NAME##_##T##_case_0 (T * restrict vd,            \
                                         T * restrict vs2, T rs1,    \
                                         unsigned n)                 \
  {                                                                  \
    for (unsigned i = 0; i < n; i++)                                 \
      vd[i] = vd[i] OP_2 vs2[i] OP_1 rs1;                            \
  }
DEF_VX_TERNARY_CASE_0(int32_t, *, +, macc)
Before this patch:
11 │ beq a3,zero,.L8
12 │ vsetvli a5,zero,e32,m1,ta,ma
13 │ vmv.v.x v2,a2
...
16 │ .L3:
17 │ vsetvli a5,a3,e32,m1,ta,ma
...
22 │ vnmsac.vv v1,v2,v3
...
25 │ bne a3,zero,.L3
After this patch:
11 │ beq a3,zero,.L8
...
14 │ .L3:
15 │ vsetvli a5,a3,e32,m1,ta,ma
...
20 │ vnmsac.vx v1,a2,v3
...
23 │ bne a3,zero,.L3
gcc/ChangeLog:
* config/riscv/autovec-opt.md (*vnmsac_vx_<mode>): Add new
pattern to combine to vx.
* config/riscv/vector.md (@pred_vnmsac_vx_<mode>): Add new
pattern to generate rtl.
(*pred_nmsac_<mode>_scalar_undef): Ditto.
Signed-off-by: Pan Li <pan2.li@intel.com>
|
|
libgcc/config/libbid/ChangeLog:
PR target/120691
* bid128_div.c: Fix _Decimal128 arithmetic error under
FE_UPWARD.
* bid128_rem.c: Ditto.
* bid128_sqrt.c: Ditto.
* bid64_div.c (bid64_div): Ditto.
* bid64_sqrt.c (bid64_sqrt): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr120691.c: New test.
|
|
|
|
The pthread_incomplete_struct_argument fix was intended for ancient
versions of Glibc (only 2.3.3 and 2.3.4, I believe). From Glibc 2.3.5
the pthread.h header already included the change to use a pointer
instead of an array, so the fixinclude was no longer used.
However, the https://sourceware.org/bugzilla/show_bug.cgi?id=26647 fix
changed the __setjmpbuf declaration to use struct __jmp_buf_tag __env[1]
again, which caused this fixinclude to start matching again. This means
that GCC now installs a "fixed" pthread.h with a change to a declaration
that is guarded by #if ! __GNUC_PREREQ (11, 0), i.e. it's not even relevant
for modern versions of GCC. The "fixed" pthread.h causes problems for
users because of changes to internal implementation details of the
pthread_cond_t type, which require the "fixed" pthread.h to be updated
with mkheaders if Glibc is updated.
This change adds a bypass to the fixinclude, so that it no longer
matches modern Glibc versions, and only applies to glibc versions 2.3.3
and 2.3.4 as originally intended.
Also remove outdated reference to svn in the comment at the top of the
generated file.
fixincludes/ChangeLog:
PR bootstrap/118009
PR bootstrap/119089
* inclhack.def (pthread_incomplete_struct_argument): Add bypass.
* fixincl.tpl: Remove reference to svn in comment.
* fixincl.x: Regenerate.
Reviewed-by: Jason Merrill <jason@redhat.com>
|
|
This implements P2546R5 (Debugging Support), including the P2810R4
(is_debugger_present is_replaceable) changes, allowing
std::is_debugger_present to be replaced by the program.
It would be good to provide a macOS definition of is_debugger_present as
per https://developer.apple.com/library/archive/qa/qa1361/_index.html
but that isn't included in this change.
The src/c++26/debugging.cc file defines a global volatile int which can
be set by debuggers to indicate when they are attached and detached from
a running process. This allows std::is_debugger_present() to give a
reliable answer, and additionally allows a debugger to choose how
std::breakpoint() should behave. Setting the global to a positive value
will cause std::breakpoint() to use that value as an argument to
std::raise, so debuggers that prefer SIGABRT for breakpoints can select
that. By default std::breakpoint() will use a platform-specific action
such as the INT3 instruction on x86, or GCC's __builtin_trap().
On Linux the std::is_debugger_present() function checks whether the
process is being traced by a process named "gdb", "gdbserver" or
"lldb-server", to try to avoid interpreting other tracing processes
(such as strace) as a debugger. There have been comments suggesting this
isn't desirable and that std::is_debugger_present() should just return
true for any tracing process (which is the case for non-Linux targets
that support the ptrace system call).
libstdc++-v3/ChangeLog:
PR libstdc++/119670
* acinclude.m4 (GLIBCXX_CHECK_DEBUGGING): Check for facilities
needed by <debugging>.
* config.h.in: Regenerate.
* configure: Regenerate.
* configure.ac: Use GLIBCXX_CHECK_DEBUGGING.
* include/Makefile.am: Add new header.
* include/Makefile.in: Regenerate.
* include/bits/version.def (debugging): Add.
* include/bits/version.h: Regenerate.
* include/precompiled/stdc++.h: Add new header.
* src/c++26/Makefile.am: Add new file.
* src/c++26/Makefile.in: Regenerate.
* include/std/debugging: New file.
* src/c++26/debugging.cc: New file.
* testsuite/19_diagnostics/debugging/breakpoint.cc: New test.
* testsuite/19_diagnostics/debugging/breakpoint_if_debugging.cc:
New test.
* testsuite/19_diagnostics/debugging/is_debugger_present.cc: New
test.
* testsuite/19_diagnostics/debugging/is_debugger_present-2.cc:
New test.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
|
|
As with PR116928, we need to set greater_than_is_operator_p within the
lambda delimiters.
PR c++/107953
gcc/cp/ChangeLog:
* parser.cc (cp_parser_lambda_expression): Set
greater_than_is_operator_p.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/lambda-targ18.C: New test.
|
|
So the current pass order is:
```
NEXT_PASS (pass_tail_recursion);
NEXT_PASS (pass_if_to_switch);
NEXT_PASS (pass_convert_switch);
NEXT_PASS (pass_cleanup_eh);
```
But nothing in if_to_switch nor convert_switch changes the IR in a way
that cleanup_eh would take into account.
tail_recursion benefits the most by not having "almost" empty landing pads.
This order was originally set when cleanup_eh was added in r0-92178-ga8da523f8a442f,
but it looks like it was simply placed just before inlining rather than with
the thought that it could improve earlier passes.
An example where this helps is PR 115201 where we have:
```
;; basic block 5, loop depth 0, maybe hot
;; prev block 4, next block 6, flags: (NEW, REACHABLE, VISITED)
;; pred: 4 (TRUE_VALUE,EXECUTABLE)
[LP 1] # .MEM_19 = VDEF <.MEM_45>
# USE = nonlocal escaped
# CLB = nonlocal escaped
D.4770 = _Z12binarySearchIi2itIiEET0_RKT_S2_S2_D.4690 (item_15(D), startD.4711, midD.4717);
goto <bb 7>; [INV]
;; succ: 8 (EH,EXECUTABLE)
;; 7 (FALLTHRU,EXECUTABLE)
...
;; basic block 8, loop depth 0, maybe hot
;; prev block 7, next block 1, flags: (NEW, REACHABLE, VISITED)
;; pred: 5 (EH,EXECUTABLE)
;; 6 (EH,EXECUTABLE)
# .MEM_7 = PHI <.MEM_19(5), .MEM_18(6)>
<L6>: [LP 1]
# .MEM_20 = VDEF <.MEM_7>
midD.4717 ={v} {CLOBBER(eos)};
resx 1
;; succ:
```
As you can see, the empty landing pad should be able to be removed and
then tail recursion can happen.
Bootstrapped and tested x86_64-linux-gnu.
PR tree-optimization/115201
gcc/ChangeLog:
* passes.def: Move cleanup_eh before first tail_recursion.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
|
|
ChangeLog:
* MAINTAINERS: add myself to write after approval
|
|
This pattern enables the combine pass (or late-combine, depending on the case)
to merge a vec_duplicate into an smin RTL instruction.
Before this patch, we have two instructions, e.g.:
vfmv.v.f v2,fa0
vfmin.vv v1,v1,v2
After, we get only one:
vfmin.vf v1,v1,fa0
gcc/ChangeLog:
* config/riscv/autovec-opt.md (*vfmin_vf_<mode>): Add new pattern to
combine vec_duplicate + vfmin.vv into vfmin.vf.
* config/riscv/vector.md (@pred_<optab><mode>_scalar): Allow VLS modes.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vls/floating-point-min-2.c: Adjust scan
dump.
* gcc.target/riscv/rvv/autovec/vls/floating-point-min-4.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-1-f16.c: Add vfmin.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-1-f32.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-1-f64.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-2-f16.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-2-f32.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-2-f64.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-3-f16.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-3-f32.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf-3-f64.c: Likewise.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_binop.h: Add support for
function variants.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_binop_data.h: Add data for
vfmin.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_vfmin-run-1-f16.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_vfmin-run-1-f32.c: New test.
* gcc.target/riscv/rvv/autovec/vx_vf/vf_vfmin-run-1-f64.c: New test.
|
|
The following emits the assumption that is used for versioning from
niter analysis.
* tree-vect-loop.cc (vect_analyze_loop_form): Dump
niter assumption used for versioning.
|
|
Add an expander for isinf using integer arithmetic. This is
typically faster and avoids generating spurious exceptions on
signaling NaNs. This fixes part of PR66462.
int isinf1 (float x) { return __builtin_isinf (x); }
Before:
fabs s0, s0
mov w0, 2139095039
fmov s31, w0
fcmp s0, s31
cset w0, le
eor w0, w0, 1
ret
After:
fmov w1, s0
mov w0, -16777216
cmp w0, w1, lsl 1
cset w0, eq
ret
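In C terms, the integer check the expander emits is roughly equivalent to
the following (a hedged sketch, not code from the patch):
/* x is infinite iff, after shifting out the sign bit, the remaining bits
   equal the all-ones exponent with a zero mantissa.  */
int
isinf_bits (float x)
{
  unsigned int u;
  __builtin_memcpy (&u, &x, sizeof u);
  return (u << 1) == 0xff000000u;
}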
gcc:
PR middle-end/66462
* config/aarch64/aarch64.md (isinf<mode>2): Add new expander.
* config/aarch64/iterators.md (mantissa_bits): Add new mode_attr.
gcc/testsuite:
PR middle-end/66462
* gcc.target/aarch64/pr66462.c: Add new test.
|
|
libstdc++-v3/ChangeLog:
* testsuite/18_support/comparisons/categories/zero_neg.cc: New test.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
|
|
Instead of going via the PHI node accessible through the reduc-dec
link, use the scalar def of the reduction SLP node. Compute this
in vectorize_fold_left_reduction itself.
* tree-vect-loop.cc (vectorize_fold_left_reduction): Do not get
reduc_var as argument, instead compute it here.
(vect_transform_reduction): Adjust.
|
|
The current implementation of `complex<_Tp>` assumes that `int` is
implicitly convertible to `_Tp`, e.g., when using `complex<_Tp>(1)`.
This patch transforms the implicit conversions into explicit type casts.
As a result, `std::complex` is now able to support more types. One
example is the type `Eigen::Half` from
https://eigen.tuxfamily.org/dox-devel/Half_8h_source.html which does not
implement implicit type conversions.
libstdc++-v3/ChangeLog:
* include/std/complex (polar, __complex_sqrt, pow)
(__complex_pow_unsigned): Use explicit conversions from int to
the complex value_type.
|
|
Asking std::is_constructible_v<std::bitset<1>, NonTrivial*> gives an
error, rather than answering the query. The problem is that the
constructor for std::bitset("010101") is not constrained to only accept
pointers to char-like types, and for the second parameter (which has a
default argument) std::basic_string_view<CharT> gets instantiated. If
the type is not char-like then that has undefined behaviour, and might
trigger a static_assert to fail in the body of std::basic_string_view.
We can fix it by constraining that constructor using the requirements
for char-like types from [strings.general] p1. I've submitted LWG 4294
and proposed making this change in the standard.
libstdc++-v3/ChangeLog:
PR libstdc++/121046
* include/std/bitset (bitset(const CharT*, ...)): Add
constraints on CharT type.
* testsuite/23_containers/bitset/lwg4294.cc: New test.
|