2023-08-07  GCC: Check if AR works with --plugin and rc  (H.J. Lu, 22 files changed, -61/+430)

AR from older binutils doesn't work with --plugin and rc:

    [hjl@gnu-cfl-2 bin]$ touch foo.c
    [hjl@gnu-cfl-2 bin]$ ar --plugin /usr/libexec/gcc/x86_64-redhat-linux/10/liblto_plugin.so rc libfoo.a foo.c
    [hjl@gnu-cfl-2 bin]$ ./ar --plugin /usr/libexec/gcc/x86_64-redhat-linux/10/liblto_plugin.so rc libfoo.a foo.c
    ./ar: no operation specified
    [hjl@gnu-cfl-2 bin]$ ./ar --version
    GNU ar (Linux/GNU Binutils) 2.29.51.0.1.20180112
    Copyright (C) 2018 Free Software Foundation, Inc.
    This program is free software; you may redistribute it under the terms of
    the GNU General Public License version 3 or (at your option) any later version.
    This program has absolutely no warranty.
    [hjl@gnu-cfl-2 bin]$

Check if AR works with --plugin and rc before passing --plugin to AR and RANLIB.

ChangeLog:

        * configure: Regenerated.
        * libtool.m4 (_LT_CMD_OLD_ARCHIVE): Check if AR works with
        --plugin and rc before enabling --plugin.

config/ChangeLog:

        * gcc-plugin.m4 (GCC_PLUGIN_OPTION): Check if AR works with
        --plugin and rc before enabling --plugin.

gcc/ChangeLog: * configure: Regenerate.
libatomic/ChangeLog: * configure: Regenerate.
libbacktrace/ChangeLog: * configure: Regenerate.
libcc1/ChangeLog: * configure: Regenerate.
libffi/ChangeLog: * configure: Regenerate.
libgfortran/ChangeLog: * configure: Regenerate.
libgm2/ChangeLog: * configure: Regenerate.
libgomp/ChangeLog: * configure: Regenerate.
libiberty/ChangeLog: * configure: Regenerate.
libitm/ChangeLog: * configure: Regenerate.
libobjc/ChangeLog: * configure: Regenerate.
libphobos/ChangeLog: * configure: Regenerate.
libquadmath/ChangeLog: * configure: Regenerate.
libsanitizer/ChangeLog: * configure: Regenerate.
libssp/ChangeLog: * configure: Regenerate.
libstdc++-v3/ChangeLog: * configure: Regenerate.
libvtv/ChangeLog: * configure: Regenerate.
lto-plugin/ChangeLog: * configure: Regenerate.
zlib/ChangeLog: * configure: Regenerate.
2023-08-07  Sync with binutils: GCC: Pass --plugin to AR and RANLIB  (H.J. Lu, 28 files changed, -49/+621)

Sync with binutils for building binutils with LTO:

    50ad1254d50 GCC: Pass --plugin to AR and RANLIB

Detect GCC LTO plugin.  Pass --plugin to AR and RANLIB to support LTO build.

ChangeLog:

        * Makefile.tpl (AR): Add @AR_PLUGIN_OPTION@.
        (RANLIB): Add @RANLIB_PLUGIN_OPTION@.
        * configure.ac: Include config/gcc-plugin.m4.
        AC_SUBST AR_PLUGIN_OPTION and RANLIB_PLUGIN_OPTION.
        * libtool.m4 (_LT_CMD_OLD_ARCHIVE): Pass --plugin to AR and
        RANLIB if possible.
        * Makefile.in: Regenerated.
        * configure: Likewise.

config/ChangeLog:

        * gcc-plugin.m4 (GCC_PLUGIN_OPTION): New.

libiberty/ChangeLog:

        * Makefile.in (AR): Add @AR_PLUGIN_OPTION@.
        (RANLIB): Add @RANLIB_PLUGIN_OPTION@.
        (configure_deps): Depend on ../config/gcc-plugin.m4.
        * configure.ac: AC_SUBST AR_PLUGIN_OPTION and
        RANLIB_PLUGIN_OPTION.
        * aclocal.m4: Regenerated.
        * configure: Likewise.

zlib/ChangeLog:

        * configure: Regenerated.

gcc/ChangeLog: * configure: Regenerate.
libatomic/ChangeLog: * configure: Regenerate.
libbacktrace/ChangeLog: * configure: Regenerate.
libcc1/ChangeLog: * configure: Regenerate.
libffi/ChangeLog: * configure: Regenerate.
libgfortran/ChangeLog: * configure: Regenerate.
libgm2/ChangeLog: * configure: Regenerate.
libgomp/ChangeLog: * configure: Regenerate.
libitm/ChangeLog: * configure: Regenerate.
libobjc/ChangeLog: * configure: Regenerate.
libphobos/ChangeLog: * configure: Regenerate.
libquadmath/ChangeLog: * configure: Regenerate.
libsanitizer/ChangeLog: * configure: Regenerate.
libssp/ChangeLog: * configure: Regenerate.
libstdc++-v3/ChangeLog: * configure: Regenerate.
libvtv/ChangeLog: * configure: Regenerate.
lto-plugin/ChangeLog: * configure: Regenerate.
2023-08-07  gcc-4.5 build fixes  (Alan Modra, 1 file, -2/+0)

Trying to build binutils with an older gcc currently fails.  Working around these gcc bugs is not onerous so let's fix them.

include/ChangeLog:

        * xtensa-dynconfig.h (xtensa_isa_internal): Delete unnecessary
        forward declaration.
2023-08-07  PR29961, plugin-api.h: "Could not detect architecture endianess"  (Alan Modra, 1 file, -22/+23)

Found when attempting to build binutils on sparc sunos-5.8, where sys/byteorder.h defines _BIG_ENDIAN but not any of the BYTE_ORDER variants.  This patch adds the extra tests to cope with the old machine, and tidies the header a little.

include/ChangeLog:

        * plugin-api.h: When handling non-gcc or gcc < 4.6.0 include
        necessary header files before testing macros.  Make more use
        of #elif.  Test _LITTLE_ENDIAN and _BIG_ENDIAN in final tests.
2023-08-07  toplevel: Substitute GDCFLAGS instead of using CFLAGS  (Arsen Arsenović, 1 file, -1/+1)

r14-2875-g1ed21e23d6d4da ("Use substituted GDCFLAGS") already implemented this change, but only on the generated file rather than in the template it is generated from.

ChangeLog:

        * Makefile.tpl: Substitute @GDCFLAGS@ instead of using $(CFLAGS).
2023-08-07  [committed][RISC-V] Don't reject constants in cmov condition  (Jeff Law, 1 file, -1/+2)

This test is too aggressive.  Constants have VOIDmode, so we need to let them through this phase of conditional move support.  Fixes several missed conditional moves with the trunk.

gcc/

        * config/riscv/riscv.cc (riscv_expand_conditional_move): Allow
        VOIDmode operands to the conditional move before canonicalization.
2023-08-07  cprop_hardreg: Allow propagation of stack pointer in more cases.  (Manolis Tsamis, 1 file, -33/+23)

The stack pointer propagation fix 736f8fd3 turned out to be more restrictive than needed by rejecting propagation of the stack pointer when REG_POINTER didn't match.

This commit removes this check: when the stack pointer is propagated, it is fine for this to result in REG_POINTER becoming true from false, which is what the original code checked.

This simplification makes the previously introduced function maybe_copy_reg_attrs obsolete and the logic can be inlined at the call sites, as it was before 736f8fd3.

gcc/ChangeLog:

        * regcprop.cc (maybe_copy_reg_attrs): Remove unnecessary function.
        (find_oldest_value_reg): Inline stack_pointer_rtx check.
        (copyprop_hardreg_forward_1): Inline stack_pointer_rtx check.
2023-08-07  MAINTAINERS: Add myself as a BPF port reviewer  (David Faust, 1 file, -1/+1)

ChangeLog:

        * MAINTAINERS: Add the BPF port to my reviewer listing.
2023-08-07  ipa-sra: Don't consider CLOBBERS as writes preventing splitting  (Martin Jambor, 4 files, -6/+94)

When IPA-SRA detects whether a parameter passed by reference is written to, it does not special case CLOBBERs, which means it often bails out unnecessarily, especially when dealing with C++ destructors.  Fixed by the obvious continue in the two relevant loops and by adding a simple function that marks the clobbers in the transformation code as statements to be removed.

gcc/ChangeLog:

2023-08-04  Martin Jambor  <mjambor@suse.cz>

        PR ipa/110378
        * ipa-param-manipulation.h (class ipa_param_body_adjustments): New
        members get_ddef_if_exists_and_is_used and mark_clobbers_dead.
        * ipa-sra.cc (isra_track_scalar_value_uses): Ignore clobbers.
        (ptr_parm_has_nonarg_uses): Likewise.
        * ipa-param-manipulation.cc
        (ipa_param_body_adjustments::get_ddef_if_exists_and_is_used): New.
        (ipa_param_body_adjustments::mark_dead_statements): Move initial
        checks to get_ddef_if_exists_and_is_used.
        (ipa_param_body_adjustments::mark_clobbers_dead): New.
        (ipa_param_body_adjustments::common_initialization): Call
        mark_clobbers_dead when splitting.

gcc/testsuite/ChangeLog:

2023-07-31  Martin Jambor  <mjambor@suse.cz>

        PR ipa/110378
        * g++.dg/ipa/pr110378-1.C: New test.
2023-08-07  [committed] [RISC-V] Handle more cases in riscv_expand_conditional_move  (Raphael Zinsly, 2 files, -4/+42)

As I've mentioned in the main zicond thread, Ventana has had patches that support more cases by first emitting a suitable scc instruction, essentially as a canonicalization step of the condition for zicond.

For example if we have

    (set (target) (if_then_else (op (reg1) (reg2))
                                (true_value)
                                (false_value)))

the two-register comparison isn't handled by zicond directly.  But we can generate something like this instead:

    (set (temp) (op (reg1) (reg2)))
    (set (target) (if_then_else (op (temp) (const_int 0))
                                (true_value)
                                (false_value)))

Then let the remaining code from Xiao handle the true_value/false_value to make sure it's zicond compatible.

This is primarily Raphael's work.  My involvement has been mostly to move it from its original location (in the .md file) into the expander function and fix minor problems with the FP case.

gcc/

        * config/riscv/riscv.cc (riscv_expand_int_scc): Add invert_ptr
        as an argument and pass it to riscv_emit_int_order_test.
        (riscv_expand_conditional_move): Handle cases where the condition
        is not EQ/NE or the second argument to the conditional is not
        (const_int 0).
        * config/riscv/riscv-protos.h (riscv_expand_int_scc): Update
        prototype.

Co-authored-by: Jeff Law <jlaw@ventanamicro.com>
2023-08-07  MATCH: [PR109959] `(uns <= 1) & uns` could be optimized to `uns == 1`  (Andrew Pinski, 6 files, -6/+117)

I noticed while looking into some code generation of bitmap_single_bit_set_p that sometimes:

```
if (uns > 1)
  return 0;
return uns == 1;
```

would not optimize down to just:

```
return uns == 1;
```

In this case, VRP likes to change `a == 1` into `(bool)a` if a has a range of [0,1] due to the `a <= 1` side of the branch.

We might end up with this similar code even without VRP; in the case of builtin-sprintf-warn-23.c (and Wrestrict.c), we had:

```
if (s < 0 || 1 < s)
  s = 0;
```

which is the same as `s = ((unsigned)s) <= 1 ? s : 0;`, so we should be able to catch that also.

This adds 2 patterns to catch `(uns <= 1) & uns` and `(uns > 1) ? 0 : uns` and convert those into `(convert) uns == 1`.

OK?  Bootstrapped and tested on x86_64-linux-gnu with no regressions.

        PR tree-optimization/109959

gcc/ChangeLog:

        * match.pd (`(a > 1) ? 0 : (cast)a`, `(a <= 1) & (cast)a`):
        New patterns.

gcc/testsuite/ChangeLog:

        * gcc.dg/tree-ssa/builtin-sprintf-warn-23.c: Remove xfail.
        * c-c++-common/Wrestrict.c: Update test and remove some xfail.
        * gcc.dg/tree-ssa/cmpeq-1.c: New test.
        * gcc.dg/tree-ssa/cmpeq-2.c: New test.
        * gcc.dg/tree-ssa/cmpeq-3.c: New test.
2023-08-07  Use RPO order for sinking  (Richard Biener, 1 file, -14/+5)

The following makes us use RPO order instead of walking post-dominators.  This ensures we visit a block before any predecessors.  I've seen some extra sinking because of this in a larger testcase but failed to reduce a smaller one (processing of post-dominator sons is unordered so I failed to have "luck").

        * tree-ssa-sink.cc (pass_sink_code::execute): Do not calculate
        post-dominators.  Calculate RPO on the inverted graph and process
        blocks in that order.
2023-08-07  Fix ICE in RTL check during bootstrap  (liuhongt, 3 files, -6/+6)

    /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/libgfortran/generated/matmul_i1.c: In function ‘matmul_i1_avx512f’:
    /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/libgfortran/generated/matmul_i1.c:1781:1: internal compiler error: RTL check: expected elt 0 type 'i' or 'n', have 'w' (rtx const_int) in vpternlog_redundant_operand_mask, at config/i386/i386.cc:19460
     1781 | }
          | ^
    0x5559de26dc2d rtl_check_failed_type2(rtx_def const*, int, int, int, char const*, int, char const*)
            /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/gcc/rtl.cc:761
    0x5559de340bfe vpternlog_redundant_operand_mask(rtx_def**)
            /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/gcc/config/i386/i386.cc:19460
    0x5559dfec67a6 split_44
            /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/gcc/config/i386/sse.md:12730
    0x5559dfec67a6 split_63
            /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/gcc/config/i386/sse.md:28428
    0x5559deb8a682 try_split(rtx_def*, rtx_insn*, int)
            /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/gcc/emit-rtl.cc:3800
    0x5559deb8adf2 try_split(rtx_def*, rtx_insn*, int)
            /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/gcc/emit-rtl.cc:3972
    0x5559def69194 split_insn
            /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/gcc/recog.cc:3385
    0x5559def70c57 split_all_insns()
            /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/gcc/recog.cc:3489
    0x5559def70d0c execute
            /var/tmp/portage/sys-devel/gcc-14.0.0_pre20230806/work/gcc-14-20230806/gcc/recog.cc:4413

Use INTVAL (imm_op) instead of XINT (imm_op, 0).

gcc/ChangeLog:

        PR target/110926
        * config/i386/i386-protos.h (vpternlog_redundant_operand_mask):
        Adjust parameter type.
        * config/i386/i386.cc (vpternlog_redundant_operand_mask): Use
        INTVAL instead of XINT, also adjust parameter type from rtx* to
        rtx since the function only needs operands[4] in vpternlog
        pattern.
        (substitute_vpternlog_operands): Pass operands[4] instead of
        operands to vpternlog_redundant_operand_mask.
        * config/i386/sse.md: Ditto.
2023-08-07  Improve -fopt-info-vec for basic-block vectorization  (Richard Biener, 1 file, -0/+8)

We currently dump notes like

    flow_lam.f:65:72: optimized: basic block part vectorized using 32 byte vectors
    flow_lam.f:65:72: optimized: basic block part vectorized using 32 byte vectors
    flow_lam.f:65:72: optimized: basic block part vectorized using 32 byte vectors
    flow_lam.f:65:72: optimized: basic block part vectorized using 32 byte vectors

repeating the same location for multiple instances because we clobber vect_location during BB vectorization.  The following avoids this, improving things to

    flow_lam.f:15:72: optimized: basic block part vectorized using 32 byte vectors
    flow_lam.f:16:72: optimized: basic block part vectorized using 32 byte vectors
    flow_lam.f:17:72: optimized: basic block part vectorized using 32 byte vectors
    flow_lam.f:18:72: optimized: basic block part vectorized using 32 byte vectors
    ...

        * tree-vect-slp.cc (vect_slp_region): Save/restore vect_location
        around dumping code.
2023-08-07  i386: Clear upper bits of XMM register for V4HFmode/V2HFmode operations [PR110762]  (liuhongt, 3 files, -29/+177)

Similar to r14-2786-gade30fad6669e5, the patch is for V4HF/V2HFmode.

gcc/ChangeLog:

        PR target/110762
        * config/i386/mmx.md (<insn><mode>3): Changed from define_insn
        to define_expand and break into ..
        (<insn>v4hf3): .. this.
        (divv4hf3): .. this.
        (<insn>v2hf3): .. this.
        (divv2hf3): .. this.
        (movd_v2hf_to_sse): New define_expand.
        (movq_<mode>_to_sse): Extend to V4HFmode.
        (mmxdoublevecmode): Ditto.
        (V2FI_V4HF): New mode iterator.
        * config/i386/sse.md (*vec_concatv4sf): Extend to handle V8HF
        by using mode iterator V4SF_V8HF, renamed to ..
        (*vec_concat<mode>): .. this.
        (*vec_concatv4sf_0): Extend to handle V8HF by using mode
        iterator V4SF_V8HF, renamed to ..
        (*vec_concat<mode>_0): .. this.
        (*vec_concatv8hf_movss): New define_insn.
        (V4SF_V8HF): New mode iterator.

gcc/testsuite/ChangeLog:

        * gcc.target/i386/pr110762-v4hf.c: New test.
2023-08-07  ada: Refactor multiple returns  (Sheri Bernstein, 1 file, -15/+15)

Replace multiple returns by a single return statement with a conditional expression.  This is more readable and maintainable, and also conformant with a Highly Recommended design principle of ISO 26262-6.

gcc/ada/

        * libgnat/s-parame__qnx.adb: Refactor multiple returns.
2023-08-07  ada: Extend precondition of Interfaces.C.String.Value with Length  (Piotr Trojanek, 1 file, -4/+4)

The existing precondition guarded against exception Dereference_Error, but not against Constraint_Error.  The RM rule B.3.1(36/3) only mentions Constraint_Error for the Value function which returns char_array, but the one which returns String has the same restriction, because it is equivalent to calling the variant which returns char_array and then converting the result.

gcc/ada/

        * libgnat/i-cstrin.ads (Value): Extend preconditions; adapt
        comment for the package.
2023-08-07  ada: Crash in GNATprove due to wrong detection of inlining  (Yannick Moy, 1 file, -8/+10)

When a function is called in a predicate, it was not properly detected as not always inlined in GNATprove mode, which led to crashes later during analysis.  Fixed now.

gcc/ada/

        * sem_res.adb (Resolve_Call): Always call Cannot_Inline so that
        the subprogram called is marked as not always inlined.
2023-08-07  ada: Spurious error on class-wide preconditions  (Javier Miranda, 1 file, -0/+12)

The compiler reports a spurious error when a class-wide precondition expression has a class-wide type conversion.

gcc/ada/

        * sem_res.adb (Resolve_Type_Conversion): Do not warn on
        conversion to class-wide type on internally built helpers of
        class-wide preconditions.
2023-08-07  tree-optimization/110897 - Fix missed vectorization of shift on both RISC-V and aarch64  (Juzhe-Zhong, 2 files, -3/+4)

Consider the following case:

    #include <stdint.h>

    #define TEST2_TYPE(TYPE) \
    __attribute__((noipa)) \
    void vshiftr_##TYPE (TYPE *__restrict dst, TYPE *__restrict a, TYPE *__restrict b, int n) \
    { \
      for (int i = 0; i < n; i++) \
        dst[i] = (a[i]) >> b[i]; \
    }

    #define TEST_ALL() \
     TEST2_TYPE(uint8_t) \
     TEST2_TYPE(uint16_t) \
     TEST2_TYPE(uint32_t) \
     TEST2_TYPE(uint64_t) \

    TEST_ALL()

Both RISC-V and aarch64 on trunk GCC fail to vectorize uint8_t/uint16_t with the following missed report:

    <source>:17:1: missed: couldn't vectorize loop
    <source>:17:1: missed: not vectorized: relevant stmt not supported: patt_46 = MIN_EXPR <_6, 7>;
    <source>:17:1: missed: couldn't vectorize loop
    <source>:17:1: missed: not vectorized: relevant stmt not supported: patt_47 = MIN_EXPR <_7, 15>;
    Compiler returned: 0

GCC 13.1 can vectorize both, see: https://godbolt.org/z/6vaMK5M1o

Bootstrap and regression on X86 passed.

OK for trunk?

gcc/ChangeLog:

        * tree-vect-patterns.cc (vect_recog_over_widening_pattern): Add
        op vectype.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/rvv/autovec/binop/narrow-1.c: Adapt testcase.
2023-08-07  x86: drop redundant "prefix_data16" attributes  (Jan Beulich, 2 files, -44/+0)

The attribute defaults to 1 for TI-mode insns of type sselog, sselog1, sseiadd, sseimul, and sseishft.  In *<code>v8hi3 [smaxmin] and *<code>v16qi3 [umaxmin] also drop the similarly stray "prefix_extra" on this occasion.  These two max/min flavors are encoded in 0f space.

gcc/

        * config/i386/mmx.md (*mmx_pinsrd): Drop "prefix_data16".
        (*mmx_pinsrb): Likewise.
        (*mmx_pextrb): Likewise.
        (*mmx_pextrb_zext): Likewise.
        (mmx_pshufbv8qi3): Likewise.
        (mmx_pshufbv4qi3): Likewise.
        (mmx_pswapdv2si2): Likewise.
        (*pinsrb): Likewise.
        (*pextrb): Likewise.
        (*pextrb_zext): Likewise.
        * config/i386/sse.md (*sse4_1_mulv2siv2di3<mask_name>): Likewise.
        (*sse2_eq<mode>3): Likewise.
        (*sse2_gt<mode>3): Likewise.
        (<sse2p4_1>_pinsr<ssemodesuffix>): Likewise.
        (*vec_extract<mode>): Likewise.
        (*vec_extract<PEXTR_MODE12:mode>_zext): Likewise.
        (*vec_extractv16qi_zext): Likewise.
        (ssse3_ph<plusminus_mnemonic>wv8hi3): Likewise.
        (ssse3_pmaddubsw128): Likewise.
        (*<ssse3_avx2>_pmulhrsw<mode>3<mask_name>): Likewise.
        (<ssse3_avx2>_pshufb<mode>3<mask_name>): Likewise.
        (<ssse3_avx2>_psign<mode>3): Likewise.
        (<ssse3_avx2>_palignr<mode>): Likewise.
        (*abs<mode>2): Likewise.
        (sse4_2_pcmpestr): Likewise.
        (sse4_2_pcmpestri): Likewise.
        (sse4_2_pcmpestrm): Likewise.
        (sse4_2_pcmpestr_cconly): Likewise.
        (sse4_2_pcmpistr): Likewise.
        (sse4_2_pcmpistri): Likewise.
        (sse4_2_pcmpistrm): Likewise.
        (sse4_2_pcmpistr_cconly): Likewise.
        (vgf2p8affineinvqb_<mode><mask_name>): Likewise.
        (vgf2p8affineqb_<mode><mask_name>): Likewise.
        (vgf2p8mulb_<mode><mask_name>): Likewise.
        (*<code>v8hi3 [smaxmin]): Drop "prefix_data16" and
        "prefix_extra".
        (*<code>v16qi3 [umaxmin]): Likewise.
2023-08-07  x86: correct "length_immediate" in a few cases  (Jan Beulich, 2 files, -3/+3)

When first added explicitly in 3ddffba914b2 ("i386.md (sse4_1_round<mode>2): Add avx512f alternative"), "*" should not have been used for the pre-existing alternative.  The attribute was plain missing.  Subsequent changes adding more alternatives then generously extended the bogus pattern.  Apparently something similar happened to the two mmx_pblendvb_* insns.

gcc/

        * config/i386/i386.md (sse4_1_round<mode>2): Make
        "length_immediate" uniformly 1.
        * config/i386/mmx.md (mmx_pblendvb_v8qi): Likewise.
        (mmx_pblendvb_<mode>): Likewise.
2023-08-07  x86: add missing "prefix" attribute to VF{,C}MULC  (Jan Beulich, 1 file, -0/+2)

gcc/

        * config/i386/sse.md
        (<avx512>_<complexopname>_<mode><maskc_name><round_name>): Add
        "prefix" attribute.
        (avx512fp16_<complexopname>sh_v8hf<mask_scalarc_name><round_scalarcz_name>):
        Likewise.
2023-08-07  x86: add (adjust) XOP insn attributes  (Jan Beulich, 1 file, -15/+50)

Many were lacking "prefix" and "prefix_extra", some had a bogus value of 2 for "prefix_extra" (presumably inherited from their SSE5 counterparts, which are long gone) and a meaningless "prefix_data16" one.  Where missing, "mode" attributes are also added.  (Note that "sse4arg" and "ssemuladd" ones don't need further adjustment in this regard.)

gcc/

        * config/i386/sse.md (xop_phadd<u>bw): Add "prefix",
        "prefix_extra", and "mode" attributes.
        (xop_phadd<u>bd): Likewise.
        (xop_phadd<u>bq): Likewise.
        (xop_phadd<u>wd): Likewise.
        (xop_phadd<u>wq): Likewise.
        (xop_phadd<u>dq): Likewise.
        (xop_phsubbw): Likewise.
        (xop_phsubwd): Likewise.
        (xop_phsubdq): Likewise.
        (xop_rotl<mode>3): Add "prefix" and "prefix_extra" attributes.
        (xop_rotr<mode>3): Likewise.
        (xop_frcz<mode>2): Likewise.
        (*xop_vmfrcz<mode>2): Likewise.
        (xop_vrotl<mode>3): Add "prefix" attribute.  Change
        "prefix_extra" to 1.
        (xop_sha<mode>3): Likewise.
        (xop_shl<mode>3): Likewise.
2023-08-07  x86: drop stray "prefix_extra"  (Jan Beulich, 1 file, -29/+2)

While the attribute is relevant for legacy- and VEX-encoded insns, it is of no relevance for EVEX-encoded ones.  While there, add the missing "length_immediate" in <mask_codefor>avx512dq_broadcast<mode><mask_name>_1.

gcc/

        * config/i386/sse.md
        (*<avx512>_eq<mode>3<mask_scalar_merge_name>_1): Drop
        "prefix_extra".
        (avx512dq_vextract<shuffletype>64x2_1_mask): Likewise.
        (*avx512dq_vextract<shuffletype>64x2_1): Likewise.
        (avx512f_vextract<shuffletype>32x4_1_mask): Likewise.
        (*avx512f_vextract<shuffletype>32x4_1): Likewise.
        (vec_extract_lo_<mode>_mask [AVX512 forms]): Likewise.
        (vec_extract_lo_<mode> [AVX512 forms]): Likewise.
        (vec_extract_hi_<mode>_mask [AVX512 forms]): Likewise.
        (vec_extract_hi_<mode> [AVX512 forms]): Likewise.
        (@vec_extract_lo_<mode> [AVX512 forms]): Likewise.
        (@vec_extract_hi_<mode> [AVX512 forms]): Likewise.
        (vec_extract_lo_v64qi): Likewise.
        (vec_extract_hi_v64qi): Likewise.
        (*vec_widen_umult_even_v16si<mask_name>): Likewise.
        (*vec_widen_smult_even_v16si<mask_name>): Likewise.
        (*avx512f_<code><mode>3<mask_name>): Likewise.
        (*vec_extractv4ti): Likewise.
        (avx512bw_<code>v32qiv32hi2<mask_name>): Likewise.
        (<mask_codefor>avx512dq_broadcast<mode><mask_name>_1): Likewise.
        Add "length_immediate".
2023-08-07  x86: replace/correct bogus "prefix_extra"  (Jan Beulich, 3 files, -14/+19)

In the rdrand and rdseed cases "prefix_0f" is meant instead.  For mmx_floatv2siv2sf2 1 is correct only for the first alternative.  For the integer min/max cases 1 uniformly applies to legacy and VEX encodings (the UB and SW variants are dealt with separately anyway).  Same for {,V}MOVNTDQA.

Unlike {,V}PEXTRW, which has two encoding forms, {,V}PINSRW only has a single form in 0f space.  (In *vec_extract<mode> note that the dropped part of the condition also referenced non-existing alternative 2.)

Of the integer compare insns, only the 64-bit element forms are encoded in 0f38 space.

gcc/

        * config/i386/i386.md (@rdrand<mode>): Add "prefix_0f".  Drop
        "prefix_extra".
        (@rdseed<mode>): Likewise.
        * config/i386/mmx.md (<code><mode>3 [smaxmin and umaxmin cases]):
        Adjust "prefix_extra".
        * config/i386/sse.md (@vec_set<mode>_0): Likewise.
        (*sse4_1_<code><mode>3<mask_name>): Likewise.
        (*avx2_eq<mode>3): Likewise.
        (avx2_gt<mode>3): Likewise.
        (<sse2p4_1>_pinsr<ssemodesuffix>): Likewise.
        (*vec_extract<mode>): Likewise.
        (<vi8_sse4_1_avx2_avx512>_movntdqa): Likewise.
2023-08-07  x86: "prefix_extra" can't really be "2"  (Jan Beulich, 1 file, -3/+6)

In the three remaining instances separate "prefix_0f" and "prefix_rep" are what is wanted instead.

gcc/

        * config/i386/i386.md (rd<fsgs>base<mode>): Add "prefix_0f" and
        "prefix_rep".  Drop "prefix_extra".
        (wr<fsgs>base<mode>): Likewise.
        (ptwrite<mode>): Likewise.
2023-08-07  x86: "ssemuladd" adjustments  (Jan Beulich, 2 files, -12/+66)

They're all VEX3- (also covering XOP) or EVEX-encoded.  Express that in the default calculation of "prefix".  FMA4 insns also all have a 1-byte immediate operand.

Where the default calculation is not sufficient / applicable, add explicit "prefix" attributes.  While there, also add a "mode" attribute to fma_<complexpairopname>_<mode>_pair.

gcc/

        * config/i386/i386.md (isa): Move up.
        (length_immediate): Handle "fma4".
        (prefix): Handle "ssemuladd".
        * config/i386/sse.md (*fma_fmadd_<mode>): Add "prefix" attribute.
        (<sd_mask_codefor>fma_fmadd_<mode><sd_maskz_name><round_name>):
        Likewise.
        (<avx512>_fmadd_<mode>_mask<round_name>): Likewise.
        (<avx512>_fmadd_<mode>_mask3<round_name>): Likewise.
        (<sd_mask_codefor>fma_fmsub_<mode><sd_maskz_name><round_name>):
        Likewise.
        (<avx512>_fmsub_<mode>_mask<round_name>): Likewise.
        (<avx512>_fmsub_<mode>_mask3<round_name>): Likewise.
        (*fma_fnmadd_<mode>): Likewise.
        (<sd_mask_codefor>fma_fnmadd_<mode><sd_maskz_name><round_name>):
        Likewise.
        (<avx512>_fnmadd_<mode>_mask<round_name>): Likewise.
        (<avx512>_fnmadd_<mode>_mask3<round_name>): Likewise.
        (<sd_mask_codefor>fma_fnmsub_<mode><sd_maskz_name><round_name>):
        Likewise.
        (<avx512>_fnmsub_<mode>_mask<round_name>): Likewise.
        (<avx512>_fnmsub_<mode>_mask3<round_name>): Likewise.
        (<sd_mask_codefor>fma_fmaddsub_<mode><sd_maskz_name><round_name>):
        Likewise.
        (<avx512>_fmaddsub_<mode>_mask<round_name>): Likewise.
        (<avx512>_fmaddsub_<mode>_mask3<round_name>): Likewise.
        (<sd_mask_codefor>fma_fmsubadd_<mode><sd_maskz_name><round_name>):
        Likewise.
        (<avx512>_fmsubadd_<mode>_mask<round_name>): Likewise.
        (<avx512>_fmsubadd_<mode>_mask3<round_name>): Likewise.
        (*fmai_fmadd_<mode>): Likewise.
        (*fmai_fmsub_<mode>): Likewise.
        (*fmai_fnmadd_<mode><round_name>): Likewise.
        (*fmai_fnmsub_<mode><round_name>): Likewise.
        (avx512f_vmfmadd_<mode>_mask<round_name>): Likewise.
        (avx512f_vmfmadd_<mode>_mask3<round_name>): Likewise.
        (avx512f_vmfmadd_<mode>_maskz_1<round_name>): Likewise.
        (*avx512f_vmfmsub_<mode>_mask<round_name>): Likewise.
        (avx512f_vmfmsub_<mode>_mask3<round_name>): Likewise.
        (*avx512f_vmfmsub_<mode>_maskz_1<round_name>): Likewise.
        (avx512f_vmfnmadd_<mode>_mask<round_name>): Likewise.
        (avx512f_vmfnmadd_<mode>_mask3<round_name>): Likewise.
        (avx512f_vmfnmadd_<mode>_maskz_1<round_name>): Likewise.
        (*avx512f_vmfnmsub_<mode>_mask<round_name>): Likewise.
        (*avx512f_vmfnmsub_<mode>_mask3<round_name>): Likewise.
        (*avx512f_vmfnmsub_<mode>_maskz_1<round_name>): Likewise.
        (*fma4i_vmfmadd_<mode>): Likewise.
        (*fma4i_vmfmsub_<mode>): Likewise.
        (*fma4i_vmfnmadd_<mode>): Likewise.
        (*fma4i_vmfnmsub_<mode>): Likewise.
        (fma_<complexopname>_<mode><sdc_maskz_name><round_name>):
        Likewise.
        (<avx512>_<complexopname>_<mode>_mask<round_name>): Likewise.
        (avx512fp16_fma_<complexopname>sh_v8hf<mask_scalarcz_name><round_scalarcz_name>):
        Likewise.
        (avx512fp16_<complexopname>sh_v8hf_mask<round_name>): Likewise.
        (xop_p<macs><ssemodesuffix><ssemodesuffix>): Likewise.
        (xop_p<macs>dql): Likewise.
        (xop_p<macs>dqh): Likewise.
        (xop_p<macs>wd): Likewise.
        (xop_p<madcs>wd): Likewise.
        (fma_<complexpairopname>_<mode>_pair): Likewise.  Add "mode"
        attribute.
2023-08-07  x86: "sse4arg" adjustments  (Jan Beulich, 3 files, -40/+17)

Record common properties in other attributes' default calculations: there's always a 1-byte immediate, and they're always encoded in a VEX3-like manner (note that "prefix_extra" already evaluates to 1 in this case).  Then drop the now (or already previously) redundant explicit attributes, adding "mode" ones where they were missing.

Furthermore use "sse4arg" consistently for all VPCOM* insns; so far signed comparisons did use it, while unsigned ones used "ssecmp".  Note that while (not counting the explicit or implicit immediate operand) they really only have 3 operands, the operator is also counted in those patterns.  That's relevant for establishing the "memory" attribute's value, and at the same time benign when there are only register operands.

Note that despite also having 4 operands, multiply-add insns aren't affected by this change, as they use "ssemuladd" for "type".

gcc/

        * config/i386/i386.md (length_immediate): Handle "sse4arg".
        (prefix): Likewise.
        (*xop_pcmov_<mode>): Add "mode" attribute.
        * config/i386/mmx.md (*xop_maskcmp<mode>3): Drop "prefix_data16",
        "prefix_rep", "prefix_extra", and "length_immediate" attributes.
        (*xop_maskcmp_uns<mode>3): Likewise.  Switch "type" to "sse4arg".
        (*xop_pcmov_<mode>): Add "mode" attribute.
        * config/i386/sse.md (xop_pcmov_<mode><avxsizesuffix>): Add
        "mode" attribute.
        (xop_maskcmp<mode>3): Drop "prefix_data16", "prefix_rep",
        "prefix_extra", and "length_immediate" attributes.
        (xop_maskcmp_uns<mode>3): Likewise.  Switch "type" to "sse4arg".
        (xop_maskcmp_uns2<mode>3): Drop "prefix_data16", "prefix_extra",
        and "length_immediate" attributes.  Switch "type" to "sse4arg".
        (xop_pcom_tf<mode>3): Likewise.
        (xop_vpermil2<mode>3): Drop "length_immediate" attribute.
2023-08-07  x86: "prefix_extra" tidying  (Jan Beulich, 1 file, -5/+3)

Drop SSE5 leftovers from both its comment and its default calculation.  A value of 2 simply cannot occur anymore.  Instead extend the comment to mention the use of the attribute in "length_vex", clarifying why "prefix_extra" can actually be meaningful on VEX-encoded insns despite those not having any real prefixes except possibly segment overrides.

gcc/

        * config/i386/i386.md (prefix_extra): Correct comment.  Fold
        cases yielding 2 into ones yielding 1.
2023-08-07  libsanitizer: Fix SPARC stacktraces  (Rainer Orth, 2 files, -12/+0)

As detailed in LLVM Issue #57624 (https://github.com/llvm/llvm-project/issues/57624), a patch to sanitizer_internal_defs.h broke SPARC stacktraces in the sanitizers.  The issue has now been fixed upstream (https://reviews.llvm.org/D156504) and I'd like to cherry-pick that patch.

Bootstrapped without regressions on sparc-sun-solaris2.11.

2023-07-27  Rainer Orth  <ro@CeBiTec.Uni-Bielefeld.DE>

libsanitizer:

        * sanitizer_common/sanitizer_stacktrace_sparc.cpp,
        sanitizer_common/sanitizer_unwind_linux_libcdep.cpp: Cherry-pick
        llvm-project revision 679c076ae446af81eba81ce9b94203a273d4b88a.
2023-08-07Fix profile update after versioning ifconverted loopJan Hubicka4-5/+28
If a loop is ifconverted and later versioned by the vectorizer, the vectorizer will reuse the scalar loop produced by ifconvert. Curiously enough it does not seem to do so for versions produced by loop distribution, even though for loop distribution this matters (since both ldist versions survive to final code), while after ifcvt it does not (since we remove the non-vectorized path). This patch fixes the associated profile update. Here it is necessary to scale both arms of the conditional according to the runtime checks inserted. We got the loop body partly right, but not the preheader block and the block after the exit. The first is particularly bad since it changes loop iteration estimates. So we now turn 4 original loops:

loop 1: iterations by profile: 473.497707 (reliable) entry count:84821 (precise, freq 0.9979)
loop 2: iterations by profile: 100.000000 (reliable) entry count:39848881 (precise, freq 468.8104)
loop 3: iterations by profile: 100.000000 (reliable) entry count:39848881 (precise, freq 468.8104)
loop 4: iterations by profile: 100.999596 (reliable) entry count:84167 (precise, freq 0.9902)

into the following loops:

iterations by profile: 5.312499 (unreliable, maybe flat) entry count:12742188 (guessed, freq 149.9081)
  vectorized and split loop 1, peeled
iterations by profile: 0.009496 (unreliable, maybe flat) entry count:374798 (guessed, freq 4.4094)
  split loop 1 (last iteration), peeled
iterations by profile: 100.000008 (unreliable) entry count:3945039 (guessed, freq 46.4122)
  scalar version of loop 1
iterations by profile: 100.000007 (unreliable) entry count:7101070 (guessed, freq 83.5420)
  redundant scalar version of loop 1 which we could eliminate if the vectorizer understood ldist
iterations by profile: 100.000000 (unreliable) entry count:35505353 (guessed, freq 417.7100)
  unvectorized loop 2
iterations by profile: 5.312500 (unreliable) entry count:25563855 (guessed, freq 300.7512)
  vectorized loop 2, not peeled (hits max-peel-insns)
iterations by profile: 100.000007 (unreliable) entry count:7101070 (guessed, freq 83.5420)
  unvectorized loop 3
iterations by profile: 5.312500 (unreliable) entry count:25563855 (guessed, freq 300.7512)
  vectorized loop 3, not peeled (hits max-peel-insns)
iterations by profile: 473.497707 (reliable) entry count:84821 (precise, freq 0.9979)
  loop 1
iterations by profile: 100.999596 (reliable) entry count:84167 (precise, freq 0.9902)
  loop 4

With this change we are at 0 profile errors on the hmmer benchmark:

Pass dump id |dynamic mismatch     |overall                             |
             |in count             |size            |time               |
172t ch_vect | 0                   | 996            | 385812023346      |
173t ifcvt   | 71010686  +71010686 | 1021     +2.5% | 468361969416 +21.4%|
174t vect    | 210830784 +139820098| 1497    +46.6% | 216073467874 -53.9%|
175t dce     | 210830784           | 1387     -7.3% | 205273170281  -5.0%|
176t pcom    | 210830784           | 1387           | 201722634966  -1.7%|
177t cunroll | 0        -210830784 | 1443     +4.0% | 180441501289 -10.5%|
182t ivopts  | 0                   | 1385     -4.0% | 136412345683 -24.4%|
183t lim     | 0                   | 1389     +0.3% | 135093950836  -1.0%|
192t reassoc | 0                   | 1381     -0.6% | 134778347700  -0.2%|
193t slsr    | 0                   | 1380     -0.1% | 134738100330  -0.0%|
195t tracer  | 0                   | 1521    +10.2% | 134738179146  +0.0%|
196t fre     | 2680654    +2680654 | 1489     -2.1% | 134659672725  -0.1%|
198t dom     | 5361308    +2680654 | 1473     -1.1% | 134449553658  -0.2%|
201t vrp     | 5361308             | 1474     +0.1% | 134489004050  +0.0%|
202t ccp     | 5361308             | 1472     -0.1% | 134440752274  -0.0%|
204t dse     | 5361308             | 1444     -1.9% | 133802300525  -0.5%|
206t forwprop| 5361308             | 1433     -0.8% | 133542828370  -0.2%|
207t sink    | 5361308             | 1431     -0.1% | 133542658728  -0.0%|
211t store-me| 5361308             | 1430     -0.1% | 133542573728  -0.0%|
212t cddce   | 5361308             | 1428     -0.1% | 133541776728  -0.0%|
258r expand  | 5361308             |----------------|--------------------|
260r into_cfg| 5361308             | 9334     -0.8% | 885820707913  -0.6%|
261r jump    | 5361308             | 9330     -0.0% | 885820367913  -0.0%|
265r fwprop1 | 5361308             | 9206     -1.3% | 876756504385  -1.0%|
267r rtl pre | 5361308             | 9210     +0.0% | 876914305953  +0.0%|
269r cprop   | 5361308             | 9202     -0.1% | 876756165101  -0.0%|
271r cse_loca| 5361308             | 9198     -0.0% | 876727760821  -0.0%|
272r ce1     | 5361308             | 9126     -0.8% | 875726815885  -0.1%|
276r loop2_in| 5361308             | 9167     +0.4% | 873573110570  -0.2%|
282r cprop   | 5361308             | 9095     -0.8% | 871937317262  -0.2%|
284r cse2    | 5361308             | 9091     -0.0% | 871936977978  -0.0%|
285r dse1    | 5361308             | 9067     -0.3% | 871437031602  -0.1%|
290r combine | 5361308             | 9071     +0.0% | 869206278202  -0.3%|
292r stv     | 5361308             | 17157   +89.1% | 2111071925708+142.9%|
295r bbpart  | 5361308             | 17161    +0.0% | 2111071925708      |
296r outof_cf| 5361308             | 17233    +0.4% | 2111655121000 +0.0%|
297r split1  | 5361308             | 17245    +0.1% | 2111656138852 +0.0%|
306r ira     | 5361308             | 19189   +11.3% | 2136098398308 +1.2%|
307r reload  | 5361308             | 12101   -36.9% | 981091222830 -54.1%|
309r postrelo| 5361308             | 12019    -0.7% | 978750345475  -0.2%|
310r gcse2   | 5361308             | 12027    +0.1% | 978329108320  -0.0%|
311r split2  | 5361308             | 12023    -0.0% | 978507631352  +0.0%|
312r ree     | 5361308             | 12027    +0.0% | 978505414244  -0.0%|
313r cmpelim | 5361308             | 11979    -0.4% | 977531601988  -0.1%|
314r pro_and_| 5361308             | 12091    +0.9% | 977541801988  +0.0%|
315r dse2    | 5361308             | 12091          | 977541801988       |
316r csa     | 5361308             | 12087    -0.0% | 977541461988  -0.0%|
317r jump2   | 5361308             | 12039    -0.4% | 977683176572  +0.0%|
318r compgoto| 5361308             | 12039          | 977683176572       |
320r peephole| 5361308             | 12047    +0.1% | 977362727612  -0.0%|
321r ce3     | 5361308             | 12047          | 977362727612       |
323r cprop_ha| 5361308             | 11907    -1.2% | 968751076676  -0.9%|
324r rtl_dce | 5361308             | 11903    -0.0% | 968593274820  -0.0%|
325r bbro    | 5361308             | 11883    -0.2% | 967964046644  -0.1%|

Bootstrapped/regtested x86_64-linux, plan to commit it tomorrow if there are no complaints.

gcc/ChangeLog:

	PR tree-optimization/106293
	* tree-vect-loop-manip.cc (vect_loop_versioning): Fix profile update.
	* tree-vect-loop.cc (vect_transform_loop): Likewise.

gcc/testsuite/ChangeLog:

	PR tree-optimization/106293
	* gcc.dg/vect/vect-cond-11.c: Check profile consistency.
	* gcc.dg/vect/vect-widen-mult-extern-1.c: Check profile consistency.
2023-08-07MATCH: Extend min_value/max_value to pointer typesAndrew Pinski13-2/+234
Since we already had the infrastructure to optimize `(x == 0) && (x > y)` to false for integer types, this extends the same to pointer types, as indirectly requested by PR 96695.

OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.

gcc/ChangeLog:

	PR tree-optimization/96695
	* match.pd (min_value, max_value): Extend to pointer types too.

gcc/testsuite/ChangeLog:

	PR tree-optimization/96695
	* gcc.dg/pr96695-1.c: New test.
	* gcc.dg/pr96695-10.c: New test.
	* gcc.dg/pr96695-11.c: New test.
	* gcc.dg/pr96695-12.c: New test.
	* gcc.dg/pr96695-2.c: New test.
	* gcc.dg/pr96695-3.c: New test.
	* gcc.dg/pr96695-4.c: New test.
	* gcc.dg/pr96695-5.c: New test.
	* gcc.dg/pr96695-6.c: New test.
	* gcc.dg/pr96695-7.c: New test.
	* gcc.dg/pr96695-8.c: New test.
	* gcc.dg/pr96695-9.c: New test.
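The fold being extended can be sketched in C; the function below is a hypothetical example of the shape now recognized for pointers (the function name is illustrative, not from the patch):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative only: NULL is the minimum pointer value, so a pointer
   cannot be both NULL and strictly greater than another pointer.
   The min_value fold lets GCC reduce this expression to 0.  */
static int min_value_fold (const char *p, const char *q)
{
  return p == NULL && p > q;
}
```

With a non-null `p` the first operand is already false, and with a null `p` the second cannot hold, so the function always returns 0.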
2023-08-07Daily bump.GCC Administrator4-1/+43
2023-08-06[Committed] Avoid FAIL of gcc.target/i386/pr110792.cRoger Sayle1-1/+0
My apologies (again), I managed to mess up the 64-bit version of the test case for PR 110792. Unlike the 32-bit version, the 64-bit case contains exactly the same load instructions, just in a different order, making the correct and incorrect behaviours impossible to distinguish with a scan-assembler-not. Somewhere between checking that this test failed in a clean tree without the patch, and getting the escaping correct, I'd failed to notice that this also FAILs in the patched tree. Doh! Instead of removing the test completely, I've left it as a compilation test. The original fix is tested by the 32-bit test case. Committed to mainline as obvious. Sorry for the inconvenience.

2023-08-06  Roger Sayle  <roger@nextmovesoftware.com>

gcc/testsuite/ChangeLog
	PR target/110792
	* gcc.target/i386/pr110792.c: Remove dg-final scan-assembler-not.
2023-08-06Add builtin_expect to predict that CPU supports cpuid to cpuid.hJan Hubicka1-2/+2
This is needed to avoid an impossible threading update in a vectorizer testcase, but should also reflect reality on most CPUs we care about.

gcc/ChangeLog:

	* config/i386/cpuid.h (__get_cpuid_count, __get_cpuid_max): Add
	__builtin_expect that CPU likely supports cpuid.
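A minimal sketch of the idea (not the actual cpuid.h code): __builtin_expect marks the "cpuid is supported" branch as likely, so the compiler lays it out as the fall-through path. The probe function and the return value are hypothetical stand-ins:

```c
/* Hypothetical stand-in for the probe in __get_cpuid_max; the real
   header tests whether the EFLAGS ID bit can be toggled.  */
static unsigned probe_cpuid_supported (void)
{
  return 1;
}

static unsigned get_max_leaf (void)
{
  /* Mark the supported case as the likely one, as the patch does.  */
  if (__builtin_expect (probe_cpuid_supported (), 1))
    return 0x16;  /* hypothetical maximum leaf */
  return 0;
}
```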
2023-08-06Disable loop distribution for loops with estimated iterations 0Jan Hubicka1-2/+13
This prevents useless loop distribution produced in hmmer. With FDO we now correctly work out that the loop created for the last iteration is not going to iterate, however loop distribution still produces a versioned loop that has no chance to survive the loop vectorizer, since we only keep distributed loops when loop vectorization succeeds, and that requires the number of (header) iterations to exceed the vectorization factor.

gcc/ChangeLog:

	* tree-loop-distribution.cc (loop_distribution::execute): Disable
	distribution for loops with estimated iterations 0.
2023-08-06Fix profile update after peeled epiloguesJan Hubicka16-2/+41
Epilogue peeling expects the scalar loop to have the same number of executions as the vector loop, which is true at the beginning of vectorization. However, if the epilogues are vectorized, this is no longer the case. In this situation the loop preheader is replaced by new guard code with a correct profile, but the loop body is left unscaled. This leads to a loop that exits more often than it is entered. This patch adds logic to scale the frequencies down and also to fix the profile of the original preheader where necessary.

Bootstrapped/regtested x86_64-linux, committed.

gcc/ChangeLog:

	* tree-vect-loop-manip.cc (vect_do_peeling): Fix profile update of
	peeled epilogues.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/vect-bitfield-read-1.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-read-2.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-read-3.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-read-4.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-read-5.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-read-6.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-read-7.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-write-1.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-write-2.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-write-3.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-write-4.c: Check profile consistency.
	* gcc.dg/vect/vect-bitfield-write-5.c: Check profile consistency.
	* gcc.dg/vect/vect-epilogues-2.c: Check profile consistency.
	* gcc.dg/vect/vect-epilogues.c: Check profile consistency.
	* gcc.dg/vect/vect-mask-store-move-1.c: Check profile consistency.
2023-08-06libstdc++: [_GLIBCXX_INLINE_VERSION] Add __cxa_call_terminate symbol exportFrançois Dumont1-0/+1
libstdc++-v3/ChangeLog:

	* config/abi/pre/gnu-versioned-namespace.ver: Add
	__cxa_call_terminate symbol export.
2023-08-06Daily bump.GCC Administrator6-1/+48
2023-08-05PR modula2/110779 SysClock can not read the clockGaius Mulley12-113/+812
This patch completes the implementation of the ISO module SysClock.mod. Three new testcases are provided. wrapclock.{cc,def} are new support files providing access to clock_settime, clock_gettime and glibc timezone variables.

gcc/m2/ChangeLog:

	PR modula2/110779
	* gm2-libs-iso/SysClock.mod: Re-implement using wrapclock.
	* gm2-libs-iso/wrapclock.def: New file.

libgm2/ChangeLog:

	PR modula2/110779
	* config.h.in: Regenerate.
	* configure: Regenerate.
	* configure.ac (GM2_CHECK_LIB): Check for clock_gettime and
	clock_settime.
	* libm2iso/Makefile.am (M2DEFS): Add wrapclock.def.
	* libm2iso/Makefile.in: Regenerate.
	* libm2iso/wraptime.cc: Replace HAVE_TIMEVAL with
	HAVE_STRUCT_TIMEVAL.
	* libm2iso/wrapclock.cc: New file.

gcc/testsuite/ChangeLog:

	PR modula2/110779
	* gm2/iso/run/pass/m2date.mod: New test.
	* gm2/iso/run/pass/testclock.mod: New test.
	* gm2/iso/run/pass/testclock2.mod: New test.

Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
2023-08-05c: Less warnings for parameters declared as arrays [PR98536]Martin Uecker3-32/+3
To avoid false positives, tune the warnings for parameters declared as arrays with size expressions. Do not warn when more bounds are specified in the declaration than before.

	PR c/98536

gcc/c-family/:
	* c-warn.cc (warn_parm_array_mismatch): Do not warn if more
	bounds are specified.

gcc/testsuite:
	* gcc.dg/Wvla-parameter-4.c: Adapt test.
	* gcc.dg/attr-access-2.c: Adapt test.
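The relaxed rule can be illustrated with a redeclaration pair (a hypothetical example, not from the testsuite): the second declaration specifies a bound the first omitted, which no longer triggers a mismatch warning:

```c
/* First declaration leaves the array bound unspecified.  */
void fill (int n, char buf[]);

/* Redeclaration specifies more bounds than before; after this patch
   that is no longer warned about.  */
void fill (int n, char buf[n])
{
  for (int i = 0; i < n; i++)
    buf[i] = (char) i;
}
```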
2023-08-05c: _Generic should not warn in non-active branches [PR68193,PR97100,PR110703]Martin Uecker2-1/+26
To avoid false diagnostics, use c_inhibit_evaluation_warnings when a generic association is known not to match during parsing. We may still generate false positives if the default branch comes earlier than a specific association that matches.

	PR c/68193
	PR c/97100
	PR c/110703

gcc/c/:
	* c-parser.cc (c_parser_generic_selection): Inhibit evaluation
	warnings for branches that are known not to be taken during
	parsing.

gcc/testsuite/ChangeLog:
	* gcc.dg/pr68193.c: New test.
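A sketch of the situation (hypothetical function, not the new test): for an `int` controlling expression only the `int` association is active, so warnings that would fire while parsing the other branches are now suppressed:

```c
/* Only the int branch is taken for an int operand; the double and
   default branches are non-active and no longer diagnosed.  */
static int classify (int x)
{
  return _Generic (x,
                   int: 1,
                   double: 2,
                   default: 0);
}
```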
2023-08-05Daily bump.GCC Administrator7-1/+1149
2023-08-04[PATCH v3] [RISC-V] Generate Zicond instruction for select pattern with ↵Xiao Zeng1-15/+102
condition eq or neq to 0

This patch recognizes Zicond patterns when the select pattern has a condition of eq or neq to 0 (using eq as an example), namely:

1 rd = (rs2 == 0) ? non-imm : 0
2 rd = (rs2 == 0) ? non-imm : non-imm
3 rd = (rs2 == 0) ? reg : non-imm
4 rd = (rs2 == 0) ? reg : reg

gcc/ChangeLog:

	* config/riscv/riscv.cc (riscv_expand_conditional_move): Recognize
	more Zicond patterns.  Fix whitespace typo.
	(riscv_rtx_costs): Remove accidental code duplication.

Co-authored-by: Jeff Law <jlaw@ventanamicro.com>
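In C terms, the select shapes correspond to conditionals like the ones below (illustrative functions, not the patch's test cases); on RISC-V with Zicond these can now expand to czero.eqz/czero.nez based sequences instead of branches:

```c
/* Shape 1: rd = (rs2 == 0) ? non-imm : 0  */
long sel_val_zero (long rs2, long val)
{
  return rs2 == 0 ? val : 0;
}

/* Shape 4: rd = (rs2 == 0) ? reg : reg  */
long sel_reg_reg (long rs2, long a, long b)
{
  return rs2 == 0 ? a : b;
}
```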
2023-08-04analyzer: handle function attribute "alloc_size" [PR110426]David Malcolm21-63/+458
This patch makes -fanalyzer make use of the function attribute "alloc_size", allowing -fanalyzer to emit -Wanalyzer-allocation-size, -Wanalyzer-out-of-bounds, and -Wanalyzer-tainted-allocation-size on execution paths involving allocations using such functions.

gcc/analyzer/ChangeLog:

	PR analyzer/110426
	* bounds-checking.cc (region_model::check_region_bounds): Handle
	symbolic base regions.
	* call-details.cc: Include "stringpool.h" and "attribs.h".
	(call_details::lookup_function_attribute): New function.
	* call-details.h (call_details::lookup_function_attribute): New
	function decl.
	* region-model-manager.cc (region_model_manager::maybe_fold_binop):
	Add reference to PR analyzer/110902.
	* region-model-reachability.cc (reachable_regions::handle_sval):
	Add symbolic regions for pointers that are conjured svalues for
	the LHS of a stmt.
	* region-model.cc (region_model::canonicalize): Purge dynamic
	extents for regions that aren't referenced.
	(get_result_size_in_bytes): New function.
	(region_model::on_call_pre): Use get_result_size_in_bytes and
	potentially set the dynamic extents of the region pointed to by
	the return value.
	(region_model::deref_rvalue): Add param "add_nonnull_constraint"
	and use it to conditionalize adding the constraint.
	(pending_diagnostic_subclass::dubious_allocation_size): Add "stmt"
	param to both ctors and use it to initialize new "m_stmt" field.
	(pending_diagnostic_subclass::operator==): Use m_stmt; don't use
	m_lhs or m_rhs.
	(pending_diagnostic_subclass::m_stmt): New field.
	(region_model::check_region_size): Generalize to any kind of
	pointer svalue by using deref_rvalue rather than checking for
	region_svalue.  Pass stmt to dubious_allocation_size ctor.
	* region-model.h (region_model::deref_rvalue): Add param
	"add_nonnull_constraint".
	* svalue.cc (conjured_svalue::lhs_value_p): New function.
	* svalue.h (conjured_svalue::lhs_value_p): New decl.

gcc/testsuite/ChangeLog:

	PR analyzer/110426
	* gcc.dg/analyzer/allocation-size-1.c: Update expected message to
	reflect consolidation of size and assignment into a single event.
	* gcc.dg/analyzer/allocation-size-2.c: Likewise.
	* gcc.dg/analyzer/allocation-size-3.c: Likewise.
	* gcc.dg/analyzer/allocation-size-4.c: Likewise.
	* gcc.dg/analyzer/allocation-size-multiline-1.c: Likewise.
	* gcc.dg/analyzer/allocation-size-multiline-2.c: Likewise.
	* gcc.dg/analyzer/allocation-size-multiline-3.c: Likewise.
	* gcc.dg/analyzer/attr-alloc_size-1.c: New test.
	* gcc.dg/analyzer/attr-alloc_size-2.c: New test.
	* gcc.dg/analyzer/attr-alloc_size-3.c: New test.
	* gcc.dg/analyzer/explode-4.c: New test.
	* gcc.dg/analyzer/taint-size-1.c: Add test coverage for
	__attribute__ alloc_size.

Signed-off-by: David Malcolm <dmalcolm@redhat.com>
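A minimal example of the attribute the analyzer now consumes (a hypothetical wrapper, not one of the new tests): alloc_size(1) declares that the returned buffer is `n` bytes, so -fanalyzer can flag out-of-bounds accesses through it:

```c
#include <stdlib.h>

/* alloc_size (1) tells GCC that the first argument is the size in
   bytes of the region the return value points to.  */
__attribute__ ((alloc_size (1)))
static void *my_alloc (size_t n)
{
  return malloc (n);
}
```

With this attribute, a write such as `((char *) my_alloc (4))[10] = 0` is a candidate for -Wanalyzer-out-of-bounds, and a tainted `n` can trigger -Wanalyzer-tainted-allocation-size.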
2023-08-04analyzer: fix some svalue::dump_to_pp implementationsDavid Malcolm1-7/+20
gcc/analyzer/ChangeLog:

	* svalue.cc (region_svalue::dump_to_pp): Support NULL type.
	(constant_svalue::dump_to_pp): Likewise.
	(initial_svalue::dump_to_pp): Likewise.
	(conjured_svalue::dump_to_pp): Likewise.  Fix missing print of
	the type.

Signed-off-by: David Malcolm <dmalcolm@redhat.com>
2023-08-04i386: eliminate redundant operands of VPTERNLOGYan Simonaytes5-0/+121
As mentioned in PR 110202, GCC may be presented with input where the control word of the VPTERNLOG intrinsic implies that some of its operands do not affect the result. In that case, we can eliminate redundant operands of the instruction by substituting any other operand in their place. This removes false dependencies.

For instance, instead of (252 = 0xfc = _MM_TERNLOG_A | _MM_TERNLOG_B)

	vpternlogq	$252, %zmm2, %zmm1, %zmm0

emit

	vpternlogq	$252, %zmm0, %zmm1, %zmm0

When VPTERNLOG is invariant w.r.t. the first and second operands, and the third operand is memory, load the memory into the output operand first, i.e. instead of (85 = 0x55 = ~_MM_TERNLOG_C)

	vpternlogq	$85, (%rdi), %zmm1, %zmm0

emit

	vmovdqa64	(%rdi), %zmm0
	vpternlogq	$85, %zmm0, %zmm0, %zmm0

gcc/ChangeLog:

	PR target/110202
	* config/i386/i386-protos.h (vpternlog_redundant_operand_mask):
	Declare.
	(substitute_vpternlog_operands): Declare.
	* config/i386/i386.cc (vpternlog_redundant_operand_mask): New
	helper.
	(substitute_vpternlog_operands): New function.  Use them...
	* config/i386/sse.md: ... here in new VPTERNLOG define_splits.

gcc/testsuite/ChangeLog:

	PR target/110202
	* gcc.target/i386/invariant-ternlog-1.c: New test.
	* gcc.target/i386/invariant-ternlog-2.c: New test.
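The redundancy follows from VPTERNLOG's truth-table semantics, modeled below in plain C (a sketch, not GCC code): bit i of the result is bit `(a_i << 2) | (b_i << 1) | c_i` of the 8-bit immediate, so for imm 0xfc the table entry never depends on c:

```c
#include <stdint.h>

/* Software model of vpternlogq: for each bit position, index the
   8-entry truth table (the immediate) with the three operand bits.  */
static uint64_t ternlog (uint64_t a, uint64_t b, uint64_t c, uint8_t imm)
{
  uint64_t r = 0;
  for (int i = 0; i < 64; i++)
    {
      unsigned idx = ((unsigned) ((a >> i) & 1) << 2)
                     | ((unsigned) ((b >> i) & 1) << 1)
                     | (unsigned) ((c >> i) & 1);
      r |= (uint64_t) ((imm >> idx) & 1) << i;
    }
  return r;
}
```

With imm 0xfc the result equals `a | b` for any value of c, and with imm 0x55 it equals `~c` for any a and b, which is why the split may substitute any convenient register for the unused operand.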
2023-08-04Specify signed/unsigned/dontcare in calls to extract_bit_field_1.Roger Sayle2-2/+26
This patch is inspired by Jakub's work on PR rtl-optimization/110717. The bitfield example described in comment #2 looks like:

struct S { __int128 a : 69; };

unsigned type bar (struct S *p) {
  return p->a;
}

which on x86_64 with -O2 currently generates:

bar:
	movzbl	8(%rdi), %ecx
	movq	(%rdi), %rax
	andl	$31, %ecx
	movq	%rcx, %rdx
	salq	$59, %rdx
	sarq	$59, %rdx
	ret

The ANDL $31 is interesting... we first extract an unsigned 69-bit bitfield by masking/clearing the top bits of the most significant word, and then it gets sign-extended by left shifting and arithmetic right shifting. Obviously, this bit-wise AND is redundant: for signed bit-fields, we don't require these bits to be cleared if we're about to set them appropriately.

This patch eliminates this redundancy in the middle-end, during RTL expansion, by extending the extract_bit_field APIs so that the integer UNSIGNEDP argument takes a special value: 0 indicates the field should be sign extended, 1 (any non-zero value) indicates the field should be zero extended, but -1 indicates a third option, that we don't care how or whether the field is extended. By passing and checking this sentinel value at the appropriate places we avoid the useless bit masking (on all targets).

For the test case above, with this patch we now generate:

bar:
	movzbl	8(%rdi), %ecx
	movq	(%rdi), %rax
	movq	%rcx, %rdx
	salq	$59, %rdx
	sarq	$59, %rdx
	ret

2023-08-04  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	* expmed.cc (extract_bit_field_1): Document that an UNSIGNEDP
	value of -1 is equivalent to don't care.
	(extract_integral_bit_field): Indicate that we don't require the
	most significant word to be zero extended, if we're about to sign
	extend it.
	(extract_fixed_bit_field_1): Document that an UNSIGNEDP value of
	-1 is equivalent to don't care.  Don't clear the most significant
	bits with AND mask when UNSIGNEDP is -1.

gcc/testsuite/ChangeLog
	* gcc.target/i386/pr110717-2.c: New test case.
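The shift pair's effect can be checked in isolation (a sketch, assuming GCC's arithmetic right shift on signed types): the 69-bit field leaves 5 bits in the top word, and shifting left then right by 59 overwrites bits 5..63 of that word, so pre-clearing them with `andl $31` changes nothing:

```c
#include <stdint.h>

/* Sign-extend the low 5 bits of w (the top word of the 69-bit
   field); the high 59 bits of w are irrelevant to the result.  */
static int64_t sext5 (uint64_t w)
{
  return (int64_t) (w << 59) >> 59;
}
```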
2023-08-04i386: Split SUBREGs of SSE vector registers into vec_select insns.Roger Sayle2-0/+18
This patch is the final piece in the series to improve the ABI issues affecting PR 88873. The previous patches tackled inserting DFmode values into V2DFmode registers by introducing insvti_{low,high}part patterns. This patch improves the extraction of DFmode values from V2DFmode registers via TImode intermediates.

I'd initially thought this would require new extvti_{low,high}part patterns to be defined, but all that's required is to recognize that the SUBREG idioms produced by combine are equivalent to (forms of) vec_select patterns. The target-independent middle-end can't be sure that the appropriate vec_select instruction exists on the target, hence doesn't canonicalize a SUBREG of a vector mode as a vec_select, but the backend can provide a define_split stating where and when this is useful, for example, considering whether the operand is in memory, or whether !TARGET_SSE_MATH and the destination is i387.

For pr88873.c, gcc -O2 -march=cascadelake currently generates:

foo:
	vpunpcklqdq	%xmm3, %xmm2, %xmm7
	vpunpcklqdq	%xmm1, %xmm0, %xmm6
	vpunpcklqdq	%xmm5, %xmm4, %xmm2
	vmovdqa	%xmm7, -24(%rsp)
	vmovdqa	%xmm6, %xmm1
	movq	-16(%rsp), %rax
	vpinsrq	$1, %rax, %xmm7, %xmm4
	vmovapd	%xmm4, %xmm6
	vfmadd132pd	%xmm1, %xmm2, %xmm6
	vmovapd	%xmm6, -24(%rsp)
	vmovsd	-16(%rsp), %xmm1
	vmovsd	-24(%rsp), %xmm0
	ret

with this patch, we now generate:

foo:
	vpunpcklqdq	%xmm1, %xmm0, %xmm6
	vpunpcklqdq	%xmm3, %xmm2, %xmm7
	vpunpcklqdq	%xmm5, %xmm4, %xmm2
	vmovdqa	%xmm6, %xmm1
	vfmadd132pd	%xmm7, %xmm2, %xmm1
	vmovsd	%xmm1, %xmm1, %xmm0
	vunpckhpd	%xmm1, %xmm1, %xmm1
	ret

The improvement is even more dramatic when compared to the original 29 instructions shown in comment #8. GCC 13, for example, required 12 transfers to/from memory.

2023-08-04  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	* config/i386/sse.md (define_split): Convert highpart:DF extract
	from V2DFmode register into a sse2_storehpd instruction.
	(define_split): Likewise, convert lowpart:DF extract from V2DF
	register into a sse2_storelpd instruction.

gcc/testsuite/ChangeLog
	* gcc.target/i386/pr88873.c: Tweak to check for improved code.
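The SUBREG idioms being recognized correspond to what the SSE2 intrinsics below express directly (an illustrative x86-specific sketch; the intrinsic forms are related to, not identical with, the sse2_storelpd/storehpd patterns the splits target):

```c
#include <emmintrin.h>

/* Low DFmode half of a V2DF value.  */
static double v2df_low (__m128d v)
{
  return _mm_cvtsd_f64 (v);
}

/* High DFmode half, extracted via unpckhpd.  */
static double v2df_high (__m128d v)
{
  return _mm_cvtsd_f64 (_mm_unpackhi_pd (v, v));
}
```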