path: root/gcc
Age  Commit message  Author  Files  Lines
2023-12-26testsuite: Disable strub on AIX.David Edelsohn4-0/+7
AIX does not support stack scrubbing. Set strub as unsupported on AIX and ensure that testcases check for strub support. gcc/testsuite/ChangeLog: * c-c++-common/strub-unsupported-2.c: Require strub. * c-c++-common/strub-unsupported-3.c: Same. * c-c++-common/strub-unsupported.c: Same. * lib/target-supports.exp (check_effective_target_strub): Return 0 for AIX. Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
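A hedged sketch of the guard being added to each test; the actual strub-specific contents of the committed files are not reproduced here:

/* { dg-do compile } */
/* { dg-require-effective-target strub } */

void f (void) { }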
2023-12-26RISC-V: Fix typoJuzhe-Zhong1-1/+1
gcc/testsuite/ChangeLog: * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-10.c: Fix typo.
2023-12-26RISC-V: Some minor tweaks on dynamic LMUL cost modelJuzhe-Zhong33-15/+166
Tweak some codes of dynamic LMUL cost model to make computation more predictable and accurate. Tested on both RV32 and RV64 no regression. Committed. PR target/113112 gcc/ChangeLog: * config/riscv/riscv-vector-costs.cc (compute_estimated_lmul): Tweak LMUL estimation. (has_unexpected_spills_p): Ditto. (costs::record_potential_unexpected_spills): Ditto. gcc/testsuite/ChangeLog: * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-1.c: Add more checks. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-2.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-3.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-4.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-5.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-6.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-7.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-1.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-2.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-3.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-4.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-5.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-1.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-2.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-6.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-7.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-8.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-1.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-10.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-11.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-2.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-3.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-4.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-5.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-6.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-7.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-8.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-9.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-12.c: New test. * gcc.dg/vect/costmodel/riscv/rvv/pr113112-2.c: New test.
2023-12-26Fix compile options of pr110279-1.c and pr110279-2.cDi Zhao2-4/+5
The two testcases are for targets that support FMA. And pr110279-2.c assumes reassoc_width of FMUL to be 4. This patch adds missing options, to fix regression test failures on nvptx/GCN (default reassoc_width of FMUL is 1) and x86_64 (need "-mfma"). gcc/testsuite/ChangeLog: * gcc.dg/pr110279-1.c: Add "-mcpu=generic" for aarch64; add "-mfma" for x86_64. * gcc.dg/pr110279-2.c: Replace "-march=armv8.2-a" with "-mcpu=generic"; limit the check to be on aarch64.
2023-12-25testsuite: Add dg-require-effective-target powerpc_pcrel for testcase [PR110320]Jeevitha1-0/+1
Add a dg-require-effective-target directive so that the test case runs only on powerpc_pcrel targets. 2023-12-26 Jeevitha Palanisamy <jeevitha@linux.ibm.com> gcc/testsuite/ PR target/110320 * gcc.target/powerpc/pr110320-1.c: Add dg-require-effective-target powerpc_pcrel.
2023-12-26Daily bump.GCC Administrator3-1/+96
2023-12-25testsuite: Skip analyzer tests on AIX.David Edelsohn6-0/+7
Some new analyzer tests fail on AIX. gcc/testsuite/ChangeLog: * c-c++-common/analyzer/capacity-1.c: Skip on AIX. * c-c++-common/analyzer/capacity-2.c: Same. * c-c++-common/analyzer/fd-glibc-byte-stream-socket.c: Same. * c-c++-common/analyzer/fd-manpage-getaddrinfo-client.c: Same. * c-c++-common/analyzer/fd-mappage-getaddrinfo-server.c: Same. * gcc.dg/analyzer/fd-glibc-byte-stream-connection-server.c: Same. Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
2023-12-25RISC-V: Move RVV V_REGS liveness computation into analyze_loop_vinfoJuzhe-Zhong38-184/+113
Currently, we compute RVV V_REGS liveness during better_main_loop_than_p which is not appropriate time to do that since we for example, when have the codes will finally pick LMUL = 8 vectorization factor, we compute liveness for LMUL = 8 multiple times which are redundant. Since we have leverage the current ARM SVE COST model: /* Do one-time initialization based on the vinfo. */ loop_vec_info loop_vinfo = dyn_cast<loop_vec_info> (m_vinfo); if (!m_analyzed_vinfo) { if (loop_vinfo) analyze_loop_vinfo (loop_vinfo); m_analyzed_vinfo = true; } Analyze COST model only once for each cost model. So here we move dynamic LMUL liveness information into analyze_loop_vinfo. /* Do one-time initialization of the costs given that we're costing the loop vectorization described by LOOP_VINFO. */ void costs::analyze_loop_vinfo (loop_vec_info loop_vinfo) { ... /* Detect whether the LOOP has unexpected spills. */ record_potential_unexpected_spills (loop_vinfo); } So that we can avoid redundant computations and the current dynamic LMUL cost model flow is much more reasonable and consistent with others. Tested on RV32 and RV64 no regressions. gcc/ChangeLog: * config/riscv/riscv-vector-costs.cc (compute_estimated_lmul): Allow fractional vecrtor. (preferred_new_lmul_p): Move RVV V_REGS liveness computation into analyze_loop_vinfo. (has_unexpected_spills_p): New function. (costs::record_potential_unexpected_spills): Ditto. (costs::better_main_loop_than_p): Move RVV V_REGS liveness computation into analyze_loop_vinfo. * config/riscv/riscv-vector-costs.h: New functions and variables. gcc/testsuite/ChangeLog: * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul-mixed-1.c: Robostify test. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-1.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-2.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-3.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-4.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-5.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-6.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-7.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-1.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-2.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-3.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-4.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-5.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-6.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-1.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-10.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-2.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-6.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-7.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-8.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-1.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-10.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-11.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-2.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-3.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-4.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-5.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-6.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-7.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-8.c: Ditto. 
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-9.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/no-dynamic-lmul-1.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/pr111848.c: Ditto. * gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Ditto.
2023-12-25middle-end: explicitly initialize vec_stmts [PR113132]Tamar Christina1-1/+1
When configured with --enable-checking=release we get a false positive on the use of vec_stmts, as the compiler seems unable to notice it gets initialized through pass-by-reference. This explicitly initializes the local. gcc/ChangeLog: PR bootstrap/113132 * tree-vect-loop.cc (vect_create_epilog_for_reduction): Initialize vec_stmts.
2023-12-25rs6000: Change GPR2 to volatile & non-fixed register for function that does not use TOC [PR110320]Jeevitha5-2/+70
Normally, GPR2 is the TOC pointer and is defined as a fixed, non-volatile register. However, it can be used as volatile for PCREL addressing. Therefore, r2 is modified to be non-fixed in FIXED_REGISTERS and set back to fixed when not using PCREL, and also when the user explicitly requests TOC or fixed. If the register r2 is fixed, it is made non-volatile. Changes in register preservation roles can be accomplished with the help of the available target hooks (TARGET_CONDITIONAL_REGISTER_USAGE). 2023-12-24 Jeevitha Palanisamy <jeevitha@linux.ibm.com> gcc/ PR target/110320 * config/rs6000/rs6000.cc (rs6000_conditional_register_usage): Change GPR2 to a volatile and non-fixed register for PCREL. * config/rs6000/rs6000.h (FIXED_REGISTERS): Modify GPR2 to not fixed. gcc/testsuite/ PR target/110320 * gcc.target/powerpc/pr110320-1.c: New testcase. * gcc.target/powerpc/pr110320-2.c: New testcase. * gcc.target/powerpc/pr110320-3.c: New testcase. Co-authored-by: Peter Bergner <bergner@linux.ibm.com>
2023-12-25RISC-V: Add one more ASM check in PR113112-1.cJuzhe-Zhong1-0/+1
gcc/testsuite/ChangeLog: * gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Add one more ASM check.
2023-12-24match: Improve `(a != b) ? (a + b) : (2 * a)` pattern [PR19832]Andrew Pinski2-1/+20
In the testcase provided, we would match f_plus but not g_plus due to a missing `:c` on the plus operator. This fixes the oversight there. Note this was noted in https://github.com/llvm/llvm-project/issues/76318 . Committed as obvious after bootstrap/test on x86_64-linux-gnu. PR tree-optimization/19832 gcc/ChangeLog: * match.pd (`(a != b) ? (a + b) : (2 * a)`): Add `:c` on the plus operator. gcc/testsuite/ChangeLog: * gcc.dg/tree-ssa/phi-opt-same-2.c: New test. Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
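A hedged sketch of the two shapes involved (the committed phi-opt-same-2.c may differ); with the added `:c` both now fold to a plain addition:

int f_plus (int a, int b)
{
  /* Matched before the fix: the true arm is written a + b.  */
  return (a != b) ? (a + b) : (2 * a);
}

int g_plus (int a, int b)
{
  /* Missed before the fix: same value, but with the operands of the
     addition commuted.  */
  return (a != b) ? (b + a) : (2 * a);
}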
2023-12-25Daily bump.GCC Administrator3-1/+249
2023-12-24testsuite: un-xfail TSVC loops that check for exit control flow vectorizationTamar Christina3-3/+6
The following three tests now correctly work for targets that have an implementation of cbranch for vectors so XFAILs are conditionally removed gated on vect_early_break support. gcc/testsuite/ChangeLog: * gcc.dg/vect/tsvc/vect-tsvc-s332.c: Remove xfail when early break supported. * gcc.dg/vect/tsvc/vect-tsvc-s481.c: Likewise. * gcc.dg/vect/tsvc/vect-tsvc-s482.c: Likewise.
2023-12-24testsuite: Add tests for early break vectorizationTamar Christina110-0/+4025
This adds new test to check for all the early break functionality. It includes a number of codegen and runtime tests checking the values at different needles in the array. They also check the values on different array sizes and peeling positions, datatypes, VL, ncopies and every other variant I could think of. Additionally it also contains reduced cases from issues found running over various codebases. Bootstrapped Regtested on aarch64-none-linux-gnu and no issues. Also regtested with: -march=armv8.3-a+sve -march=armv8.3-a+nosve -march=armv9-a Bootstrapped Regtested x86_64-pc-linux-gnu and no issues. On the tests I have disabled x86_64 on it's because the target is missing cbranch for all types. I think it should be possible to add them for the missing type since all we care about is if a bit is set or not. Bootstrap and Regtest on arm-none-linux-gnueabihf still running and test on arm-none-eabi -march=armv8.1-m.main+mve -mfpu=auto running. gcc/ChangeLog: * doc/sourcebuild.texi (check_effective_target_vect_early_break_hw, check_effective_target_vect_early_break): Document. gcc/testsuite/ChangeLog: * lib/target-supports.exp (add_options_for_vect_early_break, check_effective_target_vect_early_break_hw, check_effective_target_vect_early_break): New. * g++.dg/vect/vect-early-break_1.cc: New test. * g++.dg/vect/vect-early-break_2.cc: New test. * g++.dg/vect/vect-early-break_3.cc: New test. * gcc.dg/vect/vect-early-break-run_1.c: New test. * gcc.dg/vect/vect-early-break-run_10.c: New test. * gcc.dg/vect/vect-early-break-run_2.c: New test. * gcc.dg/vect/vect-early-break-run_3.c: New test. * gcc.dg/vect/vect-early-break-run_4.c: New test. * gcc.dg/vect/vect-early-break-run_5.c: New test. * gcc.dg/vect/vect-early-break-run_6.c: New test. * gcc.dg/vect/vect-early-break-run_7.c: New test. * gcc.dg/vect/vect-early-break-run_8.c: New test. * gcc.dg/vect/vect-early-break-run_9.c: New test. * gcc.dg/vect/vect-early-break-template_1.c: New test. * gcc.dg/vect/vect-early-break-template_2.c: New test. * gcc.dg/vect/vect-early-break_1.c: New test. * gcc.dg/vect/vect-early-break_10.c: New test. * gcc.dg/vect/vect-early-break_11.c: New test. * gcc.dg/vect/vect-early-break_12.c: New test. * gcc.dg/vect/vect-early-break_13.c: New test. * gcc.dg/vect/vect-early-break_14.c: New test. * gcc.dg/vect/vect-early-break_15.c: New test. * gcc.dg/vect/vect-early-break_16.c: New test. * gcc.dg/vect/vect-early-break_17.c: New test. * gcc.dg/vect/vect-early-break_18.c: New test. * gcc.dg/vect/vect-early-break_19.c: New test. * gcc.dg/vect/vect-early-break_2.c: New test. * gcc.dg/vect/vect-early-break_20.c: New test. * gcc.dg/vect/vect-early-break_21.c: New test. * gcc.dg/vect/vect-early-break_22.c: New test. * gcc.dg/vect/vect-early-break_23.c: New test. * gcc.dg/vect/vect-early-break_24.c: New test. * gcc.dg/vect/vect-early-break_25.c: New test. * gcc.dg/vect/vect-early-break_26.c: New test. * gcc.dg/vect/vect-early-break_27.c: New test. * gcc.dg/vect/vect-early-break_28.c: New test. * gcc.dg/vect/vect-early-break_29.c: New test. * gcc.dg/vect/vect-early-break_3.c: New test. * gcc.dg/vect/vect-early-break_30.c: New test. * gcc.dg/vect/vect-early-break_31.c: New test. * gcc.dg/vect/vect-early-break_32.c: New test. * gcc.dg/vect/vect-early-break_33.c: New test. * gcc.dg/vect/vect-early-break_34.c: New test. * gcc.dg/vect/vect-early-break_35.c: New test. * gcc.dg/vect/vect-early-break_36.c: New test. * gcc.dg/vect/vect-early-break_37.c: New test. * gcc.dg/vect/vect-early-break_38.c: New test. 
* gcc.dg/vect/vect-early-break_39.c: New test. * gcc.dg/vect/vect-early-break_4.c: New test. * gcc.dg/vect/vect-early-break_40.c: New test. * gcc.dg/vect/vect-early-break_41.c: New test. * gcc.dg/vect/vect-early-break_42.c: New test. * gcc.dg/vect/vect-early-break_43.c: New test. * gcc.dg/vect/vect-early-break_44.c: New test. * gcc.dg/vect/vect-early-break_45.c: New test. * gcc.dg/vect/vect-early-break_46.c: New test. * gcc.dg/vect/vect-early-break_47.c: New test. * gcc.dg/vect/vect-early-break_48.c: New test. * gcc.dg/vect/vect-early-break_49.c: New test. * gcc.dg/vect/vect-early-break_5.c: New test. * gcc.dg/vect/vect-early-break_50.c: New test. * gcc.dg/vect/vect-early-break_51.c: New test. * gcc.dg/vect/vect-early-break_52.c: New test. * gcc.dg/vect/vect-early-break_53.c: New test. * gcc.dg/vect/vect-early-break_54.c: New test. * gcc.dg/vect/vect-early-break_55.c: New test. * gcc.dg/vect/vect-early-break_56.c: New test. * gcc.dg/vect/vect-early-break_57.c: New test. * gcc.dg/vect/vect-early-break_58.c: New test. * gcc.dg/vect/vect-early-break_59.c: New test. * gcc.dg/vect/vect-early-break_6.c: New test. * gcc.dg/vect/vect-early-break_60.c: New test. * gcc.dg/vect/vect-early-break_61.c: New test. * gcc.dg/vect/vect-early-break_62.c: New test. * gcc.dg/vect/vect-early-break_63.c: New test. * gcc.dg/vect/vect-early-break_64.c: New test. * gcc.dg/vect/vect-early-break_65.c: New test. * gcc.dg/vect/vect-early-break_66.c: New test. * gcc.dg/vect/vect-early-break_67.c: New test. * gcc.dg/vect/vect-early-break_68.c: New test. * gcc.dg/vect/vect-early-break_69.c: New test. * gcc.dg/vect/vect-early-break_7.c: New test. * gcc.dg/vect/vect-early-break_70.c: New test. * gcc.dg/vect/vect-early-break_71.c: New test. * gcc.dg/vect/vect-early-break_72.c: New test. * gcc.dg/vect/vect-early-break_73.c: New test. * gcc.dg/vect/vect-early-break_74.c: New test. * gcc.dg/vect/vect-early-break_75.c: New test. * gcc.dg/vect/vect-early-break_76.c: New test. * gcc.dg/vect/vect-early-break_77.c: New test. * gcc.dg/vect/vect-early-break_78.c: New test. * gcc.dg/vect/vect-early-break_79.c: New test. * gcc.dg/vect/vect-early-break_8.c: New test. * gcc.dg/vect/vect-early-break_80.c: New test. * gcc.dg/vect/vect-early-break_81.c: New test. * gcc.dg/vect/vect-early-break_82.c: New test. * gcc.dg/vect/vect-early-break_83.c: New test. * gcc.dg/vect/vect-early-break_84.c: New test. * gcc.dg/vect/vect-early-break_85.c: New test. * gcc.dg/vect/vect-early-break_86.c: New test. * gcc.dg/vect/vect-early-break_87.c: New test. * gcc.dg/vect/vect-early-break_88.c: New test. * gcc.dg/vect/vect-early-break_89.c: New test. * gcc.dg/vect/vect-early-break_9.c: New test. * gcc.dg/vect/vect-early-break_90.c: New test. * gcc.dg/vect/vect-early-break_91.c: New test. * gcc.dg/vect/vect-early-break_92.c: New test. * gcc.dg/vect/vect-early-break_93.c: New test.
2023-12-24AArch64: Add implementation for vector cbranch for Advanced SIMDTamar Christina3-0/+274
Hi All, This adds an implementation for conditional branch optab for AArch64. For e.g. void f1 () { for (int i = 0; i < N; i++) { b[i] += a[i]; if (a[i] > 0) break; } } For 128-bit vectors we generate: cmgt v1.4s, v1.4s, #0 umaxp v1.4s, v1.4s, v1.4s fmov x3, d1 cbnz x3, .L8 and of 64-bit vector we can omit the compression: cmgt v1.2s, v1.2s, #0 fmov x2, d1 cbz x2, .L13 gcc/ChangeLog: * config/aarch64/aarch64-simd.md (cbranch<mode>4): New. gcc/testsuite/ChangeLog: * gcc.target/aarch64/sve/vect-early-break-cbranch.c: New test. * gcc.target/aarch64/vect-early-break-cbranch.c: New test.
2023-12-24middle-end: Support vectorization of loops with multiple exits.Tamar Christina8-233/+1330
Hi All, This patch adds initial support for early break vectorization in GCC. In other words it implements support for vectorization of loops with multiple exits. The support is added for any target that implements a vector cbranch optab, this includes both fully masked and non-masked targets. Depending on the operation, the vectorizer may also require support for boolean mask reductions using Inclusive OR/Bitwise AND. This is however only checked then the comparison would produce multiple statements. This also fully decouples the vectorizer's notion of exit from the existing loop infrastructure's exit. Before this patch the vectorizer always picked the natural loop latch connected exit as the main exit. After this patch the vectorizer is free to choose any exit it deems appropriate as the main exit. This means that even if the main exit is not countable (i.e. the termination condition could not be determined) we might still be able to vectorize should one of the other exits be countable. In such situations the loop is reflowed which enabled vectorization of many other loop forms. Concretely the kind of loops supported are of the forms: for (int i = 0; i < N; i++) { <statements1> if (<condition>) { ... <action>; } <statements2> } where <action> can be: - break - return - goto Any number of statements can be used before the <action> occurs. Since this is an initial version for GCC 14 it has the following limitations and features: - Only fixed sized iterations and buffers are supported. That is to say any vectors loaded or stored must be to statically allocated arrays with known sizes. N must also be known. This limitation is because our primary target for this optimization is SVE. For VLA SVE we can't easily do cross page iteraion checks. The result is likely to also not be beneficial. For that reason we punt support for variable buffers till we have First-Faulting support in GCC 15. - any stores in <statements1> should not be to the same objects as in <condition>. Loads are fine as long as they don't have the possibility to alias. More concretely, we block RAW dependencies when the intermediate value can't be separated fromt the store, or the store itself can't be moved. - Prologue peeling, alignment peelinig and loop versioning are supported. - Fully masked loops, unmasked loops and partially masked loops are supported - Any number of loop early exits are supported. - No support for epilogue vectorization. The only epilogue supported is the scalar final one. Peeling code supports it but the code motion code cannot find instructions to make the move in the epilog. - Early breaks are only supported for inner loop vectorization. With the help of IPA and LTO this still gets hit quite often. During bootstrap it hit rather frequently. Additionally TSVC s332, s481 and s482 all pass now since these are tests for support for early exit vectorization. This implementation does not support completely handling the early break inside the vector loop itself but instead supports adding checks such that if we know that we have to exit in the current iteration then we branch to scalar code to actually do the final VF iterations which handles all the code in <action>. For the scalar loop we know that whatever exit you take you have to perform at most VF iterations. For vector code we only case about the state of fully performed iteration and reset the scalar code to the (partially) remaining loop. That is to say, the first vector loop executes so long as the early exit isn't needed. 
Once the exit is taken, the scalar code will perform at most VF extra iterations. The exact number depending on peeling and iteration start and which exit was taken (natural or early). For this scalar loop, all early exits are treated the same. When we vectorize we move any statement not related to the early break itself and that would be incorrect to execute before the break (i.e. has side effects) to after the break. If this is not possible we decline to vectorize. The analysis and code motion also takes into account that it doesn't introduce a RAW dependency after the move of the stores. This means that we check at the start of iterations whether we are going to exit or not. During the analyis phase we check whether we are allowed to do this moving of statements. Also note that we only move the scalar statements, but only do so after peeling but just before we start transforming statements. With this the vector flow no longer necessarily needs to match that of the scalar code. In addition most of the infrastructure is in place to support general control flow safely, however we are punting this to GCC 15. Codegen: for e.g. unsigned vect_a[N]; unsigned vect_b[N]; unsigned test4(unsigned x) { unsigned ret = 0; for (int i = 0; i < N; i++) { vect_b[i] = x + i; if (vect_a[i] > x) break; vect_a[i] = x; } return ret; } We generate for Adv. SIMD: test4: adrp x2, .LC0 adrp x3, .LANCHOR0 dup v2.4s, w0 add x3, x3, :lo12:.LANCHOR0 movi v4.4s, 0x4 add x4, x3, 3216 ldr q1, [x2, #:lo12:.LC0] mov x1, 0 mov w2, 0 .p2align 3,,7 .L3: ldr q0, [x3, x1] add v3.4s, v1.4s, v2.4s add v1.4s, v1.4s, v4.4s cmhi v0.4s, v0.4s, v2.4s umaxp v0.4s, v0.4s, v0.4s fmov x5, d0 cbnz x5, .L6 add w2, w2, 1 str q3, [x1, x4] str q2, [x3, x1] add x1, x1, 16 cmp w2, 200 bne .L3 mov w7, 3 .L2: lsl w2, w2, 2 add x5, x3, 3216 add w6, w2, w0 sxtw x4, w2 ldr w1, [x3, x4, lsl 2] str w6, [x5, x4, lsl 2] cmp w0, w1 bcc .L4 add w1, w2, 1 str w0, [x3, x4, lsl 2] add w6, w1, w0 sxtw x1, w1 ldr w4, [x3, x1, lsl 2] str w6, [x5, x1, lsl 2] cmp w0, w4 bcc .L4 add w4, w2, 2 str w0, [x3, x1, lsl 2] sxtw x1, w4 add w6, w1, w0 ldr w4, [x3, x1, lsl 2] str w6, [x5, x1, lsl 2] cmp w0, w4 bcc .L4 str w0, [x3, x1, lsl 2] add w2, w2, 3 cmp w7, 3 beq .L4 sxtw x1, w2 add w2, w2, w0 ldr w4, [x3, x1, lsl 2] str w2, [x5, x1, lsl 2] cmp w0, w4 bcc .L4 str w0, [x3, x1, lsl 2] .L4: mov w0, 0 ret .p2align 2,,3 .L6: mov w7, 4 b .L2 and for SVE: test4: adrp x2, .LANCHOR0 add x2, x2, :lo12:.LANCHOR0 add x5, x2, 3216 mov x3, 0 mov w1, 0 cntw x4 mov z1.s, w0 index z0.s, #0, #1 ptrue p1.b, all ptrue p0.s, all .p2align 3,,7 .L3: ld1w z2.s, p1/z, [x2, x3, lsl 2] add z3.s, z0.s, z1.s cmplo p2.s, p0/z, z1.s, z2.s b.any .L2 st1w z3.s, p1, [x5, x3, lsl 2] add w1, w1, 1 st1w z1.s, p1, [x2, x3, lsl 2] add x3, x3, x4 incw z0.s cmp w3, 803 bls .L3 .L5: mov w0, 0 ret .p2align 2,,3 .L2: cntw x5 mul w1, w1, w5 cbz w5, .L5 sxtw x1, w1 sub w5, w5, #1 add x5, x5, x1 add x6, x2, 3216 b .L6 .p2align 2,,3 .L14: str w0, [x2, x1, lsl 2] cmp x1, x5 beq .L5 mov x1, x4 .L6: ldr w3, [x2, x1, lsl 2] add w4, w0, w1 str w4, [x6, x1, lsl 2] add x4, x1, 1 cmp w0, w3 bcs .L14 mov w0, 0 ret On the workloads this work is based on we see between 2-3x performance uplift using this patch. Follow up plan: - Boolean vectorization has several shortcomings. I've filed PR110223 with the bigger ones that cause vectorization to fail with this patch. - SLP support. This is planned for GCC 15 as for majority of the cases build SLP itself fails. This means I'll need to spend time in making this more robust first. 
Additionally it requires: * Adding support for vectorizing CFG (gconds) * Support for CFG to differ between vector and scalar loops. Both of which would be disruptive to the tree and I suspect I'll be handling fallouts from this patch for a while. So I plan to work on the surrounding building blocks first for the remainder of the year. Additionally it also contains reduced cases from issues found running over various codebases. Bootstrapped Regtested on aarch64-none-linux-gnu and no issues. Also regtested with: -march=armv8.3-a+sve -march=armv8.3-a+nosve -march=armv9-a -mcpu=neoverse-v1 -mcpu=neoverse-n2 Bootstrapped Regtested x86_64-pc-linux-gnu and no issues. Bootstrap and Regtest on arm-none-linux-gnueabihf and no issues. gcc/ChangeLog: * tree-if-conv.cc (idx_within_array_bound): Expose. * tree-vect-data-refs.cc (vect_analyze_early_break_dependences): New. (vect_analyze_data_ref_dependences): Use it. * tree-vect-loop-manip.cc (vect_iv_increment_position): New. (vect_set_loop_controls_directly, vect_set_loop_condition_partial_vectors, vect_set_loop_condition_partial_vectors_avx512, vect_set_loop_condition_normal): Support multiple exits. (slpeel_tree_duplicate_loop_to_edge_cfg): Support LCSAA peeling for multiple exits. (slpeel_can_duplicate_loop_p): Change vectorizer from looking at BB count and instead look at loop shape. (vect_update_ivs_after_vectorizer): Drop asserts. (vect_gen_vector_loop_niters_mult_vf): Support peeled vector iterations. (vect_do_peeling): Support multiple exits. (vect_loop_versioning): Likewise. * tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialise early_breaks. (vect_analyze_loop_form): Support loop flows with more than single BB loop body. (vect_create_loop_vinfo): Support niters analysis for multiple exits. (vect_analyze_loop): Likewise. (vect_get_vect_def): New. (vect_create_epilog_for_reduction): Support early exit reductions. (vectorizable_live_operation_1): New. (find_connected_edge): New. (vectorizable_live_operation): Support early exit live operations. (move_early_exit_stmts): New. (vect_transform_loop): Use it. * tree-vect-patterns.cc (vect_init_pattern_stmt): Support gcond. (vect_recog_bitfield_ref_pattern): Support gconds and bools. (vect_recog_gcond_pattern): New. (possible_vector_mask_operation_p): Support gcond masks. (vect_determine_mask_precision): Likewise. (vect_mark_pattern_stmts): Set gcond def type. (can_vectorize_live_stmts): Force early break inductions to be live. * tree-vect-stmts.cc (vect_stmt_relevant_p): Add relevancy analysis for early breaks. (vect_mark_stmts_to_be_vectorized): Process gcond usage. (perm_mask_for_reverse): Expose. (vectorizable_comparison_1): New. (vectorizable_early_exit): New. (vect_analyze_stmt): Support early break and gcond. (vect_transform_stmt): Likewise. (vect_is_simple_use): Likewise. (vect_get_vector_types_for_stmt): Likewise. * tree-vectorizer.cc (pass_vectorize::execute): Update exits for value numbering. * tree-vectorizer.h (enum vect_def_type): Add vect_condition_def. (LOOP_VINFO_EARLY_BREAKS, LOOP_VINFO_EARLY_BRK_STORES, LOOP_VINFO_EARLY_BREAKS_VECT_PEELED, LOOP_VINFO_EARLY_BRK_DEST_BB, LOOP_VINFO_EARLY_BRK_VUSES): New. (is_loop_header_bb_p): Drop assert. (class loop): Add early_breaks, early_break_stores, early_break_dest_bb, early_break_vuses. (vect_iv_increment_position, perm_mask_for_reverse, ref_within_array_bound): New. (slpeel_tree_duplicate_loop_to_edge_cfg): Update for early breaks.
2023-12-24middle-end: prevent LIM from hoisting vector compares from gconds if the target does not support it.Tamar Christina1-0/+13
LIM notices that in some cases the condition and the results are loop invariant and tries to move them out of the loop. While the resulting code is operationally sound, moving the compare out of the gcond results in code that no longer branches, so cbranch is no longer applicable. As such I now add a check during this motion to see whether the target supports flag-setting vector comparison as a general operation. I have tried writing a GIMPLE testcase for this, but the gimple FE seems to have trouble with the vector types; it fails to parse. The early break testsuite, however, has a test for this (vect-early-break_67.c). gcc/ChangeLog: * tree-ssa-loop-im.cc (determine_max_movement): Import insn-codes.h and optabs-tree.h and check for vector compare motion out of gcond.
2023-12-24testsuite: Add more pragma novector to new testsTamar Christina27-3/+37
This updates the testsuite and adds more #pragma GCC novector to various tests that would otherwise vectorize the vector result checking code. This cleans out the testsuite since the last rebase and prepares for the landing of the early break patch. gcc/testsuite/ChangeLog: * gcc.dg/vect/no-scevccp-slp-30.c: Add pragma GCC novector to abort loop. * gcc.dg/vect/no-scevccp-slp-31.c: Likewise. * gcc.dg/vect/no-section-anchors-vect-69.c: Likewise. * gcc.target/aarch64/vect-xorsign_exec.c: Likewise. * gcc.target/i386/avx512er-vrcp28ps-3.c: Likewise. * gcc.target/i386/avx512er-vrsqrt28ps-3.c: Likewise. * gcc.target/i386/avx512er-vrsqrt28ps-5.c: Likewise. * gcc.target/i386/avx512f-ceil-sfix-vec-1.c: Likewise. * gcc.target/i386/avx512f-ceil-vec-1.c: Likewise. * gcc.target/i386/avx512f-ceilf-sfix-vec-1.c: Likewise. * gcc.target/i386/avx512f-ceilf-vec-1.c: Likewise. * gcc.target/i386/avx512f-floor-sfix-vec-1.c: Likewise. * gcc.target/i386/avx512f-floor-vec-1.c: Likewise. * gcc.target/i386/avx512f-floorf-sfix-vec-1.c: Likewise. * gcc.target/i386/avx512f-floorf-vec-1.c: Likewise. * gcc.target/i386/avx512f-rint-sfix-vec-1.c: Likewise. * gcc.target/i386/avx512f-rintf-sfix-vec-1.c: Likewise. * gcc.target/i386/avx512f-round-sfix-vec-1.c: Likewise. * gcc.target/i386/avx512f-roundf-sfix-vec-1.c: Likewise. * gcc.target/i386/avx512f-trunc-vec-1.c: Likewise. * gcc.target/i386/avx512f-truncf-vec-1.c: Likewise. * gcc.target/i386/vect-alignment-peeling-1.c: Likewise. * gcc.target/i386/vect-alignment-peeling-2.c: Likewise. * gcc.target/i386/vect-pack-trunc-1.c: Likewise. * gcc.target/i386/vect-pack-trunc-2.c: Likewise. * gcc.target/i386/vect-perm-even-1.c: Likewise. * gcc.target/i386/vect-unpack-1.c: Likewise.
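A hedged sketch of the pattern applied to each file (names are illustrative): the scalar result-checking loop gets the pragma so that it is not itself vectorized:

extern void abort (void);
#define N 64
float res[N], expected[N];

void check (void)
{
  /* Keep the checking loop scalar so it does not perturb what the
     test is really scanning for.  */
  #pragma GCC novector
  for (int i = 0; i < N; i++)
    if (res[i] != expected[i])
      abort ();
}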
2023-12-24hppa: Fix pr110279-1.c on hppaJohn David Anglin1-0/+1
2023-12-24 John David Anglin <danglin@gcc.gnu.org> gcc/testsuite/ChangeLog: * gcc.dg/pr110279-1.c: Add -march=2.0 option on hppa*-*-*.
2023-12-24RISC-V: XFail the signbit-5 run test for RVVPan Li1-0/+1
This patch would like to XFail the signbit-5 run test case for the RVV. Given the case has one limitation like "This test does not work when the truth type does not match vector type." in the beginning of the test file. Aka, the RVV vector truth type is not integer type. The target board of riscv-sim like below will pick up `-march=rv64gcv` when building the run test elf. Thus, the RVV cannot bypass this test case like aarch64_sve with additional option `-march=armv8-a`. riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow For RVV, we leverage dg-xfail-run-if for this case like `amdgcn`. The signbit-5.c passed test with below configurations but we need further investigation for the failures of other configurations. * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2 * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4 * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8 * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2 * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4 * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8 * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow * riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic * riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2 * riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4 * 
riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8 * riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2 * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4 * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8 * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax * riscv-sim/-march=rv64imafdcv/-mabi=lp64d/-mcmodel=medlow gcc/testsuite/ChangeLog: * gcc.dg/signbit-5.c: XFail for the riscv_v. Signed-off-by: Pan Li <pan2.li@intel.com>
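A hedged sketch of the directive shape this adds (the exact comment string in the committed signbit-5.c may differ):

/* { dg-xfail-run-if "truth type does not match the vector type" { riscv_v } } */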
2023-12-24CRIS: Fix PR middle-end/113109; "throw" failingHans-Peter Nilsson3-2/+18
TL;DR: the "dse1" pass removed the eh-return-address store. The PA also marks its EH_RETURN_HANDLER_RTX as volatile, for the same reason, as does visum. See PR32769 - it's the same thing on PA. Conceptually, it's logical that stores to incoming args are optimized out on the return path or if no loads are seen - at least before epilogue expansion, when the subsequent load isn't seen in the RTL, as is the case for the "dse1" pass. I haven't looked into why this problem, that appeared for the PA already in 2007, was seen for CRIS only recently (with r14-6674-g4759383245ac97). PR middle-end/113109 * config/cris/cris.cc (cris_eh_return_handler_rtx): New function. * config/cris/cris-protos.h (cris_eh_return_handler_rtx): Prototype. * config/cris/cris.h (EH_RETURN_HANDLER_RTX): Redefine to call cris_eh_return_handler_rtx.
2023-12-24Daily bump.GCC Administrator3-1/+70
2023-12-23LoongArch: Add sign_extend pattern for 32-bit rotate shiftXi Ruoyao2-0/+27
Remove a redundant sign extension. gcc/ChangeLog: * config/loongarch/loongarch.md (rotrsi3_extend): New define_insn. gcc/testsuite/ChangeLog: * gcc.target/loongarch/rotrw.c: New test.
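A hedged sketch (not the committed rotrw.c) of the kind of source affected: a 32-bit rotate whose result is then used as a signed 64-bit value, where the separate sign-extension instruction is now folded into the new pattern:

long
rotr_extend (unsigned int x, unsigned int n)
{
  /* Recognized as a 32-bit rotate right; the sign extension to 64 bits
     is now covered by rotrsi3_extend instead of a separate instruction.  */
  unsigned int r = (x >> n) | (x << (-n & 31));
  return (int) r;
}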
2023-12-23LoongArch: Implement FCCmode reload and cstore<ANYF:mode>4Xi Ruoyao7-33/+157
We used a branch to load floating-point comparison results into GPR. This is very slow when the branch is not predictable. Implement movfcc so we can reload FCCmode into GPRs, FPRs, and MEM. Then implement cstore<ANYF:mode>4. gcc/ChangeLog: * config/loongarch/loongarch-tune.h (loongarch_rtx_cost_data::movcf2gr): New field. (loongarch_rtx_cost_data::movcf2gr_): New method. (loongarch_rtx_cost_data::use_movcf2gr): New method. * config/loongarch/loongarch-def.cc (loongarch_rtx_cost_data::loongarch_rtx_cost_data): Set movcf2gr to COSTS_N_INSNS (7) and movgr2cf to COSTS_N_INSNS (15), based on timing on LA464. (loongarch_cpu_rtx_cost_data): Set movcf2gr and movgr2cf to COSTS_N_INSNS (1) for LA664. (loongarch_rtx_cost_optimize_size): Set movcf2gr and movgr2cf to COSTS_N_INSNS (1) + 1. * config/loongarch/predicates.md (loongarch_fcmp_operator): New predicate. * config/loongarch/loongarch.md (movfcc): Change to define_expand. (movfcc_internal): New define_insn. (fcc_to_<X:mode>): New define_insn. (cstore<ANYF:mode>4): New define_expand. * config/loongarch/loongarch.cc (loongarch_hard_regno_mode_ok_uncached): Allow FCCmode in GPRs and GPRs. (loongarch_secondary_reload): Reload FCCmode via FPR and/or GPR. (loongarch_emit_float_compare): Call gen_reg_rtx instead of loongarch_allocate_fcc. (loongarch_allocate_fcc): Remove. (loongarch_move_to_gpr_cost): Handle FCC_REGS -> GR_REGS. (loongarch_move_from_gpr_cost): Handle GR_REGS -> FCC_REGS. (loongarch_register_move_cost): Handle FCC_REGS -> FCC_REGS, FCC_REGS -> FP_REGS, and FP_REGS -> FCC_REGS. gcc/testsuite/ChangeLog: * gcc.target/loongarch/movcf2gr.c: New test. * gcc.target/loongarch/movcf2gr-via-fr.c: New test.
2023-12-23MIPS: Don't add nan2008 option for -mtune=nativeYunQiang Su1-1/+2
Users may wish to use -mtune=native for performance tuning only; don't cause trouble for that case. gcc/ * config/mips/driver-native.cc (host_detect_local_cpu): Don't add the nan2008 option for -mtune=native.
2023-12-23MIPS: Put the ret to the end of args of reconcat [PR112759]YunQiang Su1-2/+5
The function `reconcat` cannot append strings after a NULL, as the concatenation stops at the first NULL argument. Always put `ret` at the end, as it may be NULL. We keep using reconcat here because it makes it easier to add more hardware feature detection later, for example via hwcap. gcc/ PR target/112759 * config/mips/driver-native.cc (host_detect_local_cpu): Put ret at the end of the args of reconcat.
2023-12-23RISC-V: Make PHI initial value occupy live V_REG in dynamic LMUL cost model ↵Juzhe-Zhong2-5/+71
analysis Consider this following case: foo: ble a0,zero,.L11 lui a2,%hi(.LANCHOR0) addi sp,sp,-128 addi a2,a2,%lo(.LANCHOR0) mv a1,a0 vsetvli a6,zero,e32,m8,ta,ma vid.v v8 vs8r.v v8,0(sp) ---> spill .L3: vl8re32.v v16,0(sp) ---> reload vsetvli a4,a1,e8,m2,ta,ma li a3,0 vsetvli a5,zero,e32,m8,ta,ma vmv8r.v v0,v16 vmv.v.x v8,a4 vmv.v.i v24,0 vadd.vv v8,v16,v8 vmv8r.v v16,v24 vs8r.v v8,0(sp) ---> spill .L4: addiw a3,a3,1 vadd.vv v8,v0,v16 vadd.vi v16,v16,1 vadd.vv v24,v24,v8 bne a0,a3,.L4 vsetvli zero,a4,e32,m8,ta,ma sub a1,a1,a4 vse32.v v24,0(a2) slli a4,a4,2 add a2,a2,a4 bne a1,zero,.L3 li a0,0 addi sp,sp,128 jr ra .L11: li a0,0 ret Pick unexpected LMUL = 8. The root cause is we didn't involve PHI initial value in the dynamic LMUL calculation: # j_17 = PHI <j_11(9), 0(5)> ---> # vect_vec_iv_.8_24 = PHI <_25(9), { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }(5)> We didn't count { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } in consuming vector register but it does allocate an vector register group for it. This patch fixes this missing count. Then after this patch we pick up perfect LMUL (LMUL = M4) foo: ble a0,zero,.L9 lui a4,%hi(.LANCHOR0) addi a4,a4,%lo(.LANCHOR0) mv a2,a0 vsetivli zero,16,e32,m4,ta,ma vid.v v20 .L3: vsetvli a3,a2,e8,m1,ta,ma li a5,0 vsetivli zero,16,e32,m4,ta,ma vmv4r.v v16,v20 vmv.v.i v12,0 vmv.v.x v4,a3 vmv4r.v v8,v12 vadd.vv v20,v20,v4 .L4: addiw a5,a5,1 vmv4r.v v4,v8 vadd.vi v8,v8,1 vadd.vv v4,v16,v4 vadd.vv v12,v12,v4 bne a0,a5,.L4 slli a5,a3,2 vsetvli zero,a3,e32,m4,ta,ma sub a2,a2,a3 vse32.v v12,0(a4) add a4,a4,a5 bne a2,zero,.L3 .L9: li a0,0 ret Tested on --with-arch=gcv no regression. PR target/113112 gcc/ChangeLog: * config/riscv/riscv-vector-costs.cc (max_number_of_live_regs): Refine dump information. (preferred_new_lmul_p): Make PHI initial value into live regs calculation. gcc/testsuite/ChangeLog: * gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: New test.
2023-12-23Daily bump.GCC Administrator5-1/+222
2023-12-22c23: construct composite type for tagged typesMartin Uecker19-27/+601
Support for constructing composite types for structs and unions in C23. gcc/c: * c-typeck.cc (composite_type_internal): Adapted from composite_type to support structs and unions. (composite_type): New wrapper function. (build_conditional_operator): Return composite type. * c-decl.cc (finish_struct): Allow NULL for enclosing_struct_parse_info. gcc/testsuite: * gcc.dg/c23-tag-alias-6.c: New test. * gcc.dg/c23-tag-alias-7.c: New test. * gcc.dg/c23-tag-composite-1.c: New test. * gcc.dg/c23-tag-composite-2.c: New test. * gcc.dg/c23-tag-composite-3.c: New test. * gcc.dg/c23-tag-composite-4.c: New test. * gcc.dg/c23-tag-composite-5.c: New test. * gcc.dg/c23-tag-composite-6.c: New test. * gcc.dg/c23-tag-composite-7.c: New test. * gcc.dg/c23-tag-composite-8.c: New test. * gcc.dg/c23-tag-composite-9.c: New test. * gcc.dg/c23-tag-composite-10.c: New test. * gcc.dg/gnu23-tag-composite-1.c: New test. * gcc.dg/gnu23-tag-composite-2.c: New test. * gcc.dg/gnu23-tag-composite-3.c: New test. * gcc.dg/gnu23-tag-composite-4.c: New test. * gcc.dg/gnu23-tag-composite-5.c: New test.
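A hedged sketch (not one of the committed tests) of what this enables: in C23 two struct definitions with the same tag and compatible members are compatible types, and the conditional operator now builds their composite type:

/* Compile with -std=c23.  */
struct s { int i; } *p;

void
f (void)
{
  struct s { int i; } *q = 0;
  /* The two struct s types are compatible in C23; the result of the
     conditional expression has their composite type.  */
  (1 ? p : q)->i = 0;
}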
2023-12-22OpenMP: Add prettyprinter support for context selectors.Sandra Loosemore5-1/+87
With the change to use enumerators instead of strings to represent context selector and selector-set names, the default tree-list output for dumping selectors is less helpful for debugging and harder to use in test cases. This patch adds support for dumping context selectors using syntax similar to that used for input to the compiler. gcc/ChangeLog * omp-general.cc (omp_context_name_list_prop): Remove static qualifer. * omp-general.h (omp_context_name_list_prop): Declare. * tree-cfg.cc (dump_function_to_file): Intercept "omp declare variant base" attribute for special handling. * tree-pretty-print.cc: Include omp-general.h. (dump_omp_context_selector): New. (print_omp_context_selector): New. * tree-pretty-print.h (print_omp_context_selector): Declare.
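For reference, a hedged sketch of a context selector as written in source (compile with -fopenmp); the new prettyprinter emits roughly this surface syntax when dumping the "omp declare variant base" attribute instead of a raw tree list:

void host_impl (void);

#pragma omp declare variant (host_impl) match (construct={parallel}, device={kind(host)})
void base_fn (void);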
2023-12-22combine: Don't optimize paradoxical SUBREG AND CONST_INT on ↵Jakub Jelinek2-2/+25
WORD_REGISTER_OPERATIONS targets [PR112758] As discussed in the PR, the following testcase is miscompiled on RISC-V 64-bit, because num_sign_bit_copies in one spot pretends the bits in a paradoxical SUBREG beyond SUBREG_REG SImode are all sign bit copies: 5444 /* For paradoxical SUBREGs on machines where all register operations 5445 affect the entire register, just look inside. Note that we are 5446 passing MODE to the recursive call, so the number of sign bit 5447 copies will remain relative to that mode, not the inner mode. 5448 5449 This works only if loads sign extend. Otherwise, if we get a 5450 reload for the inner part, it may be loaded from the stack, and 5451 then we lose all sign bit copies that existed before the store 5452 to the stack. */ 5453 if (WORD_REGISTER_OPERATIONS 5454 && load_extend_op (inner_mode) == SIGN_EXTEND 5455 && paradoxical_subreg_p (x) 5456 && MEM_P (SUBREG_REG (x))) and then optimizes based on that in one place, but then the r7-1077 optimization triggers in and treats all the upper bits in paradoxical SUBREG as undefined and performs based on that another optimization. The r7-1077 optimization is done only if SUBREG_REG is either a REG or MEM, from the discussions in the PR seems that if it is a REG, the upper bits in paradoxical SUBREG on WORD_REGISTER_OPERATIONS targets aren't really undefined, but we can't tell what values they have because we don't see the operation which computed that REG, and for MEM it depends on load_extend_op - if it is SIGN_EXTEND, the upper bits are sign bit copies and so something not really usable for the optimization, if ZERO_EXTEND, they are zeros and it is usable for the optimization, for UNKNOWN I think it is better to punt as well. So, the following patch basically disables the r7-1077 optimization on WORD_REGISTER_OPERATIONS unless we know it is still ok for sure, which is either if sub_width is >= BITS_PER_WORD because then the WORD_REGISTER_OPERATIONS rules don't apply, or load_extend_op on a MEM is ZERO_EXTEND. 2023-12-22 Jakub Jelinek <jakub@redhat.com> PR rtl-optimization/112758 * combine.cc (make_compopund_operation_int): Optimize AND of a SUBREG based on nonzero_bits of SUBREG_REG and constant mask on WORD_REGISTER_OPERATIONS targets only if it is a zero extending MEM load. * gcc.c-torture/execute/pr112758.c: New test.
2023-12-22symtab-thunks: Use aggregate_value_p even on is_gimple_reg_type returns [PR112941]Jakub Jelinek2-12/+26
Large/huge _BitInt types are returned in memory and the bitint lowering pass right now relies on that. The gimplification etc. use aggregate_value_p to see if the value should be returned in memory or not and use <retval> = _123; return <retval>; rather than return _123; But expand_thunk, used e.g. by IPA-ICF, was performing an optimization, assuming an is_gimple_reg_type value is always passed in registers and not calling aggregate_value_p in that case. The following patch changes it to match what the gimplification etc. are doing. 2023-12-22 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/112941 * symtab-thunks.cc (expand_thunk): Check aggregate_value_p regardless of whether is_gimple_reg_type (restype) or not. * gcc.dg/bitint-60.c: New test.
2023-12-22lower-bitint: Handle unreleased SSA_NAMEs from earlier passes gracefully [PR113102]Jakub Jelinek2-1/+20
On the following testcase earlier passes leave around an unreleased SSA_NAME - a non-GIMPLE_NOP SSA_NAME_DEF_STMT which isn't in any bb. The following patch makes bitint lowering resistant against those; the first hunk is where we'd try to amend certain kinds of stmts, and the latter is where we'd otherwise try to remove them, neither of which works. The other loops over all SSA_NAMEs either already also check gimple_bb (SSA_NAME_DEF_STMT (s)) or it doesn't matter that much if we process it or not (worst case it means e.g. the pass wouldn't return early even when it otherwise could). 2023-12-22 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/113102 * gimple-lower-bitint.cc (gimple_lower_bitint): Handle unreleased large/huge _BitInt SSA_NAMEs. * gcc.dg/bitint-59.c: New test.
2023-12-22lower-bitint: Fix handle_cast ICE [PR113102]Jakub Jelinek2-1/+32
My recent change to use m_data[save_data_cnt] instead of m_data[save_data_cnt + 1] when inside of a loop (m_bb is non-NULL) broke the following testcase. When we create a PHI node on the loop using prepare_data_in_out, both m_data[save_data_cnt{, + 1}] are computed and the fix was right, but there are also cases when we in a loop (m_bb non-NULL) emit a nested cast with too few limbs and then just use constant indexes for all accesses - in that case only m_data[save_data_cnt + 1] is initialized and m_data[save_data_cnt] is NULL. In those cases, we want to use the former. 2023-12-22 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/113102 * gimple-lower-bitint.cc (bitint_large_huge::handle_cast): Only use m_data[save_data_cnt] if it is non-NULL. * gcc.dg/bitint-58.c: New test.
2023-12-22Allow overriding EXPECTChristophe Lyon1-0/+3
While investigating possible race conditions in the GCC testsuites caused by buffering issues, I wanted to try workarounds similar to GDB's READ1 [1], and I noticed it was not always possible to override EXPECT when running 'make check'. This patch adds the missing support in various Makefiles. I was not able to test the patch for all the libraries updated here, but I confirmed it works as intended/needed for libstdc++. libatomic, libitm, libgomp already work as intended because their Makefiles do not have: MAKEOVERRIDES= Tested on (native) aarch64-linux-gnu, confirmed the patch introduces the behaviour I want in gcc, g++, gfortran and libstdc++. I updated (but could not test) libgm2, libphobos, libquadmath and libssp for consistency since their Makefiles have MAKEOVERRIDES= libffi, libgo, libsanitizer seem to need a similar update, but they are imported from their respective upstream repo, so should not be patched here. [1] https://github.com/bminor/binutils-gdb/blob/master/gdb/testsuite/README#L269 2023-12-21 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * Makefile.in: Allow overriding EXPECT. libgm2/ * Makefile.am: Allow overriding EXPECT. * Makefile.in: Regenerate. libphobos/ * Makefile.am: Allow overriding EXPECT. * Makefile.in: Regenerate. libquadmath/ * Makefile.am: Allow overriding EXPECT. * Makefile.in: Regenerate. libssp/ * Makefile.am: Allow overriding EXPECT. * Makefile.in: Regenerate. libstdc++-v3/ * Makefile.am: Allow overriding EXPECT. * Makefile.in: Regenerate.
2023-12-22c++: testsuite: Remove testsuite_tr1.h includesKen Matsui9-105/+101
This patch removes the testsuite_tr1.h dependency from g++.dg/ext/is_*.C tests since the header is supposed to be used only by libstdc++, not front-end. This also includes test code consistency fixes. For the record this fixes the test failures reported at https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641058.html gcc/testsuite/ChangeLog: * g++.dg/ext/is_array.C: Remove testsuite_tr1.h. Add necessary definitions accordingly. Tweak macros for consistency across test codes. * g++.dg/ext/is_bounded_array.C: Likewise. * g++.dg/ext/is_function.C: Likewise. * g++.dg/ext/is_member_function_pointer.C: Likewise. * g++.dg/ext/is_member_object_pointer.C: Likewise. * g++.dg/ext/is_member_pointer.C: Likewise. * g++.dg/ext/is_object.C: Likewise. * g++.dg/ext/is_reference.C: Likewise. * g++.dg/ext/is_scoped_enum.C: Likewise. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Patrick Palka <ppalka@redhat.com> Reviewed-by: Jason Merrill <jason@redhat.com>
2023-12-22LoongArch: Add asm modifiers to the LSX and LASX directives in the doc.chenxiaolong2-1/+48
gcc/ChangeLog: * doc/extend.texi: Add the asm operand modifiers for vectors to the doc. * doc/md.texi: Refine the description of the modifier 'f' in the doc.
2023-12-21c++: computed goto from catch block [PR81438]Jason Merrill3-8/+69
As with 37722, we don't clean up the exception object if a computed goto leaves a catch block, but we can warn about that. PR c++/81438 gcc/cp/ChangeLog: * decl.cc (poplevel_named_label_1): Handle leaving catch. (check_previous_goto_1): Likewise. (check_goto_1): Likewise. gcc/testsuite/ChangeLog: * g++.dg/ext/label15.C: Require indirect_jumps. * g++.dg/ext/label16.C: New test.
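A hedged sketch (not the committed label16.C) of the construct now diagnosed, using the GNU computed-goto extension:

void
f (void)
{
  void *where = &&after;
  try
    {
      throw 1;
    }
  catch (int)
    {
      goto *where;   // leaves the handler via a computed goto; the
                     // exception object is not cleaned up, which is
                     // what the new diagnostic points out
    }
 after:;
}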
2023-12-22Testsuite: Fix failures in g++.dg/analyzer/placement-new-size.CSandra Loosemore1-1/+2
This testcase was failing on uses of int8_t, int64_t, etc without including <stdint.h>. gcc/testsuite/ChangeLog * g++.dg/analyzer/placement-new-size.C: Include <stdint.h>. Also add missing newline to end of file.
2023-12-21c++: sizeof... mangling with alias template [PR95298]Jason Merrill6-3/+86
We were getting sizeof... mangling wrong when the argument after substitution was a pack expansion that is not a simple T..., such as list<T>... in variadic-mangle4.C or (A+1)... in variadic-mangle5.C. In the former case we ICEd; in the latter case we wrongly mangled it as sZ <expression>. PR c++/95298 gcc/cp/ChangeLog: * mangle.cc (write_expression): Handle v18 sizeof... bug. * pt.cc (tsubst_pack_expansion): Keep TREE_VEC for sizeof... (tsubst_expr): Don't strip TREE_VEC here. gcc/testsuite/ChangeLog: * g++.dg/cpp0x/variadic-mangle2.C: Add non-member. * g++.dg/cpp0x/variadic-mangle4.C: New test. * g++.dg/cpp0x/variadic-mangle5.C: New test. * g++.dg/cpp0x/variadic-mangle5a.C: New test.
2023-12-21testsuite: suppress mangling compatibility aliasesJason Merrill76-40/+76
Recently a mangling test failed on a target with no mangling alias support because I hadn't updated the expected mangling, but it was still passing on x86_64-pc-linux-gnu because of the alias for the old mangling. So let's avoid these aliases in mangling tests. gcc/testsuite/ChangeLog: * g++.dg/abi/mangle-arm-crypto.C: Specify -fabi-compat-version. * g++.dg/abi/mangle-concepts1.C * g++.dg/abi/mangle-neon-aarch64.C * g++.dg/abi/mangle-neon.C * g++.dg/abi/mangle-regparm.C * g++.dg/abi/mangle-regparm1a.C * g++.dg/abi/mangle-ttp1.C * g++.dg/abi/mangle-union1.C * g++.dg/abi/mangle1.C * g++.dg/abi/mangle13.C * g++.dg/abi/mangle15.C * g++.dg/abi/mangle16.C * g++.dg/abi/mangle18-1.C * g++.dg/abi/mangle19-1.C * g++.dg/abi/mangle20-1.C * g++.dg/abi/mangle22.C * g++.dg/abi/mangle23.C * g++.dg/abi/mangle24.C * g++.dg/abi/mangle25.C * g++.dg/abi/mangle26.C * g++.dg/abi/mangle27.C * g++.dg/abi/mangle28.C * g++.dg/abi/mangle29.C * g++.dg/abi/mangle3-2.C * g++.dg/abi/mangle3.C * g++.dg/abi/mangle30.C * g++.dg/abi/mangle31.C * g++.dg/abi/mangle32.C * g++.dg/abi/mangle33.C * g++.dg/abi/mangle34.C * g++.dg/abi/mangle35.C * g++.dg/abi/mangle36.C * g++.dg/abi/mangle37.C * g++.dg/abi/mangle39.C * g++.dg/abi/mangle40.C * g++.dg/abi/mangle43.C * g++.dg/abi/mangle44.C * g++.dg/abi/mangle45.C * g++.dg/abi/mangle46.C * g++.dg/abi/mangle47.C * g++.dg/abi/mangle48.C * g++.dg/abi/mangle49.C * g++.dg/abi/mangle5.C * g++.dg/abi/mangle50.C * g++.dg/abi/mangle51.C * g++.dg/abi/mangle52.C * g++.dg/abi/mangle53.C * g++.dg/abi/mangle54.C * g++.dg/abi/mangle55.C * g++.dg/abi/mangle56.C * g++.dg/abi/mangle57.C * g++.dg/abi/mangle58.C * g++.dg/abi/mangle59.C * g++.dg/abi/mangle6.C * g++.dg/abi/mangle60.C * g++.dg/abi/mangle61.C * g++.dg/abi/mangle62.C * g++.dg/abi/mangle62a.C * g++.dg/abi/mangle63.C * g++.dg/abi/mangle64.C * g++.dg/abi/mangle65.C * g++.dg/abi/mangle66.C * g++.dg/abi/mangle68.C * g++.dg/abi/mangle69.C * g++.dg/abi/mangle7.C * g++.dg/abi/mangle70.C * g++.dg/abi/mangle71.C * g++.dg/abi/mangle72.C * g++.dg/abi/mangle73.C * g++.dg/abi/mangle74.C * g++.dg/abi/mangle75.C * g++.dg/abi/mangle76.C * g++.dg/abi/mangle77.C * g++.dg/abi/mangle78.C * g++.dg/abi/mangle8.C * g++.dg/abi/mangle9.C: Likewise.
2023-12-22Daily bump.GCC Administrator6-1/+361
2023-12-21Document cond_copysign and cond_len_copysign optabs [PR112951]Andrew Pinski2-3/+11
This adds the documentation for cond_copysign and cond_len_copysign optabs. Also reorders the optabs.def to be in the similar order as how the internal function was done. gcc/ChangeLog: PR middle-end/112951 * doc/md.texi (cond_copysign): Document. (cond_len_copysign): Likewise. * optabs.def: Reorder cond_copysign to be before cond_fmin. Likewise for cond_len_copysign. Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
2023-12-21c++: fix -Wparentheses for bool-like class typesPatrick Palka4-10/+101
Since r14-4977-g0f2e2080685e75 we now issue a -Wparentheses warning for extern std::vector<bool> v; bool b = v[0] = true; // warning: suggest parentheses around assignment used as truth value [-Wparentheses] I intended for that commit to just allow the existing diagnostics to happen in a template context as well, but the refactoring of is_assignment_op_expr_p caused us for this -Wparentheses warning from convert_for_assignment to now consider user-defined operator= expressions instead of just built-in operator=. And since std::vector<bool> is really a bitset, whose operator[] returns a class type with such a user-defined operator= (taking bool), we now warn here when we didn't use to. That we now accept user-defined operator= expressions is generally good, but arguably "boolish" class types should be treated like ordinary bool as far as the warning is concerned. To that end this patch suppresses the warning for such types, specifically when the class type can be implicitly converted to and assigned from bool. This criterion captures the std::vector<bool>::reference of libstdc++ at least. gcc/cp/ChangeLog: * cp-tree.h (maybe_warn_unparenthesized_assignment): Add 'nested_p' bool parameter. * semantics.cc (boolish_class_type_p_cache): Define. (boolish_class_type_p): Define. (maybe_warn_unparenthesized_assignment): Add 'nested_p' bool parameter. Suppress the warning for nested assignments to bool and bool-like class types. (maybe_convert_cond): Pass nested_p=false to maybe_warn_unparenthesized_assignment. * typeck.cc (convert_for_assignment): Pass nested_p=true to maybe_warn_unparenthesized_assignment. Remove now redundant check for 'rhs' having bool type. gcc/testsuite/ChangeLog: * g++.dg/warn/Wparentheses-34.C: New test.
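A hedged sketch of what "bool-like" means here: a class that converts implicitly to bool and is assignable from bool, as std::vector<bool>::reference is; nested assignments through such a type are now treated like plain bool for -Wparentheses:

struct bit_ref
{
  operator bool () const;     // implicitly convertible to bool
  bit_ref &operator= (bool);  // assignable from bool
};

extern bit_ref get_bit ();

void g ()
{
  // No -Wparentheses warning for the nested assignment any more,
  // matching the behaviour for a plain bool lvalue.
  bool b = get_bit () = true;
  (void) b;
}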
2023-12-21c++: [[deprecated]] on template redecl [PR84542]Patrick Palka3-3/+18
The deprecated and unavailable attributes weren't working when used on a template redeclaration ultimately because we weren't merging the corresponding tree flags in duplicate_decls. PR c++/84542 gcc/cp/ChangeLog: * decl.cc (merge_attribute_bits): Merge TREE_DEPRECATED and TREE_UNAVAILABLE. gcc/testsuite/ChangeLog: * g++.dg/ext/attr-deprecated-2.C: No longer XFAIL. * g++.dg/ext/attr-unavailable-12.C: New test.
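A hedged sketch of the previously broken case: the attribute appears only on a redeclaration of the template and is now merged by duplicate_decls, so uses warn:

template <class T> void f (T);
template <class T> [[deprecated("use g instead")]] void f (T);

void
use ()
{
  f (42);   // now warns that f is deprecated
}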
2023-12-21c++: visibility wrt template and ptrmem targs [PR70413]Patrick Palka5-4/+78
When constraining the visibility of an instantiation, we weren't properly considering the visibility of PTRMEM_CST and TEMPLATE_DECL template arguments. This patch fixes this. It turns out we don't maintain the relevant visibility flags for alias templates (e.g. TREE_PUBLIC is never set), so continue to ignore alias template template arguments for now. PR c++/70413 PR c++/107906 gcc/cp/ChangeLog: * decl2.cc (min_vis_expr_r): Handle PTRMEM_CST and TEMPLATE_DECL other than those for alias templates. gcc/testsuite/ChangeLog: * g++.dg/template/linkage2.C: New test. * g++.dg/template/linkage3.C: New test. * g++.dg/template/linkage4.C: New test. * g++.dg/template/linkage4a.C: New test.
2023-12-21omp: Fix simdclone arguments with veclen lower than simdlen [PR113040]Andre Vieira (lists)1-16/+8
This patch fixes an issue introduced by: commit ea4a3d08f11a59319df7b750a955ac613a3f438a Author: Andre Vieira <andre.simoesdiasvieira@arm.com> Date: Wed Nov 1 17:02:41 2023 +0000 omp: Reorder call for TARGET_SIMD_CLONE_ADJUST The problem was that after this patch we no longer added multiple arguments for vector arguments where the veclen was lower than the simdlen. Bootstrapped and regression tested on x86_64-pc-linux-gnu and aarch64-unknown-linux-gnu. gcc/ChangeLog: PR middle-end/113040 * omp-simd-clone.cc (simd_clone_adjust_argument_types): Add multiple vector arguments where simdlen is larger than veclen.
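A hedged sketch of the affected shape (not the PR113040 test): a simd clone whose simdlen exceeds the vector length available for the argument type, so the clone's vector argument must again be passed as several vectors:

#pragma omp declare simd simdlen(8) notinbranch
float
add_one (float x)
{
  /* With a 128-bit vector variant, the 8-lane clone receives x as two
     4 x float vectors; the fix restores that splitting of arguments.  */
  return x + 1.0f;
}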
2023-12-21i386: Fix shifts with high register input operand [PR113044]Uros Bizjak2-2/+28
The move to the output operand should use high register input operand. PR target/113044 gcc/ChangeLog: * config/i386/i386.md (*ashlqi_ext<mode>_1): Move from the high register of the input operand. (*<insn>qi_ext<mode>_1): Ditto. gcc/testsuite/ChangeLog: * gcc.target/i386/pr113044.c: New test.
2023-12-21Revert "[PR112918][LRA]: Fixing IRA ICE on m68k"Vladimir N. Makarov1-15/+11
This reverts commit 989e67f827b74b76e58abe137ce12d948af2290c.