path: root/gcc
Age         Commit message                                        Author            Files  Lines
2023-11-09  Daily bump.  GCC Administrator  1  -1/+1
2023-11-08  Daily bump.  GCC Administrator  1  -1/+1
2023-11-07  Daily bump.  GCC Administrator  2  -1/+5
2023-11-06  hppa: Fix typo in PA 2.0 trampoline template  John David Anglin  1  -1/+1

    2023-11-06  John David Anglin  <danglin@gcc.gnu.org>

            * config/pa/pa.c (pa_asm_trampoline_template): Fix typo.

2023-11-06  Daily bump.  GCC Administrator  1  -1/+1
2023-11-05  Daily bump.  GCC Administrator  1  -1/+1
2023-11-04  Daily bump.  GCC Administrator  1  -1/+1
2023-11-03  Daily bump.  GCC Administrator  1  -1/+1
2023-11-02  Daily bump.  GCC Administrator  1  -1/+1
2023-11-01  Daily bump.  GCC Administrator  1  -1/+1
2023-10-31  Daily bump.  GCC Administrator  1  -1/+1
2023-10-30  Daily bump.  GCC Administrator  1  -1/+1
2023-10-29  Daily bump.  GCC Administrator  1  -1/+1
2023-10-28  Daily bump.  GCC Administrator  1  -1/+1
2023-10-27  Daily bump.  GCC Administrator  1  -1/+1
2023-10-26  Daily bump.  GCC Administrator  1  -1/+1
2023-10-25  Daily bump.  GCC Administrator  1  -1/+1
2023-10-24  Daily bump.  GCC Administrator  3  -1/+30
2023-10-23  SH: Fix PR 111001  Oleg Endo  1  -1/+8

    gcc/ChangeLog:

            PR target/111001
            * config/sh/sh_treg_combine.cc (sh_treg_combine::record_set_of_reg):
            Skip over nop move insns.

2023-10-22  rs6000: Make 32 bit stack_protect support prefixed insn [PR111367]  Kewen Lin  2  -46/+49

    As PR111367 shows, with prefixed insns supported, some of the checks
    assume that prefixed insns can be used for the stack-protect related
    load/store, but since we don't actually change the emitted assembly
    for the 32 bit case, this can cause the assembler error as exposed.

    Mike's commit r10-4547-gce6a6c007e5a98 already handled the 64 bit
    case (DImode); this patch treats the 32 bit case (SImode) by making
    use of the mode iterator P and the ptrload attribute iterator, and
    also fixes the constraints to match the emitted operand formats.

            PR target/111367

    gcc/ChangeLog:

            * config/rs6000/rs6000.md (stack_protect_setsi): Support prefixed
            instruction emission and incorporate into stack_protect_set<mode>.
            (stack_protect_setdi): Rename to ...
            (stack_protect_set<mode>): ... this, adjust constraint.
            (stack_protect_testsi): Support prefixed instruction emission and
            incorporate into stack_protect_test<mode>.
            (stack_protect_testdi): Rename to ...
            (stack_protect_test<mode>): ... this, adjust constraint.

    gcc/testsuite/ChangeLog:

            * g++.target/powerpc/pr111367.C: New test.

    (cherry picked from commit 530babc2058be5f2b06b1541384e7b730c368b93)

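    For context, the stack_protect_set<mode>/stack_protect_test<mode>
    patterns are what the stack protector uses to plant and check the
    canary. A minimal sketch of code that exercises them (a hypothetical
    example, not the committed pr111367.C test), assuming a 32-bit POWER
    target with prefixed instructions, e.g.
    g++ -m32 -mcpu=power10 -O2 -fstack-protector-strong:

        // Hypothetical illustration: the local buffer makes the function
        // eligible for a canary under -fstack-protector-strong, so the
        // prologue and epilogue emit the stack_protect set/test sequences.
        #include <cstring>

        void copy_name (char *out, const char *in)
        {
          char buf[64];                           // addressable local forces canary insertion
          std::strncpy (buf, in, sizeof buf - 1);
          buf[sizeof buf - 1] = '\0';
          std::strcpy (out, buf);
        }
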
2023-10-23  Daily bump.  GCC Administrator  1  -1/+1
2023-10-22  Daily bump.  GCC Administrator  3  -1/+18
2023-10-21  Fortran: out of bounds access with nested implied-do IO [PR111837]  Harald Anlauf  2  -1/+19

    gcc/fortran/ChangeLog:

            PR fortran/111837
            * frontend-passes.c (traverse_io_block): Dependency check of loop
            nest shall be triangular, not banded.

    gcc/testsuite/ChangeLog:

            PR fortran/111837
            * gfortran.dg/implied_do_io_8.f90: New test.

    (cherry picked from commit 5ac63ec5da2e93226457bea4dbb3a4f78d5d82c2)

2023-10-21  Daily bump.  GCC Administrator  2  -1/+7
2023-10-20  SH: Fix PR 101177  Oleg Endo  1  -1/+1

    Fix accidentally inverted comparison.

    gcc/ChangeLog:

            PR target/101177
            * config/sh/sh.md (unnamed split pattern): Fix comparison of
            find_regno_note result.

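    The bug class is easy to state in isolation: find_regno_note returns
    the matching note or NULL, so the sense of the pointer test decides
    whether "note present" or "note absent" is acted on. A hedged,
    self-contained illustration with a hypothetical lookup helper (not the
    actual sh.md split pattern):

        // Hypothetical stand-in for a find_regno_note-style query: returns
        // the note text when one exists for the register, nullptr otherwise.
        #include <map>
        #include <string>

        static const std::map<int, std::string> notes = { { 3, "REG_DEAD" } };

        const std::string *find_note (int regno)
        {
          auto it = notes.find (regno);
          return it == notes.end () ? nullptr : &it->second;
        }

        bool reg_note_present (int regno)
        {
          // The inverted form `find_note (regno) == nullptr` would report
          // exactly the opposite condition, the kind of slip fixed here.
          return find_note (regno) != nullptr;
        }
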
2023-10-20  Daily bump.  GCC Administrator  1  -1/+1
2023-10-19  Daily bump.  GCC Administrator  1  -1/+1
2023-10-18  Daily bump.  GCC Administrator  3  -1/+18
2023-10-17  Disparage slightly the alternatives that move DFmode between SSE_REGS and GENERAL_REGS.  liuhongt  2  -2/+13

    For the testcase

        void __cond_swap(double* __x, double* __y) {
          bool __r = (*__x < *__y);
          auto __tmp = __r ? *__x : *__y;
          *__y = __r ? *__y : *__x;
          *__x = __tmp;
        }

    GCC 14 with -O2 and -march=x86-64 generates the following code:

        __cond_swap(double*, double*):
                movsd   xmm1, QWORD PTR [rdi]
                movsd   xmm0, QWORD PTR [rsi]
                comisd  xmm0, xmm1
                jbe     .L2
                movq    rax, xmm1
                movapd  xmm1, xmm0
                movq    xmm0, rax
        .L2:
                movsd   QWORD PTR [rsi], xmm1
                movsd   QWORD PTR [rdi], xmm0
                ret

    Here rax is used to save and restore the DFmode value. In RA both
    GENERAL_REGS and SSE_REGS cost zero, since we didn't disparage the
    alternative in the movdf_internal pattern, so according to the
    register allocation order GENERAL_REGS is allocated. The patch adds
    '?' to the (r,v) and (v,r) alternatives, just as we did for the
    movsf/hf/bf_internal patterns, and after that we get optimal RA:

        __cond_swap:
        .LFB0:
                .cfi_startproc
                movsd   (%rdi), %xmm1
                movsd   (%rsi), %xmm0
                comisd  %xmm1, %xmm0
                jbe     .L2
                movapd  %xmm1, %xmm2
                movapd  %xmm0, %xmm1
                movapd  %xmm2, %xmm0
        .L2:
                movsd   %xmm1, (%rsi)
                movsd   %xmm0, (%rdi)
                ret

    gcc/ChangeLog:

            PR target/110170
            * config/i386/i386.md (movdf_internal): Disparage slightly the
            two alternatives (r,v) and (v,r) by adding constraint modifier
            '?'.

    gcc/testsuite/ChangeLog:

            * gcc.target/i386/pr110170-3.c: New test.

    (cherry picked from commit 37a231cc7594d12ba0822077018aad751a6fb94e)

2023-10-17  Daily bump.  GCC Administrator  1  -1/+1
2023-10-16  Daily bump.  GCC Administrator  1  -1/+1
2023-10-15  Daily bump.  GCC Administrator  1  -1/+1
2023-10-14  Daily bump.  GCC Administrator  1  -1/+1
2023-10-13  Daily bump.  GCC Administrator  1  -1/+1
2023-10-12  Daily bump.  GCC Administrator  1  -1/+1
2023-10-11  Daily bump.  GCC Administrator  1  -1/+1
2023-10-10  Daily bump.  GCC Administrator  1  -1/+1
2023-10-09  Daily bump.  GCC Administrator  1  -1/+1
2023-10-08  Daily bump.  GCC Administrator  3  -1/+18
2023-10-07  MATCH: Fix infinite loop between `vec_cond(vec_cond(a,b,0), c, d)` and `a & b`  Andrew Pinski  2  -0/+12

    Match has a pattern which converts `vec_cond(vec_cond(a,b,0), c, d)`
    into `vec_cond(a & b, c, d)`, but since in this case a is a comparison,
    fold will change `a & b` back into `vec_cond(a,b,0)`, which causes an
    infinite loop.

    The best way to fix this is to enable the patterns for
    vec_cond(*,vec_cond,*) only for GIMPLE, so we don't get an infinite
    loop for fold any more.

    Note this is a latent bug: these patterns were added in
    r11-2577-g229752afe3156a and it was exposed by r14-3350-g47b833a9abe1,
    where we are now able to remove a VIEW_CONVERT_EXPR.

    OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.

            PR middle-end/111699

    gcc/ChangeLog:

            * match.pd ((c ? a : b) op d, (c ? a : b) op (c ? d : e),
            (v ? w : 0) ? a : b, c1 ? c2 ? a : b : b): Enable only for GIMPLE.

    gcc/testsuite/ChangeLog:

            * gcc.c-torture/compile/pr111699-1.c: New test.

    (cherry picked from commit e77428a9a336f57e3efe3eff95f2b491d7e9be14)

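    A hedged sketch of the kind of source that reaches these patterns (a
    hypothetical example, not the committed pr111699-1.c test): two vector
    comparisons combined with '&' and then used as a selector, i.e. a
    vec_cond whose condition is itself built from comparisons, written
    with GCC's generic vector extensions and compiled as C++ (where the
    vector ?: operator is available):

        // select_in_range is a hypothetical function; the vector '?:'
        // lowers to a VEC_COND_EXPR whose condition is the and-ed
        // comparison results, i.e. vec_cond (in_range, c, d).
        typedef int v4si __attribute__ ((vector_size (16)));

        v4si select_in_range (v4si x, v4si lo, v4si hi, v4si c, v4si d)
        {
          v4si in_range = (x >= lo) & (x <= hi);   // comparison results and-ed together
          return in_range ? c : d;                 // selector built from the combined mask
        }
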
2023-10-07  Daily bump.  GCC Administrator  1  -1/+1
2023-10-06  Daily bump.  GCC Administrator  1  -1/+1
2023-10-05  Daily bump.  GCC Administrator  1  -1/+1
2023-10-04  Daily bump.  GCC Administrator  1  -1/+1
2023-10-03  Daily bump.  GCC Administrator  3  -1/+26
2023-10-02  Disable generation of scalar modulo instructions.  Pat Haugen  8  -29/+72

    It was recently discovered that the scalar modulo instructions can
    suffer noticeable performance issues for certain input values. This
    patch disables their generation since the equivalent div/mul/sub
    sequence does not suffer the same problem.

    gcc/
            * config/rs6000/rs6000.c (rs6000_rtx_costs): Check whether the
            modulo instruction is disabled.
            * config/rs6000/rs6000.h (RS6000_DISABLE_SCALAR_MODULO): New.
            * config/rs6000/rs6000.md (mod<mode>3, *mod<mode>3): Check it.
            (define_expand umod<mode>3): New.
            (define_insn umod<mode>3): Rename to *umod<mode>3 and check if
            the modulo instruction is disabled.
            (umodti3, modti3): Check if the modulo instruction is disabled.

    gcc/testsuite/
            * gcc.target/powerpc/clone1.c: Add xfails.
            * gcc.target/powerpc/clone3.c: Likewise.
            * gcc.target/powerpc/mod-1.c: Update scan strings and add xfails.
            * gcc.target/powerpc/mod-2.c: Likewise.
            * gcc.target/powerpc/p10-vdivq-vmodq.c: Add xfails.

    (cherry picked from commit 58ab38213b979811d314f68e3f455c28a1d44140)

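    The replacement relies on the usual identity: for nonzero b,
    a % b == a - (a / b) * b under C/C++ truncating division. A small
    worked example (hypothetical helper, not part of the patch):

        // mod_via_div computes the remainder with the div/mul/sub sequence
        // the patch prefers over the scalar modulo instruction.
        #include <cassert>

        long long mod_via_div (long long a, long long b)
        {
          long long q = a / b;        // divide
          return a - q * b;           // multiply and subtract recover the remainder
        }

        int main ()
        {
          assert (mod_via_div (7, 3) == 7 % 3);
          assert (mod_via_div (-7, 3) == -7 % 3);   // truncation toward zero keeps the signs consistent
          return 0;
        }
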
2023-10-02  Daily bump.  GCC Administrator  1  -1/+1
2023-10-01  Daily bump.  GCC Administrator  1  -1/+1
2023-09-30  Daily bump.  GCC Administrator  1  -1/+1
2023-09-29  Daily bump.  GCC Administrator  1  -1/+1