path: root/gcc
Age | Commit message | Author | Files | Lines
2021-01-14use sigjmp_buf for analyzer sigsetjmp testsAlexandre Oliva2-2/+2
The sigsetjmp analyzer tests use jmp_buf in sigsetjmp and siglongjmp calls. Not every system that supports sigsetjmp uses the same data structure for setjmp and sigsetjmp, which results in type mismatches. This patch changes the tests to use sigjmp_buf, which is the POSIX type for use with sigsetjmp and siglongjmp.

for gcc/testsuite/ChangeLog

	* gcc.dg/analyzer/sigsetjmp-5.c: Use sigjmp_buf.
	* gcc.dg/analyzer/sigsetjmp-6.c: Likewise.
2021-01-14declare getpass in analyzer/sensitive-1.c testAlexandre Oliva1-0/+5
The getpass function is not available on all systems, and not necessarily declared in unistd.h, as expected by the sensitive-1 analyzer test. Since this is a compile-only test, it doesn't really matter whether the function is defined in the system libraries. All we need is a declaration, to avoid warnings from calling an undeclared function. This patch adds the declaration, in a way that is unlikely to conflict with any existing declaration.

for gcc/testsuite/ChangeLog

	* gcc.dg/analyzer/sensitive-1.c: Declare getpass.
2021-01-14[gcn offloading] Only supported in 64-bit configurationsThomas Schwinge1-126/+134
Similar to nvptx offloading, see PR65099 "nvptx offloading: hard-coded 64-bit assumptions".

gcc/
	* config/gcn/mkoffload.c (main): Create an offload image only in 64-bit configurations.
2021-01-14PR fortran/98661 - valgrind issues with error recoveryHarald Anlauf2-0/+23
During error recovery after an invalid derived type specification it was possible to try to resolve an invalid array specification. We now skip this if the component has the ALLOCATABLE or POINTER attribute and the shape is not deferred.

gcc/fortran/ChangeLog:
	PR fortran/98661
	* resolve.c (resolve_component): Derived type components with ALLOCATABLE or POINTER attribute shall have a deferred shape.

gcc/testsuite/ChangeLog:
	PR fortran/98661
	* gfortran.dg/pr98661.f90: New test.
2021-01-14Revert "PR fortran/98661 - valgrind issues with error recovery"Harald Anlauf2-42/+7
This reverts commit d0d2becf2dfe8316c9014d962e7f77773ec5c27e.
2021-01-14PR fortran/98661 - valgrind issues with error recoveryHarald Anlauf2-7/+42
During error recovery after an invalid derived type specification it was possible to try to resolve an invalid array specification. We now skip this if the component has the ALLOCATABLE or POINTER attribute and the shape is not deferred.

gcc/fortran/ChangeLog:
	PR fortran/98661
	* resolve.c (resolve_component): Derived type components with ALLOCATABLE or POINTER attribute shall have a deferred shape.

gcc/testsuite/ChangeLog:
	PR fortran/98661
	* gfortran.dg/pr98661.f90: New test.
2021-01-14libgo: update hurd supportIan Lance Taylor1-1/+1
Patch from Svante Signell.

Fixes PR go/98496

Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/283692
2021-01-14RTEMS: Fix Ada build for riscvSebastian Huber1-0/+5
gcc/ada/
	PR ada/98595
	* Makefile.rtl (LIBGNAT_TARGET_PAIRS) <riscv*-*-rtems*>: Use wraplf version of Aux_Long_Long_Float.
2021-01-14gcov: add one more pytestMartin Liska2-0/+77
gcc/testsuite/ChangeLog:
	* g++.dg/gcov/gcov-17.C: New test.
	* g++.dg/gcov/test-gcov-17.py: New test.
2021-01-14x86: Error on -fcf-protection with incompatible targetH.J. Lu4-1/+32
-fcf-protection with CF_BRANCH inserts ENDBR32 at function entries. ENDBR32 is a NOP only on 64-bit processors and 32-bit TARGET_CMOV processors. Issue an error for -fcf-protection with CF_BRANCH when compiling for 32-bit non-TARGET_CMOV targets.

gcc/
	PR target/98667
	* config/i386/i386-options.c (ix86_option_override_internal): Issue an error for -fcf-protection with CF_BRANCH when compiling for 32-bit non-TARGET_CMOV targets.

gcc/testsuite/
	PR target/98667
	* gcc.target/i386/pr98667-1.c: New file.
	* gcc.target/i386/pr98667-2.c: Likewise.
	* gcc.target/i386/pr98667-3.c: Likewise.
2021-01-14i386: Resolve variable shadowing in i386-options.c [PR98671]Uros Bizjak4-7/+5
Also change global variable pta_size to unsigned.

2021-01-14  Uroš Bizjak  <ubizjak@gmail.com>

gcc/
	PR target/98671
	* config/i386/i386-options.c (ix86_valid_target_attribute_inner_p): Remove declaration and initialization of shadow variable "ret".
	(ix86_option_override_internal): Remove declaration of shadow variable "i". Redeclare shadowed variable to unsigned.
	* common/config/i386/i386-common.c (pta_size): Redeclare to unsigned.
	* config/i386/i386-builtins.c (get_builtin_code_for_version): Update for redeclaration.
	* config/i386/i386.h (pta_size): Ditto.
2021-01-14tree-optimization/98674 - improve dependence analysisRichard Biener2-2/+40
This improves dependence analysis on refs that access the same array but with accesses of different type and the same size. That's obviously safe for the case of types that cannot have any access function based off them. For the testcase this is signed short vs. unsigned short.

2021-01-14  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/98674
	* tree-data-ref.c (base_supports_access_fn_components_p): New.
	(initialize_data_dependence_relation): For two bases without possible access fns resort to type size equality when determining shape compatibility.

	* gcc.dg/vect/pr98674.c: New testcase.
2021-01-14i386: Update PR target/95021 testsH.J. Lu2-2/+2
Also pass -mpreferred-stack-boundary=4 -mno-stackrealign to avoid disabling STV by:

  /* Disable STV if -mpreferred-stack-boundary={2,3} or
     -mincoming-stack-boundary={2,3} or -mstackrealign - the needed
     stack realignment will be extra cost the pass doesn't take into
     account and the pass can't realign the stack.  */
  if (ix86_preferred_stack_boundary < 128
      || ix86_incoming_stack_boundary < 128
      || opts->x_ix86_force_align_arg_pointer)
    opts->x_target_flags &= ~MASK_STV;

	PR target/98676
	* gcc.target/i386/pr95021-1.c: Add -mpreferred-stack-boundary=4 -mno-stackrealign.
	* gcc.target/i386/pr95021-3.c: Likewise.
2021-01-14arm: Replace calls to __builtin_vcge* by <=,>= in arm_neon.h [PR66791]Prathamesh Kulkarni2-30/+28
gcc/
2021-01-14  Prathamesh Kulkarni  <prathamesh.kulkarni@linaro.org>

	PR target/66791
	* config/arm/arm_neon.h: Replace calls to __builtin_vcge* by <=, >= operators in vcle and vcge intrinsics respectively.
	* config/arm/arm_neon_builtins.def: Remove entry for vcge and vcgeu.
2021-01-14c++: Fix erroneous parm comparison logic [PR 98372]Nathan Sidwell3-2/+31
I flubbed an application of De Morgan's law. Let's just express the logic directly and let the compiler figure it out. This bug made it look like pr52830 was fixed, but it is not.

	PR c++/98372
gcc/cp/
	* tree.c (cp_tree_equal): Correct map_context logic.
gcc/testsuite/
	* g++.dg/cpp0x/constexpr-52830.C: Restore dg-ice
	* g++.dg/template/pr98372.C: New.
2021-01-14i386: Remove redundant assignment in i386-options.c [PR98671]Uros Bizjak4-10/+9
Also rename x86_prefetch_sse to ix86_prefetch_sse.

2021-01-14  Uroš Bizjak  <ubizjak@gmail.com>

gcc/
	PR target/98671
	* config/i386/i386-options.c (ix86_function_specific_save): Remove redundant assignment to opts->x_ix86_branch_cost.
	* config/i386/i386.c (ix86_prefetch_sse): Rename from x86_prefetch_sse. Update all uses.
	* config/i386/i386.h: Update for rename.
	* config/i386/i386-options.h: Ditto.
2021-01-14i386: Fix the pmovzx SSE4.1 define_insn_and_split patterns [PR98670]Jakub Jelinek2-6/+25
I've made two mistakes in the *sse4_1_zero_extend* define_insn_and_split patterns. One is that when it uses vector_operand, it should use the Bm rather than the m constraint, and the other is that because it is a post-reload splitter it needs an isa attribute to select which alternatives are valid for which ISAs. Sorry for messing this up.

2021-01-14  Jakub Jelinek  <jakub@redhat.com>

	PR target/98670
	* config/i386/sse.md (*sse4_1_zero_extendv8qiv8hi2_3, *sse4_1_zero_extendv4hiv4si2_3, *sse4_1_zero_extendv2siv2di2_3): Use Bm instead of m for non-avx. Add isa attribute.

	* gcc.target/i386/pr98670.c: New test.
2021-01-14match.pd: Optimize ~(X >> Y) to ~X >> Y if ~X can be simplified [PR96688]Jakub Jelinek4-2/+38
This patch optimizes two GIMPLE operations into just one. As mentioned in the PR, there is some risk this might create more expensive constants, but sometimes it will instead make them less expensive; it really depends on the exact value. And if it is an important issue, we should handle it in the md or during expansion.

2021-01-14  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/96688
	* match.pd (~(X >> Y) -> ~X >> Y): New simplification if ~X can be simplified.

	* gcc.dg/tree-ssa/pr96688.c: New test.
	* gcc.dg/tree-ssa/reassoc-37.c: Adjust scan-tree-dump regex.
	* gcc.target/i386/pr66821.c: Likewise.
2021-01-14vect: Account for unused IFN_LOAD_LANES resultsRichard Sandiford3-1/+37
At the moment, if we use only one vector of an LD4 result, we'll treat the LD4 as having the cost of a single load. But all 4 loads and any associated permutes take place regardless of which results are actually used. This patch therefore counts the cost of unused LOAD_LANES results against the first statement in a group. An alternative would be to multiply the ncopies of the first stmt by the group size and treat other stmts in the group as having zero cost, but I thought that might be more surprising when reading dumps.

gcc/
	* tree-vect-stmts.c (vect_model_load_cost): Account for unused IFN_LOAD_LANES results.

gcc/testsuite/
	* gcc.target/aarch64/sve/cost_model_11.c: New test.
	* gcc.target/aarch64/sve/mask_struct_load_5.c: Use -fno-vect-cost-model.
2021-01-14aarch64: Reimplement vmovn/vmovl intrinsics with builtins insteadKyrylo Tkachov3-12/+33
Turns out __builtin_convertvector is not as good a fit for the widening and narrowing intrinsics as I had hoped. During the veclower phase we lower most of it to bitfield operations and hope DCE cleans it back up into vector pack/unpack and extend operations. I received reports that in more complex cases GCC fails to do that and we're left with many vector extract operations that clutter the output. I think veclower can be improved on that front, but for GCC 10 I'd like to just implement these intrinsics with a good old RTL builtin rather than inline asm.

gcc/
	* config/aarch64/aarch64-simd.md (aarch64_<su>xtl<mode>): Define.
	(aarch64_xtn<mode>): Likewise.
	* config/aarch64/aarch64-simd-builtins.def (sxtl, uxtl, xtn): Define builtins.
	* config/aarch64/arm_neon.h (vmovl_s8): Reimplement using builtin.
	(vmovl_s16): Likewise.
	(vmovl_s32): Likewise.
	(vmovl_u8): Likewise.
	(vmovl_u16): Likewise.
	(vmovl_u32): Likewise.
	(vmovn_s16): Likewise.
	(vmovn_s32): Likewise.
	(vmovn_s64): Likewise.
	(vmovn_u16): Likewise.
	(vmovn_u32): Likewise.
	(vmovn_u64): Likewise.
2021-01-14aarch64: reimplement vqmovn_high* intrinsics using builtinsKyrylo Tkachov5-39/+57
This patch reimplements the saturating-truncate-and-insert-into-high intrinsics using the appropriate RTL codes and builtins.

gcc/
	* config/aarch64/aarch64-simd.md (aarch64_<su>qxtn2<mode>_le): Define.
	(aarch64_<su>qxtn2<mode>_be): Likewise.
	(aarch64_<su>qxtn2<mode>): Likewise.
	* config/aarch64/aarch64-simd-builtins.def (sqxtn2, uqxtn2): Define builtins.
	* config/aarch64/iterators.md (SAT_TRUNC): Define code_iterator.
	(su): Handle ss_truncate and us_truncate.
	* config/aarch64/arm_neon.h (vqmovn_high_s16): Reimplement using builtin.
	(vqmovn_high_s32): Likewise.
	(vqmovn_high_s64): Likewise.
	(vqmovn_high_u16): Likewise.
	(vqmovn_high_u32): Likewise.
	(vqmovn_high_u64): Likewise.

gcc/testsuite/
	* gcc.target/aarch64/narrow_high-intrinsics.c: Update uqxtn2 and sqxtn2 scan-assembler-times.
2021-01-14aarch64: Reimplement vmovn_high_* intrinsics using builtinsKyrylo Tkachov4-37/+49
The vmovn_high* intrinsics are supposed to map to XTN2 instructions that narrow their source vector and insert it into the top half of the destination vector. This patch reimplements them away from inline assembly to an RTL builtin that performs a vec_concat with a truncate.

gcc/
	* config/aarch64/aarch64-simd.md (aarch64_xtn2<mode>_le): Define.
	(aarch64_xtn2<mode>_be): Likewise.
	(aarch64_xtn2<mode>): Likewise.
	* config/aarch64/aarch64-simd-builtins.def (xtn2): Define builtins.
	* config/aarch64/arm_neon.h (vmovn_high_s16): Reimplement using builtins.
	(vmovn_high_s32): Likewise.
	(vmovn_high_s64): Likewise.
	(vmovn_high_u16): Likewise.
	(vmovn_high_u32): Likewise.
	(vmovn_high_u64): Likewise.

gcc/testsuite/
	* gcc.target/aarch64/narrow_high-intrinsics.c: Adjust scan-assembler-times for xtn2.
2021-01-14Daily bump.GCC Administrator4-1/+258
2021-01-14or1k: Fixup exception header data encodingsStafford Horne1-0/+4
While running glibc tests several *-textrel tests failed, showing that relocations remained against read-only sections. It turned out this was related to the exception header data encoding being wrong. By default pointer encoding will always use the DW_EH_PE_absptr format. This patch uses the formats DW_EH_PE_pcrel and DW_EH_PE_sdata4. Optionally DW_EH_PE_indirect is included for global symbols. This eliminates the relocations.

gcc/ChangeLog:
	* config/or1k/or1k.h (ASM_PREFERRED_EH_DATA_FORMAT): New macro.
2021-01-14or1k: Add note to indicate execstackStafford Horne1-0/+2
Define TARGET_ASM_FILE_END as file_end_indicate_exec_stack to allow generation of the ".note.GNU-stack" section note. This allows binutils to properly set PT_GNU_STACK in the program header. This fixes a glibc execstack testsuite test failure found while working on the OpenRISC glibc port.

gcc/ChangeLog:
	* config/or1k/linux.h (TARGET_ASM_FILE_END): Define macro.
2021-01-14or1k: Add builtin define to detect hard floatStafford Horne1-0/+2
This is used in libgcc and now glibc to detect when hardware floating point operations are supported by the target.

gcc/ChangeLog:
	* config/or1k/or1k.h (TARGET_CPU_CPP_BUILTINS): Add builtin define for __or1k_hard_float__.
2021-01-14or1k: Implement profile hook calling _mcountStafford Horne1-2/+13
Define this hook so that it does not abort, a problem found when running tests from the glibc test suite. We implement this with a call to _mcount with no arguments; the required return addresses will be pulled from the stack. Passing the LR (r9) as an argument had problems, as sometimes r9 is clobbered by the GOT logic in the prologue before the call to _mcount.

gcc/ChangeLog:
	* config/or1k/or1k.h (NO_PROFILE_COUNTERS): Define as 1.
	(PROFILE_HOOK): Define to call _mcount.
	(FUNCTION_PROFILER): Change from abort to no-op.
2021-01-13c++: Failure to lookup using-decl name [PR98231]Marek Polacek4-0/+31
In r11-4690 we removed the call to finish_nonmember_using_decl in tsubst_expr/DECL_EXPR in the USING_DECL block. This was done not to perform name lookup twice for a non-dependent using-decl, which sounds sensible. However, finish_nonmember_using_decl also pushes the decl's bindings, which we still have to do so that we can find the USING_DECL's name later. In this case, we've got a USING_DECL N::operator<< that we are tsubstituting. We already looked it up while parsing the template "foo", and lookup_using_decl stashed the OVERLOAD it found into USING_DECL_DECLS. Now we just have to update the IDENTIFIER_BINDING of the identifier for operator<< with the overload the name is bound to. I didn't want to export push_local_binding so I've introduced a new wrapper.

gcc/cp/ChangeLog:
	PR c++/98231
	* name-lookup.c (push_using_decl_bindings): New.
	* name-lookup.h (push_using_decl_bindings): Declare.
	* pt.c (tsubst_expr): Call push_using_decl_bindings.

gcc/testsuite/ChangeLog:
	PR c++/98231
	* g++.dg/lookup/using63.C: New test.
2021-01-13match.pd: Fold (~X | C) ^ D into (X | C) ^ (~D ^ C) if (~D ^ C) can be simplified [PR96691]Jakub Jelinek2-1/+32
These simplifications are only simplifications if the (~D ^ C) or (D ^ C) expressions fold into gimple vals, but in that case they decrease the number of operations by 1.

2021-01-13  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/96691
	* match.pd ((~X | C) ^ D -> (X | C) ^ (~D ^ C), (~X & C) ^ D -> (X & C) ^ (D ^ C)): New simplifications if (~D ^ C) or (D ^ C) can be simplified.

	* gcc.dg/tree-ssa/pr96691.c: New test.
2021-01-13tree-optimization/92645 - avoid harmful early BIT_FIELD_REF canonicalizationRichard Biener4-5/+31
This avoids canonicalizing BIT_FIELD_REF <T1> (a, <sz>, 0) to (T1)a on integer typed a. This confuses the vectorizer SLP matching. With this delayed to after vector lowering the testcase in PR92645 from Skia is now finally optimized to reasonable assembly.

2021-01-13  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/92645
	* match.pd (BIT_FIELD_REF to conversion): Delay canonicalization until after vector lowering.

	* gcc.target/i386/pr92645-7.c: New testcase.
	* gcc.dg/tree-ssa/ssa-fre-54.c: Adjust.
	* gcc.dg/pr69047.c: Likewise.
2021-01-13c++: Fix cp_build_function_call_vec [PR 98626]Nathan Sidwell1-2/+2
I misunderstood the cp_build_function_call_vec API, thinking a NULL vector was an acceptable way of passing no arguments. You need to pass a vector of no elements.

	PR c++/98626
gcc/cp/
	* module.cc (module_add_import_initializers): Pass a zero-element argument vector.
2021-01-13aarch64: Add support for unpacked SVE MLS and MSBRichard Sandiford7-44/+246
This patch extends the MLS/MSB patterns to support unpacked integer vectors. The type suffix could be either the element size or the container size, but using the element size should be more efficient.

gcc/
	* config/aarch64/aarch64-sve.md (fnma<mode>4): Extend from SVE_FULL_I to SVE_I.
	(@aarch64_pred_fnma<mode>, cond_fnma<mode>, *cond_fnma<mode>_2)
	(*cond_fnma<mode>_4, *cond_fnma<mode>_any): Likewise.

gcc/testsuite/
	* gcc.target/aarch64/sve/mls_2.c: New test.
	* g++.target/aarch64/sve/cond_mls_1.C: Likewise.
	* g++.target/aarch64/sve/cond_mls_2.C: Likewise.
	* g++.target/aarch64/sve/cond_mls_3.C: Likewise.
	* g++.target/aarch64/sve/cond_mls_4.C: Likewise.
	* g++.target/aarch64/sve/cond_mls_5.C: Likewise.
2021-01-13aarch64: Add support for unpacked SVE MLA and MADRichard Sandiford7-44/+246
This patch extends the MLA/MAD patterns to support unpacked integer vectors. The type suffix could be either the element size or the container size, but using the element size should be more efficient.

gcc/
	* config/aarch64/aarch64-sve.md (fma<mode>4): Extend from SVE_FULL_I to SVE_I.
	(@aarch64_pred_fma<mode>, cond_fma<mode>, *cond_fma<mode>_2)
	(*cond_fma<mode>_4, *cond_fma<mode>_any): Likewise.

gcc/testsuite/
	* gcc.target/aarch64/sve/mla_2.c: New test.
	* g++.target/aarch64/sve/cond_mla_1.C: Likewise.
	* g++.target/aarch64/sve/cond_mla_2.C: Likewise.
	* g++.target/aarch64/sve/cond_mla_3.C: Likewise.
	* g++.target/aarch64/sve/cond_mla_4.C: Likewise.
	* g++.target/aarch64/sve/cond_mla_5.C: Likewise.
2021-01-13tree-optimization/92645 - improve SLP with existing vectorsRichard Biener2-2/+63
This improves SLP discovery in the face of existing vectors, allowing punning of the vector shape (or even punning from an integer type). For punning from integer types this does not yet handle lane zero extraction being represented as a conversion rather than a BIT_FIELD_REF.

2021-01-13  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/92645
	* tree-vect-slp.c (vect_build_slp_tree_1): Relax supported BIT_FIELD_REF argument.
	(vect_build_slp_tree_2): Record the desired vector type on the external vector def.
	(vectorizable_slp_permutation): Handle required punning of existing vector defs.

	* gcc.target/i386/pr92645-6.c: New testcase.
2021-01-13aarch64: Tighten condition on sve/sel* testsRichard Sandiford3-3/+3
Noticed while testing on a different machine that the sve/sel_*.c tests require .variant_pcs support but don't test for it. .variant_pcs post-dates SVE, so there shouldn't be a need to test for both.

gcc/testsuite/
	* gcc.target/aarch64/sve/sel_1.c: Require aarch64_variant_pcs.
	* gcc.target/aarch64/sve/sel_2.c: Likewise.
	* gcc.target/aarch64/sve/sel_3.c: Likewise.
2021-01-13rtl-ssa: Fix reversed comparisons in accesses.h commentRichard Sandiford1-4/+4
Noticed while looking at something else that the comment above def_lookup got the description of the comparisons the wrong way round.

gcc/
	* rtl-ssa/accesses.h (def_lookup): Fix order of comparison results.
2021-01-13sh: Remove match_scratch operand testRichard Sandiford1-2/+1
This patch fixes a regression on sh4 introduced by the rtl-ssa stuff. The port had a pattern:

(define_insn "movsf_ie"
  [(set (match_operand:SF 0 "general_movdst_operand"
	 "=f,r,f,f,fy, f,m, r, r,m,f,y,y,rf,r,y,<,y,y")
	(match_operand:SF 1 "general_movsrc_operand"
	 " f,r,G,H,FQ,mf,f,FQ,mr,r,y,f,>,fr,y,r,y,>,y"))
   (use (reg:SI FPSCR_MODES_REG))
   (clobber (match_scratch:SI 2 "=X,X,X,X,&z, X,X, X, X,X,X,X,X, y,X,X,X,X,X"))]
  "TARGET_SH2E
   && (arith_reg_operand (operands[0], SFmode)
       || fpul_operand (operands[0], SFmode)
       || arith_reg_operand (operands[1], SFmode)
       || fpul_operand (operands[1], SFmode)
       || arith_reg_operand (operands[2], SImode))"

But recog can generate this pattern from something that matches:

  [(set (match_operand:SF 0 "general_movdst_operand")
	(match_operand:SF 1 "general_movsrc_operand"))
   (use (reg:SI FPSCR_MODES_REG))]

with recog adding the (clobber (match_scratch:SI)) automatically. recog tests the C condition before adding the clobber, so there might not be an operands[2] to test. Similarly, gen_movsf_ie takes only two arguments, with operand 2 being filled in automatically. The only way to create this pattern with a REG operands[2] before RA would be to generate it directly from RTL. AFAICT the only things that do this are the secondary reload patterns, which are generated during RA and come with pre-vetted operands. arith_reg_operand rejects 6 specific registers:

  return (regno != T_REG && regno != PR_REG
	  && regno != FPUL_REG && regno != FPSCR_REG
	  && regno != MACH_REG && regno != MACL_REG);

The fpul_operand tests allow FPUL_REG, leaving 5 invalid registers. However, in all alternatives of movsf_ie, either operand 0 or operand 1 is a register that belongs to r, f or y, none of which include any of the 5 rejected registers. This means that any post-RA pattern would satisfy the operands[0] or operands[1] condition without the operands[2] test being necessary.

gcc/
	* config/sh/sh.md (movsf_ie): Remove operands[2] test.
2021-01-13Hurd: Enable ifunc by defaultSamuel Thibault1-1/+3
The binutils bugs seem to have been fixed.

gcc/
	* config.gcc [$target == *-*-gnu*]: Enable 'default_gnu_indirect_function'.
2021-01-13i386, expand: Optimize also 256-bit and 512-bit permutations as vpmovzx if possible [PR95905]Jakub Jelinek13-11/+388
The following patch implements what I've talked about, i.e. to no longer force operands of vec_perm_const into registers in the generic code, but let each of the (currently 8) targets force them into registers individually, giving the targets better control over whether it does that and when, and allowing them to do something special with some particular operands. And then it defines the define_insn_and_split for the 256-bit and 512-bit permutations into vpmovzx* (only the bw, wd and dq cases; in theory we could add define_insn_and_split patterns also for the bd, bq and wq).

2021-01-13  Jakub Jelinek  <jakub@redhat.com>

	PR target/95905
	* optabs.c (expand_vec_perm_const): Don't force v0 and v1 into registers before calling targetm.vectorize.vec_perm_const, only after that.
	* config/i386/i386-expand.c (ix86_vectorize_vec_perm_const): Handle two argument permutation when one operand is zero vector and only after that force operands into registers.
	* config/i386/sse.md (*avx2_zero_extendv16qiv16hi2_1): New define_insn_and_split pattern.
	(*avx512bw_zero_extendv32qiv32hi2_1): Likewise.
	(*avx512f_zero_extendv16hiv16si2_1): Likewise.
	(*avx2_zero_extendv8hiv8si2_1): Likewise.
	(*avx512f_zero_extendv8siv8di2_1): Likewise.
	(*avx2_zero_extendv4siv4di2_1): Likewise.
	* config/mips/mips.c (mips_vectorize_vec_perm_const): Force operands into registers.
	* config/arm/arm.c (arm_vectorize_vec_perm_const): Likewise.
	* config/sparc/sparc.c (sparc_vectorize_vec_perm_const): Likewise.
	* config/ia64/ia64.c (ia64_vectorize_vec_perm_const): Likewise.
	* config/aarch64/aarch64.c (aarch64_vectorize_vec_perm_const): Likewise.
	* config/rs6000/rs6000.c (rs6000_vectorize_vec_perm_const): Likewise.
	* config/gcn/gcn.c (gcn_vectorize_vec_perm_const): Likewise. Use std::swap.

	* gcc.target/i386/pr95905-2.c: Use scan-assembler-times instead of scan-assembler. Add tests with zero vector as first __builtin_shuffle operand.
	* gcc.target/i386/pr95905-3.c: New test.
	* gcc.target/i386/pr95905-4.c: New test.
2021-01-13if-to-switch: fix also virtual phisMartin Liska2-7/+23
gcc/ChangeLog:
	PR tree-optimization/98455
	* gimple-if-to-switch.cc (condition_info::record_phi_mapping): Record also virtual PHIs.
	(pass_if_to_switch::execute): Return TODO_cleanup_cfg only conditionally.

gcc/testsuite/ChangeLog:
	PR tree-optimization/98455
	* gcc.dg/tree-ssa/pr98455.c: New test.
2021-01-13doc: Fix typos in C++ Modules documentationJonathan Wakely1-2/+2
gcc/ChangeLog:
	* doc/invoke.texi (C++ Modules): Fix typos.
2021-01-13tree-optimization/98640 - fix bogus sign-extension with VNRichard Biener2-6/+31
VN tried to express a sign extension from int to long of a truncated quantity with a plain conversion, but that loses the truncation. Since there's no single operand doing truncate plus sign extend (there was a proposed SEXT_EXPR to do that at some point, mapping to RTL sign_extract) don't bother to appropriately model this with two ops (which the VN insert machinery doesn't handle and which is unlikely to CSE fully).

2021-01-13  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/98640
	* tree-ssa-sccvn.c (visit_nary_op): Do not try to handle plus or minus from a truncated operand to be sign-extended.

	* gcc.dg/torture/pr98640.c: New testcase.
2021-01-13i386: Add define_insn_and_split patterns for btrl [PR96938]Jakub Jelinek2-0/+131
In the following testcase we only optimize f2 and f7 to btrl, although we should optimize all of the functions that way. The problem is the type demotion/narrowing (which is performed solely during the generic folding and not later); without it we see the AND performed in SImode and match it as btrl, but with it, while the shifts are still performed in SImode, the AND is already done in the QImode or HImode low part of the shift.

2021-01-13  Jakub Jelinek  <jakub@redhat.com>

	PR target/96938
	* config/i386/i386.md (*btr<mode>_1, *btr<mode>_2): New define_insn_and_split patterns.
	(splitter after *btr<mode>_2): New splitter.

	* gcc.target/i386/pr96938.c: New test.
2021-01-13ipa: remove dead codeMartin Liska1-2/+0
gcc/ChangeLog:
	PR ipa/98652
	* cgraphunit.c (analyze_functions): Remove dead code.
2021-01-13[PATCH v2] aarch64: Add cpu cost tables for A64FXQian Jianhua2-4/+171
This patch adds cost tables for A64FX.

2021-01-13  Qian jianhua  <qianjh@cn.fujitsu.com>

gcc/
	* config/aarch64/aarch64-cost-tables.h (a64fx_extra_costs): New.
	* config/aarch64/aarch64.c (a64fx_addrcost_table): New.
	(a64fx_regmove_cost, a64fx_vector_cost): New.
	(a64fx_tunings): Use the newly added cost tables.
2021-01-13i386: Optimize _mm_unpacklo_epi8 of 0 vector as second argument or similar VEC_PERM_EXPRs into pmovzx [PR95905]Jakub Jelinek4-0/+188
The following patch adds patterns (so far 128-bit only) for permutations like { 0 16 1 17 2 18 3 19 4 20 5 21 6 22 7 23 } where the second operand is a CONST0_RTX CONST_VECTOR, to be emitted as pmovzx.

2021-01-13  Jakub Jelinek  <jakub@redhat.com>

	PR target/95905
	* config/i386/predicates.md (pmovzx_parallel): New predicate.
	* config/i386/sse.md (*sse4_1_zero_extendv8qiv8hi2_3): New define_insn_and_split pattern.
	(*sse4_1_zero_extendv4hiv4si2_3): Likewise.
	(*sse4_1_zero_extendv2siv2di2_3): Likewise.

	* gcc.target/i386/pr95905-1.c: New test.
	* gcc.target/i386/pr95905-2.c: New test.
2021-01-12amdgcn: Remove dead code for fixed v0 registerJulian Brown1-4/+0
This patch removes code to fix the v0 register in gcn_conditional_register_usage that was missed out of the previous patch removing the need for that:

https://gcc.gnu.org/pipermail/gcc-patches/2019-November/534284.html

2021-01-13  Julian Brown  <julian@codesourcery.com>

gcc/
	* config/gcn/gcn.c (gcn_conditional_register_usage): Remove dead code to fix v0 register.
2021-01-12amdgcn: Fix exec register live-on-entry to BB in md-reorgJulian Brown1-1/+16
This patch fixes a corner case in the AMD GCN md-reorg pass when the EXEC register is live on entry to a BB, and could be clobbered by code inserted by the pass before a use in (e.g.) a different BB.

2021-01-13  Julian Brown  <julian@codesourcery.com>

gcc/
	* config/gcn/gcn.c (gcn_md_reorg): Fix case where EXEC reg is live on entry to a BB.
2021-01-12amdgcn: Improve FP division accuracyJulian Brown3-20/+81
GCN has a reciprocal-approximation instruction but no hardware divide. This patch adjusts the open-coded reciprocal approximation/Newton-Raphson refinement steps to use fused multiply-add instructions, as is necessary to obtain a properly-rounded result, and adds further refinement steps to correctly round the full division result. The patterns in question are still guarded by a flag_reciprocal_math condition, and do not yet support denormals.

2021-01-13  Julian Brown  <julian@codesourcery.com>

gcc/
	* config/gcn/gcn-valu.md (recip<mode>2<exec>, recip<mode>2): Use unspec for reciprocal-approximation instructions.
	(div<mode>3): Use fused multiply-accumulate operations for reciprocal refinement and division result.
	* config/gcn/gcn.md (UNSPEC_RCP): New unspec constant.

gcc/testsuite/
	* gcc.target/gcn/fpdiv.c: New test.
2021-01-12amdgcn: Fix subdf3 patternJulian Brown1-1/+1
This patch fixes a typo in the subdf3 pattern that meant it had a non-standard name and thus the compiler would emit a libcall rather than the proper hardware instruction for DFmode subtraction.

2021-01-13  Julian Brown  <julian@codesourcery.com>

gcc/
	* config/gcn/gcn-valu.md (subdf): Rename to...
	(subdf3): This.