Age  Commit message  Author  Files/Lines
2020-04-03  Fix va-arg-22.c at -O1 on m32r.  Jeff Law  2 files, -1/+8
PR rtl-optimization/92264 * config/m32r/m32r.c (m32r_output_block_move): Properly account for post-increment addressing of source operands as well as residuals when computing any adjustments to the input pointer.
2020-04-03  i386: Fix up handling of OPTION_MASK_ISA_MMX builtins [PR94461]  Jakub Jelinek  4 files, -97/+105
In https://gcc.gnu.org/ml/gcc-patches/2017-10/msg00576.html the builtin handling was changed so that OPTION_MASK_ISA_MMX | OPTION_MASK_ISA_SSE etc. in i386-builtin.def means we require both mmx and sse, not just one of those, and later on a very similar rule was clarified for other option combinations, with a few exceptions that ix86_expand_builtin lists (SSE | 3DNOW_A, SSE4_2 | CRC32 and FMA | FMA4 are one or the other). The above-mentioned patch also added OPTION_MASK_ISA_MMX to a few insns that the ISA documents describe as requiring only e.g. the SSE2 or SSSE3 CPUID bits, but because those builtins take or return V2SI or similar MMX-ish arguments, we can't really support those builtins in functions that have MMX disabled.

Now, during the TARGET_MMX_WITH_SSE changes, https://gcc.gnu.org/ml/gcc-patches/2019-02/msg01479.html and https://gcc.gnu.org/ml/gcc-patches/2019-05/msg01084.html actually changed this; they added | OPTION_MASK_ISA_SSE2 to builtins that were formerly OPTION_MASK_ISA_MMX only, but didn't touch the builtins that were already using OPTION_MASK_ISA_SSE2 | OPTION_MASK_ISA_MMX for something different (both options must be enabled). This causes e.g. an ICE on the following testcase, because the builtins are now enabled even with just -mmmx -mno-sse2, even though they (those changed in 2017) require SSE2.

The following patch instead reverts the two 2019-ish changes above (except for the header/testsuite changes) and treats an OPTION_MASK_ISA_MMX requirement in bdesc/.isa specially, as being satisfied by either TARGET_MMX (no changes really needed for that) or by TARGET_MMX_WITH_SSE. This achieves what the two 2019-ish patches wanted: the OPTION_MASK_ISA_MMX-only builtins are enabled not just with -mmmx but also with -m64 -msse2, while the other builtins that require MMX and something else require either -mmmx and that other ISA, or -m64 -msse2 and that other ISA; -mmmx alone will not enable builtins that need something more than OPTION_MASK_ISA_MMX. The i386-builtins.c changes that aren't part of the reversion try to make sure that .isa still records OPTION_MASK_ISA_MMX for builtins that have that requirement, so that in the end only ix86_expand_builtin decides whether the builtin is ok, and the rest of the code just decides whether it is the right time to declare the builtin already or whether that should be deferred.

2020-04-03 Jakub Jelinek <jakub@redhat.com>
PR target/94461
* config/i386/i386-expand.c (ix86_expand_builtin): If TARGET_MMX_WITH_SSE without TARGET_MMX and bisa contains OPTION_MASK_ISA_MMX, clear OPTION_MASK_ISA_MMX and set OPTION_MASK_ISA_SSE2 in bisa. Revert 2019-05-17 and 2019-05-15 changes.
* config/i386/i386-builtins.c (def_builtin): If mask includes OPTION_MASK_ISA_MMX and TARGET_MMX_WITH_SSE, consider it satisfied.
(ix86_add_new_builtins): For TARGET_64BIT, consider OPTION_MASK_ISA_SSE2 enabled in isa as satisfying OPTION_MASK_ISA_MMX requirement.
(ix86_init_tm_builtins): If TARGET_MMX_WITH_SSE consider OPTION_MASK_ISA_MMX as satisfied.
(bdesc_tm): Revert 2019-05-15 changes.
(ix86_init_mmx_sse_builtins): Likewise.
* config/i386/i386-builtin.def: Likewise.
* gcc.target/i386/pr94461.c: New test.
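As a self-contained sketch of the satisfaction rule described above (the bit values and the helper are invented for illustration; this is not GCC's actual code):

    #include <cstdint>

    constexpr uint64_t ISA_MMX  = 1 << 0;
    constexpr uint64_t ISA_SSE2 = 1 << 1;

    // A builtin is usable when every ISA bit it requires is enabled, except that
    // an MMX requirement may alternatively be satisfied by the 64-bit
    // MMX-with-SSE emulation, in which case it is rewritten into an SSE2 requirement.
    bool builtin_ok (uint64_t bisa /* builtin requirement */,
                     uint64_t isa  /* enabled ISAs */,
                     bool mmx_with_sse)
    {
      if ((bisa & ISA_MMX) && !(isa & ISA_MMX) && mmx_with_sse)
        {
          bisa &= ~ISA_MMX;
          bisa |= ISA_SSE2;
        }
      return (bisa & isa) == bisa;   // all remaining required bits must be on
    }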
2020-04-03  c++: alias template and parameter packs (PR91966).  Jason Merrill  3 files, -1/+139
In this testcase, when we do a pack expansion of count_better_mins<nums>, nums appears both in the definition of count_better_mins and as its template argument. The intent is that we get an expansion over pairs of elements of the pack, i.e. less<2,2>, less<2,7>, less<7,2>, .... But if we substitute into the definition of count_better_mins when parsing the template, we end up with sum<less<nums,nums>...>, which never gives us less<2,7>. We could deal with this by somehow marking up the use of 'nums' as an argument for 'num', but it's simpler to mark the alias as complex, so that we instantiate it later with all its arguments rather than replacing it early with its expansion.

gcc/cp/ChangeLog 2020-04-03 Jason Merrill <jason@redhat.com>
PR c++/91966
* pt.c (complex_pack_expansion_r): New.
(complex_alias_template_p): Use it.
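A reduced, self-contained sketch of the kind of code involved (the names follow the description above, but the details are approximate and this is not the actual PR testcase):

    #include <type_traits>

    template <int A, int B> struct less_t { static constexpr int value = A < B; };

    template <int... nums>
    struct counter {
      // The alias mentions the enclosing pack 'nums' in its definition ...
      template <int num>
      using count_better_mins
        = std::integral_constant<int, (less_t<nums, num>::value + ... + 0)>;

      // ... and is also expanded with 'nums' as its argument, so the result must
      // range over all pairs: less<2,2> + less<2,7> + less<7,2> + less<7,7>.
      // Before the fix the inner pack was wrongly expanded in lockstep with the
      // outer one, as if it were sum<less<nums,nums>...>.
      static constexpr int total = (count_better_mins<nums>::value + ... + 0);
    };

    static_assert (counter<2, 7>::total == 1, "only 2 < 7 holds among the pairs");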
2020-04-03  i386: Fix vph{add,subs?}[wd] 256-bit AVX2 RTL patterns [PR94460]  Jakub Jelinek  4 files, -26/+70
The following testcase is miscompiled because the AVX2 patterns don't correctly describe what the insn does. E.g. the vphaddd instruction with %ymm* operands (the second pattern), as per https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm256_hadd_epi32&expand=2941, does
{ a0+a1, a2+a3, b0+b1, b2+b3, a4+a5, a6+a7, b4+b5, b6+b7 }
but our RTL pattern did
{ a0+a1, a2+a3, a4+a5, a6+a7, b0+b1, b2+b3, b4+b5, b6+b7 }
where the first and last 64 bits are the same and the two middle 64 bits are swapped.

Similarly, as per https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm256_hadd_epi16&expand=2939, the insn does:
{ a0+a1, a2+a3, a4+a5, a6+a7, b0+b1, b2+b3, b4+b5, b6+b7, a8+a9, a10+a11, a12+a13, a14+a15, b8+b9, b10+b11, b12+b13, b14+b15 }
but the RTL pattern did
{ a0+a1, a2+a3, a4+a5, a6+a7, a8+a9, a10+a11, a12+a13, a14+a15, b0+b1, b2+b3, b4+b5, b6+b7, b8+b9, b10+b11, b12+b13, b14+b15 }
Again, the first and last 64 bits are the same and the two middle 64 bits are swapped.

2020-04-03 Jakub Jelinek <jakub@redhat.com>
PR target/94460
* config/i386/sse.md (avx2_ph<plusminus_mnemonic>wv16hi3, avx2_ph<plusminus_mnemonic>dv8si3): Fix up RTL pattern to do second half of first lane from first lane of second operand and first half of second lane from second lane of first operand.
* gcc.target/i386/avx2-pr94460.c: New test.
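For reference, a small self-contained program showing the cross-lane layout described above (illustration only; compile with -mavx2 and run on an AVX2-capable machine):

    #include <immintrin.h>
    #include <cstdio>

    int main ()
    {
      __m256i a = _mm256_setr_epi32 (0, 1, 2, 3, 4, 5, 6, 7);          // a0..a7
      __m256i b = _mm256_setr_epi32 (10, 11, 12, 13, 14, 15, 16, 17);  // b0..b7
      __m256i r = _mm256_hadd_epi32 (a, b);
      alignas (32) int out[8];
      _mm256_store_si256 ((__m256i *) out, r);
      // Expected: { a0+a1, a2+a3, b0+b1, b2+b3, a4+a5, a6+a7, b4+b5, b6+b7 }
      //         = { 1, 5, 21, 25, 9, 13, 29, 33 }
      for (int i = 0; i < 8; i++)
        printf ("%d ", out[i]);
      printf ("\n");
      return 0;
    }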
2020-04-03  c++: Add test for PR c++/93211  Patrick Palka  2 files, -0/+17
The fix for PR c++/90711 also fixed this PR. gcc/testsuite/ChangeLog: PR c++/93211 PR c++/90711 * g++.dg/template/koenig11.C: New test.
2020-04-03  arm: MVE: Fix unintended change to tests  Andre Simoes Dias Vieira  10 files, -9/+21
When committing my last patch I accidentally removed -mfpu=auto from the following tests. This puts it back. testsuite/ChangeLog: 2020-04-03 Andre Vieira <andre.simoesdiasvieira@arm.com> * gcc.target/arm/mve/intrinsics/mve_vector_float.c: Put -mfpu=auto back. * gcc.target/arm/mve/intrinsics/mve_vector_float1.c: Likewise. * gcc.target/arm/mve/intrinsics/mve_vector_float2.c: Likewise. * gcc.target/arm/mve/intrinsics/mve_vector_int.c: Likewise. * gcc.target/arm/mve/intrinsics/mve_vector_int1.c: Likewise. * gcc.target/arm/mve/intrinsics/mve_vector_int2.c: Likewise. * gcc.target/arm/mve/intrinsics/mve_vector_uint.c: Likewise. * gcc.target/arm/mve/intrinsics/mve_vector_uint1.c: Likewise. * gcc.target/arm/mve/intrinsics/mve_vector_uint2.c: Likewise. Testing Done: @IP: I assert this is almost no risk. Reviewed at http://pdtlreviewboard.cambridge.arm.com/r/12880/
2020-04-03  arm: Do not process rest of MVE header file after unsupported error  Andre Simoes Dias Vieira  2 files, -4/+7
This patch makes sure the rest of the header file is not parsed if MVE is not supported. The user should not be including this file if MVE is not supported; nevertheless, making sure it doesn't parse the rest of the header file will save the user from a huge error output that would be rather useless. gcc/ChangeLog: 2020-04-03 Andre Vieira <andre.simoesdiasvieira@arm.com> * config/arm/arm_mve.h: Condition the header file on __ARM_FEATURE_MVE.
2020-04-03  AArch64: Fix options canonicalization for assembler  Tamar Christina  19 files, -1/+218
It is currently impossible to use fp16 on any architecture higher than Armv8.3-a due to a bug in options canonicalization. This bug results in the fp16 flag not being emitted in the assembly when it should have been. This is caused by a complicated architectural requirement at Armv8.4-a. On Armv8.2-a and Armv8.3-a fp16fml is an optional extension and turning it on turns on both fp and fp16. However, starting with Armv8.4-a, fp16fml is mandatory if fp16 is available; otherwise it's optional. In short this means that to enable fp16fml the smallest option that needs to be passed to the assembler is Armv8.4-a+fp16.

The fix in this patch takes into account that an option may be on by default in an architecture, but that not all the bits required to use it are on by default in that architecture. In such cases the difference between the two is still emitted to the assembler.

gcc/ChangeLog:
PR target/94396
* common/config/aarch64/aarch64-common.c (aarch64_get_extension_string_for_isa_flags): Handle default flags.

gcc/testsuite/ChangeLog:
PR target/94396
* gcc.target/aarch64/options_set_11.c: New test.
* gcc.target/aarch64/options_set_12.c: New test.
* gcc.target/aarch64/options_set_13.c: New test.
* gcc.target/aarch64/options_set_14.c: New test.
* gcc.target/aarch64/options_set_15.c: New test.
* gcc.target/aarch64/options_set_16.c: New test.
* gcc.target/aarch64/options_set_17.c: New test.
* gcc.target/aarch64/options_set_18.c: New test.
* gcc.target/aarch64/options_set_19.c: New test.
* gcc.target/aarch64/options_set_20.c: New test.
* gcc.target/aarch64/options_set_21.c: New test.
* gcc.target/aarch64/options_set_22.c: New test.
* gcc.target/aarch64/options_set_23.c: New test.
* gcc.target/aarch64/options_set_24.c: New test.
* gcc.target/aarch64/options_set_25.c: New test.
* gcc.target/aarch64/options_set_26.c: New test.
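A toy model of that rule (the bit values, names and helper here are invented for illustration; GCC's real tables and logic in aarch64-common.c differ):

    #include <cstdint>
    #include <string>

    constexpr uint64_t FP     = 1u << 0;
    constexpr uint64_t FP16   = (1u << 1) | FP;    // fp16 pulls in fp
    constexpr uint64_t F16FML = (1u << 2) | FP16;  // fp16fml pulls in fp16

    // Spell out an extension whenever it is enabled but the architecture's
    // defaults do not already provide every bit it depends on.
    std::string extension_string (uint64_t enabled, uint64_t arch_default)
    {
      std::string s;
      const struct { uint64_t bits; const char *name; } exts[]
        = { { FP16, "+fp16" }, { F16FML, "+fp16fml" } };
      for (const auto &e : exts)
        if ((enabled & e.bits) == e.bits && (arch_default & e.bits) != e.bits)
          s += e.name;
      return s;
    }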
2020-04-03  middle-end/94465 - handle released SSA names in array_ref_low_bound  Richard Biener  2 files, -1/+9
array_ref_low_bound is used when dumping ARRAY_REFs, which in turn happens when basic blocks are deleted. cleanup_control_flow_pre consciously decides to remove unreachable basic blocks in arbitrary order, so the following makes array_ref_low_bound forgiving in case the SSA name holding the index definition has already been released. 2020-04-03 Richard Biener <rguenther@suse.de> PR middle-end/94465 * tree.c (array_ref_low_bound): Deal with released SSA names in index position.
2020-04-03  Improve svn-rev to search for pattern at line beginning.  Martin Liska  2 files, -1/+6
* gcc-git-customization.sh: Search for the pattern at line beginning only.
2020-04-03  amdgcn: Support unordered floating-point comparison operators  Kwok Cheung Yeung  3 files, -1/+23
2020-04-03 Kwok Cheung Yeung <kcy@codesourcery.com> gcc/ * config/gcn/gcn.c (print_operand): Handle unordered comparison operators. * config/gcn/predicates.md (gcn_fp_compare_operator): Add unordered comparison operators.
2020-04-03  libstdc++: Fix std::to_address for debug iterators (PR 93960)  Jonathan Wakely  4 files, -2/+55
It should be valid to use std::to_address on a past-the-end iterator, but the debug mode iterators check that the iterator is dereferenceable in their operator->(). That check is generally useful, so rather than remove it, this change makes std::__to_address identify a debug mode iterator and use base().operator->() to skip the check. PR libstdc++/93960 * include/bits/ptr_traits.h (__to_address): Add special case for debug iterators, to avoid dereferenceable check. * testsuite/20_util/to_address/1_neg.cc: Adjust dg-error line number. * testsuite/20_util/to_address/debug.cc: New test.
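A small usage example of the guarantee being fixed here (it uses the public C++20 std::to_address; the patch itself touches the internal std::__to_address):

    #include <memory>
    #include <vector>

    int main ()
    {
      std::vector<int> v{1, 2, 3};
      // With -D_GLIBCXX_DEBUG the iterators are checked; taking the address of a
      // past-the-end iterator must not trip the "dereferenceable" assertion,
      // because the resulting pointer is never dereferenced.
      int *first = std::to_address (v.begin ());
      int *last  = std::to_address (v.end ());
      return int (last - first) - 3;   // 0
    }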
2020-04-03  Revert "[nvptx, libgomp] Update pr85381-{2,4}.c test-cases" [PR89713, PR94392]  Thomas Schwinge  3 files, -2/+31
In response to PR94392 commit 75efe9cb1f8938a713ce540dc3b27bc2afcd3fae "c/94392 - only enable -ffinite-loops for C++", this reverts PR89713 commit 00908992f2a78f213d227aea8dbab014a1361df0, as apparently now again "empty oacc loops are" no longer "removed before expand". libgomp/ PR tree-optimization/89713 PR c/94392 * testsuite/libgomp.oacc-c-c++-common/pr85381-2.c: Again expect 'bar.sync'. * testsuite/libgomp.oacc-c-c++-common/pr85381-4.c: Likewise.
2020-04-03  Fix PR94443 with gsi_insert_seq_before [PR94443]  Kewen Lin  4 files, -2/+26
This patch is to fix the stupid mistake by using gsi_insert_seq_before instead of gsi_insert_before. BTW, the regression testing on one x86_64 machine from CFarm is unable to reveal it (I guess due to native arch sandybridge?), so I specified additional option -march=znver2 and verified the coverage. Bootstrapped/regtested on powerpc64le-linux-gnu (P9) and x86_64-pc-linux-gnu, also verified the fail cases in related PRs. 2020-04-03 Kewen Lin <linkw@gcc.gnu.org> gcc/ PR tree-optimization/94443 * tree-vect-loop.c (vectorizable_live_operation): Use gsi_insert_seq_before to replace gsi_insert_before. gcc/testsuite/ PR tree-optimization/94443 * gcc.dg/vect/pr94443.c: New test.
2020-04-03  ICF: compare type attributes for gimple_call_fntypes.  Martin Liska  2 files, -0/+10
PR ipa/94445 * ipa-icf-gimple.c (func_checker::compare_gimple_call): Compare type attributes for gimple_call_fntypes.
2020-04-03  S/390 zTPF: Handle skip trace addresses when unwinding  Jim Johnston  2 files, -57/+89
Check for and handle new skip trace addresses when unwinding on zTPF. libgcc/ChangeLog: 2020-04-03 Jim Johnston <jjohnst@us.ibm.com> * config/s390/tpf-unwind.h (MIN_PATRANGE, MAX_PATRANGE) (TPFRA_OFFSET): Macros removed. (CP_CNF, cinfc_fast, CINFC_CMRESET, CINTFC_CMCENBKST) (CINTFC_CMCENBKED, ICST_CRET, ICST_SRET, LOWCORE_PAGE3_ADDR) (PG3_SKIPPING_OFFSET): New macros. (__isPATrange): Use cinfc_fast for the check. (__isSkipResetAddr): New function. (s390_fallback_frame_state): Check for skip trace addresses. Use either ICST_CRET or ICST_SRET to calculate return address location. (__tpf_eh_return): Handle skip trace addresses.
2020-04-03  Daily bump.  GCC Administrator  1 file, -1/+1
2020-04-02  Fix some comment typos in alias.c.  Sandra Loosemore  2 files, -7/+11
2020-04-02 Sandra Loosemore <sandra@codesourcery.com> * alias.c (get_alias_set): Fix comment typos.
2020-04-02  Fix check_effective_target_sigsetjmp for glibc targets.  Sandra Loosemore  2 files, -1/+12
2020-04-02 Sandra Loosemore <sandra@codesourcery.com> gcc/testsuite/ * lib/target-supports.exp (check_effective_target_sigsetjmp): Test for __sigsetjmp as well as sigsetjmp.
2020-04-02  Fix fortran/85982 ICE in resolve_component.  Fritz Reese  4 files, -9/+67
2020-04-01 Fritz Reese <foreese@gcc.gnu.org> PR fortran/85982 * fortran/decl.c (match_attr_spec): Lump COMP_STRUCTURE/COMP_MAP into attribute checking used by TYPE. 2020-04-01 Fritz Reese <foreese@gcc.gnu.org> PR fortran/85982 * gfortran.dg/dec_structure_28.f90: New test.
2020-04-02  [Fortran] Resolve formal args before checking DTIO  Tobias Burnus  6 files, -6/+71
* gfortran.h (gfc_resolve_formal_arglist): Add prototype. * interface.c (check_dtio_interface1): Call it. * resolve.c (gfc_resolve_formal_arglist): Renamed from resolve_formal_arglist, removed static. (find_arglists, resolve_types): Update calls. * gfortran.dg/dtio_35.f90: New.
2020-04-02  Prevent IPA-SRA from creating calls to local comdats (PR 92676)  Martin Jambor  2 files, -2/+49
Since r278669 (fix for PR ipa/91956), IPA-SRA makes sure that the clone it creates is put into the same same_comdat as the original cgraph_node, so that it can call private comdats (such as the ipa-split bits of a comdat that is private). However, that means that if there is a non-comdat caller of a public comdat that is modified by IPA-SRA, it now finds itself calling a private comdat, which the call graph verifier does not like (and for a reason: in theory it can disappear, and since it is private it would not be available from other CUs).

The patch fixes this by performing the fix for PR 91956 only when the node in question actually calls a local comdat and, when it does, also making sure that no callers come from a different same_comdat (disabling IPA-SRA if both conditions are true), so that it plays by the rules in both modes, does not violate the private comdat calling rule, and at the same time does not disable the transformation unnecessarily. The patch also fixes up the calls_comdat_local flag of callers of the modified node, even though that has not triggered any known issues.

2020-04-02 Martin Jambor <mjambor@suse.cz>
PR ipa/92676
* ipa-sra.c (struct caller_issues): New fields candidate and call_from_outside_comdat.
(check_for_caller_issues): Check for calls from outside of candidate's same_comdat_group.
(check_all_callers_for_issues): Set up issues.candidate, check result of the new check.
(mark_callers_calls_comdat_local): New function.
(process_isra_node_results): Set calls_comdat_local of callers if appropriate.
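For illustration, roughly the shape of code where the situation can arise (a hypothetical reduction, not the PR's testcase): an inline function gets vague linkage and lands in a comdat group, while an ordinary function outside any comdat group calls it.

    struct big { int a[16]; };

    inline int comdat_fn (big b)   // public comdat; IPA-SRA may want to rewrite its parameters
    {
      return b.a[0];
    }

    int plain_caller (big &b)      // not in any comdat group, yet must still be able to call the result
    {
      return comdat_fn (b);
    }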
2020-04-02  c/94392 - only enable -ffinite-loops for C++  Richard Biener  15 files, -4/+72
This does away with enabling -ffinite-loops at -O2+ for all languages and instead enables it selectively for C++ only. It also makes -ffinite-loops loop-private at CFG construction time fixing correctness issues with inlining. 2020-04-02 Richard Biener <rguenther@suse.de> PR c/94392 * c-opts.c (c_common_post_options): Enable -ffinite-loops for -O2 and C++11 or newer. * common.opt (ffinite-loops): Initialize to zero. * opts.c (default_options_table): Remove OPT_ffinite_loops entry. * cfgloop.h (loop::finite_p): New member. * cfgloopmanip.c (copy_loop_info): Copy finite_p. * ipa-icf-gimple.c (func_checker::compare_loops): Compare finite_p. * lto-streamer-in.c (input_cfg): Stream finite_p. * lto-streamer-out.c (output_cfg): Likewise. * tree-cfg.c (replace_loop_annotate): Initialize finite_p from flag_finite_loops at CFG build time. * tree-ssa-loop-niter.c (finite_loop_p): Check the loops finite_p flag instead of flag_finite_loops. * doc/invoke.texi (ffinite-loops): Adjust documentation of default setting. * gcc.dg/torture/pr94392.c: New testcase.
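An illustrative example of why the flag is now language-dependent (this is not the PR's testcase):

    // This loop has an exit, no side effects, and never terminates when x is odd.
    // Under C++11's forward-progress rule the compiler may assume it terminates
    // and optimize accordingly; C code (e.g. a bare-metal "hang here" loop)
    // commonly relies on such loops never returning, so -ffinite-loops is no
    // longer enabled by default for C.
    unsigned hang_if_odd (unsigned x)
    {
      while (x & 1)
        ;                 // x never changes inside the loop
      return x;
    }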
2020-04-02  debug/94450 - remove DW_TAG_imported_unit generated in LTRANS units  Richard Biener  2 files, -18/+6
This removes the DW_TAG_imported_unit we generate for each referenced early debug unit in LTRANS units. They do more harm than good, and their semantics can be read in a way that makes them outright wrong. 2020-04-02 Richard Biener <rguenther@suse.de> PR debug/94450 * dwarf2out.c (dwarf2out_early_finish): Remove code emitting DW_TAG_imported_unit.
2020-04-02  doc: RISC-V: Update binutils requirement to 2.30  Maciej W. Rozycki  2 files, -8/+10
Complement commit bfe78b08471f ("RISC-V: Using fmv.x.w/fmv.w.x rather than fmv.x.s/fmv.s.x") and document a binutils 2.30 requirement in the installation manual, matching the addition of fmv.x.w/fmv.w.x mnemonics to GAS. gcc/ * doc/install.texi (Specific) <riscv32-*-elf, riscv32-*-linux> <riscv64-*-elf, riscv64-*-linux>: Update binutils requirement to 2.30.
2020-04-02  Fix PR94401 by considering reverse overrun  Kewen Lin  2 files, -9/+35
Commit r10-7415 brought the scalar type into consideration when eliminating epilogue peeling for gaps, but it exposed one problem: the current handling doesn't consider the memory access type VMAT_CONTIGUOUS_REVERSE, for which the overrun happens on the low-address side. This patch makes the code take care of it by updating the offset and the element construction order accordingly. Bootstrapped/regtested on powerpc64le-linux-gnu P8 and aarch64-linux-gnu. 2020-04-02 Kewen Lin <linkw@gcc.gnu.org> gcc/ChangeLog PR tree-optimization/94401 * tree-vect-loop.c (vectorizable_load): Handle VMAT_CONTIGUOUS_REVERSE access type when loading halves of vector to avoid peeling for gaps.
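The shape of access involved, for illustration (a negative-step, contiguous load; not the PR's testcase):

    // When this loop is vectorized, the loads walk downwards through 'src', so a
    // full-vector load for the final (lowest-addressed) group of elements can
    // overrun *below* the start of the array rather than past its end.
    void reverse_update (int *dst, const int *src, int n)
    {
      for (int i = n - 1; i >= 0; --i)
        dst[i] = src[i] + 1;
    }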
2020-04-02  Fix up -Wliteral-suffix warning on mti-linux.h  Jakub Jelinek  2 files, -1/+6
While trying to reproduce PR92989 I've noticed the following warning:

In file included from ./tm.h:42,
                 from ../../gcc/backend.h:28,
                 from ../../gcc/lra-assigns.c:80:
../../gcc/config/mips/mti-linux.h:31:5: warning: invalid suffix on literal; C++11 requires a space between literal and string macro [-Wliteral-suffix]
     "/%{mmicromips:micro}mips%{mel|EL:el}-"MIPS_SYSVERSION_SPEC \
     ^

This fixes it; string concatenation works just fine even with whitespace in between. 2020-04-02 Jakub Jelinek <jakub@redhat.com> * config/mips/mti-linux.h (SYSROOT_SUFFIX_SPEC): Add a space in between a string literal and MIPS_SYSVERSION_SPEC macro.
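A minimal reproduction of the warning (SYSVER stands in for MIPS_SYSVERSION_SPEC here):

    #define SYSVER "r6"
    const char *bad  = "mips"SYSVER;    // C++11: "invalid suffix on literal" warning
    const char *good = "mips" SYSVER;   // adjacent literals still concatenate to "mipsr6"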
2020-04-02  sra/doc: Document param sra-max-propagations  Martin Jambor  2 files, -0/+9
I forgot to document the new param in invoke.texi, does the text below look OK? Tested with make info and make pdf. Thanks, Martin 2020-04-02 Martin Jambor <mjambor@suse.cz> * doc/invoke.texi (Optimize Options): Document sra-max-propagations.
2020-04-02  params: Decrease -param=max-find-base-term-values= default [PR92264]  Jakub Jelinek  2 files, -1/+5
For the PR in question, my proposal would be to also lower the -param=max-find-base-term-values= default from 2000 to 200 after this; at least in the above 4 bootstraps/regtests there is nothing that would ever result in find_base_term returning non-NULL with more than 200 VALUEs being processed. 2020-04-02 Jakub Jelinek <jakub@redhat.com> PR rtl-optimization/92264 * params.opt (-param=max-find-base-term-values=): Decrease default from 2000 to 200.
2020-04-02  cselib: Reuse VALUEs on sp adjustments [PR92264]  Jakub Jelinek  5 files, -42/+310
As discussed in the PR, if !ACCUMULATE_OUTGOING_ARGS on large functions we can have hundreds of thousands of stack pointer adjustments and cselib creates a new VALUE after each sp adjustment, which form extremely deep VALUE chains, which is very harmful e.g. for find_base_term. E.g. if we have sp -= 4 sp -= 4 sp += 4 sp += 4 sp -= 4 sp += 4 that means 7 VALUEs, one for the sp at beginning (val1), than val2 = val1 - 4, then val3 = val2 - 4, then val4 = val3 + 4, then val5 = val4 + 4, then val6 = val5 - 4, then val7 = val6 + 4. This patch tweaks cselib, so that it is smarter about sp adjustments. When cselib_lookup (stack_pointer_rtx, Pmode, 1, VOIDmode) and we know nothing about sp yet (this happens at the start of the function, for non-var-tracking also after cselib_reset_table and for var-tracking after processing fp_setter insn where we forget about former sp values because that is now hfp related while everything after it is sp related), we look it up normally, but in addition to what we have been doing before we mark the VALUE as SP_DERIVED_VALUE_P. Further lookups of sp + offset are then special cased, so that it is canonicalized to that SP_DERIVED_VALUE_P VALUE + CONST_INT (if possible). So, for the above, we get val1 with SP_DERIVED_VALUE_P set, then val2 = val1 - 4, val3 = val1 - 8 (note, no longer val2 - 4!), then we get val2 again, val1 again, val2 again, val1 again. In the find_base_term visited_vals.length () > 100 find_base_term statistics during combined x86_64-linux and i686-linux bootstrap+regtest cycle, without the patch I see: find_base_term > 100 returning NULL returning non-NULL 32-bit compilations 4229178 407 64-bit compilations 217523 0 with largest visited_vals.length () when returning non-NULL being 206. With the patch the same numbers are: 32-bit compilations 1249588 135 64-bit compilations 3510 0 with largest visited_vals.length () when returning non-NULL being 173. This shows significant reduction of the deep VALUE chains. On powerpc64{,le}-linux, these stats didn't change at all, we have 1008 0 for all of -m32, -m64 and little-endian -m64, just the gcc.dg/pr85180.c and gcc.dg/pr87985.c testcases which are unrelated to sp. My earlier version of the patch, which contained just the rtl.h and cselib.c changes, regressed some tests: gcc.dg/guality/{pr36728-{1,3},pr68860-{1,2}}.c gcc.target/i386/{pr88416,sse-{13,23,24,25,26}}.c The problem with the former tests was worse debug info, where with -m32 where arg7 was passed in a stack slot we though a push later on might have invalidated it, when it couldn't. This is something I've solved with the var-tracking.c (vt_initialize) changes. In those problematic functions, we create a cfa_base VALUE (argp) and want to record that at the start of the function the argp VALUE is sp + off and also record that current sp VALUE is argp's VALUE - off. The second permanent equivalence didn't make it after the patch though, because cselib_add_permanent_equiv will cselib_lookup the value of the expression it wants to add as the equivalence and if it is the same VALUE as we are calling it on, it doesn't do anything; and due to the cselib changes for sp based accesses that is exactly what happened. By reversing the order of the cselib_add_permanent_equiv calls we get both equivalences though and thus are able to canonicalize the sp based accesses in var-tracking to the cfa_base value + offset. The i386 FAILs were all ICEs, where we had pushf instruction pushing flags and then pop pseudo reading that value again. 
With the cselib changes, cselib during RTL DSE is able to see through the sp adjustment and wanted to replace_read what was done pushf, by moving the flags register into a pseudo and replace the memory read in the pop with that pseudo. That is wrong for two reasons: one is that the backend doesn't have an instruction to move the flags hard register into some other register, but replace_read has been validating just the mem -> pseudo replacement and not the insns emitted by copy_to_mode_reg. And the second issue is that it is obviously wrong to replace a stack pop which contains stack post-increment by a copy of pseudo into destination. dse.c has some code to handle RTX_AUTOINC, but only uses it when actually removing stores and only when there is REG_INC note (stack RTX_AUTOINC does not have those), in check_for_inc_dec* where it emits the reg adjustment(s) before the insn that is going to be deleted. replace_read doesn't remove the insn, so if it e.g. contained REG_INC note, it would be kept there and we might have the RTX_AUTOINC not just in *loc, but other spots. So, the dse.c changes try to validate the added insns and punt on all RTX_AUTOINC in *loc. Furthermore, it seems that with the cselib.c changes on the gfortran.dg/pr87360.f90 and gcc.target/i386/pr88416.c testcases check_for_inc_dec{,_1} happily throws stack pointer autoinc on the floor, which is also wrong. While we could perhaps do the for_each_inc_dec call regardless of whether we have REG_INC note or not, we aren't prepared to handle e.g. REG_ARGS_SIZE distribution and thus could end up with wrong unwind info or ICEs during dwarf2cfi.c. So the patch also punts on those, after all, if we'd in theory managed to try to optimize such pushes before, we'd create wrong-code. On x86_64-linux and i686-linux, the patch has some minor debug info coverage differences, but it doesn't appear very significant to me. https://github.com/pmachata/dwlocstat tool gives (where before is vanilla trunk + the rtl.h patch but not {cselib,var-tracking,dse}.c --enable-checking=yes,rtl,extra bootstrapped, then {cselib,var-tracking,dse}.c hunks applied and make cc1plus, while after is trunk with the whole patch applied). 
64-bit cc1plus before cov% samples cumul 0..10 1232756/48% 1232756/48% 11..20 31089/1% 1263845/49% 21..30 39172/1% 1303017/51% 31..40 38853/1% 1341870/52% 41..50 47473/1% 1389343/54% 51..60 45171/1% 1434514/56% 61..70 69393/2% 1503907/59% 71..80 61988/2% 1565895/61% 81..90 104528/4% 1670423/65% 91..100 875402/34% 2545825/100% after cov% samples cumul 0..10 1233238/48% 1233238/48% 11..20 31086/1% 1264324/49% 21..30 39157/1% 1303481/51% 31..40 38819/1% 1342300/52% 41..50 47447/1% 1389747/54% 51..60 45151/1% 1434898/56% 61..70 69379/2% 1504277/59% 71..80 61946/2% 1566223/61% 81..90 104508/4% 1670731/65% 91..100 875094/34% 2545825/100% 32-bit cc1plus before cov% samples cumul 0..10 1231221/48% 1231221/48% 11..20 30992/1% 1262213/49% 21..30 36422/1% 1298635/51% 31..40 35793/1% 1334428/52% 41..50 47102/1% 1381530/54% 51..60 41201/1% 1422731/56% 61..70 65467/2% 1488198/58% 71..80 59560/2% 1547758/61% 81..90 104076/4% 1651834/65% 91..100 881879/34% 2533713/100% after cov% samples cumul 0..10 1230469/48% 1230469/48% 11..20 30390/1% 1260859/49% 21..30 36362/1% 1297221/51% 31..40 36042/1% 1333263/52% 41..50 47619/1% 1380882/54% 51..60 41674/1% 1422556/56% 61..70 65849/2% 1488405/58% 71..80 59857/2% 1548262/61% 81..90 104178/4% 1652440/65% 91..100 881273/34% 2533713/100% 2020-04-02 Jakub Jelinek <jakub@redhat.com> PR rtl-optimization/92264 * rtl.h (struct rtx_def): Mention that call bit is used as SP_DERIVED_VALUE_P in cselib.c. * cselib.c (SP_DERIVED_VALUE_P): Define. (PRESERVED_VALUE_P, SP_BASED_VALUE_P): Move definitions earlier. (cselib_hasher::equal): Handle equality between SP_DERIVED_VALUE_P val_rtx and sp based expression where offsets cancel each other. (preserve_constants_and_equivs): Formatting fix. (cselib_reset_table): Add reverse op loc to SP_DERIVED_VALUE_P locs list for cfa_base_preserved_val if needed. Formatting fix. (autoinc_split): If the to be returned value is a REG, MEM or VALUE which has SP_DERIVED_VALUE_P + CONST_INT as one of its locs, return the SP_DERIVED_VALUE_P VALUE and adjust *off. (rtx_equal_for_cselib_1): Call autoinc_split even if both expressions are PLUS in Pmode with CONST_INT second operands. Handle SP_DERIVED_VALUE_P cases. (cselib_hash_plus_const_int): New function. (cselib_hash_rtx): Use it for PLUS in Pmode with CONST_INT second operand, as well as for PRE_DEC etc. that ought to be hashed the same way. (cselib_subst_to_values): Substitute PLUS with Pmode and CONST_INT operand if the first operand is a VALUE which has SP_DERIVED_VALUE_P + CONST_INT as one of its locs for the SP_DERIVED_VALUE_P + adjusted offset. (cselib_lookup_1): When creating a new VALUE for stack_pointer_rtx, set SP_DERIVED_VALUE_P on it. Set PRESERVED_VALUE_P when adding SP_DERIVED_VALUE_P PRESERVED_VALUE_P subseted VALUE location. * var-tracking.c (vt_initialize): Call cselib_add_permanent_equiv on the sp value before calling cselib_add_permanent_equiv on the cfa_base value. * dse.c (check_for_inc_dec_1, check_for_inc_dec): Punt on RTX_AUTOINC in the insn without REG_INC note. (replace_read): Punt on RTX_AUTOINC in the *loc being replaced. Punt on invalid insns added by copy_to_mode_reg. Formatting fixes.
2020-04-02  aarch64: Fix ICE due to aarch64_gen_compare_reg_maybe_ze [PR94435]  Jakub Jelinek  2 files, -1/+29
The following testcase ICEs because aarch64_gen_compare_reg_maybe_ze emits invalid RTL. For y_mode [QH]Imode it expects y to be of that mode (or a CONST_INT that fits into that mode) and x to be SImode; for non-CONST_INT y it zero-extends y into SImode and compares that against x, while for CONST_INT y it zero-extends the constant into SImode. The problem is that when the zero-extended constant isn't usable directly, it forces it into a REG, but one in y_mode, and then compares against that. That is wrong, because it should force it into an SImode REG and compare that way. 2020-04-02 Jakub Jelinek <jakub@redhat.com> PR target/94435 * config/aarch64/aarch64.c (aarch64_gen_compare_reg_maybe_ze): For y_mode E_[QH]Imode and y being a CONST_INT, change y_mode to SImode. * gcc.target/aarch64/pr94435.c: New test.
2020-04-02  aarch64: Fix ICE due to aarch64_gen_compare_reg_maybe_ze [PR94435]  Jakub Jelinek  2 files, -0/+11
The following testcase ICEs because aarch64_gen_compare_reg_maybe_ze emits invalid RTL. For y_mode [QH]Imode it expects y to be of that mode (or a CONST_INT that fits into that mode) and x to be SImode; for non-CONST_INT y it zero-extends y into SImode and compares that against x, while for CONST_INT y it zero-extends the constant into SImode. The problem is that when the zero-extended constant isn't usable directly, it forces it into a REG, but one in y_mode, and then compares against that. That is wrong, because it should force it into an SImode REG and compare that way. 2020-04-02 Jakub Jelinek <jakub@redhat.com> PR target/94435 * config/aarch64/aarch64.c (aarch64_gen_compare_reg_maybe_ze): For y_mode E_[QH]Imode and y being a CONST_INT, change y_mode to SImode. * gcc.target/aarch64/pr94435.c: New test.
2020-04-02  [ARM]: Fix for MVE ACLE intrinsics with writeback (PR94317).  Srinath Parvathaneni  16 files, -43/+250
Following MVE ACLE intrinsics have an issue with writeback to the base address. vldrdq_gather_base_wb_s64, vldrdq_gather_base_wb_u64, vldrdq_gather_base_wb_z_s64, vldrdq_gather_base_wb_z_u64, vldrwq_gather_base_wb_s32, vldrwq_gather_base_wb_u32, vldrwq_gather_base_wb_z_s32, vldrwq_gather_base_wb_z_u32, vldrwq_gather_base_wb_f32, vldrwq_gather_base_wb_z_f32. This patch fixes the bug reported in PR94317 by adding separate builtin calls to update the result and writeback to base address for the above intrinsics. 2020-04-02 Srinath Parvathaneni <srinath.parvathaneni@arm.com> PR target/94317 * config/arm/arm-builtins.c (LDRGBWBXU_QUALIFIERS): Define. (LDRGBWBXU_Z_QUALIFIERS): Likewise. * config/arm/arm_mve.h (__arm_vldrdq_gather_base_wb_s64): Modify intrinsic defintion by adding a new builtin call to writeback into base address. (__arm_vldrdq_gather_base_wb_u64): Likewise. (__arm_vldrdq_gather_base_wb_z_s64): Likewise. (__arm_vldrdq_gather_base_wb_z_u64): Likewise. (__arm_vldrwq_gather_base_wb_s32): Likewise. (__arm_vldrwq_gather_base_wb_u32): Likewise. (__arm_vldrwq_gather_base_wb_z_s32): Likewise. (__arm_vldrwq_gather_base_wb_z_u32): Likewise. (__arm_vldrwq_gather_base_wb_f32): Likewise. (__arm_vldrwq_gather_base_wb_z_f32): Likewise. * config/arm/arm_mve_builtins.def (vldrwq_gather_base_wb_z_u): Modify builtin's qualifier. (vldrdq_gather_base_wb_z_u): Likewise. (vldrwq_gather_base_wb_u): Likewise. (vldrdq_gather_base_wb_u): Likewise. (vldrwq_gather_base_wb_z_s): Likewise. (vldrwq_gather_base_wb_z_f): Likewise. (vldrdq_gather_base_wb_z_s): Likewise. (vldrwq_gather_base_wb_s): Likewise. (vldrwq_gather_base_wb_f): Likewise. (vldrdq_gather_base_wb_s): Likewise. (vldrwq_gather_base_nowb_z_u): Define builtin. (vldrdq_gather_base_nowb_z_u): Likewise. (vldrwq_gather_base_nowb_u): Likewise. (vldrdq_gather_base_nowb_u): Likewise. (vldrwq_gather_base_nowb_z_s): Likewise. (vldrwq_gather_base_nowb_z_f): Likewise. (vldrdq_gather_base_nowb_z_s): Likewise. (vldrwq_gather_base_nowb_s): Likewise. (vldrwq_gather_base_nowb_f): Likewise. (vldrdq_gather_base_nowb_s): Likewise. * config/arm/mve.md (mve_vldrwq_gather_base_nowb_<supf>v4si): Define RTL pattern. (mve_vldrwq_gather_base_wb_<supf>v4si): Modify RTL pattern. (mve_vldrwq_gather_base_nowb_z_<supf>v4si): Define RTL pattern. (mve_vldrwq_gather_base_wb_z_<supf>v4si): Modify RTL pattern. (mve_vldrwq_gather_base_wb_fv4sf): Modify RTL pattern. (mve_vldrwq_gather_base_nowb_fv4sf): Define RTL pattern. (mve_vldrwq_gather_base_wb_z_fv4sf): Modify RTL pattern. (mve_vldrwq_gather_base_nowb_z_fv4sf): Define RTL pattern. (mve_vldrdq_gather_base_nowb_<supf>v4di): Define RTL pattern. (mve_vldrdq_gather_base_wb_<supf>v4di): Modify RTL pattern. (mve_vldrdq_gather_base_nowb_z_<supf>v4di): Define RTL pattern. (mve_vldrdq_gather_base_wb_z_<supf>v4di): Modify RTL pattern. gcc/testsuite/ChangeLog: 2020-04-02 Srinath Parvathaneni <srinath.parvathaneni@arm.com> PR target/94317 * gcc.target/arm/mve/intrinsics/vldrdq_gather_base_wb_s64.c: Modify. * gcc.target/arm/mve/intrinsics/vldrdq_gather_base_wb_u64.c: Likewise. * gcc.target/arm/mve/intrinsics/vldrdq_gather_base_wb_z_s64.c: Likewise. * gcc.target/arm/mve/intrinsics/vldrdq_gather_base_wb_z_u64.c: Likewise. * gcc.target/arm/mve/intrinsics/vldrwq_gather_base_wb_f32.c: Likewise. * gcc.target/arm/mve/intrinsics/vldrwq_gather_base_wb_s32.c: Likewise. * gcc.target/arm/mve/intrinsics/vldrwq_gather_base_wb_u32.c: Likewise. * gcc.target/arm/mve/intrinsics/vldrwq_gather_base_wb_z_f32.c: Likewise. 
* gcc.target/arm/mve/intrinsics/vldrwq_gather_base_wb_z_s32.c: Likewise. * gcc.target/arm/mve/intrinsics/vldrwq_gather_base_wb_z_u32.c: Likewise.
2020-04-02  libstdc++-v3/test: Better skip for "use_service.cc"  Andrea Corallo  2 files, -1/+10
2020-04-01 Andrea Corallo <andrea.corallo@arm.com> * testsuite/experimental/net/execution_context/use_service.cc: Require pthread and gthreads.
2020-04-02  [Fortran] Fix error cleanup of select rank (PR93522)  Tobias Burnus  4 files, -0/+37
PR fortran/93522 * match.c (gfc_match_select_rank): Fix error cleanup. PR fortran/93522 * gfortran.dg/select_rank_4.f90: New.
2020-04-02  S/390: Remove superfluous commutative constraint modifiers  Andreas Krebbel  3 files, -79/+101
For operands with an identical set of alternatives there is no point in marking them commutative. This patch removes the superfluous constraint modifiers in vector.md and vx-builtins.md since it might slow down reload without buying us anything. There were even two patterns where the constraint modifier was plain wrong: "sub<VF_HW>3" and "ior_not<VT>3". Fortunately it never had any effect. gcc/ChangeLog: 2020-04-02 Andreas Krebbel <krebbel@linux.ibm.com> * config/s390/vector.md ("<ti*>add<mode>3", "mul<mode>3") ("and<mode>3", "notand<mode>3", "ior<mode>3", "ior_not<mode>3") ("xor<mode>3", "notxor<mode>3", "smin<mode>3", "smax<mode>3") ("umin<mode>3", "umax<mode>3", "vec_widen_smult_even_<mode>") ("vec_widen_umult_even_<mode>", "vec_widen_smult_odd_<mode>") ("vec_widen_umult_odd_<mode>", "add<mode>3", "sub<mode>3") ("mul<mode>3", "fma<mode>4", "fms<mode>4", "neg_fma<mode>4") ("neg_fms<mode>4", "*smax<mode>3_vxe", "*smaxv2df3_vx") ("*smin<mode>3_vxe", "*sminv2df3_vx"): Remove % constraint modifier. ("vec_widen_umult_lo_<mode>", "vec_widen_umult_hi_<mode>") ("vec_widen_smult_lo_<mode>", "vec_widen_smult_hi_<mode>"): Remove constraints from expander. * config/s390/vx-builtins.md ("vacc<bhfgq>_<mode>", "vacq") ("vacccq", "vec_avg<mode>", "vec_avgu<mode>", "vec_vmal<mode>") ("vec_vmah<mode>", "vec_vmalh<mode>", "vec_vmae<mode>") ("vec_vmale<mode>", "vec_vmao<mode>", "vec_vmalo<mode>") ("vec_smulh<mode>", "vec_umulh<mode>", "vec_nor<mode>3") ("vfmin<mode>", "vfmax<mode>"): Remove % constraint modifier.
2020-04-02  fortran : ICE in gfc_resolve_findloc PR93498  Mark Eggleston  5 files, -0/+39
An ICE occurs when findloc is used with character arguments of different kinds. If the character kinds are different, reject the code. Original patch provided by Steven G. Kargl <kargl@gcc.gnu.org>. gcc/fortran/ChangeLog: PR fortran/93498 * check.c (gfc_check_findloc): If the kinds of the arguments differ, goto label "incompat". gcc/testsuite/ChangeLog: PR fortran/93498 * gfortran.dg/pr93498_1.f90: New test. * gfortran.dg/pr93498_2.f90: New test.
2020-04-02  fortran: ICE equivalence with an element of an array PR94030  Mark Eggleston  5 files, -3/+63
Deferred-size arrays cannot be used in equivalence statements. gcc/fortran/ChangeLog: PR fortran/94030 * resolve.c (resolve_equivalence): Correct formatting around the label "identical_types". Instead of using gfc_resolve_array_spec use is_non_constants_shape_array to determine whether the array can be used in an equivalence statement. gcc/testsuite/ChangeLog: PR fortran/94030 * gfortran.dg/pr94030_1.f90 * gfortran.dg/pr94030_2.f90
2020-04-02  Daily bump.  GCC Administrator  1 file, -1/+1
2020-04-01  d: Fix new tests gdc.dg/pr93038.d and gdc.dg/pr93038b.d in r10-7320 fail  Iain Buclaw  3 files, -2/+13
The scan-file match is likely too strict to always succeed, so it has instead been split up into a set of smaller matches. gcc/testsuite/ChangeLog: PR d/94315 * gdc.dg/pr93038.d: Split dg-final into multiple tests. * gdc.dg/pr93038b.d: Likewise.
2020-04-01  d: Fix gdc.dg/pr92216.d FAILs on 32-bit targets  Iain Buclaw  2 files, -2/+8
The symbol being scanned for only matched on 64-bit targets. gcc/testsuite/ChangeLog: PR d/94321 * gdc.dg/pr92216.d: Update to work on targets with 16 or 32-bit pointers.
2020-04-01  libstdc++: Move "free books" list from fsf.org to gnu.org  Gerald Pfeifer  3 files, -2/+8
* doc/xml/manual/appendix_free.xml: Move "free books" list from fsf.org to gnu.org. * doc/html/manual/appendix_free.html: Regenerate.
2020-04-01  analyzer: handle compound assignments [PR94378]  David Malcolm  13 files, -84/+522
PR analyzer/94378 reports a false -Wanalyzer-malloc-leak when returning a struct containing a malloc-ed pointer. The issue is that the assignment code was not handling compound copies, only copying top-level values from region to region, and not copying child values. This patch introduces a region_model::copy_region function, using it for assignments and when analyzing function return values. It recursively copies nested values within structs, unions, and arrays, fixing the bug. gcc/analyzer/ChangeLog: PR analyzer/94378 * checker-path.cc: Include "bitmap.h". * constraint-manager.cc: Likewise. * diagnostic-manager.cc: Likewise. * engine.cc: Likewise. (exploded_node::detect_leaks): Pass null region_id to pop_frame. * program-point.cc: Include "bitmap.h". * program-state.cc: Likewise. * region-model.cc (id_set<region_id>::id_set): Convert to... (region_id_set::region_id_set): ...this. (svalue_id_set::svalue_id_set): New ctor. (region_model::copy_region): New function. (region_model::copy_struct_region): New function. (region_model::copy_union_region): New function. (region_model::copy_array_region): New function. (stack_region::pop_frame): Drop return value. Add "result_dst_rid" param; if it is non-null, use copy_region to copy the result to it. Rather than capture and pass a single "known used" return value to be used by purge_unused_values, instead gather and pass a set of known used return values. (root_region::pop_frame): Drop return value. Add "result_dst_rid" param. (region_model::on_assignment): Use copy_region. (region_model::on_return): Likewise for the result. (region_model::on_longjmp): Pass null for pop_frame's result_dst_rid. (region_model::update_for_return_superedge): Pass the region for the return value of the call, if any, to pop_frame, rather than setting the lvalue for the lhs of the result. (region_model::pop_frame): Drop return value. Add "result_dst_rid" param. (region_model::purge_unused_svalues): Convert third param from an svalue_id * to an svalue_id_set *, updating the initial populating of the "used" bitmap accordingly. Don't remap it when done. (struct selftest::coord_test): New selftest fixture, extracted from... (selftest::test_dump_2): ...here. (selftest::test_compound_assignment): New selftest. (selftest::test_stack_frames): Pass null to new param of pop_frame. (selftest::analyzer_region_model_cc_tests): Call the new selftest. * region-model.h (class id_set): Delete template. (class region_id_set): Reimplement, using old id_set implementation. (class svalue_id_set): Likewise. Convert from auto_sbitmap to auto_bitmap. (region::get_active_view): New accessor. (stack_region::pop_frame): Drop return value. Add "result_dst_rid" param. (root_region::pop_frame): Likewise. (region_model::pop_frame): Likewise. (region_model::copy_region): New decl. (region_model::purge_unused_svalues): Convert third param from an svalue_id * to an svalue_id_set *. (region_model::copy_struct_region): New decl. (region_model::copy_union_region): New decl. (region_model::copy_array_region): New decl. gcc/testsuite/ChangeLog: PR analyzer/94378 * gcc.dg/analyzer/compound-assignment-1.c: New test. * gcc.dg/analyzer/compound-assignment-2.c: New test. * gcc.dg/analyzer/compound-assignment-3.c: New test.
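A reduced form of the false positive being fixed (illustrative, not one of the added testcases; compile with -fanalyzer):

    #include <stdlib.h>

    struct buf { void *data; };

    static struct buf make_buf (void)
    {
      struct buf b;
      b.data = malloc (64);
      return b;               // compound copy: the child value b.data must follow the struct
    }

    int main (void)
    {
      struct buf b = make_buf ();
      free (b.data);          // no leak; previously -Wanalyzer-malloc-leak fired here anyway
      return 0;
    }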
2020-04-01  subreg: Fix PR94123, SVN r273240 causes gcc.target/powerpc/pr87507.c to fail  Peter Bergner  2 files, -2/+7
Segher's patch that added -fsplit-wide-types-early and enabled it by default for rs6000 caused pr87507.c to FAIL, because when running lower-subreg earlier we don't see any pseudo-to-pseudo copies of our wide type (those are created by combine), and therefore we skip decomposing our TImode accesses. The fix here is just to always run the third pass of lower-subreg instead of disabling it if we ran the second pass. 2020-04-01 Peter Bergner <bergner@linux.ibm.com> PR rtl-optimization/94123 * lower-subreg.c (pass_lower_subreg3::gate): Remove test for flag_split_wide_types_early.
2020-04-01  doc: Fix typo  Joerg Sonnenberger  2 files, -1/+5
2020-04-01 Joerg Sonnenberger <joerg@bec.de> * doc/extend.texi (Common Function Attributes): Fix typo.
2020-04-01  Whoops, forgot the changelog  Segher Boessenkool  1 file, -0/+6
2020-04-01  doc: Fix a typo in the documentation of the copy attribute  Zackery Spytz  2 files, -1/+6
2020-04-01 Zackery Spytz <zspytz@gmail.com> gcc/ * doc/extend.texi: Fix a typo in the documentation of the copy function attribute.
2020-04-01  rs6000: Make code questionably using r2 not ICE (PR94420)  Segher Boessenkool  1 file, -1/+2
The example code in the PR uses r2 (the TOC register) directly. In the RTL generated for that, r2 is copied to some pseudo, and then cprop propagates that into a "*tocref<mode>" insn, because nothing is preventing it from doing that. So, put the same condition in the insn condition for this as we will later encounter in the constraint anyway, fixing this. 2020-04-01 Segher Boessenkool <segher@kernel.crashing.org> PR target/94420 * config/rs6000/rs6000.md (*tocref<mode> for P): Add insn condition on operands[1].
2020-04-01  Add testcase for already fixed PR [PR94436]  Jakub Jelinek  2 files, -0/+16
2020-04-01 Jakub Jelinek <jakub@redhat.com> PR middle-end/94436 * gcc.dg/pr94436.c: New test.
2020-04-01  fortran : FAIL: gfortran.dg/pr93365.f90 PR94386  Mark Eggleston  2 files, -7/+32
Failures of pr93365.f90, pr93600_1.f90 and pr93600_2.f90: the changes made for PR94246 deleted and changed code in expr.c that had been introduced for PR93600, and deleting that code broke the PR93600 test cases. Restoring the deleted code and leaving the changed code alone allows the cases for both PR93600 and PR94246 to pass. gcc/fortran/ChangeLog: PR fortran/94386 * expr.c (simplify_parameter_variable): Restore code deleted in PR94246.