|
RISC-V Linux encodes the ABI into the path, so in theory we can only use the
ABI to select multi-lib paths; there is no way to use different multi-lib
paths for `rv32i/ilp32` and `rv32ima/ilp32`, so we map both to `/lib/ilp32`.
That is hard to do with GCC's built-in multi-lib selection mechanism: the
built-in mechanism compares option strings and enumerates all possible reuse
rules at build time. That is impractical for RISC-V, which has a huge
number of `-march` combinations, so implementing a customized multi-lib
selection becomes the only solution.
After this patch, the multi-lib configuration is only used to determine which
ISA should be used when compiling the corresponding ABI variant.
During the multi-lib selection stage, `-mabi` is considered the only key to
select the multi-lib path.
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc (riscv_select_multilib_by_abi): New.
(riscv_select_multilib): New.
(riscv_compute_multilib): Extract logic to riscv_select_multilib and
also handle select_by_abi.
* config/riscv/elf.h (RISCV_USE_CUSTOMISED_MULTI_LIB): Change it
to select_by_abi_arch_cmodel from 1.
* config/riscv/linux.h (RISCV_USE_CUSTOMISED_MULTI_LIB): Define.
* config/riscv/riscv-opts.h (enum riscv_multilib_select_kind): New.
|
|
Clean up confusing changes from the recent refactoring for
parallel match.pd build.
gimple-match-head.o is not built. Remove related flags adjustment.
Autogenerated gimple-match-N.o files do not depend on
gimple-match-exports.cc.
{gimple,generic}-match-auto.h only depend on the prerequisites of the
corresponding s-{gimple,generic}-match stamp file, not on any .cc file.
gcc/ChangeLog:
* Makefile.in (gimple-match-head.o-warn): Remove.
(GIMPLE_MATCH_PD_SEQ_SRC): Do not depend on
gimple-match-exports.cc.
(gimple-match-auto.h): Only depend on s-gimple-match.
(generic-match-auto.h): Likewise.
|
|
While looking into a different issue, I noticed that it
would take until the second forwprop pass to do some
forward propagation, and that was because the SSA name was
used more than once but the second statement was
"dead" and we don't remove it until much later.
So this patch uses simple_dce_from_worklist instead of manually
removing the known-unused statements.
The propagate engine does not do a cleanup-cfg afterwards either, but manually
cleans up possible EH edges, so simple_dce_from_worklist
needs to communicate that back to the propagate engine.
Some testcases needed to be updated/changed because of the better optimization.
gcc.dg/pr81192.c even had to be changed to use the gimple FE so it would
be less fragile in the future too.
gcc.dg/tree-ssa/pr98737-1.c was failing because __atomic_fetch_ was being matched,
but in those cases the result was not being used, so both __atomic_fetch_ and
__atomic_x_and_fetch_ are valid choices and would not make a code generation difference.
evrp7.c, evrp8.c, vrp35.c, vrp36.c: just needed a slight change as the removal message
is slightly different.
kernels-alias-8.c: ccp1 is able to remove an unused load, which leaves ealias
one less load to analyze, so update the expected scan count.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
PR tree-optimization/109691
* tree-ssa-dce.cc (simple_dce_from_worklist): Add need_eh_cleanup
argument.
If the removed statement can throw, have need_eh_cleanup
include the bb of that statement.
* tree-ssa-dce.h (simple_dce_from_worklist): Update declaration.
* tree-ssa-propagate.cc (struct prop_stats_d): Remove
num_dce.
(substitute_and_fold_dom_walker::substitute_and_fold_dom_walker):
Initialize dceworklist instead of stmts_to_remove.
(substitute_and_fold_dom_walker::~substitute_and_fold_dom_walker):
Destroy dceworklist instead of stmts_to_remove.
(substitute_and_fold_dom_walker::before_dom_children):
Set dceworklist instead of adding to stmts_to_remove.
(substitute_and_fold_engine::substitute_and_fold):
Call simple_dce_from_worklist instead of popping
from the list.
Don't update the stats on removed statements.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/evrp7.c: Update for output change.
* gcc.dg/tree-ssa/evrp8.c: Likewise.
* gcc.dg/tree-ssa/vrp35.c: Likewise.
* gcc.dg/tree-ssa/vrp36.c: Likewise.
* gcc.dg/tree-ssa/pr98737-1.c: Update scan-tree-dump-not
to check for assignment too instead of just a call.
* c-c++-common/goacc/kernels-alias-8.c: Update test
for removal of load.
* gcc.dg/pr81192.c: Rewrite testcase as a gimple-based test.
|
|
gcc/fortran/ChangeLog:
* resolve.cc (resolve_select_type): Call free() unconditionally.
libgfortran/ChangeLog:
* caf/single.c (_gfortran_caf_register): Call free() unconditionally.
* io/async.c (update_pdt, async_io): Likewise.
* io/format.c (free_format_data): Likewise.
* io/transfer.c (st_read_done_worker, st_write_done_worker): Likewise.
* io/unix.c (mem_close): Likewise.
|
|
gcc/fortran/ChangeLog:
PR fortran/68800
* expr.cc (find_array_section): Fix mpz memory leak.
* simplify.cc (gfc_simplify_reshape): Fix mpz memory leaks in
error paths.
|
|
PR fortran/109662
libgfortran/ChangeLog:
* io/list_read.c: Add check for a semicolon after a namelist
name in read input. Issue a runtime error message.
gcc/testsuite/ChangeLog:
* gfortran.dg/pr109662-a.f90: New test.
|
|
|
PR c++/85979
gcc/cp/ChangeLog:
* cxx-pretty-print.cc (cxx_pretty_printer::unary_expression)
<case ALIGNOF_EXPR>: Consider ALIGNOF_EXPR_STD_P.
* error.cc (dump_expr) <case ALIGNOF_EXPR>: Likewise.
gcc/testsuite/ChangeLog:
* g++.dg/diagnostic/alignof4.C: New test.
|
|
It seems that ever since DR 2256, goto has been permitted to cross the
initialization of a trivially initialized object with a non-trivial
destructor. We already supported this as an -fpermissive extension, so
this patch just makes us support it unconditionally.
DR 2256
PR c++/103091
gcc/cp/ChangeLog:
* decl.cc (decl_jump_unsafe): Return bool instead of int.
Don't consider TYPE_HAS_NONTRIVIAL_DESTRUCTOR.
(check_previous_goto_1): Simplify now that decl_jump_unsafe
returns bool instead of int.
(check_goto): Likewise.
gcc/testsuite/ChangeLog:
* g++.old-deja/g++.other/init9.C: Don't expect diagnostics for
goto made valid by DR 2256.
* g++.dg/init/goto4.C: New test.
|
|
constraints_satisfied_p already carefully checks dependence of template
arguments before proceeding with satisfaction, so the dependence check
in instantiate_alias_template is unnecessary and overly conservative.
Getting rid of it allows us to check satisfaction ahead of time in more
cases as in the below testcase.
gcc/cp/ChangeLog:
* pt.cc (instantiate_alias_template): Exit early upon
error from coerce_template_parms. Remove dependence test
guarding constraints_satisfied_p.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/concepts-alias6.C: New test.
|
|
* Harden some tree accessor macros and fix a couple of bad
PLACEHOLDER_TYPE_CONSTRAINTS accesses uncovered by this.
* Use strip_innermost_template_args in outer_template_args.
* Add !processing_template_decl early exit tests to some dependence
predicates.
gcc/cp/ChangeLog:
* cp-tree.h (PLACEHOLDER_TYPE_CONSTRAINTS_INFO): Harden via
TEMPLATE_TYPE_PARM_CHECK.
(TPARMS_PRIMARY_TEMPLATE): Harden via TREE_VEC_CHECK.
(TEMPLATE_TEMPLATE_PARM_TEMPLATE_DECL): Harden via
TEMPLATE_TEMPLATE_PARM_CHECK.
* cxx-pretty-print.cc (cxx_pretty_printer::simple_type_specifier):
Guard PLACEHOLDER_TYPE_CONSTRAINTS access.
* error.cc (dump_type) <case TEMPLATE_TYPE_PARM>: Use separate
variable to store CLASS_PLACEHOLDER_TEMPLATE result.
* pt.cc (outer_template_args): Use strip_innermost_template_args.
(any_type_dependent_arguments_p): Exit early if
!processing_template_decl. Use range-based for.
(any_dependent_template_arguments_p): Likewise.
|
|
Here we're neglecting to propagate parenthesized-ness when the
member access (this->m) resolves to a static data member (and
thus finish_class_member_access_expr yields a VAR_DECL instead
of a COMPONENT_REF).
PR c++/98283
gcc/cp/ChangeLog:
* pt.cc (tsubst_copy_and_build) <case COMPONENT_REF>: Propagate
REF_PARENTHESIZED_P more generally via force_paren_expr.
* semantics.cc (force_paren_expr): Document default argument.
gcc/testsuite/ChangeLog:
* g++.dg/cpp1y/paren6.C: New test.
|
|
After r14-11-g2245459c85a3f4 we now coerce the template arguments of a
bound ttp again after level-lowering it. Notably a level-lowered ttp
doesn't have DECL_CONTEXT set, so during this coercion we fall back to
using current_template_parms to obtain the relevant set of in-scope
parameters.
But it turns out current_template_parms isn't properly set when
substituting the function type of a generic lambda, and so if the type
contains bound ttps that need to be lowered we'll crash during their
attempted coercion. Specifically in the first testcase below,
current_template_parms during the lambda type substitution (with T=int)
is "1 U" instead of the expected "2 TT, 1 U", and we crash when level
lowering TT<int>.
Ultimately the problem is that tsubst_lambda_expr does things in the
wrong order: we ought to substitute (and install) the in-scope template
parameters _before_ substituting anything that may use those template
parameters (such as the function type of a generic lambda). This patch
corrects this substitution order.
PR c++/109651
gcc/cp/ChangeLog:
* pt.cc (coerce_template_args_for_ttp): Mention we can hit the
current_template_parms fallback when level-lowering a bound ttp.
(tsubst_template_decl): Add lambda_tparms parameter. Prefer to
use lambda_tparms instead of substituting DECL_TEMPLATE_PARMS.
(tsubst_decl) <case TEMPLATE_DECL>: Pass NULL_TREE as lambda_tparms
to tsubst_template_decl.
(tsubst_lambda_expr): For a generic lambda, substitute
DECL_TEMPLATE_PARMS and set current_template_parms to it
before substituting the function type. Pass the substituted
DECL_TEMPLATE_PARMS as lambda_tparms to tsubst_template_decl.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/lambda-generic-ttp1.C: New test.
* g++.dg/cpp2a/lambda-generic-ttp2.C: New test.
|
|
aarch64_isa_flags (and aarch64_asm_isa_flags) are both aarch64_feature_flags
(uint64_t), but since r12-8000-g14814e20161d they have been saved/restored as
unsigned long. This makes no difference for LP64 targets, but on ILP32 and
LLP64IL32 targets it means they do not get restored correctly.
This patch changes over to use aarch64_feature_flags instead of unsigned long.
Committed as obvious after a bootstrap/test.
gcc/ChangeLog:
PR target/109762
* config/aarch64/aarch64-builtins.cc (aarch64_simd_switcher::aarch64_simd_switcher):
Change argument type to aarch64_feature_flags.
* config/aarch64/aarch64-protos.h (aarch64_simd_switcher): Change
constructor argument type to aarch64_feature_flags.
Change m_old_asm_isa_flags to be aarch64_feature_flags.
|
|
enforce_access currently checks processing_template_decl to decide
whether to defer the given access check until instantiation time.
But using this flag is unreliable because it gets cleared during e.g.
non-dependent initializer folding, and so can lead to premature access
check failures as in the below testcase. It seems better to check
current_template_parms instead.
PR c++/109480
gcc/cp/ChangeLog:
* semantics.cc (enforce_access): Check current_template_parms
instead of processing_template_decl when deciding whether to
defer the access check.
gcc/testsuite/ChangeLog:
* g++.dg/template/non-dependent25a.C: New test.
|
|
Here we're incorrectly deeming the templated call a.g() inside b's
initializer as potentially constant, despite g being non-constexpr,
which leads to us needlessly instantiating the initializer ahead of time
and which subsequently triggers a bug in access checking deferral (to be
fixed by the follow-up patch).
This patch fixes this by calling get_fns earlier during CALL_EXPR
potentiality checking so that when we extract a FUNCTION_DECL out of a
templated member function call (whose overall callee is typically a
COMPONENT_REF) we do the usual constexpr-eligibility checking for it.
In passing, I noticed the nearby special handling of the object argument
of a non-static member function call is effectively the same as the
generic argument handling a few lines below. So this patch just gets
rid of this special handling; otherwise we'd have to adapt it to handle
templated versions of such calls.
PR c++/109480
gcc/cp/ChangeLog:
* constexpr.cc (potential_constant_expression_1) <case CALL_EXPR>:
Reorganize to call get_fns sooner. Remove special handling of
the object argument of a non-static member function call. Remove
dead store to 'fun'.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/noexcept59.C: Make e() constexpr so that the
expected "without object" diagnostic isn't replaced by a
"call to non-constexpr function" diagnostic.
* g++.dg/template/non-dependent25.C: New test.
|
|
Compared with the previous version, this patch updates the comments only.
https://gcc.gnu.org/pipermail/gcc-patches/2022-December/608293.html
For a complicated 64-bit constant, below is one instruction sequence to
build it:
build:
lis 9,0x800a
ori 9,9,0xabcd
sldi 9,9,32
oris 9,9,0xc167
ori 9,9,0xfa16
while we can also use the sequence below:
lis 9,0xc167
lis 10,0x800a
ori 9,9,0xfa16
ori 10,10,0xabcd
rldimi 9,10,32,0
This sequence first uses two registers to build the high and low parts,
and then merges them.
From a parallelism point of view, this sequence would be faster. (Of course,
it uses one more register, with potential register pressure.)
The instruction sequence with two registers for the parallel version can be
generated only if can_create_pseudo_p. Otherwise, the one-register
sequence is generated.
gcc/ChangeLog:
* config/rs6000/rs6000.cc (rs6000_emit_set_long_const): Generate
more parallel code if can_create_pseudo_p.
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/parall_5insn_const.c: New test.
|
|
Following up on posts/reviews by Segher and Uros, there's some question
over why the middle-end's lower subreg pass emits a clobber (of a
multi-word register) into the instruction stream before emitting the
sequence of moves of the word-sized parts. This clobber interferes
with (LRA) register allocation, preventing the multi-word pseudo from
remaining in the same hard registers. This patch eliminates this
(presumably superfluous) clobber and thereby improves register allocation.
A concrete example of the observed improvement is PR target/43644.
For the test case:
__int128 foo(__int128 x, __int128 y) { return x+y; }
on x86_64-pc-linux-gnu, gcc -O2 currently generates:
foo: movq %rsi, %rax
movq %rdi, %r8
movq %rax, %rdi
movq %rdx, %rax
movq %rcx, %rdx
addq %r8, %rax
adcq %rdi, %rdx
ret
with this patch, we now generate the much improved:
foo: movq %rdx, %rax
movq %rcx, %rdx
addq %rdi, %rax
adcq %rsi, %rdx
ret
2023-05-07 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR target/43644
* lower-subreg.cc (resolve_simple_move): Don't emit a clobber
immediately before moving a multi-word register by parts.
gcc/testsuite/ChangeLog
PR target/43644
* gcc.target/i386/pr43644.c: New test case.
|
|
|
gcc/
* config/riscv/riscv-v.cc (riscv_vector_preferred_simd_mode): Delete.
|
|
While working on autovectorization for the RISC-V port I encountered an issue
where can_duplicate_and_interleave_p assumes that GET_MODE_NUNITS is
evenly divisible by two. The RISC-V target has vector modes (e.g. VNx1DImode)
where GET_MODE_NUNITS is equal to one.
Tested on RISC-V and x86_64-linux-gnu. Okay?
gcc/
* tree-vect-slp.cc (can_duplicate_and_interleave_p):
Check that GET_MODE_NUNITS is a multiple of 2.
|
|
gcc/
* config/riscv/riscv.cc
(riscv_estimated_poly_value): Implement
TARGET_ESTIMATED_POLY_VALUE.
(riscv_preferred_simd_mode): Implement
TARGET_VECTORIZE_PREFERRED_SIMD_MODE.
(riscv_get_mask_mode): Implement TARGET_VECTORIZE_GET_MASK_MODE.
(riscv_empty_mask_is_expensive): Implement
TARGET_VECTORIZE_EMPTY_MASK_IS_EXPENSIVE.
(riscv_vectorize_create_costs): Implement
TARGET_VECTORIZE_CREATE_COSTS.
(riscv_support_vector_misalignment): Implement
TARGET_VECTORIZE_SUPPORT_VECTOR_MISALIGNMENT.
(TARGET_ESTIMATED_POLY_VALUE): Register target macro.
(TARGET_VECTORIZE_GET_MASK_MODE): Ditto.
(TARGET_VECTORIZE_EMPTY_MASK_IS_EXPENSIVE): Ditto.
(TARGET_VECTORIZE_SUPPORT_VECTOR_MISALIGNMENT): Ditto.
|
|
gcc/
* config/riscv/riscv-v.cc (autovec_use_vlmax_p): Remove
duplicate definition.
|
|
* config/riscv/riscv-v.cc (autovec_use_vlmax_p): New function.
(riscv_vector_preferred_simd_mode): Ditto.
(get_mask_policy_no_pred): Ditto.
(get_tail_policy_no_pred): Ditto.
(riscv_vector_mask_mode_p): Ditto.
(riscv_vector_get_mask_mode): Ditto.
|
|
gcc/
* config/riscv/riscv-vector-builtins.cc (get_tail_policy_for_pred):
Remove static declaration to make it externally visible.
(get_mask_policy_for_pred): Ditto.
* config/riscv/riscv-vector-builtins.h (get_tail_policy_for_pred):
New external declaration.
(get_mask_policy_for_pred): Ditto.
|
|
gcc/
* config/riscv/riscv-protos.h (riscv_vector_mask_mode_p): New.
(riscv_vector_get_mask_mode): Ditto.
(get_mask_policy_no_pred): Ditto.
(get_tail_policy_no_pred): Ditto.
|
|
This commit implements the target macros for shrink-wrapping of function
prologues/epilogues on LoongArch.
Bootstrapped and regtested on loongarch64-linux-gnu. I don't have
access to SPEC CPU, so I hope the reviewer can run a benchmark to see
whether there is a real benefit.
gcc/ChangeLog:
* config/loongarch/loongarch.h (struct machine_function): Add
reg_is_wrapped_separately array for register wrapping
information.
* config/loongarch/loongarch.cc
(loongarch_get_separate_components): New function.
(loongarch_components_for_bb): Likewise.
(loongarch_disqualify_components): Likewise.
(loongarch_process_components): Likewise.
(loongarch_emit_prologue_components): Likewise.
(loongarch_emit_epilogue_components): Likewise.
(loongarch_set_handled_components): Likewise.
(TARGET_SHRINK_WRAP_GET_SEPARATE_COMPONENTS): Define.
(TARGET_SHRINK_WRAP_COMPONENTS_FOR_BB): Likewise.
(TARGET_SHRINK_WRAP_DISQUALIFY_COMPONENTS): Likewise.
(TARGET_SHRINK_WRAP_EMIT_PROLOGUE_COMPONENTS): Likewise.
(TARGET_SHRINK_WRAP_EMIT_EPILOGUE_COMPONENTS): Likewise.
(TARGET_SHRINK_WRAP_SET_HANDLED_COMPONENTS): Likewise.
(loongarch_for_each_saved_reg): Skip registers that are wrapped
separately.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/shrink-wrap.c: New test.
|
|
This prevents a spurious message when building a cross-compiler while the
target libc is not installed yet:
cc1: error: no include path in which to search for stdc-predef.h
As stdc-predef.h was added so that libc can define the __STDC_* macros, it is
unlikely the header will ever contain bad definitions without the "__"
prefix, so this should be safe.
gcc/ChangeLog:
PR other/109522
* Makefile.in (s-macro_list): Pass -nostdinc to
$(GCC_FOR_TARGET).
|
|
gcc/ChangeLog:
* config/riscv/riscv-protos.h (preferred_simd_mode): New function.
* config/riscv/riscv-v.cc (autovec_use_vlmax_p): Ditto.
(preferred_simd_mode): Ditto.
* config/riscv/riscv.cc (riscv_get_arg_info): Handle RVV type in function arg.
(riscv_convert_vector_bits): Adjust for RVV auto-vectorization.
(riscv_preferred_simd_mode): New function.
(TARGET_VECTORIZE_PREFERRED_SIMD_MODE): New target hook support.
* config/riscv/vector.md: Add autovec.md.
* config/riscv/autovec.md: New file.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/rvv.exp: Add testcases for RVV auto-vectorization.
* gcc.target/riscv/rvv/autovec/fixed-vlmax-1.c: New test.
* gcc.target/riscv/rvv/autovec/partial/single_rgroup-1.c: New test.
* gcc.target/riscv/rvv/autovec/partial/single_rgroup-1.h: New test.
* gcc.target/riscv/rvv/autovec/partial/single_rgroup_run-1.c: New test.
* gcc.target/riscv/rvv/autovec/scalable-1.c: New test.
* gcc.target/riscv/rvv/autovec/template-1.h: New test.
* gcc.target/riscv/rvv/autovec/v-1.c: New test.
* gcc.target/riscv/rvv/autovec/v-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve32f-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve32f-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve32f-3.c: New test.
* gcc.target/riscv/rvv/autovec/zve32f_zvl128b-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve32f_zvl128b-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve32x-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve32x-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve32x-3.c: New test.
* gcc.target/riscv/rvv/autovec/zve32x_zvl128b-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve32x_zvl128b-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve64d-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve64d-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve64d-3.c: New test.
* gcc.target/riscv/rvv/autovec/zve64d_zvl128b-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve64d_zvl128b-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve64f-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve64f-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve64f-3.c: New test.
* gcc.target/riscv/rvv/autovec/zve64f_zvl128b-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve64f_zvl128b-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve64x-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve64x-2.c: New test.
* gcc.target/riscv/rvv/autovec/zve64x-3.c: New test.
* gcc.target/riscv/rvv/autovec/zve64x_zvl128b-1.c: New test.
* gcc.target/riscv/rvv/autovec/zve64x_zvl128b-2.c: New test.
|
|
PR fortran/109662
libgfortran/ChangeLog:
* io/list_read.c: Add a check for a comma after a namelist
name in read input. Issue a runtime error message.
gcc/testsuite/ChangeLog:
* gfortran.dg/pr109662.f90: New test.
|
|
Similarly to the earlier sqrt patch, this patch attempts to improve
sin/cos ranges. As the functions are periodic, for the reverse range
there is not much we can do (but I've discovered I forgot to take
into account the boundary ulps for the discovery of impossible result
ranges). For fold_range, we can do something only if the range is
narrow enough (narrower than 2*pi). The patch computes the value of
the functions (taking ulps into account) and also computes the derivative
to find out if the function is growing or declining on the boundaries and
from that it figures out if the result range should be
[min (fn (lb), fn (ub)), max (fn (lb), fn (ub))] or if it needs to be
extended to 1 (actually using +Inf) and/or -1 (actually using -Inf) because
there must be a local minimum and/or maximum in the range.
2023-05-06 Jakub Jelinek <jakub@redhat.com>
* real.h (dconst_pi): Define.
(dconst_e_ptr): Formatting fix.
(dconst_pi_ptr): Declare.
* real.cc (dconst_pi_ptr): New function.
* gimple-range-op.cc (cfn_sincos::fold_range): Intersect the generic
boundaries range with range computed from sin/cos of the particular
bounds if the argument range is shorter than 2*pi.
(cfn_sincos::op1_range): Take bulps into account when determining
which result ranges are always invalid or behave like known NAN.
* gcc.dg/tree-ssa/range-sincos-2.c: New test.
|
|
The equal_p method in vrange_storage is only used to compare ranges
that are the same type. No sense passing the type if it can be
determined from the range being compared.
gcc/ChangeLog:
* gimple-range-cache.cc (sbr_sparse_bitmap::set_bb_range): Do not
pass type to vrange_storage::equal_p.
* value-range-storage.cc (vrange_storage::equal_p): Remove type.
(irange_storage::equal_p): Same.
(frange_storage::equal_p): Same.
* value-range-storage.h (class frange_storage): Same.
|
|
This patch fixes my recent optimization patch:
https://github.com/gcc-mirror/gcc/commit/d51f2456ee51bd59a79b4725ca0e488c25260bbf
In that patch, new_info = parse_insn (i) is not correct.
Consider the following case:
vsetvli a5,a4, e8,m1
..
vsetvli zero,a5, e32, m4
vle8.v
vmacc.vv
...
Since we have backward demand fusion in Phase 1, the real demand of "vle8.v" is e32, m4.
However, parse_insn (vle8.v) gives e8, m1, which is not correct.
So this patch changes new_info = new_info.parse_insn (i)
into:
vector_insn_info new_info = m_vector_manager->vector_insn_infos[i->uid ()];
so that we can correctly optimize the code into:
vsetvli a5,a4, e32, m4
..
.. (vsetvli zero,a5, e32, m4 is removed)
vle8.v
vmacc.vv
Since m_vector_manager->vector_insn_infos is a member variable of the pass_vsetvl
class, we remove the static function "local_eliminate_vsetvl_insn" and make it a
member function of the pass_vsetvl class.
PR target/109748
gcc/ChangeLog:
* config/riscv/riscv-vsetvl.cc (local_eliminate_vsetvl_insn): Remove it.
(pass_vsetvl::local_eliminate_vsetvl_insn): New function.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/vsetvl/pr109748.c: New test.
|
|
Use swap_commutative_operands_p for canonicalization. When both values
have the same operand precedence, the first bit in the mask should
select the first operand.
The canonicalization should help backends with pattern matching, i.e. the x86
backend has lots of vec_merge patterns; combine will create any form
of vec_merge (mask or inverted mask), so the backend needs two
patterns to match exactly one instruction. The canonicalization can
simplify those two patterns to one.
gcc/ChangeLog:
* combine.cc (maybe_swap_commutative_operands): Canonicalize
vec_merge when mask is constant.
* doc/md.texi: Document vec_merge canonicalization.
|
|
The previous patch just added basic intrinsic ranges for sqrt
([-0.0, +Inf] +-NAN being the general result range of the function
and [-0.0, +Inf] the general operand range if result isn't NAN etc.),
the following patch intersects those ranges with particular range
computed from argument or result's exact range with the expected
error in ulps taken into account, and adds a function (a frange_arithmetic
variant) which can be used as a helper by other functions as well.
2023-05-06 Jakub Jelinek <jakub@redhat.com>
* value-range.h (frange_arithmetic): Declare.
* range-op-float.cc (frange_arithmetic): No longer static.
* gimple-range-op.cc (frange_mpfr_arg1): New function.
(cfn_sqrt::fold_range): Intersect the generic boundaries range
with range computed from sqrt of the particular bounds.
(cfn_sqrt::op1_range): Intersect the generic boundaries range
with range computed from squared particular bounds.
* gcc.dg/tree-ssa/range-sqrt-2.c: New test.
|
|
Some hosts like AIX don't have the seq command; this patch replaces it
with something that uses just the GNU make features we've already been
using for the parallel make check.
2023-05-06 Jakub Jelinek <jakub@redhat.com>
* Makefile.in (check_p_numbers): Rename to one_to_9999, move
earlier with helper variables also renamed.
(MATCH_SPLUT_SEQ): Use $(wordlist 1,$(NUM_MATCH_SPLITS),$(one_to_9999))
instead of $(shell seq 1 $(NUM_MATCH_SPLITS)).
(check_p_subdirs): Use $(one_to_9999) instead of $(check_p_numbers).
|
|
|
|
Unfortunately, this doesn't cause a performance improvement for coremark,
but it happens a few times in newlib, just enough to affect coremark
by 0.01% in size (or 4 bytes, and three cycles; __fwalk_sglue and
__vfiprintf_r each shrink by two bytes).
gcc:
* config/cris/cris.md (splitop): Add PLUS.
* config/cris/cris.cc (cris_split_constant): Also handle
PLUS when a split into two insns may be useful.
gcc/testsuite:
* gcc.target/cris/peep2-addsplit1.c: New test.
|
|
While moves of constants into registers are separately
optimizable, a combination of a move with a subsequent "and"
is slightly preferable even if the move can be generated
with the same number (and timing) of insns, as moves of
"just" registers are eliminated now and then in different
passes, loosely speaking. This movandsplit1 pattern feeds
into the opsplit1/AND peephole2, with matching occurrences
observed in the floating point functions in libgcc. Also, a
test-case to fit. Coremark improvements are unimpressive:
less than 0.0003% speed, 0.1% size.
But that was pre-LRA; after the switch to LRA this peephole2
doesn't match anymore (for any of coremark, local tests,
libgcc and newlib libc) and the test-case passes with and
without the patch. Still, there's no apparent reason why
LRA prefers "move R1,R2" "and I,R2" to "move I,R1" "and
R1,R2", or why that wouldn't "randomly" change (also seen
with other operations than "and"). Thus committed.
gcc:
* config/cris/cris.md (movandsplit1): New define_peephole2.
gcc/testsuite:
* gcc.target/cris/peep2-movandsplit1.c: New test.
|
|
Observed after opsplit1 with AND in libgcc floating-point
functions, like the first spottings of opsplit1/AND
opportunities. Two patterns are nominally needed, as the
peephole2 optimizer continues from the *first replacement*
insn, not from a minimum context for general matching; one
that includes it as the last match.
But, the "free-standing" opportunity (three shifts) didn't
match by itself in a gcc build of libraries plus running the
test-suite, and thus deemed uninteresting and left out.
(As expected; if it had matched, that'd have indicated a
previously missed optimization or other problem elsewhere.)
Only the one that includes the previous define_peephole2
that may generate the sequence (i.e. opsplit1/AND), matches
easily.
Coremark results aren't impressive though: 0.003%
improvement in speed and slightly less than 0.1% in size.
A testcase is added to match and another one to cover a case
of movulsr checking that it's used; it's preferable to
lsrandsplit when both would match.
gcc:
* config/cris/cris.md (lsrandsplit1): New define_peephole2.
gcc/testsuite:
* gcc.target/cris/peep2-lsrandsplit1.c,
gcc.target/cris/peep2-movulsr2.c: New tests.
|
|
I was a bit surprised when my newly-added define_peephole2 didn't
match, but it was because it was expected to partially match the
generated output of a previous define_peephole2, which matched and
modified the last insn of a sequence to be matched. I had assumed
that the algorithm backed-up the size of the match-buffer, thereby
exposing newly created opportunities *with sufficient context* to all
define_peephole2's. While things can change in that direction, let's
start with documenting the current state.
* doc/md.texi (define_peephole2): Document order of scanning.
|
|
Fortran allows overloading of intrinsic operators also for operands of
numeric intrinsic types. The intrinsic operator versions are used
according to the rules of F2018 table 10.2 and imply type conversion as
long as the operand ranks are conformable. Otherwise no type conversion
shall be performed to allow the resolution of a matching user-defined
operator.
gcc/fortran/ChangeLog:
PR fortran/109641
* arith.cc (eval_intrinsic): Check conformability of ranks of operands
for intrinsic binary operators before performing type conversions.
* gfortran.h (gfc_op_rank_conformable): Add prototype.
* resolve.cc (resolve_operator): Check conformability of ranks of
operands for intrinsic binary operators before performing type
conversions.
(gfc_op_rank_conformable): New helper function to compare ranks of
operands of binary operator.
gcc/testsuite/ChangeLog:
PR fortran/109641
* gfortran.dg/overload_5.f90: New test.
|
|
This patch tries to legitimise const0_rtx (aka the zero register)
as the base register for the RVV indexed load/store instructions
by allowing a const as the operand of the indexed RTL pattern.
The underlying combine pass will then try to perform the const
propagation.
For example:
vint32m1_t
test_vluxei32_v_i32m1_shortcut (vuint32m1_t bindex, size_t vl)
{
return __riscv_vluxei32_v_i32m1 ((int32_t *)0, bindex, vl);
}
Before this patch:
li a5,0 <- can be eliminated.
vl1re32.v v1,0(a1)
vsetvli zero,a2,e32,m1,ta,ma
vluxei32.v v1,(a5),v1 <- can propagate the const 0 to a5 here.
vs1r.v v1,0(a0)
ret
After this patch:
test_vluxei32_v_i32m1_shortcut:
vl1re32.v v1,0(a1)
vsetvli zero,a2,e32,m1,ta,ma
vluxei32.v v1,(0),v1
vs1r.v v1,0(a0)
ret
As above, this patch allows the const 0 (aka the zero register) to be
propagated to the base register of the RVV indexed load in the combine
pass. This may benefit the underlying RVV auto-vectorization.
gcc/ChangeLog:
* config/riscv/vector.md: Allow const as the operand of RVV
indexed load/store.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/zero_base_load_store_optimization.c:
Adjust indexed load/store check condition.
Signed-off-by: Pan Li <pan2.li@intel.com>
Co-authored-by: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>
|
|
When some RVV integer compare operators act on the same vector registers
without a mask, they can be simplified to VMSET.
This patch allows the eq, le, leu, ge and geu operators to perform this
kind of simplification by adding one macro in riscv.h for simplify_rtx.
Given we have:
vbool1_t test_shortcut_for_riscv_vmseq_case_0(vint8m8_t v1, size_t vl)
{
return __riscv_vmseq_vv_i8m8_b1(v1, v1, vl);
}
Before this patch:
vsetvli zero,a2,e8,m8,ta,ma
vl8re8.v v8,0(a1)
vmseq.vv v8,v8,v8
vsetvli a5,zero,e8,m8,ta,ma
vsm.v v8,0(a0)
ret
After this patch:
vsetvli zero,a2,e8,m8,ta,ma
vmset.m v1 <- optimized to vmset.m
vsetvli a5,zero,e8,m8,ta,ma
vsm.v v1,0(a0)
ret
As above, one instruction is eliminated and fewer vector registers are
required.
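A minimal model of the rewrite that simplify_rtx can now perform: any reflexive comparison applied to the same register yields the all-true mask, i.e. a vmset.m. (Illustrative sketch only; the real transformation happens on RTL.)

```python
import operator

# The reflexive comparisons: x OP x is always true for these.
REFLEXIVE = {"eq": operator.eq, "le": operator.le, "leu": operator.le,
             "ge": operator.ge, "geu": operator.ge}

def compare_vv(op, v1, v2):
    """Lane-wise compare producing a mask (list of booleans)."""
    return [REFLEXIVE[op](a, b) for a, b in zip(v1, v2)]

def simplify_compare(op, v1, v2):
    """If both operands are the same register and the comparison is
    reflexive, return the all-true mask directly (vmset.m)."""
    if op in REFLEXIVE and v1 is v2:
        return [True] * len(v1)  # vmset.m
    return compare_vv(op, v1, v2)
```

The simplified path never reads the vector operand, which is why the `vl8re8.v` load disappears from the generated code above.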
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv.h (VECTOR_STORE_FLAG_VALUE): Add new macro
consumed by simplify_rtx.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c:
Adjust test check condition.
|
|
Implement vshrq and vrshrq using the new MVE builtins framework.
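The per-lane semantics of the two intrinsics can be sketched as follows (an illustrative model of what each lane computes, assuming signed lanes; it is not the builtin expansion itself):

```python
def vshr_lane(a: int, n: int) -> int:
    """vshrq: plain right shift of one lane by the immediate n.
    Python's >> on ints is an arithmetic shift, matching the
    signed variant."""
    return a >> n

def vrshr_lane(a: int, n: int) -> int:
    """vrshrq: rounding right shift - add half the shift step
    before shifting, so results round to nearest."""
    return (a + (1 << (n - 1))) >> n
```

The two differ only in the rounding increment, which is why they can later share one `@mve_<mve_insn>q_n_<supf><mode>` pattern.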
2022-09-08 Christophe Lyon <christophe.lyon@arm.com>
gcc/
* config/arm/arm-mve-builtins-base.cc (vrshrq, vshrq): New.
* config/arm/arm-mve-builtins-base.def (vrshrq, vshrq): New.
* config/arm/arm-mve-builtins-base.h (vrshrq, vshrq): New.
* config/arm/arm_mve.h (vshrq): Remove.
(vrshrq): Remove.
(vrshrq_m): Remove.
(vshrq_m): Remove.
(vrshrq_x): Remove.
(vshrq_x): Remove.
(vshrq_n_s8): Remove.
(vshrq_n_s16): Remove.
(vshrq_n_s32): Remove.
(vshrq_n_u8): Remove.
(vshrq_n_u16): Remove.
(vshrq_n_u32): Remove.
(vrshrq_n_u8): Remove.
(vrshrq_n_s8): Remove.
(vrshrq_n_u16): Remove.
(vrshrq_n_s16): Remove.
(vrshrq_n_u32): Remove.
(vrshrq_n_s32): Remove.
(vrshrq_m_n_s8): Remove.
(vrshrq_m_n_s32): Remove.
(vrshrq_m_n_s16): Remove.
(vrshrq_m_n_u8): Remove.
(vrshrq_m_n_u32): Remove.
(vrshrq_m_n_u16): Remove.
(vshrq_m_n_s8): Remove.
(vshrq_m_n_s32): Remove.
(vshrq_m_n_s16): Remove.
(vshrq_m_n_u8): Remove.
(vshrq_m_n_u32): Remove.
(vshrq_m_n_u16): Remove.
(vrshrq_x_n_s8): Remove.
(vrshrq_x_n_s16): Remove.
(vrshrq_x_n_s32): Remove.
(vrshrq_x_n_u8): Remove.
(vrshrq_x_n_u16): Remove.
(vrshrq_x_n_u32): Remove.
(vshrq_x_n_s8): Remove.
(vshrq_x_n_s16): Remove.
(vshrq_x_n_s32): Remove.
(vshrq_x_n_u8): Remove.
(vshrq_x_n_u16): Remove.
(vshrq_x_n_u32): Remove.
(__arm_vshrq_n_s8): Remove.
(__arm_vshrq_n_s16): Remove.
(__arm_vshrq_n_s32): Remove.
(__arm_vshrq_n_u8): Remove.
(__arm_vshrq_n_u16): Remove.
(__arm_vshrq_n_u32): Remove.
(__arm_vrshrq_n_u8): Remove.
(__arm_vrshrq_n_s8): Remove.
(__arm_vrshrq_n_u16): Remove.
(__arm_vrshrq_n_s16): Remove.
(__arm_vrshrq_n_u32): Remove.
(__arm_vrshrq_n_s32): Remove.
(__arm_vrshrq_m_n_s8): Remove.
(__arm_vrshrq_m_n_s32): Remove.
(__arm_vrshrq_m_n_s16): Remove.
(__arm_vrshrq_m_n_u8): Remove.
(__arm_vrshrq_m_n_u32): Remove.
(__arm_vrshrq_m_n_u16): Remove.
(__arm_vshrq_m_n_s8): Remove.
(__arm_vshrq_m_n_s32): Remove.
(__arm_vshrq_m_n_s16): Remove.
(__arm_vshrq_m_n_u8): Remove.
(__arm_vshrq_m_n_u32): Remove.
(__arm_vshrq_m_n_u16): Remove.
(__arm_vrshrq_x_n_s8): Remove.
(__arm_vrshrq_x_n_s16): Remove.
(__arm_vrshrq_x_n_s32): Remove.
(__arm_vrshrq_x_n_u8): Remove.
(__arm_vrshrq_x_n_u16): Remove.
(__arm_vrshrq_x_n_u32): Remove.
(__arm_vshrq_x_n_s8): Remove.
(__arm_vshrq_x_n_s16): Remove.
(__arm_vshrq_x_n_s32): Remove.
(__arm_vshrq_x_n_u8): Remove.
(__arm_vshrq_x_n_u16): Remove.
(__arm_vshrq_x_n_u32): Remove.
(__arm_vshrq): Remove.
(__arm_vrshrq): Remove.
(__arm_vrshrq_m): Remove.
(__arm_vshrq_m): Remove.
(__arm_vrshrq_x): Remove.
(__arm_vshrq_x): Remove.
|
|
Factorize vshrq and vrshrq so that they use the same pattern.
2022-09-08 Christophe Lyon <christophe.lyon@arm.com>
gcc/
* config/arm/iterators.md (MVE_VSHRQ_M_N, MVE_VSHRQ_N): New.
(mve_insn): Add vrshr, vshr.
* config/arm/mve.md (mve_vshrq_n_<supf><mode>)
(mve_vrshrq_n_<supf><mode>): Merge into ...
(@mve_<mve_insn>q_n_<supf><mode>): ... this.
(mve_vrshrq_m_n_<supf><mode>, mve_vshrq_m_n_<supf><mode>): Merge
into ...
(@mve_<mve_insn>q_m_n_<supf><mode>): ... this.
|
|
This patch adds the binary_rshift shape description.
2022-09-08 Christophe Lyon <christophe.lyon@arm.com>
gcc/
* config/arm/arm-mve-builtins-shapes.cc (binary_rshift): New.
* config/arm/arm-mve-builtins-shapes.h (binary_rshift): New.
|
|
Implement vqrshrunbq, vqrshruntq, vqshrunbq, vqshruntq using the new
MVE builtins framework.
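The shared per-lane semantics of these four intrinsics, saturating shift-right-and-narrow from a signed lane to an unsigned narrower lane, can be sketched as follows (an illustrative model under the assumption that `out_bits` is the narrower element width, e.g. 8 for the s16 variants):

```python
def sat_unsigned(x: int, bits: int) -> int:
    """Saturate a value to the unsigned range [0, 2**bits - 1]."""
    return max(0, min(x, (1 << bits) - 1))

def vqshrun_lane(a: int, n: int, out_bits: int) -> int:
    """vqshrunbq/vqshruntq: shift a signed lane right by n, then
    saturate to the unsigned narrower element."""
    return sat_unsigned(a >> n, out_bits)

def vqrshrun_lane(a: int, n: int, out_bits: int) -> int:
    """vqrshrunbq/vqrshruntq: rounding variant - add half the shift
    step before shifting."""
    return sat_unsigned((a + (1 << (n - 1))) >> n, out_bits)
```

The b/t suffixes only select whether the bottom or top half of the destination vector receives the narrowed lanes; the arithmetic above is identical for both.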
2022-09-08 Christophe Lyon <christophe.lyon@arm.com>
gcc/
* config/arm/arm-mve-builtins-base.cc (FUNCTION_ONLY_N_NO_U_F): New.
(vqshrunbq, vqshruntq, vqrshrunbq, vqrshruntq): New.
* config/arm/arm-mve-builtins-base.def (vqshrunbq, vqshruntq)
(vqrshrunbq, vqrshruntq): New.
* config/arm/arm-mve-builtins-base.h (vqshrunbq, vqshruntq)
(vqrshrunbq, vqrshruntq): New.
* config/arm/arm-mve-builtins.cc
(function_instance::has_inactive_argument): Handle vqshrunbq,
vqshruntq, vqrshrunbq, vqrshruntq.
* config/arm/arm_mve.h (vqrshrunbq): Remove.
(vqrshruntq): Remove.
(vqrshrunbq_m): Remove.
(vqrshruntq_m): Remove.
(vqrshrunbq_n_s16): Remove.
(vqrshrunbq_n_s32): Remove.
(vqrshruntq_n_s16): Remove.
(vqrshruntq_n_s32): Remove.
(vqrshrunbq_m_n_s32): Remove.
(vqrshrunbq_m_n_s16): Remove.
(vqrshruntq_m_n_s32): Remove.
(vqrshruntq_m_n_s16): Remove.
(__arm_vqrshrunbq_n_s16): Remove.
(__arm_vqrshrunbq_n_s32): Remove.
(__arm_vqrshruntq_n_s16): Remove.
(__arm_vqrshruntq_n_s32): Remove.
(__arm_vqrshrunbq_m_n_s32): Remove.
(__arm_vqrshrunbq_m_n_s16): Remove.
(__arm_vqrshruntq_m_n_s32): Remove.
(__arm_vqrshruntq_m_n_s16): Remove.
(__arm_vqrshrunbq): Remove.
(__arm_vqrshruntq): Remove.
(__arm_vqrshrunbq_m): Remove.
(__arm_vqrshruntq_m): Remove.
(vqshrunbq): Remove.
(vqshruntq): Remove.
(vqshrunbq_m): Remove.
(vqshruntq_m): Remove.
(vqshrunbq_n_s16): Remove.
(vqshruntq_n_s16): Remove.
(vqshrunbq_n_s32): Remove.
(vqshruntq_n_s32): Remove.
(vqshrunbq_m_n_s32): Remove.
(vqshrunbq_m_n_s16): Remove.
(vqshruntq_m_n_s32): Remove.
(vqshruntq_m_n_s16): Remove.
(__arm_vqshrunbq_n_s16): Remove.
(__arm_vqshruntq_n_s16): Remove.
(__arm_vqshrunbq_n_s32): Remove.
(__arm_vqshruntq_n_s32): Remove.
(__arm_vqshrunbq_m_n_s32): Remove.
(__arm_vqshrunbq_m_n_s16): Remove.
(__arm_vqshruntq_m_n_s32): Remove.
(__arm_vqshruntq_m_n_s16): Remove.
(__arm_vqshrunbq): Remove.
(__arm_vqshruntq): Remove.
(__arm_vqshrunbq_m): Remove.
(__arm_vqshruntq_m): Remove.
|
|
Factorize vqrshrunb, vqrshrunt, vqshrunb, vqshrunt so that they use
existing patterns.
2022-09-08 Christophe Lyon <christophe.lyon@arm.com>
gcc/
* config/arm/iterators.md (MVE_SHRN_N): Add VQRSHRUNBQ,
VQRSHRUNTQ, VQSHRUNBQ, VQSHRUNTQ.
(MVE_SHRN_M_N): Likewise.
(mve_insn): Add vqrshrunb, vqrshrunt, vqshrunb, vqshrunt.
(isu): Add VQRSHRUNBQ, VQRSHRUNTQ, VQSHRUNBQ, VQSHRUNTQ.
(supf): Likewise.
* config/arm/mve.md (mve_vqrshrunbq_n_s<mode>): Remove.
(mve_vqrshruntq_n_s<mode>): Remove.
(mve_vqshrunbq_n_s<mode>): Remove.
(mve_vqshruntq_n_s<mode>): Remove.
(mve_vqrshrunbq_m_n_s<mode>): Remove.
(mve_vqrshruntq_m_n_s<mode>): Remove.
(mve_vqshrunbq_m_n_s<mode>): Remove.
(mve_vqshruntq_m_n_s<mode>): Remove.
|
|
This patch adds the binary_rshift_narrow_unsigned shape description.
2022-09-08 Christophe Lyon <christophe.lyon@arm.com>
gcc/
* config/arm/arm-mve-builtins-shapes.cc
(binary_rshift_narrow_unsigned): New.
* config/arm/arm-mve-builtins-shapes.h
(binary_rshift_narrow_unsigned): New.
|