which led to confusing code.
This patch further refines the previous work to fix several cases.
gcc/ada/ChangeLog:
* inline.adb (In_Main_Unit_Or_Subunit): Use Other_Comp_Unit instead of
Spec_Or_Body_Lib_Unit.
(Must_Inline): Use Other_Comp_Unit instead of Spec_Or_Body_Lib_Unit.
|
|
This implements elision of the copy operation for aggregate returns, i.e.
simple return statements whose expression is an aggregate, in the case of
nonlimited by-reference types (the copy operation is already elided for
limited types), which comprise controlled and tagged types. This is the
copy operation in the called function, that is to say the copy from the
anonymous object built for the aggregate to the anonymous return object.
The implementation simply extends that of limited types, which rewrites
the simple return statement as an extended return statement internally
and then leverages the built-in-place implementation of return objects
for these statements.
gcc/ada/ChangeLog:
* exp_aggr.adb (Is_Build_In_Place_Aggregate_Return): Also return
True for functions returning on the secondary stack or returning
a by-reference type if the back end exposes its return slot.
(Expand_Array_Aggregate): Call Is_Build_In_Place_Aggregate_Return
to spot aggregates to be built in place.
* exp_ch3.adb (Make_Allocator_For_Return): Add missing condition
in assertion pragma deduced from Expand_Subtype_From_Expr.
* exp_ch6.adb (Expand_Simple_Function_Return): Rewrite the statement
as an extended return statement if the expression is an aggregate
whose expansion is delayed. Properly relocate the expression in
this case.
* sem_ch6.adb: Add clauses for Exp_Aggr.
(Analyze_Function_Return): Do not apply the predicate check to an
aggregate whose expansion is delayed. Extend the processing of
case expressions to all conditional expressions.
|
|
gcc/ada/ChangeLog:
* gnat2.gpr: New file.
|
|
It is implemented for container aggregates that are used to initialize an
object, as specified by RM 7.6(17.2/3-17.3/3) for immutably limited types
and types that need finalization, but it is done for all types, as for other aggregates.
gcc/ada/ChangeLog:
* exp_aggr.adb (Expand_Delta_Array_Aggregate): Move declaration.
(Expand_Delta_Record_Aggregate): Likewise.
(Expand_Container_Aggregate): Likewise. Move implementation to
Build_Container_Aggr_Code. Implement built-in-place expansion
for object declarations and allocators.
(Build_Container_Aggr_Code): New function containing most of the
code of the original Expand_Container_Aggregate. Do not build a
temporary for the size calculation. Minor formatting tweaks.
(Expand_N_Aggregate): Add comment.
* exp_ch4.adb (Expand_Allocator_Expression): Detect the case of
a container aggregate as qualified expression. Do not apply the
predicate check on entry in this case and rewrite the allocator.
* exp_ch7.adb (Build_Finalizer.Process_Object_Declaration): Deal
with Last_Aggregate_Assignment first to compute the attachment
point (as already done in Attach_Object_To_Master_Node).
|
|
This avoids a strange discrepancy in the handling of constants vs variables.
gcc/ada/ChangeLog:
* sem_aux.ads (Has_Unconstrained_Elements): Delete.
* sem_aux.adb (Has_Unconstrained_Elements): Likewise.
* sem_ch3.adb (Analyze_Object_Declaration): Remove obsolete code.
|
|
Inspector testing shows that calling Body_Lib_Unit on Spec can sometimes
fail due to the following assertion failing:
pragma Assert
(Unit (N) in N_Lib_Unit_Declaration_Id
| N_Lib_Unit_Renaming_Declaration_Id);
Indeed, Unit (N) may sometimes be an N_Subprogram_Body instead of an
N_Lib_Unit_Declaration_Id.
gcc/ada/ChangeLog:
* sem.adb (Process_Bodies_In_Context): Check that Spec's unit is
an N_Lib_Unit_Declaration_Id.
|
|
The problem arises when an instance of Ada.Numerics.Discrete_Random is
covered by a pragma Component_Alignment with a non-default alignment.
gcc/ada/ChangeLog:
* exp_ch5.adb (Expand_Assign_Array): Make Act_Rhs a constant and
do not recompute it, as well as R_Type, when there is a change of
representation. Move comment about the RHS from here to...
(Expand_N_Assignment_Statement): ...here.
|
|
xspack.py is used to generate libgnat/s-pack* files.
gcc/ada/ChangeLog:
* xspack.py: New.
* s-pack.ads.tmpl: New.
* s-pack.adb.tmpl: New.
|
|
When we decompose a complex load that is only used as its real and imaginary
parts, we fail to honor the IL constraint that a BIT_FIELD_REF of register
type should be outermost in a ref. The following
simply avoids the transform when the complex load has such a
BIT_FIELD_REF.
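For illustration, a minimal sketch (not the PR testcase; the names are made up) of the kind of access the pass decomposes:
_Complex float *p;
float
use_parts (void)
{
  _Complex float c = *p;           /* complex load ...                  */
  return __real__ c + __imag__ c;  /* ... only used via real/imag parts */
}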
PR tree-optimization/117417
* tree-ssa-forwprop.cc (pass_forwprop::execute): Avoid
decomposing BIT_FIELD_REF complex load.
* gcc.dg/torture/pr117417.c: New testcase.
|
|
When checking for no-op moves in the case of overlapping regs in VEC_SELECT
expressions, validate the subreg constraints before using simplify_subreg_regno.
There is no real SUBREG rtx here; simplify_subreg_regno is only called to check
whether such a subreg would be possible.
gcc/ChangeLog:
* rtlanal.cc (set_noop_p): Validate subreg constraints before checking
for overlapping regs using simplify_subreg_regno.
|
|
The following adds X86_TUNE_AVX512_TWO_EPILOGUES tuning and directs the
vectorizer to produce both a vector AVX2 and SSE epilogue for AVX512
vectorized loops when set. The tuning is enabled by default for Zen4
and Zen5, where I benchmarked it to be an overall win on SPEC CPU 2017, both
in performance and in overall code size. In particular it speeds up
525.x264_r, which with only an AVX2 epilogue currently ends up in
unvectorized code.
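For illustration (a made-up example, not taken from SPEC), a simple loop like the following, built with e.g. -O3 -march=znver5, can now get a 512-bit main loop plus both an AVX2 and an SSE vector epilogue instead of dropping straight to scalar epilogue code:
void
saxpy (float *__restrict__ x, const float *__restrict__ y, float a, int n)
{
  for (int i = 0; i < n; i++)
    x[i] += a * y[i];
}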
* config/i386/i386.cc (ix86_vector_costs::finish_cost): Set
m_suggested_epilogue_mode according to X86_TUNE_AVX512_TWO_EPILOGUES.
* config/i386/x86-tune.def (X86_TUNE_AVX512_TWO_EPILOGUES): Add.
Enable for znver4 and znver5.
|
|
The following enables targets to suggest the vector mode to be used
preferably for the epilogue of a vectorized loop. The patch also
enables more than one vectorized epilogue in case the target suggests
a vector mode for the epilogue of a vector epilogue.
* tree-vectorizer.h (vector_costs::suggested_epilogue_mode): New.
(vector_costs::m_suggested_epilogue_mode): Likewise.
(vector_costs::vector_costs): Initialize m_suggested_epilogue_mode.
* tree-vect-loop.cc (vect_analyze_loop): Honor the target
suggested preferred epilogue mode and support vector epilogues
of vector epilogues if requested.
|
|
When we do SLP discovery of a .MASK_LOAD for a dataref group with gaps,
the discovery for the mask will have gaps as well, and this was
unexpected in a few places. The following re-organizes things
slightly to accommodate this.
PR tree-optimization/117484
* tree-vect-slp.cc (vect_build_slp_tree_2): Handle gaps in
mask discovery. Fix condition to release the load permutation.
(vect_lower_load_permutations): Assert we get no load
permutation for the unpermuted node.
* tree-vect-slp-patterns.cc (linear_loads_p): Properly identify
loads (without permutation).
(compatible_complex_nodes_p): Likewise.
* gcc.dg/vect/pr117484-1.c: New testcase.
* gcc.dg/vect/pr117484-2.c: Likewise.
|
|
The following treats VMAT_STRIDED_SLP and VMAT_GATHER_SCATTER the same when
considering whether to use gather or scatter for single-element interleaving
accesses.
This will cause
FAIL: gcc.target/aarch64/sve/sve_iters_low_2.c scan-tree-dump-not vect "LOOP VECTORIZED"
where we now vectorize the loop with VNx4QI, I'll leave it to ARM folks
to investigate whether that's OK and to adjust the testcase or to see
where to adjust things to make the testcase not vectorized again. The
original fix for which the testcase was introduced is still effective.
PR tree-optimization/117502
* tree-vect-stmts.cc (get_group_load_store_type): Also consider
VMAT_STRIDED_SLP when checking to use gather/scatter for
single-element interleaving access.
* tree-vect-loop.cc (update_epilogue_loop_vinfo): STMT_VINFO_STRIDED_P
can be classified as VMAT_GATHER_SCATTER, so update DR_REF for
those as well.
|
|
This patch implements transformations for the following optimizations.
logN(x) CMP CST -> x CMP expN(CST)
expN(x) CMP CST -> x CMP logN(CST)
Where CMP expands to ge and le operations.
For example:
int
foo (float x)
{
return __builtin_logf (x) <= 0.0f;
}
can just be:
int
foo (float x)
{
return x <= 1.0f;
}
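By the same rule for expN (a companion illustration, not part of the committed testcase):
int
bar (float x)
{
  return __builtin_expf (x) >= 1.0f;
}
can just be:
int
bar (float x)
{
  return x >= 0.0f;
}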
The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
OK for mainline?
Signed-off-by: Soumya AR <soumyaa@nvidia.com>
gcc/ChangeLog:
* match.pd: Fold logN(x) CMP CST -> x CMP expN(CST)
and expN(x) CMP CST -> x CMP logN(CST).
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/log_exp.c: New test.
|
|
Forgot this in the -fmodules patch (r15-5112).
gcc/c-family/ChangeLog:
* c.opt.urls: Regenerate.
|
|
The C++ front-end uses symbols from these directories, so they should also
be in TAGS.
gcc/cp/ChangeLog:
* Make-lang.in: Also collect tags from libcody and c++tools.
|
|
The C++ modules support is not targeting the Modules TS, so it doesn't make
much sense to refer to the TS in the option name. But keep the old spelling
as an undocumented alias for now.
gcc/ChangeLog:
* doc/invoke.texi: Rename -fmodules-ts to -fmodules.
gcc/c-family/ChangeLog:
* c.opt: Add -fmodules with same effect as -fmodules-ts.
gcc/cp/ChangeLog:
* lang-specs.h: Check fmodules* instead of fmodules-ts.
|
|
The init-list initialization of cl_deferred_option p had a couple of
narrowing warnings: first of opt_index from int to size_t and then of value
from HOST_WIDE_INT to int. Fixed by making the types more consistent.
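A reduced stand-alone illustration of the same kind of diagnostic (the struct and names here are made up, not the actual cl_deferred_option layout):
#include <cstddef>
struct deferred_opt { std::size_t opt_index; int value; };
void
f (int idx, long long val)
{
  deferred_opt d { idx, val };  // -Wnarrowing: int -> size_t and long long -> int
}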
gcc/ChangeLog:
* opts.h (cl_deferred_option::value): Change to HOST_WIDE_INT.
(set_option): Change opt_index parm to size_t.
* opts-common.cc (set_option): Likewise.
|
|
Even though this PR is very close to PR117101, it's not addressed by the
fix I made through r15-4958-g5821f5c8c89a05 because cxx_placement_new_fn
has the very same issue as std_placement_new_fn_p used to have.
As suggested by Jason, this patch changes both functions so that
cxx_placement_new_fn leverages std_placement_new_fn_p which reduces code
duplication and fixes the PR.
PR c++/117463
gcc/cp/ChangeLog:
* constexpr.cc (cxx_placement_new_fn): Implement in terms of
std_placement_new_fn_p.
* cp-tree.h (std_placement_new_fn_p): Declare.
* init.cc (std_placement_new_fn_p): Add missing checks to ensure
that fndecl is a non-replaceable ::operator new.
gcc/testsuite/ChangeLog:
* g++.dg/init/new54.C: New test.
|
|
clang++ adds __builtin_operator_{new,delete} builtins which, as documented,
work similarly to ::operator {new,delete}, except that it is an error if the
called ::operator {new,delete} is not a replaceable global operator. Using
these builtins allows the optimizations which C++ normally permits only when
::operator {new,delete} is invoked from new/delete expressions
(https://eel.is/c++draft/expr.new#14).
For GCC we note that in the CALL_FROM_NEW_OR_DELETE_P flag on CALL_EXPRs.
The following patch implements them as a C++ FE keyword (because passing
references through ... changes the argument, so a BUILT_IN_FRONTEND builtin
can't be used); it just attempts to call the ::operator {new,delete} and,
if it isn't replaceable, diagnoses it.
libstdc++ already uses the builtin in some cases.
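A minimal usage sketch, assuming only the semantics described above:
#include <cstddef>
void *
raw_alloc (std::size_t n)
{
  return __builtin_operator_new (n);   // must resolve to a replaceable ::operator new
}
void
raw_free (void *p)
{
  __builtin_operator_delete (p);       // likewise for ::operator delete
}
No constructor or destructor is involved, and the calls may be optimized as if they came from new/delete expressions.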
2024-11-11 Jakub Jelinek <jakub@redhat.com>
gcc/c-family/
* c-common.h (enum rid): Add RID_BUILTIN_OPERATOR_NEW
and RID_BUILTIN_OPERATOR_DELETE.
(names_builtin_p): Change return type from bool to int.
* c-common.cc (c_common_reswords): Add __builtin_operator_new
and __builtin_operator_delete.
gcc/c/
* c-decl.cc (names_builtin_p): Change return type from
bool to int, adjust return statements.
gcc/cp/
* parser.cc (cp_parser_postfix_expression): Handle
RID_BUILTIN_OPERATOR_NEW and RID_BUILTIN_OPERATOR_DELETE.
* cp-objcp-common.cc (names_builtin_p): Change return type from
bool to int, adjust return statements. Handle
RID_BUILTIN_OPERATOR_NEW and RID_BUILTIN_OPERATOR_DELETE.
* pt.cc (tsubst_expr) <case CALL_EXPR>: Handle
CALL_FROM_NEW_OR_DELETE_P.
gcc/
* doc/extend.texi (New/Delete Builtins): Document
__builtin_operator_new and __builtin_operator_delete.
gcc/testsuite/
* g++.dg/ext/builtin-operator-new-1.C: New test.
* g++.dg/ext/builtin-operator-new-2.C: New test.
* g++.dg/ext/builtin-operator-new-3.C: New test.
|
|
The std::logic_error exceptions thrown from misuses of
std::wbuffer_convert and std::wstring_convert should use names qualified
with "std::".
libstdc++-v3/ChangeLog:
* include/bits/locale_conv.h (wstring_convert, wbuffer_convert):
Adjust strings passed to exception constructors.
|
|
The intended behaviour for std::text_encoding::aliases_view's iterator
is that incrementing or decrementing it too far sets it to a
value-initialized state, or fails an assertion when those are enabled.
There were typos that used == instead of = which meant that instead of
becoming singular or aborting, an out-of-range increment just did
nothing. This meant erroneous operations were well-defined and didn't
produce any undefined behaviour, but were not diagnosed with assertions
enabled, as had been intended.
This change fixes the bugs and adds more tests to verify the intended
behaviour.
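Reduced to a sketch, the bug pattern was of this shape (the _M_rep member is made up for illustration, not the actual libstdc++ field):
struct iterator_sketch
{
  const void *_M_rep = nullptr;
  void bad_step ()  { _M_rep == nullptr; }  // typo: comparison, result discarded
  void good_step () { _M_rep = nullptr; }   // intended: make the iterator singular
};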
libstdc++-v3/ChangeLog:
PR libstdc++/117520
* include/std/text_encoding (aliases_view::_Iterator::operator+=):
Fix typos that caused == to be used instead of =.
(aliases_view::_Iterator): Fix friend declaration.
* testsuite/std/text_encoding/members.cc: Adjust expected
behaviour of invalid subscript. Add tests for other erroneous
operations on iterators.
|
|
libstdc++-v3/ChangeLog:
* include/bits/unicode.h (_Utf_iterator::_M_read_utf16): Add
parentheses.
|
|
Since some of the c2y-if-decls tests use _Atomic, add a
requirement for the target to support atomic operations on
int and long types.
This fixes spurious test link failures on pru-unknown-elf,
which lacks atomic ops. The tests still pass on x86_64-linux-gnu.
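Sketch of the kind of directive added (assuming the sync_int_long effective target, whose documented meaning matches the description above):
/* { dg-require-effective-target sync_int_long } */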
gcc/testsuite/ChangeLog:
* gcc.dg/c2y-if-decls-1.c: Require target that supports atomic
operations on int and long types.
* gcc.dg/c2y-if-decls-11.c: Ditto.
* gcc.dg/c2y-if-decls-4.c: Ditto.
* gcc.dg/c2y-if-decls-8.c: Ditto.
Signed-off-by: Dimitar Dimitrov <dimitar@dinux.eu>
|
|
With the change in r15-3128-gde1923f9f4d, this test case no longer XFAILs.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/complex/fast-math-complex-add-half-float.c: Remove
xfail from test.
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
|
|
According to the aapcs64: If the argument is an 8-bit (...) precision
Floating-point or short vector type and the NSRN is less than 8, then the
argument is allocated to the least significant bits of register v[NSRN].
gcc/
* config/aarch64/aarch64.cc
(aarch64_vfp_is_call_or_return_candidate): Use fp registers to
return svmfloat8_t parameters.
gcc/testsuite/
* gcc.target/aarch64/fp8_scalar_1.c:
|
|
Lewis' r15-5067 fixing the marking of TRAIT_EXPR led me to compare some
other front-end type definitions to their marking in cp_common_init_ts; it
seems we can change tree_common to something smaller in several cases, to
match how they are marked.
gcc/cp/ChangeLog:
* cp-tree.h (struct ptrmem_cst): Change tree_common to tree_typed.
(struct tree_trait_expr): Likewise.
(struct tree_static_assert): Change tree_common to tree_base.
(struct tree_argument_pack_select): Likewise.
|
|
On my hybrid x86 system with E and P cores, the spincount defaults to 1
and not 300000, cf. PR109812 and r14-4571-ge1e127de18dbee.
Hence, this commit updates the expected value of the testcase to also
accept omp_display_env showing "GOMP_SPINCOUNT = '1'" - but only for
x86-64, which might be hybrid.
libgomp/ChangeLog:
* testsuite/libgomp.c-c++-common/pr109062.c: Update dg-output
to also accept GOMP_SPINCOUNT = 1 for x86-64.
|
|
This was responsible for a bunch of SVE FAILs with --param vect-force-slp=1.
* tree-vect-slp.cc (arg1_arg3_map): New.
(arg1_arg3_arg4_map): Likewise.
(vect_get_operand_map): Handle IFN_SCATTER_STORE,
IFN_MASK_SCATTER_STORE and IFN_MASK_LEN_SCATTER_STORE.
(vect_build_slp_tree_1): Likewise.
* tree-vect-stmts.cc (vectorizable_store): For SLP masked
gather/scatter record the mask with proper number of copies.
* tree-vect-loop.cc (vectorizable_recurr): Avoid costing
the initial value construction in the prologue twice with SLP.
|
|
Previous patches are supposed to add full support for SVE2.1,
so this patch advertises that through __ARM_FEATURE_SVE2p1.
pragma_cpp_predefs_3.c had one fewer pop than push. The final
test is triple-nested:
- armv8-a (to start with a clean slate, untainted by command-line flags)
- the maximal SVE set
- general-regs-only
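User code can then guard SVE2.1-specific paths on the macro, e.g.:
#ifdef __ARM_FEATURE_SVE2p1
/* SVE2.1 code path.  */
#else
/* Fallback.  */
#endif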
gcc/
* config/aarch64/aarch64-c.cc (aarch64_update_cpp_builtins): Handle
__ARM_FEATURE_SVE2p1.
gcc/testsuite/
* gcc.target/aarch64/pragma_cpp_predefs_3.c: Add SVE2p1 tests.
|
|
This patch adds the instructions that are new to FEAT_SVE2p1.
It mostly contains simple additions, so it didn't seem worth
splitting up further.
It's likely that we'll find more autovec uses for some of these
instructions, but for now this patch just deals with one obvious case:
using the new hybrid-VLA permutations to handle "stepped" versions of
some Advanced SIMD permutations. See aarch64_evpc_hvla for details.
The patch also continues the existing practice of lowering ACLE
permutation intrinsics to VEC_PERM_EXPR. That's admittedly a bit
inconsistent with the approach I've been advocating for when it comes
to arithmetic, but I think the difference is that (a) these are pure
data movement, and so there's limited scope for things like gimple
canonicalisations to mess with the instruction selection or operation
mix; and (b) there are no added UB rules to worry about.
Another new thing in the patch is the concept of "memory-only"
SVE vector modes. These are used to represent the memory operands
of the new LD1[DW] (to .Q), LD[234]Q, ST1[DW] (from .Q), and ST[234]Q
instructions. We continue to use .B, .H, .S, and .D modes for the
registers, since there's no predicated contiguous LD1Q instruction,
and since there's no arithmetic that can be done on TI. (The new
instructions are instead intended for hybrid VLA, i.e. for vectors
of vectors.)
For now, all of the new instructions are non-streaming-only.
Some of them are streaming-compatible with SME2p1, but that's
a later patch.
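For example, one of the new quadword permutes via the ACLE (a sketch assuming the usual intrinsic naming; needs arm_sve.h and the sve2p1 target feature):
#include <arm_sve.h>
svuint32_t
zip_quads (svuint32_t a, svuint32_t b)
{
  return svzipq1_u32 (a, b);   /* ZIPQ1: interleave within each 128-bit quadword */
}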
gcc/
* config/aarch64/aarch64-modes.def (VNx1SI, VNx1DI): New modes.
* config/aarch64/aarch64-sve-builtins-base.cc
(svdup_lane_impl::expand): Update generation of TBL instruction.
(svtbl_impl): Delete.
(svtbl): Use unspec_based_uncond_function instead.
* config/aarch64/aarch64-sve-builtins-functions.h
(permute::fold_permute): Handle trailing immediate arguments.
* config/aarch64/aarch64-sve-builtins-shapes.h (extq): Declare.
(load_gather64_sv_index, load_gather64_sv_offset): Likewise.
(load_gather64_vs_index, load_gather64_vs_offset): Likewise.
(pmov_from_vector, pmov_from_vector_lane, pmov_to_vector_lane)
(reduction_neonq, store_scatter64_index, store_scatter64_offset)
(unary_lane): Likewise.
* config/aarch64/aarch64-sve-builtins-shapes.cc
(load_gather64_sv_base, store_scatter64_base): New classes.
(extq_def, ext): New shape.
(load_gather64_sv_index_def, load_gather64_sv_index): Likewise.
(load_gather64_sv_offset_def, load_gather64_sv_offset): Likewise.
(load_gather64_vs_index_def, load_gather64_vs_index): Likewise.
(load_gather64_vs_offset_def, load_gather64_vs_offset): Likewise.
(pmov_from_vector_def, pmov_from_vector): Likewise.
(pmov_from_vector_lane_def, pmov_from_vector_lane): Likewise.
(pmov_to_vector_lane_def, pmov_to_vector_lane): Likewise.
(reduction_neonq_def, reduction_neonq): Likewise.
(store_scatter64_index_def, store_scatter64_index): Likewise.
(store_scatter64_offset_def, store_scatter64_offset): Likewise.
(unary_lane_def, unary_lane): Likewise.
* config/aarch64/aarch64-sve-builtins-sve2.h (svaddqv, svandqv)
(svdup_laneq, sveorqv, svextq, svld1q_gather, svld1udq, svld1uwq)
(svld2q, svld3q, svld4q, svmaxnmqv, svmaxqv, svminnmqv, svminqv)
(svorqv, svpmov, svpmov_lane, svst1qd, svst1q_scatter, svst1wq)
(svst2q, svst3q, svst4q, svtblq, svtbx, svtbxq, svuzpq1, svuzpq2)
(svzipq1, svzipq2): Declare.
* config/aarch64/aarch64-sve-builtins-sve2.cc (ld1uxq_st1xq_base)
(ld234q_st234q_base, svdup_laneq_impl, svextq_impl): New classes.
(svld1q_gather_impl, svld1uxq_impl, svld234q_impl): Likewise.
(svpmov_impl, svpmov_lane_impl, svst1q_scatter_impl): Likewise.
(svst1xq_impl, svst234q_impl, svuzpq_impl, svzipq_impl): Likewise.
(svaddqv, svandqv, svdup_laneq, sveorqv, svextq, svld1q_gather)
(svld1udq, svld1uwq, svld2q, svld3q, svld4q, svmaxnmqv, svmaxqv)
(svminnmqv, svminqv, svorqv, svpmov, svpmov_lane, svst1qd)
(svst1q_scatter, svst1wq, svst2q, svst3q, svst4q, svtblq, svtbx)
(svtbxq, svuzpq1, svuzpq2, svzipq1, svzipq2): New function entries.
* config/aarch64/aarch64-sve-builtins-sve2.def (svaddqv, svandqv)
(svdup_laneq, sveorqv, svextq, svld2q, svld3q, svld4q, svmaxnmqv)
(svmaxqv, svminnmqv, svminqv, svorqv, svpmov, svpmov_lanes, vst2q)
(svst3q, svst4q, svtblq, svtbxq, svuzpq1, svuzpq2, svzipq1, svzipq2)
(svld1q_gather, svld1udq, svld1uwq, svst1dq, svst1q_scatter)
(svst1wq): New function definitions.
* config/aarch64/aarch64-sve-builtins.cc (TYPES_hsd_data)
(hsd_data, s_data): New type lists.
(function_resolver::infer_pointer_type): Give a specific error about
passing a pointer to 8-bit elements to an _index function.
(function_resolver::resolve_sv_displacement): Check whether the
function allows 32-bit bases.
* config/aarch64/iterators.md (UNSPEC_TBLQ, UNSPEC_TBXQ): New unspecs.
(UNSPEC_ADDQV, UNSPEC_ANDQV, UNSPEC_DUPQ, UNSPEC_EORQV, UNSPEC_EXTQ)
(UNSPEC_FADDQV, UNSPEC_FMAXQV, UNSPEC_FMAXNMQV, UNSPEC_FMINQV)
(UNSPEC_FMINNMQV, UNSPEC_LD1_EXTENDQ, UNSPEC_LD1Q_GATHER): Likewise.
(UNSPEC_LDNQ, UNSPEC_ORQV, UNSPEC_PMOV_PACK, UNSPEC_PMOV_PACK_LANE)
(UNSPEC_PMOV_UNPACK, UNSPEC_PMOV_UNPACK_LANE, UNSPEC_SMAXQV): Likewise.
(UNSPEC_SMINQV, UNSPEC_ST1_TRUNCQ, UNSPEC_ST1Q_SCATTER, UNSPEC_STNQ)
(UNSPEC_UMAXQV, UNSPEC_UMINQV, UNSPEC_UZPQ1, UNSPEC_UZPQ2): Likewise.
(UNSPEC_ZIPQ1, UNSPEC_ZIPQ2): Likewise.
(Vtype): Handle single-vector SVE modes.
(Vendreg): Handle SVE structure modes.
(VNxTI, LD1_EXTENDQ_MEM): New mode attributes.
(SVE_PERMUTE, SVE_TBL, SVE_TBX): New int iterators.
(SVE_INT_REDUCTION_128, SVE_FP_REDUCTION_128): Likewise.
(optab): Handle the new SVE2.1 reductions.
(perm_insn): Handle the new SVE2.1 permutations.
* config/aarch64/aarch64-sve.md
(@aarch64_sve_tbl<mode>): Generalize to...
(@aarch64_sve_<SVE_TBL:perm_insn><mode>): ...this.
(@aarch64_sve_<PERMUTE:perm_insn><mode>): Generalize to...
(@aarch64_sve_<SVE_PERMUTE:perm_insn><mode>): ...this.
* config/aarch64/aarch64-sve2.md (@aarch64_pmov_to_<mode>)
(@aarch64_pmov_lane_to_<mode>, @aarch64_pmov_from_<mode>)
(@aarch64_pmov_lane_from_<mode>, @aarch64_sve_ld1_extendq<mode>)
(@aarch64_sve_ldnq<mode>, aarch64_gather_ld1q): New patterns.
(@aarch64_sve_st1_truncq<mode>, @aarch64_sve_stnq<mode>): Likewise.
(aarch64_scatter_st1q, @aarch64_pred_reduc_<optab>_<mode>): Likewise.
(@aarch64_sve_dupq<mode>, @aarch64_sve_extq<mode>): Likewise.
(@aarch64_sve2_tbx<mode>): Generalize to...
(@aarch64_sve_<SVE_TBX:perm_insn><mode>): ...this.
* config/aarch64/aarch64.cc
(aarch64_classify_vector_memory_mode): New function.
(aarch64_regmode_natural_size): Use it.
(aarch64_classify_index): Likewise.
(aarch64_classify_address): Likewise.
(aarch64_print_address_internal): Likewise.
(aarch64_evpc_hvla): New function.
(aarch64_expand_vec_perm_const_1): Use it.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general-c/load_ext_gather_index_1.c,
* gcc.target/aarch64/sve/acle/general-c/load_ext_gather_offset_1.c,
* gcc.target/aarch64/sve/acle/general-c/load_ext_gather_offset_2.c,
* gcc.target/aarch64/sve/acle/general-c/load_ext_gather_offset_3.c,
* gcc.target/aarch64/sve/acle/general-c/load_ext_gather_offset_4.c,
* gcc.target/aarch64/sve/acle/general-c/load_ext_gather_offset_5.c:
Adjust the "did you mean" suggestion.
* gcc.target/aarch64/sve/acle/general-c/ld1sh_gather_1.c: Removed.
* gcc.target/aarch64/sve/acle/general-c/extq_1.c: New test.
* gcc.target/aarch64/sve/acle/general-c/load_gather64_sv_index_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/load_gather64_sv_offset_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/pmov_from_vector_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/pmov_from_vector_lane_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/pmov_to_vector_lane_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/pmov_to_vector_lane_2.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/store_scatter64_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/store_scatter64_index_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/store_scatter64_offset_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/unary_lane_1.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/addqv_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/andqv_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/andqv_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/andqv_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/andqv_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/andqv_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/andqv_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/andqv_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/andqv_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dup_laneq_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/eorqv_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/eorqv_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/eorqv_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/eorqv_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/eorqv_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/eorqv_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/eorqv_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/eorqv_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/extq_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1q_gather_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1udq_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1udq_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1udq_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1uwq_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1uwq_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1uwq_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld2q_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld3q_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld4q_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxnmqv_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxnmqv_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxnmqv_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/maxqv_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minnmqv_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minnmqv_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minnmqv_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/minqv_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/orqv_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/orqv_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/orqv_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/orqv_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/orqv_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/orqv_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/orqv_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/orqv_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pmov_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pmov_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pmov_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pmov_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pmov_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pmov_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pmov_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pmov_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1dq_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1dq_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1dq_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1q_scatter_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1wq_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1wq_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1wq_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st2q_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st3q_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st4q_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tblq_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/tbxq_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq1_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/uzpq2_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq1_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/zipq2_u8.c: Likewise.
* gcc.target/aarch64/sve2/dupq_1.c: Likewise.
* gcc.target/aarch64/sve2/extq_1.c: Likewise.
* gcc.target/aarch64/sve2/uzpq_1.c: Likewise.
* gcc.target/aarch64/sve2/zipq_1.c: Likewise.
|
|
This patch handles the SVE2p1 instructions that are shared
with SME2. This includes the consecutive-register forms of
the 2-register and 4-register loads and stores, but not the
strided-register forms.
gcc/
* config/aarch64/aarch64.h (TARGET_SVE2p1_OR_SME2): New macro.
* config/aarch64/aarch64-early-ra.cc
(is_stride_candidate): Require TARGET_STREAMING_SME2.
(early_ra::maybe_convert_to_strided_access): Likewise.
* config/aarch64/aarch64-sve-builtins-sve2.def: Mark instructions
that are common to both SVE2p1 and SME2.
* config/aarch64/aarch64-sve.md
(@aarch64_<sur>dot_prod_lane<SVE_FULL_SDI:mode><SVE_FULL_BHI:mode>):
Test TARGET_SVE2p1_OR_SME2 instead of TARGET_STREAMING_SME2.
(@aarch64_sve_<sve_fp_op>vnx4sf): Move TARGET_SVE_BF16 condition
into SVE_BFLOAT_TERNARY_LONG.
(@aarch64_sve_<sve_fp_op>_lanevnx4sf): Likewise
SVE_BFLOAT_TERNARY_LONG_LANE.
* config/aarch64/aarch64-sve2.md
(@aarch64_<LD1_COUNT:optab><mode>): Require TARGET_SVE2p1_OR_SME2
instead of TARGET_STREAMING_SME2.
(@aarch64_<ST1_COUNT:optab><mode>): Likewise.
(@aarch64_sve_ptrue_c<BHSD_BITS>): Likewise.
(@aarch64_sve_pext<BHSD_BITS>): Likewise.
(@aarch64_sve_pext<BHSD_BITS>x2): Likewise.
(@aarch64_sve_cntp_c<BHSD_BITS>): Likewise.
(@aarch64_sve_fclamp<mode>): Likewise.
(*aarch64_sve_fclamp<mode>_x): Likewise.
(<sur>dot_prodvnx4sivnx8hi): Likewise.
(aarch64_sve_fdotvnx4sfvnx8hf): Likewise.
(aarch64_fdot_prod_lanevnx4sfvnx8hf): Likewise.
(@aarch64_sve_while<while_optab_cmp>_b<BHSD_BITS>_x2): Likewise.
(@aarch64_sve_while<while_optab_cmp>_c<BHSD_BITS>): Likewise.
(@aarch64_sve_<optab><VNx8HI_ONLY:mode><VNx8SI_ONLY:mode>): Move
TARGET_STREAMING_SME2 condition into SVE_QCVTxN.
(@aarch64_sve_<sve_int_op><mode>): Likewise
SVE2_INT_SHIFT_IMM_NARROWxN, but also require TARGET_STREAMING_SME2
for the 4-register forms.
* config/aarch64/iterators.md (SVE_BFLOAT_TERNARY_LONG): Require
TARGET_SVE2p1_OR_SME2 rather than TARGET_STREAMING_SME2 for
UNSPEC_BFMLSLB and UNSPEC_BFMLSLT. Require TARGET_SVE_BF16
for the others.
(SVE_BFLOAT_TERNARY_LONG_LANE): Likewise.
(SVE2_INT_SHIFT_IMM_NARROWxN): Require TARGET_SVE2p1_OR_SME2 for
the interleaving forms and TARGET_STREAMING_SME2 for the rest.
(SVE_QCVTxN): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/clamp_3.c: New test.
* gcc.target/aarch64/sve/clamp_4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/bfmlslb_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/bfmlslb_lane_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/bfmlslt_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/bfmlslt_lane_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/cntp_c16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/cntp_c32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/cntp_c64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/cntp_c8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dot_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dot_lane_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dot_lane_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dot_lane_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dot_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/dot_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_bf16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_bf16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_f16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_f16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_f32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_f32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_f64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_f64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_s16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_s16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_s32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_s32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_s64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_s64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_s8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_s8_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_u16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_u16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_u32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_u32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_u64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_u64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_u8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ld1_u8_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_bf16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_bf16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_f16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_f16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_f32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_f32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_f64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_f64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_s16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_s16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_s32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_s32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_s64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_s64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_s8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_s8_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_u16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_u16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_u32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_u32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_u64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_u64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_u8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ldnt1_u8_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pext_lane_c16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pext_lane_c16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pext_lane_c32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pext_lane_c32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pext_lane_c64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pext_lane_c64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pext_lane_c8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/pext_lane_c8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ptrue_c16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ptrue_c32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ptrue_c64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/ptrue_c8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/qcvtn_s16_s32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/qcvtn_u16_s32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/qcvtn_u16_u32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/qrshrn_s16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/qrshrn_u16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/qrshrun_u16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_bf16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_bf16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_f16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_f16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_f32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_f32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_f64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_f64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_s16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_s16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_s32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_s32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_s64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_s64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_s8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_s8_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_u16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_u16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_u32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_u32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_u64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_u64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_u8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/st1_u8_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_bf16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_bf16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_f16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_f16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_f32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_f32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_f64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_f64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_s16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_s16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_s32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_s32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_s64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_s64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_s8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_s8_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_u16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_u16_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_u32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_u32_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_u64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_u64_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_u8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/stnt1_u8_x4.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilege_b16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilege_b32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilege_b64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilege_b8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilege_c16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilege_c32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilege_c64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilege_c8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilegt_b16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilegt_b32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilegt_b64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilegt_b8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilegt_c16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilegt_c32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilegt_c64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilegt_c8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilele_b16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilele_b32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilele_b64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilele_b8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilele_c16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilele_c32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilele_c64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilele_c8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilelt_b16_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilelt_b32_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilelt_b64_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilelt_b8_x2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilelt_c16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilelt_c32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilelt_c64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/whilelt_c8.c: Likewise.
|
|
Some instructions that were previously restricted to streaming mode
can also be used in non-streaming mode with SVE2.1. This patch adds
support for those, as well as the usual new-extension boilerplate.
A later patch will add the feature macro.
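For instance (a sketch; the flags are illustrative, e.g. -march=armv8-a+sve2p1), the clamp intrinsics then become usable in ordinary non-streaming functions:
#include <arm_sve.h>
svint32_t
clamp_s32 (svint32_t x, svint32_t lo, svint32_t hi)
{
  return svclamp_s32 (x, lo, hi);   /* SCLAMP, no longer restricted to streaming mode */
}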
gcc/
* config/aarch64/aarch64-option-extensions.def (sve2p1): New extension.
* doc/invoke.texi (sve2p1): Document it.
* config/aarch64/aarch64-sve-builtins-sve2.def: Mark instructions
that are common to both SVE2p1 and SME.
* config/aarch64/aarch64.h (TARGET_SVE2p1): New macro.
(TARGET_SVE2p1_OR_SME): Likewise.
* config/aarch64/aarch64-sve2.md
(@aarch64_sve_psel<BHSD_BITS>): Require TARGET_SVE2p1_OR_SME
instead of TARGET_STREAMING.
(*aarch64_sve_psel<BHSD_BITS>_plus): Likewise.
(@aarch64_sve_<su>clamp<mode>): Likewise.
(*aarch64_sve_<su>clamp<mode>_x): Likewise.
(@aarch64_pred_<optab><mode>): Likewise.
(@cond_<optab><mode>): Likewise.
gcc/testsuite/
* lib/target-supports.exp
(check_effective_target_aarch64_asm_sve2p1_ok): New procedure.
* gcc.target/aarch64/sve/clamp_1.c: New test.
* gcc.target/aarch64/sve/clamp_2.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/clamp_u8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/psel_lane_b16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/psel_lane_b32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/psel_lane_b64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/psel_lane_b8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/psel_lane_c16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/psel_lane_c32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/psel_lane_c64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/psel_lane_c8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_bf16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_f16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_f32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_f64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_s16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_s32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_s64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_s8.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_u16.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_u32.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_u64.c: Likewise.
* gcc.target/aarch64/sve2/acle/asm/revd_u8.c: Likewise.
|
|
This patch moves the scalar and single-vector Advanced SIMD types
from arm_neon.h into a private header, so that they can be defined
by arm_sve.h as well. This is needed for the upcoming SVE2.1
hybrid-VLA reductions, which return 128-bit Advanced SIMD vectors.
The approach follows Claudio's patch for FP8.
gcc/
* config.gcc (extra_headers): Add arm_private_neon_types.h.
* config/aarch64/arm_private_neon_types.h: New file, split out
from...
* config/aarch64/arm_neon.h: ...here.
* config/aarch64/arm_sve.h: Include arm_private_neon_types.h.
|
|
This patch adds an svboolx4_t type, to go alongside the existing
svboolx2_t type. It doesn't require any special ISA support beyond
SVE itself and it currently has no associated instructions.
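A small sketch of the new type in use (assuming the usual _b spellings of the tuple intrinsics):
#include <arm_sve.h>
svbool_t
second_of_four (svbool_t p0, svbool_t p1, svbool_t p2, svbool_t p3)
{
  svboolx4_t t = svcreate4_b (p0, p1, p2, p3);
  return svget4_b (t, 1);
}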
gcc/
* config/aarch64/aarch64-modes.def (VNx64BI): New mode.
* config/aarch64/aarch64-protos.h
(aarch64_split_double_move): Generalize to...
(aarch64_split_move): ...this.
* config/aarch64/aarch64-sve-builtins-base.def (svcreate4, svget4)
(svset4, svundef4): Add bool variants.
* config/aarch64/aarch64-sve-builtins.cc (handle_arm_sve_h): Add
svboolx4_t.
* config/aarch64/iterators.md (SVE_STRUCT_BI): New mode iterator.
* config/aarch64/aarch64-sve.md (movvnx32bi): Generalize to...
(mov<SVE_STRUCT_BI:mode>): ...this.
* config/aarch64/aarch64.cc
(pure_scalable_type_info::piece::get_rtx): Allow num_prs to be 4.
(aarch64_classify_vector_mode): Handle VNx64BI.
(aarch64_hard_regno_nregs): Likewise.
(aarch64_class_max_nregs): Likewise.
(aarch64_array_mode): Use VNx64BI for arrays of 4 svbool_ts.
(aarch64_split_double_move): Generalize to...
(aarch64_split_move): ...this.
(aarch64_split_128bit_move): Update call accordingly.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general-c/create_5.c: Expect svcreate4
to succeed for svbool_ts.
* gcc.target/aarch64/sve/acle/asm/test_sve_acle.h
(TEST_UNDEF_B): New macro.
* gcc.target/aarch64/sve/acle/asm/create4_1.c: Test _b form.
* gcc.target/aarch64/sve/acle/asm/undef2_1.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/undef4_1.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/get4_b.c: New test.
* gcc.target/aarch64/sve/acle/asm/set4_b.c: Likewise.
* gcc.target/aarch64/sve/acle/general-c/svboolx4_1.c: Likewise.
|
|
gcc/
* config/aarch64/aarch64-sve-builtins-sve2.def: Sort entries
alphabetically.
* config/aarch64/aarch64-sve-builtins-sve2.h: Likewise.
* config/aarch64/aarch64-sve-builtins-sve2.cc: Likewise.
|
|
This patch factors out some of ext_def into a base class,
so that it can be reused for the SVE2.1 svextq intrinsic.
gcc/
* config/aarch64/aarch64-sve-builtins-shapes.cc (ext_base): New base
class, extracted from...
(ext_def): ...here.
|
|
All extending gather load intrinsics encode the source type in
their name (e.g. svld1sb for an extending load from signed bytes).
The type of the extension result has to be specified using an
explicit type suffix; it isn't something that can be inferred
from the arguments, since there are multiple valid choices for
the same arguments.
This meant that type inference for gather loads was only needed for
non-extending loads, in which case the pointer target had to be a
32-bit or 64-bit element type. The gather_scatter_p argument to
function_resolver::infer_pointer_type therefore controlled two things:
how we should react to vector base addresses, and whether we should
require a minimum element size of 32.
The element size restriction doesn't apply to the upcoming SVE2.1
svld1q intrinsic, so this patch adds a separate argument for the minimum
element size requirement.
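To make the distinction concrete, a hedged example (intrinsic spellings
assumed from the ACLE rather than taken from the patch):

  #include <arm_sve.h>

  /* Non-extending gather: the element type is inferred from the
     int32_t pointer, so the overloaded name needs no type suffix.  */
  svint32_t
  gather_s32 (svbool_t pg, const int32_t *base, svint32_t offsets)
  {
    return svld1_gather_offset (pg, base, offsets);
  }

  /* Extending gather from signed bytes: the result type cannot be
     inferred from the arguments, so the _s32 suffix is explicit.  */
  svint32_t
  gather_s8_to_s32 (svbool_t pg, const int8_t *base, svint32_t offsets)
  {
    return svld1sb_gather_offset_s32 (pg, base, offsets);
  }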
gcc/
* config/aarch64/aarch64-sve-builtins.h
(function_resolver::target_type_restrictions): New enum.
(function_resolver::infer_pointer_type): Add an extra argument
that specifies what the target type can be.
* config/aarch64/aarch64-sve-builtins.cc
(function_resolver::infer_pointer_type): Likewise.
* config/aarch64/aarch64-sve-builtins-shapes.cc
(load_gather_sv_base::get_target_type_restrictions): New virtual
member function.
(load_gather_sv_base::resolve): Use it. Update call to
infer_pointer_type.
|
|
Until now, all data arguments to a scatter store needed to have
32-bit or 64-bit elements. This isn't true for the upcoming SVE2.1
svst1q scatter intrinsic, so this patch adds an abstraction around the
restriction.
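For instance, a current scatter store with 32-bit data elements looks
roughly like this (overloaded spelling assumed from the ACLE):

  #include <arm_sve.h>

  /* The data argument has 32-bit elements; until svst1q, smaller
     element sizes are rejected during resolution.  */
  void
  scatter_s32 (svbool_t pg, int32_t *base, svint32_t offsets,
               svint32_t data)
  {
    svst1_scatter_offset (pg, base, offsets, data);
  }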
gcc/
* config/aarch64/aarch64-sve-builtins-shapes.cc
(store_scatter_base::infer_vector_type): New virtual member function.
(store_scatter_base::resolve): Use it.
|
|
In the upcoming SVE2.1 svld1q and svst1q intrinsics, the relationship
between the base vector and the data vector differs from existing
gather/scatter intrinsics. This patch adds a new abstraction to
handle the difference.
gcc/
* config/aarch64/aarch64-sve-builtins.h
(function_shape::vector_base_type): New member function.
* config/aarch64/aarch64-sve-builtins.cc
(function_shape::vector_base_type): Likewise.
(function_resolver::resolve_sv_displacement): Use it.
(function_resolver::resolve_gather_address): Likewise.
|
|
GCC previously used the older assembly syntax for SVE TBL, with no
braces around the second operand. This patch switches to the newer,
official syntax, with braces around the operand.
The initial SVE binutils submission supported both syntaxes, so there
should be no issues with backwards compatibility.
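As a concrete illustration (expected assembly paraphrased; register
allocation may differ):

  #include <arm_sve.h>

  /* With this patch the second operand is printed inside braces,
     e.g. "tbl z0.s, {z0.s}, z1.s" rather than "tbl z0.s, z0.s, z1.s".  */
  svint32_t
  permute (svint32_t data, svuint32_t indices)
  {
    return svtbl_s32 (data, indices);
  }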
gcc/
* config/aarch64/aarch64-sve.md (@aarch64_sve_tbl<mode>): Wrap
the second operand in braces.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/asm/dup_lane_bf16.c: Wrap the second
TBL operand in braces.
* gcc.target/aarch64/sve/acle/asm/dup_lane_f16.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_f32.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_f64.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_s16.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_s32.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_s64.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_s8.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_u16.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_u32.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_u64.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/dup_lane_u8.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_bf16.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_f16.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_f32.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_f64.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_s16.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_s32.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_s64.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_s8.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_u16.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_u32.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_u64.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/tbl_u8.c: Likewise.
* gcc.target/aarch64/sve/slp_perm_6.c: Likewise.
* gcc.target/aarch64/sve/slp_perm_7.c: Likewise.
* gcc.target/aarch64/sve/vec_perm_1.c: Likewise.
* gcc.target/aarch64/sve/vec_perm_const_1.c: Likewise.
* gcc.target/aarch64/sve/vec_perm_const_1_overrun.c: Likewise.
* gcc.target/aarch64/sve/vec_perm_const_single_1.c: Likewise.
* gcc.target/aarch64/sve/vec_perm_single_1.c: Likewise.
* gcc.target/aarch64/sve/uzp1_1.c: Shorten the scan-assembler-nots
to just "\ttbl\".
* gcc.target/aarch64/sve/uzp2_1.c: Likewise.
|
|
Past extensions to SVE have required new subsets of all_data; the
SVE2.1 patches will add another. This patch tries to make this more
scalable by defining the multi-size *_data macros to be unions of
single-size *_data macros.
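Schematically, this is the usual union-of-macros pattern (placeholder
names and suffix lists below; the real TYPES_*_data macros in
aarch64-sve-builtins.cc take different parameters and list the actual
type suffixes):

  /* Single-size lists...  */
  #define B_DATA(S)   S (s8), S (u8)
  #define H_DATA(S)   S (f16), S (s16), S (u16)
  #define S_DATA(S)   S (f32), S (s32), S (u32)
  /* ...and multi-size lists built as unions of them, so a new subset
     only needs a new combination rather than another explicit list.  */
  #define BHS_DATA(S) B_DATA (S), H_DATA (S), S_DATA (S)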
gcc/
* config/aarch64/aarch64-sve-builtins.cc (TYPES_all_data): Redefine
in terms of single-size *_data definitions.
(TYPES_bhs_data, TYPES_hs_data, TYPES_sd_data): Likewise.
(TYPES_b_data, TYPES_h_data, TYPES_s_data): New macros.
|
|
g:ede97598e2c recorded separate ISA requirements for streaming
and non-streaming mode. The premise there was that AARCH64_FL_SME
should not be included in the streaming mode requirements, since:
(a) an __arm_streaming_compatible function wouldn't be in streaming
mode if SME wasn't available.
(b) __arm_streaming_compatible functions only allow things that are
possible in non-streaming mode, so the non-streaming architecture
is enough to assemble the code, even if +sme isn't enabled.
(c) we reject __arm_streaming if +sme isn't enabled, so don't need
to test it for individual intrinsics as well.
Later patches lean into this further.
This patch applies the same reasoning to the .md constructs for
base streaming-only SME instructions, guarding them with
TARGET_STREAMING rather than TARGET_STREAMING_SME.
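A hedged example of point (b): the function below only uses operations
that are valid in both modes, so the non-streaming architecture is
enough to assemble it (ACLE attribute and overloaded intrinsic
spellings assumed; compiled with SVE enabled):

  #include <arm_sve.h>

  /* Streaming-compatible: callable in or out of streaming mode, so it
     is restricted to operations available in both.  */
  void
  copy_block (svbool_t pg, int32_t *dst, const int32_t *src)
      __arm_streaming_compatible
  {
    svst1 (pg, dst, svld1 (pg, src));
  }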
gcc/
* config/aarch64/aarch64.h (TARGET_SME): Expand comment.
(TARGET_STREAMING_SME): Delete.
* config/aarch64/aarch64-sme.md: Use TARGET_STREAMING instead of
TARGET_STREAMING_SME.
* config/aarch64/aarch64-sve2.md: Likewise.
|
|
Some code was checking TARGET_STREAMING and TARGET_SME2 separately,
but we now have a macro to test both at once.
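Presumably the combined macro is just the conjunction of the two tests
(illustrative; see aarch64.h for the real definition):

  #define TARGET_STREAMING_SME2 (TARGET_STREAMING && TARGET_SME2)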
gcc/
* config/aarch64/aarch64-sme.md: Use TARGET_STREAMING_SME2
instead of separate TARGET_STREAMING and TARGET_SME2 tests.
* config/aarch64/aarch64-sve2.md: Likewise.
* config/aarch64/iterators.md: Likewise.
|
|
For the aarch64 simd clones patches, it would be useful to be able to
push a function declaration onto the cfun stack, even though it has no
function body associated with it. That is, we want cfun to be null,
current_function_decl to be the decl itself, and the target and
optimisation flags to reflect the declaration.
This patch adds a push/pop_function_decl pair to do that.
I think the more direct way of doing what I want to do under the
existing interface would have been:
push_cfun (nullptr);
invoke_set_current_function_hook (fndecl);
pop_cfun ();
where invoke_set_current_function_hook would need to become public.
But it seemed safer to use the higher-level routines, since they make
sure that the target/optimisation changes are synchronised with the
function changes. In particular, if cfun was null before the
sequence above, the pop_cfun would leave the flags unchanged,
rather than restore them to the state before the push_cfun.
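A hypothetical usage sketch (surrounding names are illustrative, not
taken from the patch):

  /* Make FNDECL current without a body: cfun becomes null,
     current_function_decl is FNDECL, and the target/optimisation
     flags are switched to match FNDECL's attributes.  */
  push_function_decl (fndecl);
  /* ... query target state for the simd clone here ...  */
  pop_function_decl ();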
gcc/
* function.h (push_function_decl, pop_function_decl): Declare.
* function.cc (set_function_decl): New function, extracted from...
(set_cfun): ...here.
(push_function_decl): New function, extracted from...
(push_cfun): ...here.
(pop_cfun_1): New function, extracted from...
(pop_cfun): ...here.
(pop_function_decl): New function.
|
|
2024-11-10 Paul Thomas <pault@gcc.gnu.org>
gcc/fortran
PR fortran/109345
* trans-array.cc (gfc_get_array_span): Unlimited polymorphic
expressions are now treated separately since the span need not
be the same as the element size.
gcc/testsuite/
PR fortran/109345
* gfortran.dg/character_workout_1.f90: Cut trailing whitespace.
* gfortran.dg/pr109345.f90: New test.
|
|
For the loop in the testcase we currently fail to hoist the guard
check of the inner loop (m > 0) out of the outer loop because
find_loop_guard checks all blocks of the outer loop for side-effects,
including those that are skipped by the guard. This is usually
harmless because the guard does not skip any blocks in the outer loop,
but in this case store-motion was applied to the inner loop and thus
there is now a skipped store in the outer loop.
The following properly skips blocks that are dominated by the
entry to the skipped region.
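The shape of the problem, as a hedged sketch rather than the actual
testcase:

  void
  f (int *a, int *sum, int n, int m)
  {
    for (int i = 0; i < n; i++)
      for (int j = 0; j < m; j++)   /* inner-loop guard: m > 0 */
        *sum += a[i * m + j];
  }

After store-motion of *sum out of the inner loop, the materialised
store sits in the outer loop on a path that the m > 0 guard skips.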
gcc/
PR tree-optimization/117510
* tree-ssa-loop-unswitch.cc (find_loop_guard): Only check blocks
that are not skipped by the guard for side effects.
gcc/testsuite/
PR tree-optimization/117510
* gcc.dg/vect/vect-outer-pr117510.c: New testcase.
|
|
This patch improves parameter declaration handling by recording all
parameter kinds: proper procedures, definition module procedures and
forward procedures. This allows error messages to reference any
parameter in the three kinds of procedure. Variables and their
declarations are also stored. Expression, assignment and parameter
checking has been improved to highlight any variable or parameter,
together with its declaration, that causes a conflict.
gcc/m2/ChangeLog:
* gm2-compiler/M2Base.def (MixTypes): Rename parameters.
(MixTypesDecl): New procedure function.
* gm2-compiler/M2Base.mod (BuildOrdFunctions): Add
DefProcedure parameter to PutFunction.
(BuildTruncFunctions): Ditto.
(BuildFloatFunctions): Ditto.
(BuildIntFunctions): Ditto.
(InitBaseFunctions): Ditto.
(MixTypesDecl): New procedure function.
(MixTypes): Reimplement.
* gm2-compiler/M2Check.mod (checkProcType): Replace
NoOfParam with NoOfParamAny.
Replace IsVarParam with IsVarParamAny.
(checkProcedureProcType): Ditto.
* gm2-compiler/M2Error.def: Remove unnecessary export qualified list.
* gm2-compiler/M2GCCDeclare.mod: Replace NoOfParam with NoOfParamAny.
Replace IsVarParam with IsVarParamAny.
(DeclareProcedureToGccWholeProgram): Rename son to
Variable.
(DeclareProcedureToGccSeparateProgram): Ditto.
(PrintKind): New procedure.
(PrintProcedureParameters): Ditto.
(PrintProcedureReturnType): Ditto.
(PrintProcedure): Reimplement.
(PrintProcTypeParameters): New procedure.
(PrintProcType): Ditto.
(DeclareProcType): Rename Son to Parameter.
* gm2-compiler/M2GenGCC.mod: Replace NoOfParam with NoOfParamAny.
Replace IsVarParam with IsVarParamAny.
(ErrorMessageDecl): New procedure.
(checkIncorrectMeta): Replace call to MetaErrorT2 with
ErrorMessageDecl.
(ComparisonMixTypes): Add varleft and varright parameters.
Adjust all callers of ComparisonMixTypes.
* gm2-compiler/M2MetaError.def (MetaErrorDecl): New procedure.
* gm2-compiler/M2MetaError.mod (MetaErrorDecl): New procedure.
* gm2-compiler/M2Options.def (SetXCode): Add -fd flag description
to comment.
* gm2-compiler/M2Options.mod (SetXCode): Add -fd flag description
to comment.
* gm2-compiler/M2Quads.mod (CheckBreak): New procedure.
Replace NoOfParam with NoOfParamAny.
Replace IsVarParam with IsVarParamAny.
(FailParameter): Reimplement using GetVarDeclFullTok.
Generate message for formal parameter, actual parameter and
declaration of actual parameter.
(WarnParameter): Ditto.
(CheckBuildFunction): Reimplement error message using MetaErrorT1.
* gm2-compiler/M2Range.mod: Replace NoOfParam with NoOfParamAny.
Replace IsVarParam with IsVarParamAny.
* gm2-compiler/M2Scaffold.mod (DeclareScaffoldFunctions): Call
PutProcedureDefined after every procedure declaration.
(DeclareArgEnvParams): Add ProperProcedure parameter to PutParam.
* gm2-compiler/M2Size.mod (MakeSize): Add DefProcedure parameter
to PutFunction.
* gm2-compiler/M2Swig.mod: Replace NoOfParam with NoOfParamAny.
Replace IsVarParam with IsVarParamAny.
* gm2-compiler/M2SymInit.mod: Ditto.
* gm2-compiler/M2System.mod (InitSystem): Add DefProcedure
parameter to PutFunction.
* gm2-compiler/P1SymBuild.mod (StartBuildProcedure): Reimplement.
(EndBuildProcedure): Ditto.
(EndBuildForward): Ditto.
* gm2-compiler/P2Build.bnf (BuildProcedureDefinedByForward):
Remove.
(BuildProcedureDefinedByProper): Ditto.
(ForwardDeclaration): Remove BuildProcedureDefinedByForward.
(BuildNoReturnAttribute): Remove parameter.
* gm2-compiler/P2SymBuild.def (BuildNoReturnAttribute): Remove
parameter.
(BuildProcedureDefinedByForward): Remove.
(BuildProcedureDefinedByProper): Ditto.
* gm2-compiler/P2SymBuild.mod (Import): Remove
AreParametersDefinedInDefinition,
AreParametersDefinedInImplementation,
AreProcedureParametersDefined,
ParametersDefinedInDefinition,
ParametersDefinedInImplementation,
GetProcedureDeclaredDefinition,
GetProcedureDeclaredForward,
GetProcedureDeclaredProper,
GetParametersDefinedByForward,
GetParametersDefinedByProper and
PutProcedureNoReturn.
Add PutProcedureParametersDefined,
GetProcedureParametersDefined,
GetProcedureKindDesc,
GetProcedureDeclaredTok,
GetProcedureKind,
GetReturnTypeTok,
SetReturnOptional,
IsReturnOptional,
PutProcedureNoReturn and
PutProcedureDefined.
(Debug): New procedure.
(P2StartBuildDefModule): Space formatting.
(BuildVariable): Reimplement to record full declaration.
(StartBuildProcedure): Reimplement using token to determine
the kind of procedure.
(BuildProcedureHeading): Ditto.
(BuildFPSection): Ditto.
(BuildVarArgs): Ditto.
(BuildOptArg): Ditto.
(BuildProcedureDefinedByForward): Remove.
(BuildProcedureDefinedByProper): Ditto.
(BuildFormalParameterSection): Reimplement so that the
quad stack is unchanged.
(CheckFormalParameterSection): Ditto.
(RemoveFPParameters): New procedure.
(ParameterError): Reimplement.
(StartBuildFormalParameters): Add annotation.
(ParameterMismatch): Reimplement.
(EndBuildFormalParameters): Reimplement to check against
all procedure kinds.
(GetSourceDesc): Remove.
(GetCurSrcDesc): Ditto.
(GetDeclared): Ditto.
(ReturnTypeMismatch): Reimplement.
(BuildFunction): Ditto.
(BuildOptFunction): Ditto.
(CheckOptFunction): New procedure.
(BuildNoReturnAttribute): Remove parameter and obtain
procedure symbol from quad stack.
(CheckProcedureReturn): New procedure.
* gm2-compiler/P3SymBuild.mod (BuildOptArgInitializer):
Preserve ProcSym tok on the quad stack.
Add Assert.
* gm2-compiler/PCSymBuild.mod (fixupProcedureType): Replace
NoOfParam with NoOfParamAny.
* gm2-compiler/SymbolTable.def (GetNthParam): Add ProcedureKind
parameter.
(PutFunction): Ditto.
(PutOptFunction): Ditto.
(IsReturnOptional): Ditto.
(PutParam): Ditto.
(PutVarParam): Ditto.
(PutParamName): Ditto.
(PutProcedureNoReturn): Ditto.
(IsProcedureNoReturn): Ditto.
(IsVarParam): Ditto.
(IsUnboundedParam): Ditto.
(NoOfParam): Ditto.
(ForeachLocalSymDo): Ditto.
(GetProcedureKind): Ditto.
(GetProcedureDeclaredTok): Ditto.
(PutProcedureDeclaredTok): Ditto.
(GetReturnTypeTok): Ditto.
(PutReturnTypeTok): Ditto.
(PutParametersDefinedByForward): New procedure.
(PutProcedureParametersDefined): Ditto.
(PutProcedureDefined): Ditto.
(GetParametersDefinedByProper): Ditto.
(GetProcedureDeclaredForward): Ditto.
(GetProcedureDeclaredProper): Ditto.
(PutProcedureDeclaredProper): Ditto.
(GetProcedureDeclaredDefinition): Ditto.
(PutProcedureDeclaredDefinition): Ditto.
(GetProcedureDefined): Ditto.
(PutUseOptArg): Ditto.
(UsesOptArg): Ditto.
(PutOptArgInit): Ditto.
(SetReturnOptional): Ditto.
(UsesOptArgAny): Ditto.
(GetProcedureKindDesc): Ditto.
(IsReturnOptionalAny): New procedure function.
(GetNthParamAny): Ditto.
(NoOfParamAny): Ditto.
(IsProcedureAnyNoReturn): Ditto.
(AreParametersDefinedInImplementation): Remove.
(ParametersDefinedInImplementation): Ditto.
(AreParametersDefinedInDefinition): Ditto.
(AreProcedureParametersDefined): Ditto.
(ParametersDefinedInDefinition): Ditto.
(ProcedureParametersDefined): Ditto.
(PutParametersDefinedByProper): Ditto.
(PutProcedureDeclaredForward): Ditto.
(GetParametersDefinedByForward): Ditto.
(GetProcedureParametersDefined): Ditto.
(PushOffset): Ditto.
(PopSize): Ditto.
(PushParamSize): Ditto.
(PushSumOfLocalVarSize): Ditto.
(PushSumOfParamSize): Ditto.
(PopOffset): Ditto.
(PopSumOfParamSize): Ditto.
* gm2-compiler/SymbolTable.mod (MakeProcedure): Reimplement.
(PutProcedureNoReturn): Add ProcedureKind parameter.
(GetNthParam): Ditto.
(PutFunction): Ditto.
(PutOptFunction): Ditto.
(IsReturnOptional): Ditto.
(MakeVariableForParam): Ditto.
(PutParam): Ditto.
(PutVarParam): Ditto.
(PutParamName): Ditto.
(AddParameter): Ditto.
(IsVarParam): Ditto.
(IsVarParamAny): Ditto.
(NoOfParam): Ditto.
(HasVarParameters): Ditto.
(IsUnboundedParam): Ditto.
(PutUseVarArgs): Ditto.
(UsesVarArgs): Ditto.
(PutUseOptArg): Ditto.
(UsesOptArg): Ditto.
(UsesOptArgAny): Ditto.
(PutOptArgInit): Ditto.
(IsProcedure): Ditto.
(IsPointer): Ditto.
(IsRecord): Ditto.
(IsArray): Ditto.
(IsEnumeration): Ditto.
(IsUnbounded): Ditto.
(IsSet): Ditto.
(IsSetPacked): Ditto.
(CheckUnbounded): Ditto.
(IsOAFamily): Ditto.
(IsModuleWithinProcedure): Ditto.
(GetDeclaredDef): Ditto.
(GetDeclaredMod): Ditto.
(GetDeclaredFor): Ditto.
(GetProcedureDeclaredForward): Ditto.
(GetProcedureKind): Ditto.
(PutProcedureDeclaredForward): Ditto.
(GetProcedureDeclaredTok): Ditto.
(GetProcedureDeclaredProper): Ditto.
(PutProcedureDeclaredTok): Ditto.
(PutProcedureDeclaredProper): Ditto.
(GetReturnTypeTok): Ditto.
(GetProcedureDeclaredDefinition): Ditto.
(PutReturnTypeTok): Ditto.
(PutProcedureDeclaredDefinition): Ditto.
(GetProcedureKindDesc): Ditto.
(IsProcedureVariable): Ditto.
(IsAModula2Type): Ditto.
(GetParam): Ditto.
(ProcedureParametersDefined): Ditto.
(AreParametersDefinedInImplementation): Remove.
(AreParametersDefinedInDefinition): Ditto.
(AreProcedureParametersDefined): Ditto.
(IsSizeSolved): Ditto.
(IsOffsetSolved): Ditto.
(IsValueSolved): Ditto.
(IsSumOfParamSizeSolved): Ditto.
(PushSize): Ditto.
(PushOffset): Ditto.
(PopSize): Ditto.
(PushValue): Ditto.
(PushParamSize): Ditto.
(PushSumOfLocalVarSize): Ditto.
(PushSumOfParamSize): Ditto.
(PushVarSize): Ditto.
(PopValue): Ditto.
(PopSize): Ditto.
(PopOffset): Ditto.
(PopSumOfParamSize): Ditto.
(PutParametersDefinedByForward): New procedure.
(PutProcedureParametersDefined): Ditto.
(PutProcedureDefined): Ditto.
(GetParametersDefinedByProper): Ditto.
(GetProcedureDeclaredForward): Ditto.
(GetProcedureDeclaredProper): Ditto.
(PutProcedureDeclaredProper): Ditto.
(GetProcedureDeclaredDefinition): Ditto.
(PutProcedureDeclaredDefinition): Ditto.
(GetProcedureDefined): Ditto.
(PutUseOptArg): Ditto.
(UsesOptArg): Ditto.
(PutOptArgInit): Ditto.
(SetReturnOptional): Ditto.
(UsesOptArgAny): Ditto.
(GetProcedureKindDesc): Ditto.
(PutParametersDefinedByProper): Ditto.
(GetParametersDefinedByProper): Ditto.
(IsReturnOptionalAny): New procedure function.
(IsProcedureAnyDefaultBoolean): Ditto.
(IsProcedureAnyBoolean): Ditto.
(IsProcedureAnyNoReturn): Ditto.
(GetNthParamAny): Ditto.
(NoOfParamAny): Ditto.
(IsProcedureAnyNoReturn): Ditto.
(GetProcedureKind): Ditto.
(IsVarParamAny): Ditto.
(IsUnboundedParamAny): Ditto.
(ForeachParamSymDo): New comment.
* gm2-libs-coroutines/SYSTEM.mod: Reformat.
gcc/testsuite/ChangeLog:
* gm2/iso/fail/badexpression3.mod: New test.
* gm2/iso/fail/badparam4.def: New test.
* gm2/iso/fail/badparam4.mod: New test.
Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
|