|
On ARM, NEON doesn't support double, so __is_intrinsic_type_v<double,
whatever> should say false (instead of being ill-formed).
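As a minimal, self-contained sketch of the approach (illustrative names, not
the libstdc++ source): the specializations always exist so the trait query is
well-formed, but the member type is only provided on __aarch64__, where double
vectors exist.

#include <type_traits>

// Primary template: no member 'type', so queries default to "false".
template <typename T, int Bytes>
struct intrinsic_type {};

// Specializations for double exist unconditionally...
template <>
struct intrinsic_type<double, 16>
{
#ifdef __aarch64__
  // ...but only AArch64 provides the member type (a double vector).
  using type = double __attribute__((__vector_size__(16)));
#endif
};

template <>
struct intrinsic_type<double, 8>
{
#ifdef __aarch64__
  using type = double __attribute__((__vector_size__(8)));
#endif
};

template <typename T, int Bytes, typename = void>
struct is_intrinsic_type : std::false_type {};

template <typename T, int Bytes>
struct is_intrinsic_type<T, Bytes,
                         std::void_t<typename intrinsic_type<T, Bytes>::type>>
  : std::true_type {};

// On 32-bit ARM, is_intrinsic_type<double, 16>::value is now simply false
// instead of the program being ill-formed.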
Signed-off-by: Matthias Kretz <m.kretz@gsi.de>
libstdc++-v3/ChangeLog:
PR libstdc++/109261
* include/experimental/bits/simd.h (__intrinsic_type):
Specialize __intrinsic_type<double, 8> and
__intrinsic_type<double, 16> in any case, but provide the member
type only with __aarch64__.
|
|
Signed-off-by: Matthias Kretz <m.kretz@gsi.de>
libstdc++-v3/ChangeLog:
PR libstdc++/109261
* include/experimental/bits/simd_neon.h (_S_reduce): Add
constexpr and make NEON implementation conditional on
not __builtin_is_constant_evaluated.
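For illustration only (this is not the simd_neon.h code, just the general
shape of such a change): the function becomes constexpr and only takes the
intrinsic-based path when it is not constant-evaluated.

template <typename T, int N>
constexpr T
reduce_add (const T (&values)[N])
{
  if (__builtin_is_constant_evaluated ())
    {
      // Constant-evaluation friendly fallback: a plain loop.
      T sum = T ();
      for (int i = 0; i < N; ++i)
        sum += values[i];
      return sum;
    }
  // Run-time path: the real _S_reduce dispatches to NEON intrinsics here.
  T sum = T ();
  for (int i = 0; i < N; ++i)
    sum += values[i];
  return sum;
}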
|
|
This patch fixes the case when a single character constant literal is
passed as a string actual parameter to an ARRAY OF CHAR formal parameter.
To be consistent, a single character is promoted to a string and nul
terminated (and its high value is 1). Previously a single-character
string would not be nul terminated and the high value was 0.
The documentation now includes a section describing the expected behavior,
and this patch includes some regression test code matching the
table inside the documentation.
gcc/ChangeLog:
PR modula2/109952
* doc/gm2.texi (High procedure function): New node.
(Using): New menu entry for High procedure function.
gcc/m2/ChangeLog:
PR modula2/109952
* Make-maintainer.in: Change header to include emacs file mode.
* gm2-compiler/M2GenGCC.mod (BuildHighFromChar): Check whether
operand is a constant string and is nul terminated then return one.
* gm2-compiler/PCSymBuild.mod (WalkFunction): Add default return
TRUE. Static analysis missing return path fix.
* gm2-libs/IO.mod (Init): Rewrite to help static analysis.
* target-independent/m2/gm2-libs.texi: Rebuild.
gcc/testsuite/ChangeLog:
PR modula2/109952
* gm2/pim/run/pass/hightests.mod: New test.
Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
|
|
When I wrote early-remat, the DF_FORWARD block order was a postorder
of a reverse/backward walk (i.e. of the inverted cfg), rather than a
reverse postorder of a forward walk. A postorder of a backward walk
lacked the important property that dominators come before the blocks
they dominate; instead it ensures that postdominators come after
the blocks that they postdominate.
The DF_BACKWARD block order was similarly a postorder of a forward
walk. Since early-remat wanted a standard postorder and reverse
postorder with normal dominator properties, it used the DF_BACKWARD
order instead of the DF_FORWARD order.
g:53dddbfeb213ac4ec39f fixed the DF orders so that DF_FORWARD was
an RPO of a forward walk and so that DF_BACKWARD was an RPO of a
backward walk. This meant that iterating backwards over the
DF_BACKWARD order had the exact problem that the original DF_FORWARD
order had, triggering a flurry of ICEs for SVE.
This fixes the build with SVE enabled. It also fixes an ICE
in g++.target/aarch64/sve/pr99766.C with normal builds. I've
included the test from the PR as well, for extra coverage.
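For illustration only (this is generic code, not GCC's df machinery): a
forward RPO is the reverse of the postorder of a forward depth-first walk,
and it is that order which visits dominators before the blocks they dominate.

#include <algorithm>
#include <vector>

static void
dfs_postorder (int bb, const std::vector<std::vector<int>> &succs,
               std::vector<bool> &seen, std::vector<int> &order)
{
  seen[bb] = true;
  for (int succ : succs[bb])
    if (!seen[succ])
      dfs_postorder (succ, succs, seen, order);
  order.push_back (bb);                          // postorder: successors first
}

std::vector<int>
forward_rpo (const std::vector<std::vector<int>> &succs, int entry)
{
  std::vector<bool> seen (succs.size (), false);
  std::vector<int> order;
  dfs_postorder (entry, succs, seen, order);
  std::reverse (order.begin (), order.end ());   // RPO: entry and dominators first
  return order;
}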
gcc/
PR rtl-optimization/109940
* early-remat.cc (postorder_index): Rename to...
(rpo_index): ...this.
(compare_candidates): Sort by decreasing rpo_index rather than
increasing postorder_index.
(early_remat::sort_candidates): Calculate the forward RPO from
DF_FORWARD.
(early_remat::local_phase): Follow forward RPO using DF_FORWARD,
rather than DF_BACKWARD in reverse.
gcc/testsuite/
* gcc.dg/torture/pr109940.c: New test.
|
|
As the PR says we shouldn't be using qualifier_unsigned for the return type of the __ssat intrinsics.
UNSIGNED_SAT_BINOP_UNSIGNED_IMM_QUALIFIERS already exists for that.
This was just a thinko.
This patch fixes this and the warning with -Wconversion goes away.
Bootstrapped and tested on arm-none-linux-gnueabihf.
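A hypothetical reproducer along the lines of the PR, assuming the ACLE __ssat
intrinsic from arm_acle.h (this is not the new testcase): with the old
unsigned return qualifier the return below warned under -Wconversion; with
qualifier_none it no longer does.

/* Compile for Arm with -Wconversion.  */
#include <arm_acle.h>

int
saturate (int x)
{
  return __ssat (x, 13);   /* saturate x to the signed 13-bit range */
}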
gcc/ChangeLog:
PR target/109939
* config/arm/arm-builtins.cc (SAT_BINOP_UNSIGNED_IMM_QUALIFIERS): Use
qualifier_none for the return operand.
gcc/testsuite/ChangeLog:
PR target/109939
* gcc.target/arm/pr109939.c: New test.
|
|
This patch adds mask logic auto-vectorization, defining the patterns
as "define_insn_and_split" so that the combine pass can easily combine
series of instructions.
For example:
combine vmxor.mm + vmnot.m into vmxnor.mm
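An assumed example of source that exercises this (not taken from the new
tests): comparing two conditions for equality is an XNOR of the two masks,
so combine can now fuse the vmxor.mm + vmnot.m sequence into a single
vmxnor.mm.

void
f (int *__restrict r, int *__restrict a, int *__restrict b, int n)
{
  for (int i = 0; i < n; i++)
    r[i] = (a[i] > 0) == (b[i] > 0);   /* mask XNOR when vectorized */
}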
Signed-off-by: Juzhe-Zhong <juzhe.zhong@rivai.ai>
gcc/ChangeLog:
* config/riscv/autovec.md (<optab><mode>3): New pattern.
(one_cmpl<mode>2): Ditto.
(*<optab>not<mode>): Ditto.
(*n<optab><mode>): Ditto.
* config/riscv/riscv-v.cc (expand_vec_cmp_float): Change to
one_cmpl.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/cmp/vcond-4.c: New test.
* gcc.target/riscv/rvv/autovec/cmp/vcond_run-4.c: New test.
|
|
The bogus warning is present on 32-bit ppc-vx7r2 too, so drop the 64
from the powerpc xfail triplet.
for gcc/testsuite/ChangeLog
* gcc.dg/uninit-pred-9_b.c: Xfail bogus warning on 32-bit ppc
as well.
|
|
The expected results for signbit-2 only arise on x86 with avx512f
disabled and sse2 enabled. The test already disables avx512f
explicitly, but it fails to enable sse2.
for gcc/testsuite/ChangeLog
* gcc.dg/signbit-2.c: Add -msse2 on x86.
|
|
The sysconf function is only available in rtp mode on vxworks. In
kernel mode, it is not even declared, but the feature test macro in
the testsuite doesn't notice its absence because it's a link test, and
vxworks kernel mode uses partial linking.
This patch introduces an alternate test on vxworks targets to check
for a declaration and for an often-used sysconf parameter.
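The probe amounts to compiling something like the following (a rough sketch,
not the exact snippet in target-supports.exp); a compile-time check of the
declaration and of _SC_PAGESIZE catches the kernel-mode absence that a link
test misses under partial linking.

#include <unistd.h>

int
main (void)
{
  /* Both the declaration of sysconf and the _SC_PAGESIZE constant must
     be visible for the effective target to be considered supported.  */
  return sysconf (_SC_PAGESIZE) == -1;
}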
for gcc/testsuite/ChangeLog
* lib/target-supports.exp (check_effective_target_sysconf):
Check for declaration and _SC_PAGESIZE on vxworks.
|
|
Following Richi's suggestion in [1], I'm working on deferring
cost evaluation to next to the transformation. This patch
enhances function vect_transform_slp_perm_load_1, which
could under-cost vector permutations, since the costing
doesn't try to consider nvectors_per_build and is therefore
inconsistent with the transformation part.
Basically it changes the below
if (index == count)
  {
    if (!noop_p)
      {
        // A ...
        // ++*n_perms;
        if (!analyze_only)
          {
            // B1 ...
            // B2 ...
            for ...
              // B3 building VEC_PERM_EXPR
          }
      }
    else if (!analyze_only)
      {
        // no B2 since no any further uses here.
        for ...
          // B4 building nothing
      }
    // B5 ...
  }
to:
if (index == count)
  {
    if (!noop_p)
      {
        // A ...
        if (!analyze_only)
          // B1 ...
        // B2 ... (trivial computations during analyze_only or not)
        for ...
          {
            // now n_perms is consistent with building VEC_PERM_EXPR
            // ++*n_perms;
            if (analyze_only)
              continue;
            // B3 building VEC_PERM_EXPR
          }
      }
    else if (!analyze_only)
      {
        // no B2 since no any further uses here.
        for ...
          // B4 building nothing
      }
    // B5 ...
  }
[1] https://gcc.gnu.org/pipermail/gcc-patches/2021-January/563624.html
gcc/ChangeLog:
* tree-vect-slp.cc (vect_transform_slp_perm_load_1): Adjust the
calculation on n_perms by considering nvectors_per_build.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/costmodel/ppc/costmodel-slp-perm.c: New test.
|
|
This patch enables RVV auto-vectorization, including floating-point
unordered and ordered comparisons.
The testcases are leveraged from Richard, so Richard is included as co-author.
This patch is also the prerequisite patch for my current middle-end work.
Without it, I can't support the len_mask_xxx middle-end patterns,
since the mask is generated by a comparison.
For example,
  for (int i...; i < n)
    if (cond[i])
      a[i] = b[i];
We need len_mask_load/len_mask_store for such code and I am going to
support them in the middle-end after this patch is merged.
Both integer && floating-point (ordered and unordered) comparisons are tested.
Built && regression passed.
Signed-off-by: Juzhe-Zhong <juzhe.zhong@rivai.ai>
Co-Authored-By: Richard Sandiford <richard.sandiford@arm.com>
gcc/ChangeLog:
* config/riscv/autovec.md (@vcond_mask_<mode><vm>): New pattern.
(vec_cmp<mode><vm>): New pattern.
(vec_cmpu<mode><vm>): New pattern.
(vcond<V:mode><VI:mode>): New pattern.
(vcondu<V:mode><VI:mode>): New pattern.
* config/riscv/riscv-protos.h (enum insn_type): Add new enum.
(emit_vlmax_merge_insn): New function.
(emit_vlmax_cmp_insn): Ditto.
(emit_vlmax_cmp_mu_insn): Ditto.
(expand_vec_cmp): Ditto.
(expand_vec_cmp_float): Ditto.
(expand_vcond): Ditto.
* config/riscv/riscv-v.cc (emit_vlmax_merge_insn): Ditto.
(emit_vlmax_cmp_insn): Ditto.
(emit_vlmax_cmp_mu_insn): Ditto.
(get_cmp_insn_code): Ditto.
(expand_vec_cmp): Ditto.
(expand_vec_cmp_float): Ditto.
(expand_vcond): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/rvv.exp:
* gcc.target/riscv/rvv/autovec/cmp/vcond-1.c: New test.
* gcc.target/riscv/rvv/autovec/cmp/vcond-2.c: New test.
* gcc.target/riscv/rvv/autovec/cmp/vcond-3.c: New test.
* gcc.target/riscv/rvv/autovec/cmp/vcond_run-1.c: New test.
* gcc.target/riscv/rvv/autovec/cmp/vcond_run-2.c: New test.
* gcc.target/riscv/rvv/autovec/cmp/vcond_run-3.c: New test.
|
|
This patch supports the RVV VREINTERPRET from the vbool*_t to the
vuint*m1_t. Aka:
vuint*m1_t __riscv_vreinterpret_x_x(vbool*_t);
These APIs help users convert the vbool*_t vector to the LMUL=1
unsigned integer vuint*_t. According to the RVV intrinsic SPEC below,
the reinterpret intrinsics only change the types of the underlying contents.
https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/master/rvv-intrinsic-rfc.md#reinterpret-vbool-o-vintm1
For example, given the code below:
vuint8m1_t test_vreinterpret_v_b1_vuint8m1 (vbool1_t src) {
return __riscv_vreinterpret_v_b1_u8m1 (src);
}
It will generate assembly code similar to the below:
vsetvli a5,zero,e8,m8,ta,ma
vlm.v v1,0(a1)
vs1r.v v1,0(a0)
ret
Please NOTE that the test files don't cover all the possible combinations
of the intrinsic APIs introduced by this PATCH, as there are too many.
This is the last PATCH for the reinterpret between the signed/unsigned
and the bool vector types.
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/genrvv-type-indexer.cc (main): Add
unsigned_eew*_lmul1_interpret for indexer.
* config/riscv/riscv-vector-builtins-functions.def (vreinterpret):
Register vuint*m1_t interpret function.
* config/riscv/riscv-vector-builtins-types.def (DEF_RVV_UNSIGNED_EEW8_LMUL1_INTERPRET_OPS):
New macro for vuint8m1_t.
(DEF_RVV_UNSIGNED_EEW16_LMUL1_INTERPRET_OPS): Likewise.
(DEF_RVV_UNSIGNED_EEW32_LMUL1_INTERPRET_OPS): Likewise.
(DEF_RVV_UNSIGNED_EEW64_LMUL1_INTERPRET_OPS): Likewise.
(vbool1_t): Add to unsigned_eew*_interpret_ops.
(vbool2_t): Likewise.
(vbool4_t): Likewise.
(vbool8_t): Likewise.
(vbool16_t): Likewise.
(vbool32_t): Likewise.
(vbool64_t): Likewise.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_UNSIGNED_EEW8_LMUL1_INTERPRET_OPS):
New macro for vuint*m1_t.
(DEF_RVV_UNSIGNED_EEW16_LMUL1_INTERPRET_OPS): Likewise.
(DEF_RVV_UNSIGNED_EEW32_LMUL1_INTERPRET_OPS): Likewise.
(DEF_RVV_UNSIGNED_EEW64_LMUL1_INTERPRET_OPS): Likewise.
(required_extensions_p): Add vuint*m1_t interpret case.
* config/riscv/riscv-vector-builtins.def (unsigned_eew8_lmul1_interpret):
Add vuint*m1_t interpret to base type.
(unsigned_eew16_lmul1_interpret): Likewise.
(unsigned_eew32_lmul1_interpret): Likewise.
(unsigned_eew64_lmul1_interpret): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/misc_vreinterpret_vbool_vint.c:
Enrich test cases.
|
|
This patch supports the RVV VREINTERPRET from the vbool*_t to the
vint*m1_t. Aka:
vint*m1_t __riscv_vreinterpret_x_x(vbool*_t);
These APIs help users convert the vbool*_t vector to the LMUL=1
signed integer vint*_t. According to the RVV intrinsic SPEC below,
the reinterpret intrinsics only change the types of the underlying contents.
https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/master/rvv-intrinsic-rfc.md#reinterpret-vbool-o-vintm1
For example, given the code below:
vint8m1_t test_vreinterpret_v_b1_vint8m1 (vbool1_t src) {
return __riscv_vreinterpret_v_b1_i8m1 (src);
}
It will generate assembly code similar to the below:
vsetvli a5,zero,e8,m8,ta,ma
vlm.v v1,0(a1)
vs1r.v v1,0(a0)
ret
Please NOTE that the test files don't cover all the possible combinations
of the intrinsic APIs introduced by this PATCH, as there are too many.
The reinterpret from vbool*_t to vuint*m1_t with lmul=1 will be covered
in another PATCH.
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/genrvv-type-indexer.cc (EEW_SIZE_LIST): New macro
for the eew size list.
(LMUL1_LOG2): New macro for the log2 value of lmul=1.
(main): Add signed_eew*_lmul1_interpret for indexer.
* config/riscv/riscv-vector-builtins-functions.def (vreinterpret):
Register vint*m1_t interpret function.
* config/riscv/riscv-vector-builtins-types.def (DEF_RVV_SIGNED_EEW8_LMUL1_INTERPRET_OPS):
New macro for vint8m1_t.
(DEF_RVV_SIGNED_EEW16_LMUL1_INTERPRET_OPS): Likewise.
(DEF_RVV_SIGNED_EEW32_LMUL1_INTERPRET_OPS): Likewise.
(DEF_RVV_SIGNED_EEW64_LMUL1_INTERPRET_OPS): Likewise.
(vbool1_t): Add to signed_eew*_interpret_ops.
(vbool2_t): Likewise.
(vbool4_t): Likewise.
(vbool8_t): Likewise.
(vbool16_t): Likewise.
(vbool32_t): Likewise.
(vbool64_t): Likewise.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_SIGNED_EEW8_LMUL1_INTERPRET_OPS):
New macro for vint*m1_t.
(DEF_RVV_SIGNED_EEW16_LMUL1_INTERPRET_OPS): Likewise.
(DEF_RVV_SIGNED_EEW32_LMUL1_INTERPRET_OPS): Likewise.
(DEF_RVV_SIGNED_EEW64_LMUL1_INTERPRET_OPS): Likewise.
(required_extensions_p): Add vint8m1_t interpret case.
* config/riscv/riscv-vector-builtins.def (signed_eew8_lmul1_interpret):
Add vint*m1_t interpret to base type.
(signed_eew16_lmul1_interpret): Likewise.
(signed_eew32_lmul1_interpret): Likewise.
(signed_eew64_lmul1_interpret): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/misc_vreinterpret_vbool_vint.c:
Enrich the test cases.
|
|
To fix this issue, we separate the VL operand and the normal operands.
gcc/ChangeLog:
* config/riscv/autovec.md: Adjust for new interface.
* config/riscv/riscv-protos.h (emit_vlmax_insn): Add VL operand.
(emit_nonvlmax_insn): Add AVL operand.
* config/riscv/riscv-v.cc (emit_vlmax_insn): Add VL operand.
(emit_nonvlmax_insn): Add AVL operand.
(sew64_scalar_helper): Adjust for new interface.
(expand_tuple_move): Ditto.
* config/riscv/vector.md: Ditto.
Signed-off-by: Juzhe-Zhong <juzhe.zhong@rivai.ai>
|
|
This simple patch removes magic numbers, making the code more reasonable.
gcc/ChangeLog:
* config/riscv/riscv-v.cc (expand_vec_series): Remove magic number.
(expand_const_vector): Ditto.
(legitimize_move): Ditto.
(sew64_scalar_helper): Ditto.
(expand_tuple_move): Ditto.
(expand_vector_init_insert_elems): Ditto.
* config/riscv/riscv.cc (vector_zero_call_used_regs): Ditto.
Signed-off-by: Juzhe-Zhong <juzhe.zhong@rivai.ai>
|
|
Also for 64-bit vector abs intrinsics _mm_abs_{pi8,pi16,pi32}.
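A hedged illustration of what the fold enables (not the new testcase itself):
the intrinsic below is now represented as a gimple ABS_EXPR on the vector, so
it can take part in further tree-level simplifications instead of remaining
an opaque builtin call.

/* Compile with -mssse3 (or newer).  */
#include <immintrin.h>

__m128i
abs32 (__m128i x)
{
  return _mm_abs_epi32 (x);   /* folded to ABS_EXPR in gimple */
}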
gcc/ChangeLog:
PR target/109900
* config/i386/i386.cc (ix86_gimple_fold_builtin): Fold
_mm{,256,512}_abs_{epi8,epi16,epi32,epi64} and
_mm_abs_{pi8,pi16,pi32} into gimple ABS_EXPR.
(ix86_masked_all_ones): Handle 64-bit mask.
* config/i386/i386-builtin.def: Replace icode of related
non-mask simd abs builtins with CODE_FOR_nothing.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr109900.c: New test.
|
|
|
|
Size expressions were sometimes lost and not gimplified correctly,
leading to ICEs and incorrect evaluation order. Fix this by 1) not
recursing into pointers when gimplifying parameters, which was incorrect
because it might access variables declared later for incomplete
structs, and 2) adding a decl expr for variably-modified arrays
that are pointed to by parameters declared as arrays.
PR c/109450
gcc/
* function.cc (gimplify_parm_type): Remove function.
(gimplify_parameters): Call gimplify_type_sizes.
gcc/c/
* c-decl.cc (add_decl_expr): New function.
(grokdeclarator): Add decl expr for size expression in
types pointed to by parameters declared as arrays.
gcc/testsuite/
* gcc.dg/pr109450-1.c: New test.
* gcc.dg/pr109450-2.c: New test.
* gcc.dg/vla-26.c: New test.
|
|
Size expressions were sometimes lost and not gimplified correctly, leading to
ICEs and incorrect evaluation order. Fix this by 1) not recursing into
pointers when gimplifying parameters in the middle-end (the code is merged with
gimplify_type_sizes), which is incorrect because it might access variables
declared later for incomplete structs, 2) tracking size expressions for
struct/union members correctly, and 3) emitting code to evaluate size
expressions for missing cases (nested functions, empty declarations, and
structs/unions).
PR c/70418
PR c/106465
PR c/107557
PR c/108423
gcc/c/
* c-decl.cc (start_decl): Make sure size expressions are
evaluated only in the correct context.
(grokdeclarator): Size expression in fields may need a bind
expression, make sure DECL_EXPR is always created.
(grokfield, declspecs_add_type): Pass along size expressions.
(finish_struct): Remove unneeded DECL_EXPR.
(start_function): Evaluate size expressions for nested functions.
* c-parser.cc (c_parser_struct_declarations,
c_parser_struct_or_union_specifier): Pass along size expressions.
(c_parser_declaration_or_fndef): Evaluate size expression.
(c_parser_objc_at_property_declaration,
c_parser_objc_class_instance_variables): Adapt.
* c-tree.h (grokfield): Adapt declaration.
gcc/testsuite/
* gcc.dg/nested-vla-1.c: New test.
* gcc.dg/nested-vla-2.c: New test.
* gcc.dg/nested-vla-3.c: New test.
* gcc.dg/pr70418.c: New test.
* gcc.dg/pr106465.c: New test.
* gcc.dg/pr107557-1.c: New test.
* gcc.dg/pr107557-2.c: New test.
* gcc.dg/pr108423-1.c: New test.
* gcc.dg/pr108423-2.c: New test.
* gcc.dg/pr108423-3.c: New test.
* gcc.dg/pr108423-4.c: New test.
* gcc.dg/pr108423-5.c: New test.
* gcc.dg/pr108423-6.c: New test.
* gcc.dg/typename-vla-2.c: New test.
* gcc.dg/typename-vla-3.c: New test.
* gcc.dg/typename-vla-4.c: New test.
* gcc.misc-tests/gcov-pr85350.c: Adapt.
|
|
By making use of the 'addsub_operator' added in the last patch.
gcc/ChangeLog:
* config/xtensa/xtensa.md (*addsubx): Rename from '*addx',
and change to also accept '*subx' pattern.
(*subx): Remove.
|
|
This patch removes one machine instruction from the "single bit extraction
with shifting" operation, and tries to eliminate the conditional
branch if CST2_POW2 doesn't fit into signed 12 bits, with the help
of the ifcvt optimization.
/* example #1 */
int test0(int x) {
return (x & 1048576) != 0 ? 1024 : 0;
}
extern int foo(void);
int test1(void) {
return (foo() & 1048576) != 0 ? 16777216 : 0;
}
;; before
test0:
movi a9, 0x400
srai a2, a2, 10
and a2, a2, a9
ret.n
test1:
addi sp, sp, -16
s32i.n a0, sp, 12
call0 foo
extui a2, a2, 20, 1
slli a2, a2, 20
beqz.n a2, .L2
movi.n a2, 1
slli a2, a2, 24
.L2:
l32i.n a0, sp, 12
addi sp, sp, 16
ret.n
;; after
test0:
extui a2, a2, 20, 1
slli a2, a2, 10
ret.n
test1:
addi sp, sp, -16
s32i.n a0, sp, 12
call0 foo
l32i.n a0, sp, 12
extui a2, a2, 20, 1
slli a2, a2, 24
addi sp, sp, 16
ret.n
In addition, if the left shift amount ('exact_log2(CST2_POW2)') is
between 1 and 3 and either an addition or a subtraction with another
register follows, emit an ADDX[248] or SUBX[248] machine instruction
instead of separate left shift and add/subtract ones.
/* example #2 */
int test2(int x, int y) {
return ((x & 1048576) != 0 ? 4 : 0) + y;
}
int test3(int x, int y) {
return ((x & 2) != 0 ? 8 : 0) - y;
}
;; before
test2:
movi.n a9, 4
srai a2, a2, 18
and a2, a2, a9
add.n a2, a2, a3
ret.n
test3:
movi.n a9, 8
slli a2, a2, 2
and a2, a2, a9
sub a2, a2, a3
ret.n
;; after
test2:
extui a2, a2, 20, 1
addx4 a2, a2, a3
ret.n
test3:
extui a2, a2, 1, 1
subx8 a2, a2, a3
ret.n
gcc/ChangeLog:
* config/xtensa/predicates.md (addsub_operator): New.
* config/xtensa/xtensa.md (*extzvsi-1bit_ashlsi3,
*extzvsi-1bit_addsubx): New insn_and_split patterns.
* config/xtensa/xtensa.cc (xtensa_rtx_costs):
Add a special case about ifcvt 'noce_try_cmove()' to handle
constant loads that do not fit into signed 12 bits in the
patterns added above.
|
|
The x86 backend looks at the SLP node passed to the add_stmt_cost
hook when costing vec_construct, looking for elements that require
a move from a GPR to a vector register and cost that. But since
vect_prologue_cost_for_slp decomposes the cost for an external
SLP node into individual pieces this cost gets applied N times
without a chance for the backend to know it's just dealing with
a part of the SLP node. Just looking at a part is also not perfect
since the GPR to XMM move cost applies only once per distinct
element so handling the whole SLP node one more correctly reflects
cost (albeit without considering other external SLP nodes).
The following addresses the issue by passing down the SLP node
only for one piece and nullptr for the rest. The x86 backend
is currently the only one looking at it.
In the future the cost of external elements is something to deal
with globally but that would require the full SLP tree be available
to costing.
It's difficult to write a testcase: at the tipping point not
vectorizing is better, so I'll follow up with x86-specific adjustments
and will see to adding a testcase later.
PR tree-optimization/109747
* tree-vect-slp.cc (vect_prologue_cost_for_slp): Pass down
the SLP node only once to the cost hook.
|
|
Some miscomputation of rtx_costs led to sub-optimal code for
single-bit bit insertions. This patch implements TARGET_INSN_COST,
which has a chance to see the whole insn during insn combination;
in particular the SET_DEST of (set (zero_extract (...) ...)).
gcc/
* config/avr/avr.cc (avr_insn_cost): New static function.
(TARGET_INSN_COST): Define to that function.
|
|
The following also accounts for a GPR->XMM move cost for splat
operations and properly guards eliding the cost when moving from
memory only for SSE4.1 or HImode or larger operands. This
doesn't fix the PR fully yet.
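A generic illustration of a splat that involves such a move (an assumption,
not the PR's testcase): the scalar argument lives in a GPR and has to be
transferred to an XMM register before it can be broadcast, which the cost
model now accounts for.

typedef int v4si __attribute__ ((vector_size (16)));

v4si
splat (int x)
{
  return (v4si) { x, x, x, x };   /* GPR->XMM move, then broadcast */
}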
PR target/109944
* config/i386/i386.cc (ix86_vector_costs::add_stmt_cost):
For vector construction or splats apply GPR->XMM move
costing. QImode memory can be handled directly only
with SSE4.1 pinsrb.
|
|
This is a small adjustment to the work done for PR108752 and
better reflects the cost of the generated sequence.
PR tree-optimization/108752
* tree-vect-stmts.cc (vectorizable_operation): For bit
operations with generic word_mode vectors do not cost
an extra stmt. For plus, minus and negate also cost the
constant materialization.
|
|
Add V8QImode and V4QImode vector shift patterns that call into
ix86_expand_vecop_qihi_partial. Generate special sequences
for constant count operands.
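An assumed example of the kind of code this enables (modeled loosely on the
new test names, not copied from them): a shift of eight QImode elements now
matches the new V8QImode pattern and is expanded through
ix86_expand_vecop_qihi_partial, with a special sequence for the constant
count.

typedef unsigned char v8qi __attribute__ ((vector_size (8)));

v8qi
shl3 (v8qi x)
{
  return x << 3;   /* constant shift count */
}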
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_expand_vecop_qihi_partial):
Call ix86_expand_vec_shift_qihi_constant for shifts
with constant count operand.
* config/i386/i386.cc (ix86_shift_rotate_cost):
Handle V4QImode and V8QImode.
* config/i386/mmx.md (<insn>v8qi3): New insn pattern.
(<insn>v4qi3): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/i386/vect-shiftv4qi.c: New test.
* gcc.target/i386/vect-shiftv8qi.c: New test.
|
|
I just noticed the warning:
../../../riscv-gcc/gcc/config/riscv/vector.md:618:1: warning: source
missing a mode?
gcc/ChangeLog:
* config/riscv/vector.md: Add mode.
Signed-off-by: Juzhe-Zhong <juzhe.zhong@rivai.ai>
|
|
This patch removes a buggy special case in irange::invert which seems
to have been broken for a while, and probably never triggered because
the legacy code was handled elsewhere, and the non-legacy code was
using an int_range_max of int_range<255>, which made it extremely
unlikely for num_ranges == 255. However, with auto-resizing ranges,
int_range_max will start off at 3 and can hit this bogus code in the
unswitching code.
PR tree-optimization/109934
gcc/ChangeLog:
* value-range.cc (irange::invert): Remove buggy special case.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr109934.c: New test.
|
|
This dumps ANTIC_OUT before pruning clobbered mems from it as part
of the ANTIC_IN compute.
* tree-ssa-pre.cc (compute_antic_aux): Dump the correct
ANTIC_OUT.
|
|
At -O2, and so with SLP vectorisation enabled:
struct complx_t { float re, im; };
complx_t add(complx_t a, complx_t b) {
return {a.re + b.re, a.im + b.im};
}
generates:
fmov w3, s1
fmov x0, d0
fmov x1, d2
fmov w2, s3
bfi x0, x3, 32, 32
fmov d31, x0
bfi x1, x2, 32, 32
fmov d30, x1
fadd v31.2s, v31.2s, v30.2s
fmov x1, d31
lsr x0, x1, 32
fmov s1, w0
lsr w0, w1, 0
fmov s0, w0
ret
This is because complx_t is passed and returned in FPRs, but GCC gives
it DImode. We therefore “need” to assemble a DImode pseudo from the
two individual floats, bitcast it to a vector, do the arithmetic,
bitcast it back to a DImode pseudo, then extract the individual floats.
There are many problems here. The most basic is that we shouldn't
use SLP for such a trivial example. But SLP should in principle be
beneficial for more complicated examples, so preventing SLP for the
example above just changes the reproducer needed. A more fundamental
problem is that it doesn't make sense to use single DImode pseudos in a
testcase like this. I have a WIP patch to allow re and im to be stored
in individual SFmode pseudos instead, but it's quite an invasive change
and might end up going nowhere.
A simpler problem to tackle is that we allow DImode pseudos to be stored
in FPRs, but we don't provide any patterns for inserting values into
them, even though INS makes that easy for element-like insertions.
This patch adds some patterns for that.
Doing that showed that aarch64_modes_tieable_p was too strict:
it didn't allow SFmode and DImode values to be tied, even though
both of them occupy a single GPR and FPR, and even though we allow
both classes to change between the modes.
The *aarch64_bfidi<ALLX:mode>_subreg_<SUBDI_BITS> pattern is
especially ugly, but it's not clear what target-independent
code ought to simplify it to, if it was going to simplify it.
We should probably do the same thing for extractions, but that's left
as future work.
After the patch we generate:
ins v0.s[1], v1.s[0]
ins v2.s[1], v3.s[0]
fadd v0.2s, v0.2s, v2.2s
fmov x0, d0
ushr d1, d0, 32
lsr w0, w0, 0
fmov s0, w0
ret
which seems like a step in the right direction.
All in all, there's nothing elegant about this patch. It just
seems like the least worst option.
gcc/
PR target/109632
* config/aarch64/aarch64.cc (aarch64_modes_tieable_p): Allow
subregs between any scalars that are 64 bits or smaller.
* config/aarch64/iterators.md (SUBDI_BITS): New int iterator.
(bits_etype): New int attribute.
* config/aarch64/aarch64.md (*insv_reg<mode>_<SUBDI_BITS>)
(*aarch64_bfi<GPI:mode><ALLX:mode>_<SUBDI_BITS>): New patterns.
(*aarch64_bfidi<ALLX:mode>_subreg_<SUBDI_BITS>): Likewise.
gcc/testsuite/
* gcc.target/aarch64/ins_bitfield_1.c: New test.
* gcc.target/aarch64/ins_bitfield_2.c: Likewise.
* gcc.target/aarch64/ins_bitfield_3.c: Likewise.
* gcc.target/aarch64/ins_bitfield_4.c: Likewise.
* gcc.target/aarch64/ins_bitfield_5.c: Likewise.
* gcc.target/aarch64/ins_bitfield_6.c: Likewise.
|
|
In a follow-up patch, I wanted to use an int iterator to iterate
over various possible values of a const_int. But one problem
with int iterators was that there was no way of referring to the
current value of the iterator. This is unlike modes and codes,
which provide automatic "mode", "MODE", "code" and "CODE"
attributes. These automatic definitions are the equivalent
of an explicit:
(define_mode_attr mode [(QI "qi") (HI "hi") ...])
We obviously can't do that for every possible value of an int.
One option would have been to go for some kind of lazily-populated
attribute. But that sounds quite complicated. This patch instead
goes for the simpler approach of allowing <FOO> to refer to the
current value of FOO.
In principle it would be possible to allow the same thing
for mode and code iterators. But for modes there are at least
4 realistic possibilities:
- the E_* enumeration value (which is what this patch would give)
- the user-facing C token, like DImode, SFmode, etc.
- the equivalent of <MODE>
- the equivalent of <mode>
Because of this ambiguity, it seemed better to stick to the
current approach for modes. For codes it's less clear-cut,
but <CODE> and <code> are both realistic possibilities, so again
it seemed better to be explicit.
The patch also removes “Each @var{int} must have the same rtx format.
@xref{RTL Classes}.”, which was erroneously copied from the code
iterator section.
gcc/
* doc/md.texi: Document that <FOO> can be used to refer to the
numerical value of an int iterator FOO. Tweak other parts of
the int iterator documentation.
* read-rtl.cc (iterator_group::has_self_attr): New field.
(map_attr_string): When has_self_attr is true, make <FOO>
expand to the current value of iterator FOO.
(initialize_iterators): Set has_self_attr for int iterators.
|
|
This patch refactors the framework of RVV auto-vectorization.
We found that we keep adding helpers && wrappers when implementing
auto-vectorization, which makes the RVV auto-vectorization very messy.
After double-checking my downstream RVV GCC and assembling all the
auto-vectorization patterns we are going to have, I refactored the
RVV framework to make it easier and more flexible for future use.
For example, we will definitely implement len_mask_load/len_mask_store
patterns which have both a length && mask operand and use an undefined
merge operand.
len_cond_div or cond_div will have a length or mask operand and use a real
merge operand instead of an undefined merge operand.
Also, we will have some patterns that use tail undisturbed and mask any,
etc. We will definitely have various features.
Based on these circumstances, we add the following private members:
int m_op_num;
/* It's true when the pattern has a dest operand.  Most of the patterns
   have a dest operand, whereas some patterns like STOREs do not.  */
bool m_has_dest_p;
bool m_fully_unmasked_p;
bool m_use_real_merge_p;
bool m_has_avl_p;
bool m_vlmax_p;
bool m_has_tail_policy_p;
bool m_has_mask_policy_p;
enum tail_policy m_tail_policy;
enum mask_policy m_mask_policy;
machine_mode m_dest_mode;
machine_mode m_mask_mode;
I believe these variables can cover all potential situations.
The instruction generator wrapper is "emit_insn", which adds the operands
and emits the instruction according to the variables mentioned above.
After this is done, we can easily add helpers without changing the base
class "insn_expander".
Currently, we have "emit_vlmax_tany_many" and "emit_nonvlmax_tany_many".
For example, when we want to emit a binary operation, we just use
emit_vlmax_tany_many (...RVV_BINOP_NUM...).
So, if we support a ternary operation in the future, it's quite simple.
"*_tany_many" means we are using tail any and mask any.
We will definitely need tail undisturbed or mask undisturbed when we
support those patterns in the middle-end. It's very simple to extend such
a helper based on the current framework; we can do that in the future
like this:
void
emit_nonvlmax_tu_mu (unsigned icode, int op_num, rtx *ops)
{
  machine_mode data_mode = GET_MODE (ops[0]);
  machine_mode mask_mode = get_mask_mode (data_mode).require ();
  /* The number = 11 is because we have maximum 11 operands for
     RVV instruction patterns according to vector.md.  */
  insn_expander<11> e (/*OP_NUM*/ op_num,
                       /*HAS_DEST_P*/ true,
                       /*USE_ALL_TRUES_MASK_P*/ true,
                       /*USE_UNDEF_MERGE_P*/ true,
                       /*HAS_AVL_P*/ true,
                       /*VLMAX_P*/ false,
                       /*HAS_TAIL_POLICY_P*/ true,
                       /*HAS_MASK_POLICY_P*/ true,
                       /*TAIL_POLICY*/ TAIL_UNDISTURBED,
                       /*MASK_POLICY*/ MASK_UNDISTURBED,
                       /*DEST_MODE*/ data_mode,
                       /*MASK_MODE*/ mask_mode);
  e.emit_insn ((enum insn_code) icode, ops);
}
That's enough (I have tested it fully in my downstream RVV GCC).
I didn't add it in this patch.
Thanks.
Signed-off-by: Juzhe-Zhong <juzhe.zhong@rivai.ai>
gcc/ChangeLog:
* config/riscv/autovec.md: Refactor the framework of RVV auto-vectorization.
* config/riscv/riscv-protos.h (RVV_MISC_OP_NUM): Ditto.
(RVV_UNOP_NUM): New macro.
(RVV_BINOP_NUM): Ditto.
(legitimize_move): Refactor the framework of RVV auto-vectorization.
(emit_vlmax_op): Ditto.
(emit_vlmax_reg_op): Ditto.
(emit_len_op): Ditto.
(emit_len_binop): Ditto.
(emit_vlmax_tany_many): Ditto.
(emit_nonvlmax_tany_many): Ditto.
(sew64_scalar_helper): Ditto.
(expand_tuple_move): Ditto.
* config/riscv/riscv-v.cc (emit_pred_op): Ditto.
(emit_pred_binop): Ditto.
(emit_vlmax_op): Ditto.
(emit_vlmax_tany_many): New function.
(emit_len_op): Remove.
(emit_nonvlmax_tany_many): New function.
(emit_vlmax_reg_op): Remove.
(emit_len_binop): Ditto.
(emit_index_op): Ditto.
(expand_vec_series): Refactor the framework of RVV auto-vectorization.
(expand_const_vector): Ditto.
(legitimize_move): Ditto.
(sew64_scalar_helper): Ditto.
(expand_tuple_move): Ditto.
(expand_vector_init_insert_elems): Ditto.
* config/riscv/riscv.cc (vector_zero_call_used_regs): Ditto.
* config/riscv/vector.md: Ditto.
|
|
aarch64-simd.md
In this PR we ICE because the substituted pattern for mla "lost" its predicate and constraint for operand 0
because the define_subst template:
[(set (match_operand:<VDBL> 0)
(vec_concat:<VDBL>
(match_dup 1)
(match_operand:VDZ 2 "aarch64_simd_or_scalar_imm_zero")))])
Uses match_operand instead of match_dup for operand 0. We can't use match_dup 0 for it because we need to specify the widened mode.
The problem is fixed by adding a "register_operand" predicate and "=w" constraint to the match_operand.
This makes sense conceptually too as the transformation we're targeting only applies to instructions that write a "w" register.
With this change the mddump pattern that ICEs goes from:
(define_insn ("aarch64_mlav4hi_vec_concatz_le")
[
(set (match_operand:V8HI 0 ("") ("")) <<------ Missing constraint!
(vec_concat:V8HI (plus:V4HI (mult:V4HI (match_operand:V4HI 2 ("register_operand") ("w"))
(match_operand:V4HI 3 ("register_operand") ("w")))
(match_operand:V4HI 1 ("register_operand") ("0")))
(match_operand:V4HI 4 ("aarch64_simd_or_scalar_imm_zero") (""))))
] ("(!BYTES_BIG_ENDIAN) && (TARGET_SIMD)") ("mla\t%0.4h, %2.4h, %3.4h")
to the proper:
(define_insn ("aarch64_mlav4hi_vec_concatz_le")
[
(set (match_operand:V8HI 0 ("register_operand") ("=w")) <<-------- Constraint in the right place
(vec_concat:V8HI (plus:V4HI (mult:V4HI (match_operand:V4HI 2 ("register_operand") ("w"))
(match_operand:V4HI 3 ("register_operand") ("w")))
(match_operand:V4HI 1 ("register_operand") ("0")))
(match_operand:V4HI 4 ("aarch64_simd_or_scalar_imm_zero") (""))))
] ("(!BYTES_BIG_ENDIAN) && (TARGET_SIMD)") ("mla\t%0.4h, %2.4h, %3.4h")
This seems to do the right thing for multi-alternative patterns as well; the annotated pattern for aarch64_cmltv8qi is:
(define_insn ("aarch64_cmltv8qi")
[
(set (match_operand:V8QI 0 ("register_operand") ("=w,w"))
(neg:V8QI (lt:V8QI (match_operand:V8QI 1 ("register_operand") ("w,w"))
(match_operand:V8QI 2 ("aarch64_simd_reg_or_zero") ("w,ZDz")))))
]
whereas the substituted version now looks like:
(define_insn ("aarch64_cmltv8qi_vec_concatz_le")
[
(set (match_operand:V16QI 0 ("register_operand") ("=w,w"))
(vec_concat:V16QI (neg:V8QI (lt:V8QI (match_operand:V8QI 1 ("register_operand") ("w,w"))
(match_operand:V8QI 2 ("aarch64_simd_reg_or_zero") ("w,ZDz"))))
(match_operand:V8QI 3 ("aarch64_simd_or_scalar_imm_zero") (""))))
]
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
PR target/109855
* config/aarch64/aarch64-simd.md (add_vec_concat_subst_le): Add predicate
and constraint for operand 0.
(add_vec_concat_subst_be): Likewise.
gcc/testsuite/ChangeLog:
PR target/109855
* gcc.target/aarch64/pr109855.c: New test.
|
|
The following fixes code hoisting to properly consider ANTIC_OUT instead
of ANTIC_IN. That's a bit expensive to re-compute but since we no
longer iterate we're doing this only once per BB which should be
acceptable. This avoids missing hoistings to the end of blocks where
something in the block clobbers the hoisted value.
PR tree-optimization/109849
* tree-ssa-pre.cc (do_hoist_insertion): Compute ANTIC_OUT
and use that to determine what to hoist.
* gcc.dg/tree-ssa/ssa-hoist-8.c: New testcase.
|
|
|
|
The encoder for CONSTRUCTORs assumes that all bit-fields (DECL_BIT_FIELD)
have integral types, but that's not the case in Ada where they may have
pretty much any type, resulting in a wrong encoding for them.
gcc/
* fold-const.cc (native_encode_initializer) <CONSTRUCTOR>: Apply the
specific treatment for bit-fields only if they have an integral type
and filter out non-integral bit-fields that do not start and end on
a byte boundary.
gcc/testsuite/
* gnat.dg/opt101.adb: New test.
* gnat.dg/opt101_pkg.ads: New helper.
|
|
Add new aspect Exceptional_Cases, which is intended for SPARK and
describes in which cases an exception will be raised, and optionally
supplies a postcondition that shall be verified in this case.
The implementation is heavily modeled after Subprogram_Variant, which in
turn was heavily modeled after Contract_Cases. Currently the aspect is
only analysed; the code infrastructure required to expand it is prepared
but empty. This is enough for the aspect to be verified by GNATprove.
gcc/ada/
* aspects.ads
(Aspect_Id): Add aspect identifier.
(Aspect_Argument): New aspect accepts an expression.
(Is_Representation_Aspect): New aspect is not a representation
aspect.
(Aspect_Names): Associate name with the new aspect identifier.
(Aspect_Delay): New aspect is never delayed.
* contracts.adb
(Add_Contract_Item): Store new aspect among contract items.
(Analyze_Entry_Or_Subprogram_Contract): Likewise.
(Analyze_Subprogram_Body_Stub_Contract): Likewise.
(Process_Contract_Cases): Expand new aspect, if present.
* contracts.ads
(Analyze_Entry_Or_Subprogram_Body_Contract): Mention new aspect in
spec.
(Analyze_Entry_Or_Subprogram_Contract): Likewise.
* einfo-utils.adb
(Get_Pragma): Allow new aspect to be picked by the backend.
* einfo-utils.ads
(Get_Pragma): Mention new aspect in spec.
* exp_prag.adb
(Expand_Pragma_Exceptional_Cases): Dummy expansion routine.
* exp_prag.ads
(Expand_Pragma_Exceptional_Cases): Add spec for expansion routine.
* inline.adb
(Remove_Aspects_And_Pragmas): Remove aspect from bodies to inline.
* par-prag.adb
(Par.Prag): Accept pragma in the parser, so it will be checked
later.
* sem_ch12.adb
(Implementation of Generic Contracts): Mention new aspect in
comment.
* sem_ch13.adb
(Analyze_Aspect_Specifications): Transform new aspect into a
corresponding pragma.
* sem_prag.adb
(Analyze_Exceptional_Cases_In_Decl_Part): Analyze aspect
expression; heavily inspired by the existing code for analysis of
Subprogram_Variant and exception handlers.
(Analyze_Pragma): Analyze pragma corresponding to the new aspect.
(Is_Non_Significant_Pragma_Reference): Add new pragma to the
table.
* sem_prag.ads
(Assertion_Expression_Pragma): New pragma acts as an assertion
expression, even though it is not currently expanded.
(Analyze_Exceptional_Cases_In_Decl_Part): Add spec.
* sem_util.adb
(Is_Subprogram_Contract_Annotation): Mark new annotation as a
subprogram contract, so the subprogram with it won't be inlined.
* sem_util.ads
(Is_Subprogram_Contract_Annotation): Mention new aspect in
comment.
* sinfo.ads
(Contract_Test_Cases): Mention new aspect in comment.
* snames.ads-tmpl: Add entries for the new name and pragma.
|
|
It turns out that skipping compiler-generated block scopes is problematic
when computing the public status of a subprogram, because this subprogram
may end up being nested in the elaboration procedure of a package spec or
body, in which case it may not be public.
This replaces the original fix with a pair of Push_Scope/Pop_Scope in the
Build_Predicate_Function procedure, as done elsewhere in similar cases.
gcc/ada/
* sem_ch13.adb (Build_Predicate_Functions): If the current scope
is not that of the type, push this scope and pop it at the end.
* sem_util.ads (Current_Scope_No_Loops_No_Blocks): Delete.
* sem_util.adb (Current_Scope_No_Loops_No_Blocks): Likewise.
(Set_Public_Status): Call again Current_Scope.
|
|
The compiler blows up (such as with a Storage_Error or Assert_Failure)
on a call to a limited build-in-place function occurring in the return
for a function with a limited class-wide result. Such a function
should include extra formals for a task master and activation chain
(because it's possible for a limited class-wide type to have values
with task parts), but when the enclosing function occurs within an
instantiation and the result subtype comes from a formal type, the
extra formals were missing for the enclosing function. As a result,
the attempt to retrieve the task master formal for passing along to
a BIP call in the return failed when calling Build_In_Place_Formal to
loop through the formals. When determining the need for the formals in
Create_Extra_Formals, Needs_BIP_Actual_Task_Actuals was returning False,
because Might_Have_Tasks incorrectly returned False due to the test
of Is_Limited_Record flag on the class-wide generic actual subtype's
Etype being False. Is_Limited_Record was not being properly inherited
by the class-wide type in the case of private extensions, because
Make_Class_Wide_Type was called in Analyze_Private_Extension_Declaration
before certain flags (such as Is_Limited_Record and Is_Controlled_Active)
are inherited later in Build_Derived_Record_Type (which will also call
Make_Class_Wide_Type). This is corrected by removing the early call
to Make_Class_Wide_Type.
gcc/ada/
* exp_ch6.adb (Might_Have_Tasks): Remove unneeded Etype call from
call to Is_Limited_Record, since that flag is now properly
inherited by class-wide types.
* sem_ch3.adb (Analyze_Private_Extension_Declaration): Remove call
to Make_Class_Wide_Type, which is done too early, and will later
be done in Build_Derived_Record_Type after flags such as
Is_Limited_Record and Is_Controlled_Active have been set on the
derived type.
|
|
gcc/ada/
* libgnat/s-stchop.adb (Stack_Check): Remove redundant parentheses.
|
|
Some of the calls to Error_Msg_N controlled by the flag
Warn_On_Redundant_Constructs missed the "?r?" tag in their message
string. This caused a misleading "[enabled by default]" label to appear
next to the error message.
Spotted while adding a warning about duplicated choices in exception
handlers.
gcc/ada/
* freeze.adb (Freeze_Record_Type): Add tag for redundant pragma Pack.
* sem_aggr.adb (Resolve_Record_Aggregate): Add tag for redundant OTHERS
choice.
* sem_ch8.adb (Use_One_Type): Add tag for redundant USE clauses.
|
|
When detecting duplicate choices in exception handlers we had
inconsistent pairs of First/Next_Non_Pragma and First_Non_Pragma/Next.
This was harmless, because exception choices don't allow pragmas at all,
e.g.:
when Program_Error | Constraint_Error | ...; -- pragma not allowed
and exception handlers only allow pragmas to appear as the first item
on the list, e.g.:
exception
pragma Inspection_Point; -- first item on the list of handlers
when Program_Error =>
<statements>
pragma Inspection_Point; -- last item on the list of statements
when Constraint_Error =>
...
However, it still seems cleaner to have consistent pairs of First/Next
and First_Non_Pragma/Next_Non_Pragma.
gcc/ada/
* sem_ch11.adb
(Check_Duplication): Fix inconsistent iteration.
(Others_Present): Iterate over handlers using First_Non_Pragma and
Next_Non_Pragma just like in Check_Duplication.
|
|
The problem is that, unlike for protected subprograms, the expansion of
cleanups for protected entries is not delayed when they contain package
instances with a body, so the cleanups are generated twice and this may
yield two finalizers if the secondary stack is used in the entry body.
This restores the delaying, which uncovers the missing propagation of the
Uses_Sec_Stack flag as is done for protected subprograms, which in turn
requires using a Corresponding_Spec field as for protected subprograms.
This also gets rid of the Delay_Subprogram_Descriptors flag on entities,
whose only remaining use in Expand_Cleanup_Actions was unreachable.
The last change is to unconditionally reset the scopes in the case of
protected subprograms when they are expanded, as is done in the case of
protected entries. This makes it possible to remove the code adjusting
the scope on the fly in Cleanup_Scopes but requires a few adjustments.
gcc/ada/
* einfo.ads (Delay_Subprogram_Descriptors): Delete.
* gen_il-fields.ads (Opt_Field_Enum): Remove
Delay_Subprogram_Descriptors.
* gen_il-gen-gen_entities.adb (Gen_Entities): Likewise.
* gen_il-gen-gen_nodes.adb (N_Entry_Body): Add Corresponding_Spec.
* sinfo.ads (Corresponding_Spec): Document new use.
(N_Entry_Body): Likewise.
* exp_ch6.adb (Expand_Protected_Object_Reference): Be prepared for
protected subprograms that have been expanded.
* exp_ch7.adb (Expand_Cleanup_Actions): Remove unreachable code.
* exp_ch9.adb (Build_Protected_Entry): Add a local variable for the
new block and propagate Uses_Sec_Stack from the corresponding spec.
(Expand_N_Protected_Body) <N_Subprogram_Body>: Unconditionally reset
the scopes of top-level entities in the new body.
* inline.adb (Cleanup_Scopes): Do not adjust the scope on the fly.
* sem_ch9.adb (Analyze_Entry_Body): Set Corresponding_Spec.
* sem_ch12.adb (Analyze_Package_Instantiation): Remove obsolete code
setting Delay_Subprogram_Descriptors and tidy up.
* sem_util.adb (Scope_Within): Deal with protected subprograms that
have been expanded.
(Scope_Within_Or_Same): Likewise.
|
|
The implementation of task attributes in the runtime defines an atomic clone
of System.Address, which is awkward for targets where addresses and pointers
have a specific representation, so this change replaces that with a pragma
Atomic_Components on the Attribute_Array type.
gcc/ada/
* libgnarl/s-taskin.ads (Atomic_Address): Delete.
(Attribute_Array): Add pragma Atomic_Components.
(Ada_Task_Control_Block): Adjust default value of Attributes.
* libgnarl/s-tasini.adb (Finalize_Attributes): Adjust type of local
variable.
* libgnarl/s-tataat.ads (Deallocator): Adjust type of parameter.
(To_Attribute): Adjust source type.
* libgnarl/a-tasatt.adb: Add clauses for System.Storage_Elements.
(New_Attribute): Adjust return type.
(Deallocate): Adjust type of parameter.
(To_Real_Attribute): Adjust source type.
(To_Address): Add target type.
(To_Attribute): Adjust source type.
(Fast_Path): Adjust tested type.
(Finalize): Compare with Null_Address.
(Reference): Likewise.
(Reinitialize): Likewise.
(Set_Value): Likewise. Add conversion to Integer_Address.
(Value): Likewise.
|
|
gcc/ada/
* scng.adb (Scan): Replace occurrences of All_Extensions_Allowed
by Core_Extensions_Allowed.
|
|
Introduce new ghost helper functions to facilitate proof.
gcc/ada/
* libgnat/s-valueu.adb (Scan_Raw_Unsigned): Use new helpers.
* libgnat/s-vauspe.ads (Raw_Unsigned_Starts_As_Based_Ghost,
Raw_Unsigned_Is_Based_Ghost): New ghost helper functions.
(Is_Raw_Unsigned_Format_Ghost, Scan_Split_No_Overflow_Ghost,
Scan_Split_Value_Ghost, Raw_Unsigned_Last_Ghost): Use new
helpers.
|
|
Improve -gnatyx to check additional complete conditions,
and introduce a new switch -gnatyz to check for unnecessary
parentheses according to operator precedence rules.
Enable -gnatyz as part of -gnatyg.
gcc/ada/
* par-ch5.adb, style.ads, styleg.adb, styleg.ads
(Check_Xtra_Parens): Remove extra parameter Enable.
(Check_Xtra_Parens_Precedence): New.
(P_Case_Statement): Add -gnatyx style check.
* sem_ch4.adb: Replace calls to Check_Xtra_Parens by
Check_Xtra_Parens_Precedence.
* stylesw.ads, stylesw.adb, usage.adb: Add support for
-gnatyz.
* doc/gnat_ugn/building_executable_programs_with_gnat.rst:
Update -gnatyxzg doc.
* sem_prag.adb, libgnat/s-regpat.adb,
libgnarl/s-interr__hwint.adb, libgnarl/s-interr__vxworks.adb:
Remove extra parens.
* par-ch3.adb (P_Discrete_Range): Do not emit a style check if
the expression is not a simple expression.
* gnat_ugn.texi: Regenerate.
|
|
Offset calculations should use the operator of System.Storage_Elements.
gcc/ada/
* libgnat/s-dwalin.adb (Enable_Cache): Use the subtract operator of
System.Storage_Elements to compute the offset.
(Symbolic_Address): Likewise.
|
|
The resolution must be identical inside and outside the System hierarchy.
gcc/ada/
* sem_res.adb (Resolve_Intrinsic_Operator): Always perform the same
resolution for the special mod operator of System.Storage_Elements.
|
|
gcc/ada/
* doc/gnat_rm.rst, doc/gnat_rm/gnat_language_extensions.rst,
doc/gnat_rm/implementation_defined_pragmas.rst:
* gnat_rm.texi: Regenerate.
|