|
We now inline main_1, confusing the expected number of vectorizations.
PR testsuite/120222
* gcc.dg/tree-ssa/gen-vect-28.c: Use noipa on main_1.
|
|
Since df_insn_rescan has been called by emit_insn_*, there is no need
to call it after calling emit_insn_*. Remove its unnecessary usages.
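For illustration, a schematic before/after of the pattern being cleaned up
(not the actual code from i386-features.cc):

  /* Before: explicit rescan after emitting, although emit_insn has
     already rescanned the new insn.  */
  rtx_insn *insn = emit_insn (gen_rtx_SET (dest, src));
  df_insn_rescan (insn);

  /* After: the redundant call is simply dropped.  */
  emit_insn (gen_rtx_SET (dest, src));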
PR target/120228
* config/i386/i386-features.cc (ix86_place_single_vector_set):
Remove df_insn_rescan after emit_insn_*.
(remove_partial_avx_dependency): Likewise.
(replace_vector_const): Likewise.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
|
|
Fix incorrect regular expression.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/arch-52.c: Fix regular expression.
|
|
Like r9-5152-gd1409ea5a2f759 but for the mips testcase.
gcc/testsuite/
* gcc.target/mips/pr54240.c: Scan phiopt2.
Signed-off-by: Chao-ying Fu <cfu@mips.com>
Signed-off-by: Aleksandar Rakic <aleksandar.rakic@htecgroup.com>
|
|
|
|
This patch complements the change to stv and uses COSTS_N_INSNS (...)/2
to convert move costs to COSTS_N_INSNS based costs used by vectorizer.
The patch makes pr99881 XPASS, so I removed the xfail, but it also makes
pr91446 fail. That test is about SLP:
/* { dg-options "-O2 -march=icelake-server -ftree-slp-vectorize -mtune-ctrl=^sse_typeless_stores" } */
typedef struct
{
unsigned long long width, height;
long long x, y;
} info;
extern void bar (info *);
void
foo (unsigned long long width, unsigned long long height,
long long x, long long y)
{
info t;
t.width = width;
t.height = height;
t.x = x;
t.y = y;
bar (&t);
}
/* { dg-final { scan-assembler-times "vmovdqa\[^\n\r\]*xmm\[0-9\]" 2 } } */
With the fixed costs the construction cost is now too large, so vectorization
does not happen. This is due to the hack that increases the cost to account
for the integer->SSE move, which I think we can handle incrementally.
gcc/ChangeLog:
* config/i386/i386.cc (ix86_widen_mult_cost): Use sse_op to cost
SSE integer addition.
(ix86_multiplication_cost): Use COSTS_N_INSNS (...)/2 to cost sse
loads.
(ix86_shift_rotate_cost): Likewise.
(ix86_vector_costs::add_stmt_cost): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr91446.c: Xfail.
* gcc.target/i386/pr99881.c: Remove xfail.
|
|
Add new function check_effective_target_xtensa_atomic and use it in the
check_effective_target_sync_int_long and
check_effective_target_sync_char_short.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp
(check_effective_target_xtensa_atomic): New function.
(check_effective_target_sync_int_long)
(check_effective_target_sync_char_short): Add test for xtensa.
|
|
arguments/returns
Until now (presumably since the transition to LRA), hard registers storing
function arguments or return values were being spilled undesirably when
TARGET_HARD_FLOAT is enabled.
/* example */
float test0(float a, float b) {
return a + b;
}
extern float foo(void);
float test1(void) {
return foo() * 3.14f;
}
;; before
test0:
entry sp, 48
wfr f0, a2
wfr f1, a3
add.s f0, f0, f1
s32i.n a2, sp, 0 ;; unwanted spilling-out
s32i.n a3, sp, 4 ;;
rfr a2, f0
retw.n
.literal .LC1, 1078523331
test1:
entry sp, 48
call8 foo
l32r a8, .LC1
wfr f0, a10
wfr f1, a8
mul.s f0, f0, f1
s32i.n a10, sp, 0 ;; unwanted spilling-out
rfr a2, f0
retw.n
Ultimately, that is because the costs of moving between integer and
floating-point hard registers are undefined and the default (large value)
is used. This patch fixes this.
;; after
test0:
entry sp, 32
wfr f1, a2
wfr f0, a3
add.s f0, f1, f0
rfr a2, f0
retw.n
.literal .LC1, 1078523331
test1:
entry sp, 32
call8 foo
l32r a8, .LC1
wfr f1, a10
wfr f0, a8
mul.s f0, f1, f0
rfr a2, f0
retw.n
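The shape of the hook change, as a sketch (the cost values below are
placeholders, not the committed numbers):

  static int
  xtensa_register_move_cost (machine_mode mode ATTRIBUTE_UNUSED,
                             reg_class_t from, reg_class_t to)
  {
    /* wfr/rfr move a value between the AR and FP register files in a
       single instruction, so the cost should be modest rather than the
       large default that made LRA prefer spilling to the stack.  */
    if ((from == AR_REGS && to == FP_REGS)
        || (from == FP_REGS && to == AR_REGS))
      return 4;   /* placeholder value */
    return 2;     /* baseline reg-reg move cost */
  }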
gcc/ChangeLog:
* config/xtensa/xtensa.cc (xtensa_register_move_cost):
Add appropriate move costs between AR_REGS and FP_REGS.
|
|
By changing the type of a variable in the cbl_declarative_t structure from "bool"
to "uint32_t", three uninitialized padding bytes were turned into initialized
bytes. This eliminates the valgrind error caused by those uninitialized values.
This is an interim fix, which expediently eliminates the valgrind problem. The
underlying design flaw, which involves turning a host-side C++ structure into
a run-time data block, is slated for complete replacement in the next few weeks.
libgcobol/ChangeLog:
PR cobol/119377
* common-defs.h (struct cbl_declaratives_t): Change "bool global" to
"uint32_t global".
|
|
Eighty-six testcases extracted from the run_move and run_misc COBOLworx
testsuite.
gcc/testsuite/ChangeLog:
* cobol.dg/group2/258_Nested_PERFORM.cob: New testcase.
* cobol.dg/group2/259_PERFORM_VARYING_BY_-0.2.cob: Likewise.
* cobol.dg/group2/338_Default_Arithmetic__1_.cob: Likewise.
* cobol.dg/group2/access_to_OPTIONAL_LINKAGE_item_not_passed.cob: Likewise.
* cobol.dg/group2/ALLOCATE___FREE_basic_default_versions.cob: Likewise.
* cobol.dg/group2/ALLOCATE___FREE_with_BASED_item__1_.cob: Likewise.
* cobol.dg/group2/ALLOCATE___FREE_with_BASED_item__2_.cob: Likewise.
* cobol.dg/group2/ALLOCATE_Rule_8_OPTION_INITIALIZE_with_figconst.cob: Likewise.
* cobol.dg/group2/Alphanumeric_and_binary_numeric.cob: Likewise.
* cobol.dg/group2/Alphanumeric_MOVE_with_truncation.cob: Likewise.
* cobol.dg/group2/ANY_LENGTH__1_.cob: Likewise.
* cobol.dg/group2/ANY_LENGTH__2_.cob: Likewise.
* cobol.dg/group2/ANY_LENGTH__3_.cob: Likewise.
* cobol.dg/group2/ANY_LENGTH__4_.cob: Likewise.
* cobol.dg/group2/ANY_LENGTH__5_.cob: Likewise.
* cobol.dg/group2/CALL_with_OMITTED_parameter.cob: Likewise.
* cobol.dg/group2/Class_check_with_reference_modification.cob: Likewise.
* cobol.dg/group2/Complex_HEX__VALUE_and_MOVE.cob: Likewise.
* cobol.dg/group2/Complex_IF.cob: Likewise.
* cobol.dg/group2/Concatenation_operator.cob: Likewise.
* cobol.dg/group2/CONTINUE_AFTER_1_SECONDS.cob: Likewise.
* cobol.dg/group2/CURRENCY_SIGN.cob: Likewise.
* cobol.dg/group2/CURRENCY_SIGN_WITH_PICTURE_SYMBOL.cob: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__1_.cob: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__2_.cob: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__3_.cob: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__4_.cob: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__5_.cob: Likewise.
* cobol.dg/group2/EC-SIZE-TRUNCATION_EC-SIZE-OVERFLOW.cob: Likewise.
* cobol.dg/group2/EC-SIZE-ZERO-DIVIDE__fixed_and_float.cob: Likewise.
* cobol.dg/group2/EXIT_PARAGRAPH.cob: Likewise.
* cobol.dg/group2/EXIT_PERFORM.cob: Likewise.
* cobol.dg/group2/EXIT_PERFORM_CYCLE.cob: Likewise.
* cobol.dg/group2/EXIT_SECTION.cob: Likewise.
* cobol.dg/group2/Fixed_continuation_indicator.cob: Likewise.
* cobol.dg/group2/FLOAT-LONG_with_SIZE_ERROR.cob: Likewise.
* cobol.dg/group2/FLOAT-SHORT___FLOAT-LONG_w_o_SIZE_ERROR.cob: Likewise.
* cobol.dg/group2/FLOAT-SHORT_with_SIZE_ERROR.cob: Likewise.
* cobol.dg/group2/Index_and_parenthesized_expression.cob: Likewise.
* cobol.dg/group2/LENGTH_OF_omnibus.cob: Likewise.
* cobol.dg/group2/LOCAL-STORAGE__3__with_recursive_PROGRAM-ID.cob: Likewise.
* cobol.dg/group2/LOCAL-STORAGE__4__with_recursive_PROGRAM-ID_..._USING.cob: Likewise.
* cobol.dg/group2/MOVE_indexes.cob: Likewise.
* cobol.dg/group2/MOVE_integer_literal_to_alphanumeric.cob: Likewise.
* cobol.dg/group2/MOVE_to_edited_item__1_.cob: Likewise.
* cobol.dg/group2/MOVE_to_edited_item__2_.cob: Likewise.
* cobol.dg/group2/MOVE_to_item_with_simple_and_floating_insertion.cob: Likewise.
* cobol.dg/group2/MOVE_to_itself.cob: Likewise.
* cobol.dg/group2/MOVE_to_JUSTIFIED_item.cob: Likewise.
* cobol.dg/group2/MOVE_with_group_refmod.cob: Likewise.
* cobol.dg/group2/MOVE_with_refmod.cob: Likewise.
* cobol.dg/group2/MOVE_with_refmod__variable_.cob: Likewise.
* cobol.dg/group2/MOVE_Z_literal_.cob: Likewise.
* cobol.dg/group2/Multi-target_MOVE_with_subscript_re-evaluation.cob: Likewise.
* cobol.dg/group2/Non-numeric_data_in_numeric_items__1_.cob: Likewise.
* cobol.dg/group2/Non-numeric_data_in_numeric_items__2_.cob: Likewise.
* cobol.dg/group2/Non-overflow_after_overflow.cob: Likewise.
* cobol.dg/group2/OCCURS_clause_with_1_entry.cob: Likewise.
* cobol.dg/group2/OSVS_Arithmetic_Test__2_.cob: Likewise.
* cobol.dg/group2/PERFORM_..._CONTINUE.cob: Likewise.
* cobol.dg/group2/PERFORM_inline__1_.cob: Likewise.
* cobol.dg/group2/PERFORM_inline__2_.cob: Likewise.
* cobol.dg/group2/PERFORM_type_OSVS.cob: Likewise.
* cobol.dg/group2/PIC_ZZZ-__ZZZ_.cob: Likewise.
* cobol.dg/group2/Quick_check_of_PIC_XX_COMP-5.cob: Likewise.
* cobol.dg/group2/Quote_marks_in_comment_paragraphs.cob: Likewise.
* cobol.dg/group2/Recursive_PERFORM_paragraph.cob: Likewise.
* cobol.dg/group2/REDEFINES_values_on_FILLER_and_INITIALIZE.cob: Likewise.
* cobol.dg/group2/SORT__EBCDIC_table_sort__1_.cob: Likewise.
* cobol.dg/group2/SORT__EBCDIC_table_sort__2_.cob: Likewise.
* cobol.dg/group2/SORT__table_sort__2_.cob: Likewise.
* cobol.dg/group2/SORT__table_sort__3A_.cob: Likewise.
* cobol.dg/group2/SORT__table_sort__3B_.cob: Likewise.
* cobol.dg/group2/SORT__table_sort.cob: Likewise.
* cobol.dg/group2/SOURCE_FIXED_FREE_directives.cob: Likewise.
* cobol.dg/group2/Static_CALL_with_ON_EXCEPTION__with_-fno-static-call_.cob: Likewise.
* cobol.dg/group2/_-static__compilation.cob: Likewise.
* cobol.dg/group2/STOP_RUN_WITH_ERROR_STATUS.cob: Likewise.
* cobol.dg/group2/STOP_RUN_WITH_NORMAL_STATUS.cob: Likewise.
* cobol.dg/group2/STRING___UNSTRING__NOT__ON_OVERFLOW.cob: Likewise.
* cobol.dg/group2/STRING_with_subscript_reference.cob: Likewise.
* cobol.dg/group2/UNSTRING_DELIMITED_ALL_LOW-VALUE.cob: Likewise.
* cobol.dg/group2/UNSTRING_DELIMITED_ALL_SPACE-2.cob: Likewise.
* cobol.dg/group2/UNSTRING_DELIMITED_POINTER.cob: Likewise.
* cobol.dg/group2/UNSTRING_DELIMITER_IN.cob: Likewise.
* cobol.dg/group2/UNSTRING_with_FUNCTION___literal.cob: Likewise.
* cobol.dg/group2/258_Nested_PERFORM.out: Known-good results file.
* cobol.dg/group2/259_PERFORM_VARYING_BY_-0.2.out: Likewise.
* cobol.dg/group2/338_Default_Arithmetic__1_.out: Likewise.
* cobol.dg/group2/access_to_OPTIONAL_LINKAGE_item_not_passed.out: Likewise.
* cobol.dg/group2/ALLOCATE___FREE_basic_default_versions.out: Likewise.
* cobol.dg/group2/ALLOCATE_Rule_8_OPTION_INITIALIZE_with_figconst.out: Likewise.
* cobol.dg/group2/Alphanumeric_MOVE_with_truncation.out: Likewise.
* cobol.dg/group2/ANY_LENGTH__1_.out: Likewise.
* cobol.dg/group2/ANY_LENGTH__2_.out: Likewise.
* cobol.dg/group2/ANY_LENGTH__3_.out: Likewise.
* cobol.dg/group2/ANY_LENGTH__5_.out: Likewise.
* cobol.dg/group2/CALL_with_OMITTED_parameter.out: Likewise.
* cobol.dg/group2/Complex_HEX__VALUE_and_MOVE.out: Likewise.
* cobol.dg/group2/Complex_IF.out: Likewise.
* cobol.dg/group2/Concatenation_operator.out: Likewise.
* cobol.dg/group2/CONTINUE_AFTER_1_SECONDS.out: Likewise.
* cobol.dg/group2/CURRENCY_SIGN.out: Likewise.
* cobol.dg/group2/CURRENCY_SIGN_WITH_PICTURE_SYMBOL.out: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__1_.out: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__2_.out: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__3_.out: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__4_.out: Likewise.
* cobol.dg/group2/DECIMAL-POINT_is_COMMA__5_.out: Likewise.
* cobol.dg/group2/EC-SIZE-TRUNCATION_EC-SIZE-OVERFLOW.out: Likewise.
* cobol.dg/group2/EC-SIZE-ZERO-DIVIDE__fixed_and_float.out: Likewise.
* cobol.dg/group2/EXIT_PERFORM_CYCLE.out: Likewise.
* cobol.dg/group2/EXIT_PERFORM.out: Likewise.
* cobol.dg/group2/Fixed_continuation_indicator.out: Likewise.
* cobol.dg/group2/FLOAT-LONG_with_SIZE_ERROR.out: Likewise.
* cobol.dg/group2/FLOAT-SHORT___FLOAT-LONG_w_o_SIZE_ERROR.out: Likewise.
* cobol.dg/group2/FLOAT-SHORT_with_SIZE_ERROR.out: Likewise.
* cobol.dg/group2/Index_and_parenthesized_expression.out: Likewise.
* cobol.dg/group2/LENGTH_OF_omnibus.out: Likewise.
* cobol.dg/group2/LOCAL-STORAGE__3__with_recursive_PROGRAM-ID.out: Likewise.
* cobol.dg/group2/LOCAL-STORAGE__4__with_recursive_PROGRAM-ID_..._USING.out: Likewise.
* cobol.dg/group2/MOVE_integer_literal_to_alphanumeric.out: Likewise.
* cobol.dg/group2/MOVE_to_edited_item__1_.out: Likewise.
* cobol.dg/group2/MOVE_to_edited_item__2_.out: Likewise.
* cobol.dg/group2/MOVE_to_item_with_simple_and_floating_insertion.out: Likewise.
* cobol.dg/group2/MOVE_to_JUSTIFIED_item.out: Likewise.
* cobol.dg/group2/MOVE_Z_literal_.out: Likewise.
* cobol.dg/group2/Multi-target_MOVE_with_subscript_re-evaluation.out: Likewise.
* cobol.dg/group2/Non-numeric_data_in_numeric_items__1_.out: Likewise.
* cobol.dg/group2/Non-numeric_data_in_numeric_items__2_.out: Likewise.
* cobol.dg/group2/OSVS_Arithmetic_Test__2_.out: Likewise.
* cobol.dg/group2/Quick_check_of_PIC_XX_COMP-5.out: Likewise.
* cobol.dg/group2/Quote_marks_in_comment_paragraphs.out: Likewise.
* cobol.dg/group2/Recursive_PERFORM_paragraph.out: Likewise.
* cobol.dg/group2/REDEFINES_values_on_FILLER_and_INITIALIZE.out: Likewise.
* cobol.dg/group2/SORT__table_sort__2_.out: Likewise.
* cobol.dg/group2/SORT__table_sort__3A_.out: Likewise.
* cobol.dg/group2/SORT__table_sort__3B_.out: Likewise.
* cobol.dg/group2/SOURCE_FIXED_FREE_directives.out: Likewise.
* cobol.dg/group2/Static_CALL_with_ON_EXCEPTION__with_-fno-static-call_.out: Likewise.
* cobol.dg/group2/_-static__compilation.out: Likewise.
* cobol.dg/group2/STRING___UNSTRING__NOT__ON_OVERFLOW.out: Likewise.
* cobol.dg/group2/UNSTRING_with_FUNCTION___literal.out: Likewise.
|
|
The PR120089 fix added more PHIs to LOOP_VINFO_EARLY_BREAKS_LIVE_IVS
but did not check that we only add PHIs with a latch argument. The
following adds this missing check.
PR tree-optimization/120211
* tree-vect-stmts.cc (vect_stmt_relevant_p): Only add PHIs
from the loop header to LOOP_VINFO_EARLY_BREAKS_LIVE_IVS.
* gcc.dg/vect/vect-early-break_135-pr120211.c: New testcase.
* gcc.dg/torture/pr120211-1.c: Likewise.
|
|
This bug was another case of generating a formal arglist from
an actual one where we should not have done so. The fix is
straightforward: If we have resolved the formal arglist, we should
not generate a new one.
OK for trunk and backport?
gcc/fortran/ChangeLog:
PR fortran/120163
* gfortran.h: Add formal_resolved to gfc_symbol.
* resolve.cc (gfc_resolve_formal_arglist): Set it.
(resolve_function): Do not call gfc_get_formal_from_actual_arglist
if we already resolved a formal arglist.
(resolve_call): Likewise.
gcc/testsuite/ChangeLog:
PR fortran/120163
* gfortran.dg/interface_61.f90: New test.
|
|
This patch introduces support for RISC-V Profiles RVA23 and RVB23 [1],
enabling developers to utilize these profiles through the -march option.
[1] https://github.com/riscv/riscv-profiles/releases/tag/rva23-rvb23-ratified
Version log:
Update the testcases to use lowercase letters.
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc: New profile.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/arch-53.c: New test.
* gcc.target/riscv/arch-54.c: New test.
|
|
This patch introduces support for RISC-V Profiles RV20 and RV22 [1],
enabling developers to utilize these profiles through the -march option.
[1] https://github.com/riscv/riscv-profiles/releases/tag/v1.0
Version log:
Using lowercase letters to represent profiles.
Using '_' as the separator between profiles and other RISC-V extensions.
Add descriptions in invoke.texi.
Checking whether a '_' exists between profiles and additional extensions.
Using std::string to avoid memory problems.
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc (struct riscv_profiles): New struct.
(riscv_subset_list::parse_profiles): New parser.
(riscv_subset_list::parse_base_ext): Ditto.
* config/riscv/riscv-subset.h: New def.
* doc/invoke.texi: New option descriptions.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/arch-49.c: New test.
* gcc.target/riscv/arch-50.c: New test.
* gcc.target/riscv/arch-51.c: New test.
* gcc.target/riscv/arch-52.c: New test.
|
|
vectors of DFP [PR119909]
On PowerPC, there is a psabi warning for argument passing of a DFP vector.
We are not expecting this warning and we get a failure due to it.
Adding -Wno-psabi fixes the testcase.
Committed as obvious after a quick test.
gcc/testsuite/ChangeLog:
PR testsuite/119909
* gcc.dg/torture/pr119131-1.c: Add -Wno-psabi.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
|
|
|
|
This commit includes changes to the parser's auto-detection heuristic for source
code formatting. The heuristic now examines the line containing "program-id" to
determine whether the code is in ISO "fixed-form reference format", or ISO
"free-form reference format", or the IBM "extended source format".
Changes to the parser also required changes to token processing.
On the code generation side, there are some changes that begin to process
numeric literals in order to generate more efficient code using information known
at compilation time.
gcc/cobol/ChangeLog:
PR cobol/119337
* Make-lang.in: Change how $(FLEX) is invoked.
* cdf.y: Change parser tokens.
* gcobc: Changed how name is inferred for PR119337.
* gcobol.1: Documentation for SOURCE format heuristic.
* genapi.cc: Eliminate __gg__odo_violation.
(parser_display_field): Change comment.
* genutil.cc: Eliminate __gg__odo_violation.
(REFER): New macro for analyzing subscript/refmod calculations.
(get_integer_value): Likewise.
(get_data_offset): Eliminate __gg__odo_violation.
(scale_by_power_of_ten_N): Eliminate unnecessary var_decl_rdigits operation.
(refer_is_clean): Check for FldLiteralN.
(REFER_CHECK): Eliminate.
(refer_refmod_length): Streamline var_decl_rdigits processing.
(refer_fill_depends): Likewise.
(refer_offset): Streamline processing when FldLiteralN.
(refer_size): Tag with REFER macro.
(refer_size_dest): Likewise.
(refer_size_source): Likewise.
* genutil.h (get_integer_value): Delete declaration for odo_violation;
change comment for get_integer_value
(REFER_CHECK): Delete declaration.
(refer_check): Delete #define.
* lexio.cc (is_fixed_format): Changes for source format auto-detect.
(is_reference_format): Likewise.
(check_source_format_directive): Likewise.
(valid_sequence_area): Likewise.
(is_p): Likewise.
(is_program_id): Likewise.
(likely_nist_file): Likewise.
(infer_reference_format): Likewise.
(cdftext::free_form_reference_format): Likewise.
* parse.y: Token changes.
* parse_ante.h (class tokenset_t): Likewise.
(class current_tokens_t): Likewise.
(cmd_or_env_special_of): Likewise.
* scan.l: Likewise.
* scan_ante.h (bcomputable): Likewise.
(keyword_alias_add): Likewise.
(struct bint_t): Likewise.
(binary_integer_usage): Likewise.
(binary_integer_usage_of): Likewise.
* scan_post.h (start_condition_str): Likewise.
* symbols.cc (symbol_table_init): Formatting.
* symbols.h (struct cbl_field_data_t): Add "input" method to field_data_t.
(keyword_alias_add): Add forward declaration.
(binary_integer_usage_of): Likewise.
* token_names.h: Change list of tokens.
* util.cc (iso_cobol_word): Change list of COBOL reserved words.
libgcobol/ChangeLog:
* common-defs.h (ec_cmp): Delete "getenv("match_declarative")" calls.
(enabled_exception_match): Delete "getenv("match_declarative")" calls.
* libgcobol.cc: Eliminate __gg__odo_violation.
gcc/testsuite/ChangeLog:
* cobol.dg/group1/simple-if.cob: Make explicitly >>SOURCE FREE
|
|
Replace
rtx dest = SET_SRC (set);
with
rtx src = SET_SRC (set);
in replace_vector_const to avoid confusion.
PR target/92080
PR target/117839
* config/i386/i386-features.cc (replace_vector_const): Change
dest to src.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
|
|
PR fortran/102891
gcc/fortran/ChangeLog:
* dependency.cc (gfc_ref_needs_temporary_p): Within an array
reference, inquiry references of complex variables generally
need a temporary.
gcc/testsuite/ChangeLog:
* gfortran.dg/transfer_array_subref.f90: New test.
|
|
This patch fixes some of the problems with costing in the scalar-to-vector pass.
In particular:
1) The pass uses optimize_insn_for_size, which is intended to be used by
expanders and splitters and requires the optimization pass to call
set_rtl_profile (bb) for the currently processed bb.
This is not done, so we get random stale info about the hotness of the insn.
2) Register allocator move costs are all relative to the integer reg-reg move,
which has a cost of 2, so they are (except for the size tables and i386)
the latency of the instruction multiplied by 2.
These costs have been duplicated and are now used in combination with
rtx costs, which are all based on COSTS_N_INSNS, which multiplies latency
by 4.
Some of the vectorizer costing contains COSTS_N_INSNS (move_cost) / 2
to compensate, but some new code does not. This patch adds the compensation.
Perhaps we should update the cost tables to use COSTS_N_INSNS everywhere,
but I think we want to fix the inconsistencies first. Also the tables would
get optically much longer, since we have many move costs and COSTS_N_INSNS
is a lot of characters.
3) The variable m, which decides how much to multiply the integer variant (to
account for the fact that with -m32 all 64-bit computations need 2
instructions), is declared unsigned, which makes the signed computation of the
instruction gain be done in an unsigned type and breaks e.g. for division.
4) I added integer_to_sse costs, which are currently all duplicates of
sse_to_integer. AMD chips are asymmetric and moving in one direction is faster
than in the other. I will change the costs incrementally once the vectorizer
part is fixed up, too.
There are two failures: gcc.target/i386/minmax-6.c and gcc.target/i386/minmax-7.c.
Both test STV on Haswell, which no longer triggers since SSE->INT and INT->SSE moves
are now more expensive.
There is only one instruction to convert:
Computing gain for chain #1...
Instruction gain 8 for 11: {r110:SI=smax(r116:SI,0);clobber flags:CC;}
Instruction conversion gain: 8
Registers conversion cost: 8 <- this is integer_to_sse and sse_to_integer
Total gain: 0
The total gain used to be 4, since the patch doubles the conversion costs.
According to Agner Fog's tables the cost should be 1 cycle, which is correct
here.
The final code generated is:
vmovd %esi, %xmm0 * latency 1
cmpl %edx, %esi
je .L2
vpxor %xmm1, %xmm1, %xmm1 * latency 1
vpmaxsd %xmm1, %xmm0, %xmm0 * latency 1
vmovd %xmm0, %eax * latency 1
imull %edx, %eax
cltq
movzwl (%rdi,%rax,2), %eax
ret
cmpl %edx, %esi
je .L2
xorl %eax, %eax * latency 1
testl %esi, %esi * latency 1
cmovs %eax, %esi * latency 2
imull %edx, %esi
movslq %esi, %rsi
movzwl (%rdi,%rsi,2), %eax
ret
The instructions with latency info are those that really differ.
So the unconverted code has a sum of latencies of 4 and a real latency of 3.
The converted code has a sum of latencies of 4 and a real latency of 3
(vmovd+vpmaxsd+vmovd).
So I do not quite see that it should be a win.
There is also a bug in costing MIN/MAX:
case ABS:
case SMAX:
case SMIN:
case UMAX:
case UMIN:
/* We do not have any conditional move cost, estimate it as a
reg-reg move. Comparisons are costed as adds. */
igain += m * (COSTS_N_INSNS (2) + ix86_cost->add);
/* Integer SSE ops are all costed the same. */
igain -= ix86_cost->sse_op;
break;
Now COSTS_N_INSNS (2) is not quite right, since a reg-reg move should be 1 or perhaps 0.
For Haswell cmov really is 2 cycles, but I guess we want to have that in the cost vectors
like all other instructions.
I am not sure this is really a win in this case (the other minmax testcases seem to make
sense). I have xfailed it for now and will check whether that affects SPEC on the LNT testers.
I will proceed with similar fixes on the vectorizer cost side. Sadly those introduce
quite some differences in the testsuite (partly triggered by other costing problems,
such as the one with scatter/gather).
gcc/ChangeLog:
* config/i386/i386-features.cc
(general_scalar_chain::vector_const_cost): Add BB parameter; handle
size costs; use COSTS_N_INSNS to compute move costs.
(general_scalar_chain::compute_convert_gain): Use optimize_bb_for_size
instead of optimize_insn_for_size; use COSTS_N_INSNS to compute move costs;
update calls of general_scalar_chain::vector_const_cost; use
ix86_cost->integer_to_sse.
(timode_immed_const_gain): Add bb parameter; use
optimize_bb_for_size_p.
(timode_scalar_chain::compute_convert_gain): Use optimize_bb_for_size_p.
* config/i386/i386-features.h (class general_scalar_chain): Update
prototype of vector_const_cost.
* config/i386/i386.h (struct processor_costs): Add integer_to_sse.
* config/i386/x86-tune-costs.h (struct processor_costs): Copy
sse_to_integer to integer_to_sse everywhere.
gcc/testsuite/ChangeLog:
* gcc.target/i386/minmax-6.c: Xfail test that pmax is used.
* gcc.target/i386/minmax-7.c: Xfail test that pmin is used.
|
|
As the following testcase shows, debug info for unsigned(kind=1)
and unsigned(kind=4) vars is wrong while unsigned(kind=2), unsigned(kind=8)
and unsigned(kind=16) look right.
Instead of objects having unsigned(kind=1) type they have character(kind=1)
and instead of unsigned(kind=4) they have character(kind=4).
This means that in gdb e.g. an unsigned(kind=1) :: a(2) variable initialized to
97 will print as 'aa' rather than (97, 97) etc.
While there can be just one unsigned_char_type_node and one
unsigned_type_node type, each can have an arbitrary number of variants
(e.g. consider C
typedef unsigned char uc;
where uc is a variant type to unsigned char) or even distinct types
with different TYPE_MAIN_VARIANT.
The following patch uses a variant of the character(kind=4) type
for unsigned(kind=4) and a distinct type based on character(kind=1)
type for unsigned(kind=1). The reason for the latter is that
unsigned_char_type_node has TYPE_STRING_FLAG set on it, so it has
DW_AT_encoding DW_ATE_unsigned_char rather than DW_ATE_unsigned and
so the debugger then likes to print it as characters rather than numbers.
That is IMHO in Fortran desirable for character(kind=1) but not for
unsigned(kind=1). I've made sure TYPE_CANONICAL of the unsigned(kind=1)
type is still character(kind=1), so they are considered compatible by
the middle-end also e.g. for aliasing etc.
2025-05-10 Jakub Jelinek <jakub@redhat.com>
PR fortran/120193
* trans-types.cc (gfc_init_types): For flag_unsigned use
build_distinct_type_copy or build_variant_type_copy from
gfc_character_types[index_char] if index_char > -1 instead of
gfc_character_types[index_char] or
gfc_build_unsigned_type (&gfc_unsigned_kinds[index]).
* gfortran.dg/guality/pr120193.f90: New test.
|
|
This patch fixes a simple typo in a comment in libgfortran.
No user facing change here.
libgfortran/ChangeLog:
* io/read.c (read_f): Comment typo, explict -> explicit.
Signed-off-by: Yuao Ma <c8ef@outlook.com>
|
|
My recent changes to bit-test switch lowering broke the pr99988.c testcase.
The testcase assumes a switch will be lowered using jump tables. Make
the testcase run with -fno-bit-tests.
Pushed as obvious.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/pr99988.c: Add -fno-bit-tests.
Signed-off-by: Filip Kastl <fkastl@suse.cz>
|
|
I have mistakenly assumed that switch lowering cannot encounter a switch
with zero clusters. This patch removes the relevant assert and instead
gives up bit-test lowering when this happens.
PR tree-optimization/120080
gcc/ChangeLog:
* tree-switch-conversion.cc (bit_test_cluster::find_bit_tests):
Replace assert with return.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr120080.c: New test.
Signed-off-by: Filip Kastl <fkastl@suse.cz>
|
|
So mvconst_internal's primary benefit is in constant synthesis not impacting
the combine budget in terms of the number of instructions it is willing to
combine together at any given time. The downside is mvconst_internal breaks
combine's toplevel costing model and as a result many other patterns have to be
implemented as define_insn_and_splits rather than the often more natural
define_splits.
This primarily impacts logical operations where we want to see the constant
operand and potentially simplify the logical with other nearby logicals or
shifts.
We can reduce our reliance on mvconst_internal and generate better code for
various cases by generating better initial code for logical operations.
So let's assume we have an inclusive-or of a register with a nontrivial
constant. Right now we will load the nontrivial constant into a new pseudo
(using multiple instructions), then emit a two register source ior operation.
For some cases we can just generate the code we want at expansion time.
Concretely let's take this testcase:
> unsigned long foo(unsigned long src) { return src | 0x8800000000000007; }
Right now we generate this code:
> li a5,-15
> slli a5,a5,59
> addi a5,a5,7
> or a0,a0,a5
The first three instructions are synthesizing the constant. The last
instruction performs the desired operation. But we can do better:
> ori a0,a0,7
> bseti a0,a0,59
> bseti a0,a0,63
Notice how we never even bother to synthesize the constant.
IOR/XOR are pretty simple and this patch focuses exclusively on those. We use
[x]ori to set whatever low 11 bits we need, then bset/binv for a small number
of higher bits. We use the cost of constant synthesis as our budget.
We also support a couple special cases. First, we might be able to rotate the
source value such that all the bits we want to manipulate are in the low 11
bits. So we rotate the source, manipulate the bits, then rotate things back to
where they belong. I didn't see this trigger in spec, but I did trivially find
a testcase where it was likely faster.
Second, we can have cases where we want to invert most of the bits, but a small
number are supposed to be preserved. We can pre-flip the bits we want to
preserve with binv, then invert the whole register with not (which puts the
bits to be preserved back in their original state).
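A hypothetical source example of that second case (the sequence actually
chosen still depends on the synthesis cost budget):

  /* XOR with a constant that has every bit set except bit 4: pre-flip the
     preserved bit with binv, then invert the whole register with not,
     which puts bit 4 back in its original state.  */
  unsigned long frob (unsigned long src) { return src ^ ~(1UL << 4); }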
I suspect there are likely a few more cases that could be improved, but the
patch should stand on its own now and getting it out of the way allows us to
focus on logical AND which is far tougher, but also more important in the task
of removing mvconst_internal.
As we're not removing mvconst_internal yet, this patch is mostly a nop. I did
look at spec before/after and didn't see anything particularly interesting. I
also temporarily removed mvconst_internal and looked at spec before/after to
hopefully ensure we weren't missing anything obvious in the XOR/IOR cases.
Obviously that latter test showed all kinds of regressions with AND.
We're still working through implementation details on the AND case and
determining what bridge patterns we're going to need to ensure we don't
regress. But this XOR/IOR patch is in good enough shape that it can go
forward now.
Naturally this has been run through my tester (bootstrap & regression test is
in flight, but won't finish for many more hours). Obviously I'm quite
interested in anything spit out by the pre-commit CI system.
gcc/
* config/riscv/iterators.md (OPTAB): New iterator.
* config/riscv/predicates.md (arith_or_zbs_operand): Remove.
(reg_or_const_int_operand): New predicate.
* config/riscv/riscv-protos.h (synthesize_ior_xor): Prototype.
* config/riscv/riscv.cc (synthesize_ior_xor): New function.
* config/riscv/riscv.md (ior/xor expander): Use synthesize_ior_xor.
gcc/testsuite/
* gcc.target/riscv/ior-synthesis-1.c: New test.
* gcc.target/riscv/ior-synthesis-2.c: New test.
* gcc.target/riscv/xor-synthesis-1.c: New test.
* gcc.target/riscv/xor-synthesis-2.c: New test.
* gcc.target/riscv/xor-synthesis-3.c: New test.
Co-authored-by: Jeff Law <jlaw@ventanamicro.com>
|
|
This commit decreases the default preferred stack boundary to 4.
In i386-options.cc, there's
ix86_default_incoming_stack_boundary = PREFERRED_STACK_BOUNDARY;
which sets the default incoming stack boundary to this value, if it's not
overridden by other options or attributes.
Previously, GCC preferred 16-byte alignment like other platforms, unless
`-miamcu` was specified. However, the Microsoft x86 ABI only requires the
stack be aligned to 4-byte boundaries. Callback functions from MSVC code may
break this assumption by GCC (see reference below), causing local variables
to be misaligned.
For compatibility reasons, when the attribute `force_align_arg_pointer` is
attached to a function, it continues to ensure the stack is at least aligned
to a 16-byte boundary, as the documentation seems to suggest.
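As an illustration (hypothetical code, not part of the patch), a callback
handed to MSVC-compiled code that still needs GCC's expected alignment can
request realignment explicitly:

  /* Realign the stack on entry so 'buf' gets the alignment GCC assumes,
     even though the MSVC caller only guarantees 4-byte alignment.  */
  __attribute__((force_align_arg_pointer))
  void my_callback (double *out)
  {
    double buf[2] = { 3.0, 0.14 };
    out[0] = buf[0] + buf[1];
  }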
After this change, `STACK_REALIGN_DEFAULT` no longer has an effect on this
target, so it is removed.
Reference: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111107#c9
Signed-off-by: LIU Hao <lh_mouse@126.com>
Signed-off-by: Jonathan Yong <10walls@gmail.com>
gcc/ChangeLog:
PR target/111107
* config/i386/cygming.h (PREFERRED_STACK_BOUNDARY_DEFAULT): Override
definition from i386.h.
(STACK_REALIGN_DEFAULT): Undefine, as it no longer has an effect.
* config/i386/i386.cc (ix86_update_stack_boundary): Force minimum
128-bit alignment if `force_align_arg_pointer`.
|
|
If the vector version of clmul (vclmul) is available and the scalar
one is not, use it for CRC expansion.
gcc/
* config/riscv/bitmanip.md (crc_rev<ANYI1:mode><ANYI:mode>4): Check
TARGET_ZVBC.
* config/riscv/riscv.cc (expand_crc_using_clmul): Emit code using
vclmul if TARGET_ZVBC.
gcc/testsuite
* gcc.target/riscv/rvv/base/crc-builtin-zvbc.c: New test.
|
|
gcc.dg/pr87600.h and gcc.dg/pr89313.c test for __powerpc__ and
__POWERPC__ to choose ppc register names, but ppc-elf defines neither;
it defines __PPC__, so test for that as well.
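A sketch of the adjusted guard (the register names here are only
placeholders; the header picks target-appropriate ones):

  #if defined (__powerpc__) || defined (__POWERPC__) || defined (__PPC__)
  # define REG1 "r4"
  # define REG2 "r5"
  #endif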
for gcc/testsuite/ChangeLog
* gcc.dg/pr87600.h (REG1, REG2): Test for __PPC__ as well.
* gcc.dg/pr89313.c (REG): Likewise.
|
|
gcc.target/powerpc/block-cmp-8.c is an execution test on ilp32. It
tests for support for the 64-bit ISA in the compiler, but not for the
ability to execute powerpc64 instructions, so the test fails on 32-bit
hardware. Require powerpc64 instead.
for gcc/testsuite/ChangeLog
* gcc.target/powerpc/block-cmp-8.c: Require powerpc64
instruction execution support.
|
|
vxworks's dup function is not declared in unistd.h, but c++23/print.cc
expects to be able to call it if unistd.h is available. On vxworks,
the function is only declared in ioLib.h, so arrange to include it.
for libstdc++-v3/ChangeLog
* src/c++23/print.cc [__VXWORKS__]: Include ioLib.h.
|
|
Here tsubst_baselink was returning error_mark_node silently despite
tf_error; we need to actually give an error.
PR c++/120204
gcc/cp/ChangeLog:
* pt.cc (tsubst_baselink): Always error if lookup fails.
gcc/testsuite/ChangeLog:
* g++.dg/cpp1y/constexpr-recursion3.C: New test.
|
|
|
|
In 20_util/variant/visit_member.cc, instantiation of the variant friend
declaration of __get for variant<test01()::X> was being marked as internal
because that variant specialization is itself internal. And therefore
check_module_override didn't try to merge it with the non-exported
namespace-scope declaration of __get.
But the template parms of variant are not part of the friend template's
identity, so they should not affect its visibility. If they are substituted
into the friend declaration, we'll handle that when looking at the
declaration itself.
This change no longer seems necessary to fix the testcase, but does still
seem correct. We definitely still get here during tsubst_friend_function.
gcc/cp/ChangeLog:
* decl2.cc (determine_visibility): Ignore args for friend templates.
|
|
My r16-479 adjustment to the PR99599 workaround broke on a class with a
varargs constructor.
It also occurred to me that we don't need to do non-dep conversion checking
in two phases when concepts aren't supported.
PR c++/99599
PR c++/120185
gcc/cp/ChangeLog:
* class.cc (type_has_converting_constructor): Handle null parm.
* pt.cc (fn_type_unification): Skip early non-dep checking if
no concepts.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/concepts-nondep6.C: New test.
|
|
The VRP2 pass turns:
# prephitmp_3 = PHI <0(4)>
_1 = prephitmp_3 == 0;
_5 = stretch_14(D) ^ 1;
_39 = _1 & _5;
_40 = _39 | last_20(D);
into
_5 = stretch_14(D) ^ 1;
_42 = ~stretch_14(D);
_39 = _42;
_40 = last_20(D) | _39;
using the following step:
Folding statement: _1 = prephitmp_3 == 0;
Queued stmt for removal. Folds to: 1
Folding statement: _5 = stretch_14(D) ^ 1;
Not folded
Folding statement: _39 = _1 & _5;
gimple_simplified to _42 = ~stretch_14(D);
_39 = _42 & 1;
Folded into: _39 = _42;
Folding statement: _40 = _39 | last_20(D);
Folded into: _40 = last_20(D) | _39;
but stretch_14 is an 8-bit boolean, so the two forms are not equivalent, that
is to say dropping the "& 1" is wrong. It's another instance of the issue:
https://gcc.gnu.org/pipermail/gcc-patches/2020-November/558537.html
Here it's the reverse case: the bitwise NOT (~) is treated as logical by the
machinery in range-op.cc but the bitwise AND (&) is *not* treated as logical
by that of vr-values.cc, leading to the same problematic outcome.
gcc/
* vr-values.cc (simplify_using_ranges::simplify) <BIT_AND_EXPR>:
Do not call simplify_bit_ops_using_ranges for boolean types whose
precision is not 1.
gcc/testsuite/
* gnat.dg/opt106.adb: New test.
* gnat.dg/opt106_pkg1.ads, gnat.dg/opt106_pkg1.adb: New helper.
* gnat.dg/opt106_pkg2.ads, gnat.dg/opt106_pkg2.adb: Likewise.
|
|
The following adjusts the non-PLUS/MINUS/NEGATE_EXPR vectorizations
of "word_mode" vectors to emit the form vector lowering will later use.
This allows us to move the vector lowering pass before vectorization,
specifically closing the gap between vectorization and lowering,
so we can eventually assert the vectorizer doesn't emit any code
that's not directly supported by the target.
PR tree-optimization/114166
* tree-vect-stmts.cc (vectorizable_operation): Lower also
bitwise operations on word-mode vectors.
|
|
This removes the non-SLP path from vectorizable_operation and folds
away ncopies, replaces STMT_VINFO_VECTYPE with SLP_TREE_VECTYPE
and removes a big comment that has been inaccurate in many details for
a long time. It does not get rid of the 'vec_stmt' argument
since splitting the function into analysis and transform would
require storing analysis results somewhere which should be done
separately.
* tree-vect-stmts.cc (vectorizable_operation): Remove non-SLP
path.
|
|
GIMPLE_COND
This is like the patch where we don't want to replace `bool_name != 0`
with `bool_name`, but instead for INTEGER_CST. The only
difference is that there are a few different forms for always true/always
false; only handle it if it is in the canonical form. A few new helpers are
added for the canonical form detection.
This also replaces the previous version of the patch, which did an early
exit from fold_stmt_1 instead, so we can change the non-canonical form
into a canonical one in the end.
gcc/ChangeLog:
* gimple.h (gimple_cond_true_canonical_p): New function.
(gimple_cond_false_canonical_p): New function.
* gimple-fold.cc (replace_stmt_with_simplification): Return
false if replacing the operands of GIMPLE_COND with an INTEGER_CST
and already in canonical form.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
|
|
RTL DSE forms store groups from unique invariant bases but that is
confused when presented with constant addresses where it assigns
one store group per unique address. That causes it to not consider
0x101:QI to alias 0x100:SI. Constant accesses can really alias
every object; in practice they appear for I/O and for access
to objects placed at fixed addresses via linker scripts, for example. So simply avoid
registering a store group for them.
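An illustrative example of such overlapping fixed-address accesses (not the
testcase from the PR):

  /* The byte at 0x101 lies inside the word stored at 0x100, e.g. two
     memory-mapped registers placed by a linker script; DSE must not put
     the two stores into unrelated store groups.  */
  #define WORD_AT_0x100 (*(unsigned int  *) 0x100)
  #define BYTE_AT_0x101 (*(unsigned char *) 0x101)

  void poke (void)
  {
    WORD_AT_0x100 = 0xdeadbeef;
    BYTE_AT_0x101 = 0x42;
  }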
PR rtl-optimization/120182
* dse.cc (canon_address): Constant addresses have no
separate store group.
* gcc.dg/torture/pr120182.c: New testcase.
|
|
While the tests checked whether the CUDA/HIP runtime is available
before processing them, the execution was then done unconditionally,
leading to FAIL when the default device was the host (or the wrong
offload device).
Now the test is only executed ('run') when the default device is an
Nvidia or AMD GPU (depending on the test case, cf. the test file name).
Otherwise, only a 'link' test is done. (Except when the effective-target
check cannot find the runtime lib - then the test is skipped [as before].)
Note: The cublas/hipblas tests use variant functions and iterate over
all devices, such that cublas or hipblas, respectively, is only
called when the active device is an Nvidia or AMD device, respectively,
while for the host and other device types the fallback is called.
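The resulting directive pairs follow roughly this pattern (a sketch; the
exact effective-target selectors vary per test file):

  /* { dg-do run { target { offload_device_nvptx } } } */
  /* { dg-do link { target { ! offload_device_nvptx } } } */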
libgomp/ChangeLog:
* testsuite/libgomp.c/interop-cuda-full.c: Use 'link' instead
of 'run' when the default device is "! offload_device_nvptx".
* testsuite/libgomp.c/interop-cuda-libonly.c: Likewise.
* testsuite/libgomp.c/interop-hip-nvidia-full.c: Likewise.
* testsuite/libgomp.c/interop-hip-nvidia-no-headers.c: Likewise.
* testsuite/libgomp.c/interop-hip-nvidia-no-hip-header.c: Likewise.
* testsuite/libgomp.fortran/interop-hip-nvidia-full.F90: Likewise.
* testsuite/libgomp.fortran/interop-hip-nvidia-no-module.F90: Likewise.
* testsuite/libgomp.c/interop-hip-amd-full.c: Use 'link' instead
of 'run' when the default device is "! offload_device_gcn".
* testsuite/libgomp.c/interop-hip-amd-no-hip-header.c: Likewise.
* testsuite/libgomp.fortran/interop-hip-amd-full.F90: Likewise.
* testsuite/libgomp.fortran/interop-hip-amd-no-module.F90: Likewise.
|
|
The following addresses a too conservative sanity check of SLP nodes
we want to promote external. The issue lies in code generation
for such externals, which relies on get_later_stmt to figure out an
insert location. But get_later_stmt relies on the ability to
totally order stmts, specifically implementation-wise that they
are all from the same BB, which is what is verified at the moment.
The patch changes this to require stmts to be orderable by
dominance queries. For simplicity and seemingly enough for the
testcase in PR119960, this handles the case of two distinct BBs.
PR tree-optimization/119960
* tree-vect-slp.cc (vect_slp_can_convert_to_external):
Handle cases where defs from multiple BBs are ordered
by their dominance relation.
* gcc.dg/vect/bb-slp-pr119960-1.c: New testcase.
|
|
This test is 'dg-do compile', so require tls instead of tls_runtime.
This enables it on targets such as arm-none-eabi configured with
--enable-threads=no.
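A sketch of the directive change (the rest of the test is unchanged):

  // Before:
  // { dg-require-effective-target tls_runtime }
  // After:
  // { dg-require-effective-target tls }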
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/constinit16.C: Require tls.
|
|
Since this test is a 'dg-do run', it requires tls_runtime rather than
just tls.
This makes the test UNSUPPORTED on targets such as arm-none-eabi,
instead of FAIL/UNRESOLVED because __aeabi_read_tp is not provided
(e.g. when GCC is configured with --enable-threads=no).
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/decomp2.C: Require tls_runtime.
|
|
Some systems don't support the %zu format modifier for size_t, such as
hppa64-hp-hpux. We don't really need the full width of size_t for
printing the number of prime paths as path counts of those sizes
would've already blown up the machine. For printing the vector size we
can use the formatting directives from hwint.h.
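The same idea in stand-alone C (illustrative only; the patch itself uses the
HOST_SIZE_T_PRINT_UNSIGNED macro from hwint.h):

  #include <stdio.h>
  #include <stddef.h>

  static void print_counts (unsigned pathno, size_t n_prime_paths)
  {
    /* Path numbers fit comfortably in unsigned; for the size_t count,
       cast to a type with a universally supported format instead of
       relying on %zu.  */
    printf ("path %u of %lu prime paths\n",
            pathno, (unsigned long) n_prime_paths);
  }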
PR gcov-profile/120086
gcc/ChangeLog:
* gcov.cc (print_prime_path_lines): Use unsigned, format with
%u.
(print_prime_path_source): Likewise.
(output_path_coverage): Format with HOST_SIZE_T_PRINT_UNSIGNED,
use unsigned for pathno.
|
|
Limit the use of option '-mgeneral-regs-only' to the backends that support it.
Version log:
https://patchwork.sourceware.org/project/gcc/patch/20250508080102.1340059-1-jiawei@iscas.ac.cn/
gcc/testsuite/ChangeLog:
* gcc.dg/pr119160.c: Limit backends.
|
|
instructions.
SVE loads and stores where the predicate is all-true can be optimized to
unpredicated instructions. For example,
svuint8_t foo (uint8_t *x)
{
return svld1 (svptrue_b8 (), x);
}
was compiled to:
foo:
ptrue p3.b, all
ld1b z0.b, p3/z, [x0]
ret
but can be compiled to:
foo:
ldr z0, [x0]
ret
Late_combine2 had already been trying to do this, but was missing the
instruction:
(set (reg/i:VNx16QI 32 v0)
(unspec:VNx16QI [
(const_vector:VNx16BI repeat [
(const_int 1 [0x1])
])
(mem:VNx16QI (reg/f:DI 0 x0 [orig:106 x ] [106])
[0 MEM <svuint8_t> [(unsigned char *)x_2(D)]+0 S[16, 16] A8])
] UNSPEC_PRED_X))
This patch adds a new define_insn_and_split that matches the missing
instruction and splits it to an unpredicated load/store. Because LDR
offers fewer addressing modes than LD1[BHWD], the pattern is
guarded under reload_completed to only apply the transform once the
address modes have been chosen during RA.
The patch was bootstrapped and tested on aarch64-linux-gnu, no regression.
OK for mainline?
Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>
gcc/
* config/aarch64/aarch64-sve.md (*aarch64_sve_ptrue<mode>_ldr_str):
Add define_insn_and_split to fold predicated SVE loads/stores with
ptrue predicates to unpredicated instructions.
gcc/testsuite/
* gcc.target/aarch64/sve/ptrue_ldr_str.c: New test.
* gcc.target/aarch64/sve/acle/general/attributes_6.c: Adjust
expected outcome.
* gcc.target/aarch64/sve/cost_model_14.c: Adjust expected outcome.
* gcc.target/aarch64/sve/cost_model_4.c: Adjust expected outcome.
* gcc.target/aarch64/sve/cost_model_5.c: Adjust expected outcome.
* gcc.target/aarch64/sve/cost_model_6.c: Adjust expected outcome.
* gcc.target/aarch64/sve/cost_model_7.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_f16.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_f32.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_f64.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_mf8.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_s16.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_s32.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_s64.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_s8.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_u16.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_u32.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_u64.c: Adjust expected outcome.
* gcc.target/aarch64/sve/pcs/varargs_2_u8.c: Adjust expected outcome.
* gcc.target/aarch64/sve/peel_ind_2.c: Adjust expected outcome.
* gcc.target/aarch64/sve/single_1.c: Adjust expected outcome.
* gcc.target/aarch64/sve/single_2.c: Adjust expected outcome.
* gcc.target/aarch64/sve/single_3.c: Adjust expected outcome.
* gcc.target/aarch64/sve/single_4.c: Adjust expected outcome.
|
|
The path "b/binutils/dwarf.c" should be printed as binutils/dwarf.c",
not "inutils/dwarf.c".
contrib/ChangeLog:
* check_GNU_style_lib.py: Remove literal prefix.
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
|
|
The formatting code is extracted to the _M_format_to function, which produces
output to the specified iterator. This function is now invoked either with
__fc.out() directly (if width is not specified) or with _Padding_sink::out().
This avoids formatting to a temporary string if no padding is requested,
and minimizes allocations otherwise. For more details see the commit message of
r16-142-g01e5ef3e8b91288f5d387a27708f9f8979a50edf.
This should not increase the number of instantiations, as the implementation
only produces basic_format_context with _Sink_iter as the iterator, which is
also the _Padding_sink iterator.
libstdc++-v3/ChangeLog:
* include/bits/chrono_io.h (__formatter_chrono::_M_format_to):
Extracted from _M_format.
(__formatter_chrono::_M_format): Use _Padding_sink and delegate
to _M_format_to.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
|
|
This patch provides _M_discarding functions for _Sink_iter and _Sink that
return true if any further writes to the _Sink_iter and the underlying _Sink
will be discarded, and thus can be omitted.
Currently only the _Padding_sink reports discarding mode, if the width of the
character sequence is greater than _M_maxwidth (precision), or the underlying
_Sink is discarding characters. The _M_discarding override is a separate
function from _M_ignoring, which remains annotated with
[[__gnu__::__always_inline__]].
Despite having a notion of the maximum number of characters to be written
(_M_max), _Iter_sink never discards characters, as the total number of
characters that would be written needs to be returned by format_to_n. This is
documented in-source by providing an _Iter_sink::_M_discarding override that
always returns false.
The function is currently queried only by the _Padding_sinks, which may be
stacked, for example when a range is formatted with padding specified both for
the range itself and its elements. The state of the underlying sink is checked
during construction and after each write (_M_sync_discarding).
libstdc++-v3/ChangeLog:
* include/std/format (__Sink_iter<_CharT>::_M_discarding)
(__Sink<_CharT>::_M_discarding, _Iter_sink<_CharT, _OutIter>::_M_discarding)
(_Padding_sink<_CharT, _Out>::_M_padwidth)
(_Padding_sink<_CharT, _Out>::_M_maxwidth): Remove const.
(_Padding_sink<_CharT, _Out>::_M_sync_discarding)
(_Padding_sink<_CharT, _Out>::_M_discarding): Define.
(_Padding_sink<_CharT, _Out>::_Padding_sink(_Out, size_t, size_t))
(_Padding_sink<_CharT, _Out>::_M_force_update):
(_Padding_sink<_CharT, _Out>::_M_flush): Call _M_sync_discarding.
(_Padding_sink<_CharT, _Out>::_Padding_sink(_Out, size_t)): Delegate.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
|
|
[PR116792]
In r15-3752-g48261bd26df624 I added a test plugin that overrode the
regular output, instead emitting diagnostics in crude HTML form.
In r15-4760-g0b73e9382ab51c I added support for multiple kinds of
diagnostic output simultaneously, adding
-fdiagnostics-add-output=DIAGNOSTICS-OUTPUT-SPEC
-fdiagnostics-set-output=DIAGNOSTICS-OUTPUT-SPEC
for adding/changing the kind of diagnostics output, supporting
"text" and "sarif" output schemes.
This patch promotes the HTML output code from the test plugins so
that it is available from "-fdiagnostics-add-output=", using a
new "experimental-html" scheme, to allow simultaneous text, sarif
and html output, and to make it easier to experiment with. The
patch adds Python-based testing of the emitted HTML.
The patch does not affect the generated HTML, which is still crude, and
not yet ready for end-users. I hope to improve it in followups.
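For example, a minimal invocation adding HTML output alongside the default
text diagnostics might look like:

  gcc -fdiagnostics-add-output=experimental-html missing-semicolon.c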
gcc/ChangeLog:
PR other/116792
* Makefile.in (OBJS-libcommon): Add diagnostic-format-html.o.
* diagnostic-format-html.cc: Move here from
testsuite/gcc.dg/plugin/diagnostic_plugin_xhtml_format.cc.
Simplify includes. Rename "xhtml" to "html" throughout.
(write_escaped_text): Drop.
(class xhtml_stream_output_format): Drop.
(class html_file_output_format): Reimplement using
diagnostic_output_file.
(diagnostic_output_format_init_xhtml): Drop.
(diagnostic_output_format_init_xhtml_stderr): Drop.
(diagnostic_output_format_init_xhtml_file): Drop.
(diagnostic_output_format_open_html_file): New.
(make_html_sink): New.
(xhtml_format_selftests): Convert to...
(diagnostic_format_html_cc_tests): ...this.
(plugin_is_GPL_compatible): Drop.
(plugin_init): Drop.
* diagnostic-format-html.h: New file.
* doc/invoke.texi (-fdiagnostics-add-output=): Add
"experimental-html" scheme.
* opts-diagnostic.cc: Include "diagnostic-format-html.h".
(class html_scheme_handler): New.
(output_factory::output_factory): Add html_scheme_handler.
(html_scheme_handler::make_sink): New.
* selftest-run-tests.cc (selftest::run_tests): Call the new
selftests.
* selftest.h (selftest::diagnostic_format_html_cc_tests): New
decl.
gcc/testsuite/ChangeLog:
PR other/116792
* gcc.dg/plugin/diagnostic_plugin_xhtml_format.cc: Move to
gcc/diagnostic-format-html.cc.
* gcc.dg/html-output/html-output.exp: New support script.
* gcc.dg/html-output/missing-semicolon.c: New test.
* gcc.dg/html-output/missing-semicolon.py: New test script.
* gcc.dg/plugin/diagnostic-test-xhtml-1.c: Deleted test.
* gcc.dg/plugin/plugin.exp (plugin_test_list): Drop moved plugin
and its deleted test.
* lib/gcc-dg.exp (load_lib): Add load_lib of scanhtml.exp.
* lib/htmltest.py: New support script.
* lib/scanhtml.exp: New support script, based on scansarif.exp.
libatomic/ChangeLog:
PR other/116792
* testsuite/lib/libatomic.exp: Add load_lib of scanhtml.exp.
libgomp/ChangeLog:
PR other/116792
* testsuite/lib/libgomp.exp: Add load_lib of scanhtml.exp.
libitm/ChangeLog:
PR other/116792
* testsuite/lib/libitm.exp: Add load_lib of scanhtml.exp.
libphobos/ChangeLog:
PR other/116792
* testsuite/lib/libphobos-dg.exp: Add load_lib of scanhtml.exp.
libvtv/ChangeLog:
PR other/116792
* testsuite/lib/libvtv-dg.exp: Add load_lib of scanhtml.exp.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
|