* attr-fnspec.h: Update toplevel comment.
(attr_fnspec::arg_direct_p): Accept 1...9.
(attr_fnspec::arg_maybe_written_p): Reject 1...9.
(attr_fnspec::arg_copied_to_arg_p): New member function.
* builtins.c (builtin_fnspec): Update fnspec of block copy.
* tree-ssa-alias.c (attr_fnspec::verify): Update.
|
gcc/fortran/ChangeLog:
PR fortran/97782
* trans-openmp.c (gfc_trans_oacc_construct, gfc_trans_omp_parallel_do,
gfc_trans_omp_parallel_do_simd, gfc_trans_omp_parallel_sections,
gfc_trans_omp_parallel_workshare, gfc_trans_omp_sections,
gfc_trans_omp_single, gfc_trans_omp_task, gfc_trans_omp_teams,
gfc_trans_omp_target, gfc_trans_omp_target_data,
gfc_trans_omp_workshare): Use code->loc instead of input_location
when building the OMP_/OACC_ construct.
gcc/testsuite/ChangeLog:
PR fortran/97782
* gfortran.dg/goacc/classify-kernels-unparallelized.f95: Move dg-message
one line up.
* gfortran.dg/goacc/classify-kernels.f95: Likewise.
|
gcc/testsuite/ChangeLog:
* gfortran.dg/entry_23.f: New test.
|
The following makes sure to only iterate PRE insertion when
necessary - which is when AVAIL_OUT of a predecessor of a
block we already visited changed (that's backedge destinations).
To not regress, this also makes sure to locally iterate insertion,
since even a topological sort of expressions isn't enough to
guarantee we get all opportunities of a block in one iteration.
This avoids a costly re-compute of the topologically sorted expression
array (more micro-optimization is possible here).
2020-11-12 Richard Biener <rguenther@suse.de>
* tree-ssa-pre.c (bitmap_value_replace_in_set): Return
whether we have changed anything.
(do_pre_regular_insertion): Get topologically sorted array
of expressions from caller.
(do_pre_partial_partial_insertion): Likewise.
(insert): Compute topologically sorted arrays of expressions
here and locally iterate actual insertion. Iterate only
when AVAIL_OUT of an already visited block source changed.
|
This patch adds a missing not to the SVE2 BCAX (Bitwise clear and
exclusive or) pattern, fixing the PR. Since SVE doesn't have an
unpredicated not instruction, we need to use a (vacuously) predicated
not here.
To ensure that the predicate is instantiated correctly (to all 1s) for
the intrinsics, we pull out a separate expander from the define_insn.
From the ISA reference [1]:
> Bitwise AND elements of the second source vector with the
> corresponding inverted elements of the third source vector, then
> exclusive OR the results with corresponding elements of the first
> source vector.
[1] : https://developer.arm.com/docs/ddi0602/g/a64-sve-instructions-alphabetic-order/bcax-bitwise-clear-and-exclusive-or
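In scalar terms, the fixed semantics look like the following sketch (the
OP macro name follows the bcax_1.c test mentioned below; its exact
contents are not reproduced here):
/* BCAX: EOR the first operand with the AND of the second operand
   and the inverted third operand.  */
#define OP(x, y, z) ((x) ^ ((y) & ~(z)))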
gcc/ChangeLog:
PR target/97730
* config/aarch64/aarch64-sve2.md (@aarch64_sve2_bcax<mode>):
Change to define_expand, add missing (trivially-predicated) not
rtx to fix wrong code bug.
(*aarch64_sve2_bcax<mode>): New.
gcc/testsuite/ChangeLog:
PR target/97730
* gcc.target/aarch64/sve2/bcax_1.c (OP): Add missing bitwise not
to match correct bcax semantics.
* gcc.dg/vect/pr97730.c: New test.
|
This fixes the postorder compute for the case of multiple
expression leaders for a value.
2020-11-12 Richard Biener <rguenther@suse.de>
PR tree-optimization/97806
* tree-ssa-pre.c (pre_expr_DFS): New overload for visiting
values, visiting all leaders for a value. Use a bitmap
for visited values.
(sorted_array_from_bitmap_set): Walk over values and adjust.
* gcc.dg/pr97806.c: New testcase.
|
As the testcase shows, CLEANUP_POINT_EXPR (and I think TRY_FINALLY_EXPR too)
suffers from the same problem that I was trying to fix in
r10-3597-g1006c9d4395a939820df76f37c7b085a4a1a003f
for CLEANUP_STMT, namely that if in the middle of the body expression of
those stmts there is e.g. a return stmt, goto, break or continue (something
that changes *jump_target and makes it start skipping stmts), we then skip
the cleanups too, which is not appropriate - the cleanups were either queued
up during the non-skipping execution of the body (for CLEANUP_POINT_EXPR),
or, for TRY_FINALLY_EXPR, are relevant already after entering the body block.
> Would it make sense to always use a NULL jump_target when evaluating
> cleanups?
I was afraid of that, especially for TRY_FINALLY_EXPR, but it seems that
during constexpr evaluation the cleanups will most often be just very simple
destructor calls (or calls to cleanup attribute functions).
Furthermore, for none of these 3 tree codes will we reach that code if
jump_target && *jump_target initially (there is a return NULL_TREE much
earlier for those, except for trees that could embed labels etc. in them,
and clearly these 3 don't count as such).
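For illustration, a minimal sketch of the affected shape (not the new
testcase; the names are made up):
struct S { int *p; constexpr ~S () { *p = 0; } };
constexpr int f ()
{
  int x = 1;
  {
    S s{&x};
    return x; // sets *jump_target and starts skipping stmts; s's cleanup
              // must still be evaluated, not skipped along with them
  }
}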
2020-11-12 Jakub Jelinek <jakub@redhat.com>
PR c++/97790
* constexpr.c (cxx_eval_constant_expression) <case CLEANUP_POINT_EXPR,
case TRY_FINALLY_EXPR, case CLEANUP_STMT>: Don't pass jump_target to
cxx_eval_constant_expression when evaluating the cleanups.
* g++.dg/cpp2a/constexpr-dtor9.C: New test.
|
gcc/ChangeLog:
PR target/97326
* config/s390/vector.md: Support vector floating point modes in
vec_cmp.
|
Just a preparation to add a lower-case tointvec.
gcc/ChangeLog:
* config/s390/vector.md: Rename tointvec to TOINTVEC.
* config/s390/vx-builtins.md: Likewise.
|
If DECL_INITIAL isn't set, we can't emit anything about the body of the
function, so add the declaration attribute.
gcc/ChangeLog:
PR debug/97060
* dwarf2out.c (gen_subprogram_die): It's a declaration
if DECL_INITIAL isn't set.
gcc/testsuite/ChangeLog:
PR debug/97060
* gcc.dg/debug/dwarf2/pr97060.c: New test.
|
The new test gcc.dg/tree-ssa/pr96789.c fails on
arm-none-linux-gnueabihf since the loop vectorizer is able to optimize
the two loops which operate on array tmp using the load_lanes feature,
which makes dse3 fail to find the expected inputs.
As Richard suggested, this patch replaces the option
-ftree-vectorize with -ftree-slp-vectorize -fno-tree-loop-vectorize.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr96789.c: Adjusted by disabling loop
vectorization.
|
This patch adds a custom event to paths emitted by
-Wanalyzer-stale-setjmp-buffer highlighting the place where the
pertinent stack frame is popped, and updates the final event in
the path to reference this.
gcc/analyzer/ChangeLog:
* checker-path.h (checker_event::get_id_ptr): New.
* diagnostic-manager.cc (path_builder::path_builder): Add "sd"
param and use it to initialize new field "m_sd".
(path_builder::get_pending_diagnostic): New.
(path_builder::m_sd): New field.
(diagnostic_manager::emit_saved_diagnostic): Pass sd to
path_builder ctor.
(diagnostic_manager::add_events_for_superedge): Call new
maybe_add_custom_events_for_superedge vfunc.
* engine.cc (stale_jmp_buf::stale_jmp_buf): Add "setjmp_point"
param and use it to initialize new field "m_setjmp_point".
Initialize new field "m_stack_pop_event".
(stale_jmp_buf::maybe_add_custom_events_for_superedge): New vfunc
implementation.
(stale_jmp_buf::describe_final_event): New vfunc implementation.
(stale_jmp_buf::m_setjmp_point): New field.
(stale_jmp_buf::m_stack_pop_event): New field.
(exploded_node::on_longjmp): Pass setjmp_point to stale_jmp_buf
ctor.
* pending-diagnostic.h
(pending_diagnostic::maybe_add_custom_events_for_superedge): New
vfunc.
gcc/testsuite/ChangeLog:
* gcc.dg/analyzer/setjmp-5.c: Update expected path output to show
an event where the pertinent stack frame is popped. Update
expected message from final event to reference this event.
|
This patch implements -Wanalyzer-shift-count-negative
and -Wanalyzer-shift-count-overflow, analogous to the C/C++
warnings -Wshift-count-negative and -Wshift-count-overflow, but
implemented via interprocedural path analysis rather than via parsing
in a front end, and thus capable of detecting interprocedural cases that the
warnings implemented in the front ends can miss.
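A sketch of the interprocedural case these can catch (illustrative only,
not the new testcase; the names are made up):
static int shl (int x, int count) { return x << count; }
int f (int x) { return shl (x, -2); } /* -Wanalyzer-shift-count-negative */
int g (int x) { return shl (x, 42); } /* -Wanalyzer-shift-count-overflow,
                                         assuming 32-bit int */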
gcc/analyzer/ChangeLog:
PR tree-optimization/97424
* analyzer.opt (Wanalyzer-shift-count-negative): New.
(Wanalyzer-shift-count-overflow): New.
* region-model.cc (class shift_count_negative_diagnostic): New.
(class shift_count_overflow_diagnostic): New.
(region_model::get_gassign_result): Complain about shift counts that
are negative or are >= the operand's type's width.
gcc/ChangeLog:
PR tree-optimization/97424
* doc/invoke.texi (Static Analyzer Options): Add
-Wno-analyzer-shift-count-negative and
-Wno-analyzer-shift-count-overflow.
(-Wno-analyzer-shift-count-negative): New.
(-Wno-analyzer-shift-count-overflow): New.
gcc/testsuite/ChangeLog:
PR tree-optimization/97424
* gcc.dg/analyzer/invalid-shift-1.c: New test.
|
At present, the output of .cfi_personality and .cfi_lsda assumes
ELF semantics for indirections. This isn't suitable for all targets
and is one blocker to moving Darwin to use .cfi_xxxx.
The patch adds a target hook that allows non-ELF targets to use
indirections appropriate to their needs.
gcc/ChangeLog:
* config/darwin-protos.h (darwin_make_eh_symbol_indirect): New.
* config/darwin.c (darwin_make_eh_symbol_indirect): New. Use
Mach-O semantics for personality and lsda indirections.
* config/darwin.h (TARGET_ASM_MAKE_EH_SYMBOL_INDIRECT): New.
* doc/tm.texi: Regenerate.
* doc/tm.texi.in: Add TARGET_ASM_MAKE_EH_SYMBOL_INDIRECT hook.
* dwarf2out.c (dwarf2out_do_cfi_startproc): If the target defines
a hook for indirecting personality and lsda references, use it,
otherwise default to ELF semantics.
* target.def (make_eh_symbol_indirect): New target hook.
|
For Objective-C++, this combines prefix attributes from before and
after top level linkage specs. The "reference implementation" for
Objective-C++ allows this, and system headers depend on it.
e.g.
__attribute__((__deprecated__))
extern "C" __attribute__((__visibility__("default")))
@interface MyClass
...
@end
This would make the list of prefix attributes for the MyClass interface
include both the visibility and deprecated attributes.
When we are compiling regular C++, this emits a warning and discards
any prefix attributes before a linkage spec.
gcc/cp/ChangeLog:
* parser.c (cp_parser_declaration): Unless we are compiling for
Objective-C++, warn about and discard any attributes that prefix
a linkage specification.
|
This patch changes the mangling of __alignof__ to v111__alignof__,
making its mangling distinct from that of alignof(type) and
alignof(expr).
How we mangle ALIGNOF_EXPR now depends on its ALIGNOF_EXPR_STD_P flag,
which after the previous patch gets consistently set for alignof(type)
as well as alignof(expr).
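For example, assuming -fabi-version=15 (the declarations here are made
up), the following two templates now mangle distinctly:
template<class T> void f (char (*)[alignof (T)]);     // mangles as "at"
template<class T> void g (char (*)[__alignof__ (T)]); // "v111__alignof__"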
gcc/c-family/ChangeLog:
PR c++/88115
* c-opts.c (c_common_post_options): Update latest_abi_version.
gcc/ChangeLog:
PR c++/88115
* common.opt (-fabi-version): Document =15.
* doc/invoke.texi (C++ Dialect Options): Likewise.
gcc/cp/ChangeLog:
PR c++/88115
* mangle.c (write_expression): Mangle __alignof__ differently
from alignof when the ABI version is at least 15.
libiberty/ChangeLog:
PR c++/88115
* cp-demangle.c (d_print_comp_inner)
<case DEMANGLE_COMPONENT_EXTENDED_OPERATOR>: Don't print the
"operator " prefix for __alignof__.
<case DEMANGLE_COMPONENT_UNARY>: Always print parens around the
operand of __alignof__.
* testsuite/demangle-expected: Test demangling for __alignof__.
gcc/testsuite/ChangeLog:
PR c++/88115
* g++.dg/abi/macro0.C: Adjust.
* g++.dg/cpp0x/alignof7.C: New test.
* g++.dg/cpp0x/alignof8.C: New test.
|
We're currently neglecting to set the ALIGNOF_EXPR_STD_P flag on an
ALIGNOF_EXPR when its operand is an expression. This leads to us
handling alignof(expr) as if it were written __alignof__(expr), and
returning the preferred alignment instead of the ABI alignment. In the
testcase below, this causes the first and third static_assert to fail on
x86.
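A sketch of the distinction (not the new testcase), assuming 32-bit x86,
where double has ABI alignment 4 but preferred alignment 8; alignof on an
expression is a GNU extension:
double d;
static_assert (alignof (double) == 4, ""); // OK, ABI alignment
static_assert (alignof (d) == 4, "");      // failed before this fix, since
                                           // it behaved like __alignof__ (d)
                                           // and returned 8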
gcc/cp/ChangeLog:
PR c++/88115
* cp-tree.h (cxx_sizeof_or_alignof_expr): Add bool parameter.
* decl.c (fold_sizeof_expr): Pass false to
cxx_sizeof_or_alignof_expr.
* parser.c (cp_parser_unary_expression): Pass std_alignof to
cxx_sizeof_or_alignof_expr.
* pt.c (tsubst_copy): Pass false to cxx_sizeof_or_alignof_expr.
(tsubst_copy_and_build): Pass std_alignof to
cxx_sizeof_or_alignof_expr.
* typeck.c (cxx_alignof_expr): Add std_alignof bool parameter
and pass it to cxx_sizeof_or_alignof_type. Set ALIGNOF_EXPR_STD_P
appropriately.
(cxx_sizeof_or_alignof_expr): Add std_alignof bool parameter
and pass it to cxx_alignof_expr. Assert op is either
SIZEOF_EXPR or ALIGNOF_EXPR.
libcc1/ChangeLog:
PR c++/88115
* libcp1plugin.cc (plugin_build_unary_expr): Pass true to
cxx_sizeof_or_alignof_expr.
gcc/testsuite/ChangeLog:
PR c++/88115
* g++.dg/cpp0x/alignof6.C: New test.
|
Retain the location when tsubstituting a qualified-id so that our
static_assert diagnostic can benefit. Don't create useless location
wrappers for temporary variables.
gcc/ChangeLog:
PR c++/97518
* tree.c (maybe_wrap_with_location): Don't add a location
wrapper around an artificial and ignored decl.
gcc/cp/ChangeLog:
PR c++/97518
* pt.c (tsubst_qualified_id): Use EXPR_LOCATION of the qualified-id.
Use it to maybe_wrap_with_location the final expression.
gcc/testsuite/ChangeLog:
PR c++/97518
* g++.dg/diagnostic/static_assert3.C: New test.
|
Accesses to NEW_SETS should be properly guarded. Committed
as obvious.
2020-11-11 Richard Biener <rguenther@suse.de>
PR tree-optimization/97623
* tree-ssa-pre.c (create_expression_by_pieces): Guard
NEW_SETS access.
(insert_into_preds_of_block): Likewise.
|
This fixes sorted_array_from_bitmap_set to do a topological sort
as required, by re-using what PHI-translation does, namely a DFS
walk with the help of bitmap_find_leader. The proper result
is verified by extra checking in clean () (which would have tripped
before), and for the testcase I'm working on for the last
patches (PR97623) it is neutral in compile-time cost.
2020-11-11 Richard Biener <rguenther@suse.de>
* tree-ssa-pre.c (pre_expr_DFS): New function.
(sorted_array_from_bitmap_set): Use it to properly
topologically sort the expression set.
(clean): Verify we've cleaned everything we should.
|
The added (?:_ull) groups match on 32-bit targets, but they are equivalent
to just adding _ull into the strings, i.e. they require the _ull
substrings, while the intent is that they be optional, so we should use
(?:_ull)? instead (a pattern like foo(?:_ull)?_start then matches both
foo_start and foo_ull_start).
2020-11-11 Jakub Jelinek <jakub@redhat.com>
* gfortran.dg/gomp/workshare-reduction-3.f90: Use (?:_ull)? instead
of (?:_ull) in the scan-tree-dump-times directives.
* gfortran.dg/gomp/workshare-reduction-26.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-27.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-28.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-36.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-37.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-38.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-39.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-40.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-41.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-42.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-43.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-44.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-45.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-46.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-47.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-56.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-57.f90: Likewise.
|
gcc/ada/ChangeLog:
* gcc-interface/gigi.h: Remove ^L characters throughout.
* gcc-interface/decl.c: Likewise.
* gcc-interface/utils.c: Likewise.
* gcc-interface/utils2.c: Likewise.
* gcc-interface/trans.c (gnat_to_gnu) <N_Allocator>: Do not explicitly
go to the base type for the Has_Constrained_Partial_View flag.
|
The Ada compiler uses a biased representation when a size clause reserves
fewer bits than normal either for the lower or for the upper bound.
gcc/ada/ChangeLog:
* gcc-interface/trans.c (build_binary_op_trapv): Convert operands
to the result type before doing generic overflow checking.
gcc/testsuite/ChangeLog:
* gnat.dg/bias2.adb: New test.
|
This is a rather obscure case where the elaboration of an empty array
whose base type is an array type of length at most 1 goes awry when
the code is compiled with optimization.
gcc/ada/ChangeLog:
* gcc-interface/trans.c (can_be_lower_p): Remove.
(Regular_Loop_to_gnu): Add ENTRY_COND unconditionally if
BOTTOM_COND is non-zero.
gcc/testsuite/ChangeLog:
* gnat.dg/opt89.adb: New test.
|
gcc/ada/ChangeLog:
* gcc-interface/decl.c (gnat_to_gnu_entity) <E_Constant>: In case
the constant is not being defined, get the expression in type
annotation mode only if its type is elementary.
|
This is a regression present on the mainline and 10 branch, in the form
of an ICE with a shift operator applied to a variable of a signed type,
caused by a type mismatch.
gcc/ada/ChangeLog:
* gcc-interface/trans.c (gnat_to_gnu) <N_Op_Shift>: Also convert
GNU_MAX_SHIFT if the type of the operation has been changed.
* gcc-interface/utils.c (can_materialize_object_renaming_p): Add
pair of missing parentheses.
gcc/testsuite/ChangeLog:
* gnat.dg/shift1.adb: New test.
|
Tested on x86_64-unknown-linux-gnu, pushed.
2020-11-11 Richard Biener <rguenther@suse.de>
PR testsuite/97797
* gcc.dg/torture/ssa-fre-5.c: Use __SIZE_TYPE__ where
appropriate.
* gcc.dg/torture/ssa-fre-6.c: Likewise.
|
The recent previous change in this area limited hoist insertion
iteration via a param, but the following is IMHO better since
we are not really interested in PRE opportunities exposed by
hoisting, but only the other way around. So this moves hoist
insertion after the PRE insertion iteration has finished and
removes hoist insertion iteration altogether.
2020-11-11 Richard Biener <rguenther@suse.de>
PR tree-optimization/97623
* params.opt (-param=max-pre-hoist-insert-iterations): Remove
again.
* doc/invoke.texi (max-pre-hoist-insert-iterations): Likewise.
* tree-ssa-pre.c (insert): Move hoist insertion after PRE
insertion iteration and do not iterate it.
* gcc.dg/tree-ssa/ssa-hoist-3.c: Adjust.
* gcc.dg/tree-ssa/ssa-hoist-7.c: Likewise.
* gcc.dg/tree-ssa/ssa-pre-30.c: Likewise.
|
This patch adds support for comparing unpacked SVE integer vectors,
such as byte elements stored in the bottom bytes of halfword
containers. It also adds support for selects between unpacked
SVE vectors (both integer and floating-point), since selects and
compares are closely tied via the vcond optab interface.
gcc/
* config/aarch64/aarch64-sve.md (@vcond_mask_<mode><vpred>): Extend
from SVE_FULL to SVE_ALL.
(*vcond_mask_<mode><vpred>): Likewise.
(@aarch64_sel_dup<mode>): Likewise.
(vcond<SVE_FULL:mode><v_int_equiv>): Extend to...
(vcond<SVE_ALL:mode><SVE_I:mode>): ...this, but requiring the
sizes of the container modes to match.
(vcondu<SVE_FULL:mode><v_int_equiv>): Extend to...
(vcondu<SVE_ALL:mode><SVE_I:mode>): ...this.
(vec_cmp<SVE_FULL_I:mode><vpred>): Extend to...
(vec_cmp<SVE_I:mode><vpred>): ...this.
(vec_cmpu<SVE_FULL_I:mode><vpred>): Extend to...
(vec_cmpu<SVE_I:mode><vpred>): ...this.
(@aarch64_pred_cmp<cmp_op><SVE_FULL_I:mode>): Extend to...
(@aarch64_pred_cmp<cmp_op><SVE_I:mode>): ...this.
(*cmp<cmp_op><SVE_FULL_I:mode>_cc): Extend to...
(*cmp<cmp_op><SVE_I:mode>_cc): ...this.
(*cmp<cmp_op><SVE_FULL_I:mode>_ptest): Extend to...
(*cmp<cmp_op><SVE_I:mode>_ptest): ...this.
(*cmp<cmp_op><SVE_FULL_I:mode>_and): Extend to...
(*cmp<cmp_op><SVE_I:mode>_and): ...this.
gcc/testsuite/
* gcc.target/aarch64/sve/cmp_1.c: New test.
* gcc.target/aarch64/sve/cmp_2.c: Likewise.
* gcc.target/aarch64/sve/cond_arith_1.c: Add --param
aarch64-sve-compare-costs=0.
* gcc.target/aarch64/sve/cond_arith_1_run.c: Likewise.
* gcc.target/aarch64/sve/cond_arith_3.c: Likewise.
* gcc.target/aarch64/sve/cond_arith_3_run.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_7.c: Likewise.
* gcc.target/aarch64/sve/mask_load_slp_1.c: Likewise.
* gcc.target/aarch64/sve/vcond_11.c: Likewise.
* gcc.target/aarch64/sve/vcond_11_run.c: Likewise.
|
The vcond code requires the compared vectors and the selected
vectors to have both the same size and the same number of elements
as each other. But the operation makes logical sense even for
different vector sizes. E.g. you could compare two V4SIs and
use the result to select between two V4DIs.
The underlying optab already allows the compared mode and the selected
mode to be specified separately. Since the vectoriser now also
supports mixed vector sizes, I think we can simply remove the
equal-size check and just keep the equal-lanes check. It's then
up to the target to decide which (if any) mixtures of sizes it
supports.
gcc/
* optabs-tree.c (expand_vec_cond_expr_p): Allow the compared values
and the selected values to have different mode sizes.
* gimple-isel.cc (gimple_expand_vec_cond_expr): Likewise.
|
2020-10-13 Hongtao Liu <hongtao.liu@intel.com>
Hongyu Wang <hongyu.wang@intel.com>
gcc/
* common/config/i386/cpuinfo.h (get_available_features):
Detect AVXVNNI.
* common/config/i386/i386-common.c
(OPTION_MASK_ISA2_AVXVNNI_SET,
OPTION_MASK_ISA2_AVXVNNI_UNSET): New.
(OPTION_MASK_ISA2_AVX2_UNSET): Add AVXVNNI.
(ix86_handle_option): Handle -mavxvnni, unset avxvnni when
avx2 is disabled.
* common/config/i386/i386-cpuinfo.h (enum processor_features):
Add FEATURE_AVXVNNI.
* common/config/i386/i386-isas.h: Add ISA_NAMES_TABLE_ENTRY
for avxvnni.
* config.gcc: Add avxvnniintrin.h.
* config/i386/avx512vnnivlintrin.h: Reimplement 128/256 bit non-mask
intrinsics with macros to support unified interface.
* config/i386/avxvnniintrin.h: New header file.
* config/i386/cpuid.h (bit_AVXVNNI): New.
* config/i386/i386-builtins.c (def_builtin): Handle AVXVNNI mask
for unified builtin.
* config/i386/i386-builtin.def (BDESC): Adjust AVX512VNNI
builtins for AVXVNNI.
* config/i386/i386-c.c (ix86_target_macros_internal): Define
__AVXVNNI__.
* config/i386/i386-expand.c (ix86_expand_builtin): Handle bisa
for AVXVNNI to support unified intrinsic name, since there is no
dependency between AVX512VNNI and AVXVNNI.
* config/i386/i386-options.c (isa2_opts): Add -mavxvnni.
(ix86_valid_target_attribute_inner_p): Handle avxvnni.
(ix86_option_override_internal): Ditto.
* config/i386/i386.h (TARGET_AVXVNNI, TARGET_AVXVNNI_P,
PTA_AVXVNNI): New.
(PTA_SAPPHIRERAPIDS): Add PTA_AVXVNNI.
(PTA_ALDERLAKE): Likewise.
* config/i386/i386.md ("isa"): Add avxvnni, avx512vnnivl.
("enabled"): Adjust for avxvnni and avx512vnnivl.
* config/i386/i386.opt: Add option -mavxvnni.
* config/i386/immintrin.h: Include avxvnniintrin.h.
* config/i386/sse.md (vpdpbusd_<mode>): Adjust for AVXVNNI.
(vpdpbusds_<mode>): Likewise.
(vpdpwssd_<mode>): Likewise.
(vpdpwssds_<mode>): Likewise.
(vpdpbusd_v16si): New.
(vpdpbusds_v16si): Likewise.
(vpdpwssd_v16si): Likewise.
(vpdpwssds_v16si): Likewise.
* doc/invoke.texi: Document -mavxvnni.
* doc/extend.texi: Document avxvnni.
* doc/sourcebuild.texi: Document target avxvnni.
gcc/testsuite/
* gcc.target/i386/avx512vl-vnni-1.c: Rename to ...
* gcc.target/i386/avx512vl-vnni-1a.c: ... this.
* gcc.target/i386/avx512vl-vnni-1b.c: New test.
* gcc.target/i386/avx512vl-vnni-2.c: Ditto.
* gcc.target/i386/avx512vl-vnni-3.c: Ditto.
* gcc.target/i386/avx-vnni-1.c: Ditto.
* gcc.target/i386/avx-vnni-2.c: Ditto.
* gcc.target/i386/avx-vnni-3.c: Ditto.
* gcc.target/i386/avx-vnni-4.c: Ditto.
* gcc.target/i386/avx-vnni-5.c: Ditto.
* gcc.target/i386/avx-vnni-6.c: Ditto.
* gcc.target/i386/avx-vpdpbusd-2.c: Ditto.
* gcc.target/i386/avx-vpdpbusds-2.c: Ditto.
* gcc.target/i386/avx-vpdpwssd-2.c: Ditto.
* gcc.target/i386/avx-vpdpwssds-2.c: Ditto.
* gcc.target/i386/vnni_inline_error.c: Ditto.
* gcc.target/i386/avx512vnnivl-builtin.c: Ditto.
* gcc.target/i386/avxvnni-builtin.c: Ditto.
* gcc.target/i386/funcspec-56.inc: Add new target attribute.
* gcc.target/i386/sse-12.c: Add -mavxvnni.
* gcc.target/i386/sse-13.c: Ditto.
* gcc.target/i386/sse-14.c: Ditto.
* gcc.target/i386/sse-22.c: Ditto.
* gcc.target/i386/sse-23.c: Ditto.
* g++.dg/other/i386-2.C: Ditto.
* g++.dg/other/i386-3.C: Ditto.
* lib/target-supports.exp (check_effective_target_avxvnni):
New proc.
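As a usage sketch (assuming the unified intrinsic names provided by
avxvnniintrin.h), the same VNNI intrinsic now works with plain AVXVNNI,
without requiring AVX512VNNI and AVX512VL:
#include <immintrin.h>
/* Compile with -mavxvnni (or -mavx512vnni -mavx512vl).  */
__m256i
dot_u8s8 (__m256i acc, __m256i a, __m256i b)
{
  /* VPDPBUSD: multiply unsigned bytes of a with signed bytes of b and
     accumulate the pairwise sums into the 32-bit lanes of acc.  */
  return _mm256_dpbusd_avx_epi32 (acc, a, b);
}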
|
gcc/ChangeLog:
* tree.c (copy_node): Fix spelling.
|
The topological sort that sorted_array_from_bitmap_set is supposed to
provide hasn't been one for quite some time, since value_ids are
assigned first to SSA names in the order of SSA_NAME_VERSION
and then to hashtable entries in the order they appear in the
table. One can even argue that expression-ids provide a closer
approximation of a topological sort, since those are assigned
during AVAIL_OUT computation, which is done in a dominator walk.
Now, phi-translation does not even depend on topological sorting;
it essentially does a DFS walk, phi-translating the expressions
it depends on and relying on phi-translation caching to avoid
doing redundant work.
So this patch drops the use of sorted_array_from_bitmap_set from
phi_translate_set, because this function is quite expensive.
2020-11-11 Richard Biener <rguenther@suse.de>
* tree-ssa-pre.c (phi_translate_set): Do not sort the
expression set topologically.
|
gcc/ChangeLog:
* value-range.cc (irange::set): Early exit on VR_VARYING.
|
2020-11-11 Zhiheng Xie <xiezhiheng@huawei.com>
Nannan Zheng <zhengnannan@huawei.com>
gcc/ChangeLog:
* config/aarch64/aarch64-simd-builtins.def: Add proper FLAG
for arithmetic operation intrinsics.
|
gcc/testsuite/ChangeLog:
* gfortran.dg/gomp/workshare-reduction-26.f90: Add (?:_ull) to
scan-tree-dump-times regex for -m32.
* gfortran.dg/gomp/workshare-reduction-27.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-28.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-3.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-36.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-37.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-38.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-39.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-40.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-41.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-42.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-43.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-44.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-45.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-46.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-47.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-56.f90: Likewise.
* gfortran.dg/gomp/workshare-reduction-57.f90: Likewise.
|
The first testcase below ICEs when f951 is 32-bit (or 64-bit big-endian).
The problem is that ex->ts.u.cl && ex->ts.u.cl->length are both non-NULL,
but ex->ts.u.cl->length->expr_type is not EXPR_CONSTANT, but EXPR_FUNCTION.
value.function.actual and value.function.name are in that case pointers,
but value._mp_alloc and value._mp_size are 4-byte integers no matter what.
So, on 64-bit little-endian the function most of the time returns an
incorrect CHARACTER(0), because the most significant 32 bits of the
value.function.actual pointer are likely 0.
Anyway, the following patch is an attempt to get all the cases right.
It uses ex->value.character.length only for ex->expr_type == EXPR_CONSTANT
(i.e. CHARACTER literals), handles deferred lengths, assumed lengths and
known constant lengths, and finally, if the length is something else,
just doesn't print it, i.e. prints just CHARACTER (for the default kind)
or CHARACTER(KIND=4) (for e.g. kind 4).
2020-11-11 Jakub Jelinek <jakub@redhat.com>
PR fortran/97768
gcc/fortran/
* misc.c (gfc_typename): Use ex->value.character.length only if
ex->expr_type == EXPR_CONSTANT. If ex->ts.deferred, print : instead
of length. If ex->ts.u.cl && ex->ts.u.cl->length == NULL, print *
instead of length. Otherwise if character length is non-constant,
print just CHARACTER or CHARACTER(KIND=N).
gcc/testsuite/
* gfortran.dg/pr97768_1.f90: New test.
* gfortran.dg/pr97768_2.f90: New test.
|
* go-gcc.cc (Gcc_backend::global_variable_set_init): Cast NULL to
avoid ambiguous overloaded call.
|
gcc/
* cgraph.h (symtab_node::set_section_for_node): Declare new
overload.
(symtab_node::set_section_from_string): Rename from set_section.
(symtab_node::set_section_from_node): Declare.
* symtab.c (symtab_node::set_section_for_node): Define new
overload.
(symtab_node::set_section_from_string): Rename from set_section.
(symtab_node::set_section_from_node): Define.
(symtab_node::set_section): Call renamed set_section_from_string.
(symtab_node::set_section): Call new set_section_from_node.
|
gcc/
* symtab.c (symtab_node::set_section_for_node): Extract reference
counting logic into ...
(retain_section_hash_entry): ... here (new function) and ...
(release_section_hash_entry): ... here (new function).
|
gcc/ChangeLog
* config/i386/i386.h (PTA_MOVDIRI, PTA_MOVDIR64B,
PTA_AMX_TILE, PTA_AMX_INT8, PTA_AMX_BF16, PTA_HRESET):
Formatting.
|
gcc/testsuite
* gcc.target/microblaze/others/strings1.c: Update
to include $LC label.
|
These tests are unsupported on PowerPC.
gcc/testsuite/ChangeLog:
* c-c++-common/zero-scratch-regs-10.c: Skip on powerpc*-*-*.
* c-c++-common/zero-scratch-regs-11.c: Skip on powerpc*-*-*.
* c-c++-common/zero-scratch-regs-5.c: Skip on powerpc*-*-aix*.
* c-c++-common/zero-scratch-regs-8.c: Skip on powerpc*-*-*.
* c-c++-common/zero-scratch-regs-9.c: Skip on powerpc*-*-*.
|
Commit e627cda56865 ("IBM Z: Store long doubles in vector registers
when possible") introduced the HAVE_TF macro, which expands to a logical
"or" of HAVE_ constants. Not all of these constants are available in the
GENERATOR_FILE context, so a hack was used: simply expand to true in
this case, because the actual value matters only during compiler
runtime and not during generation.
However, one aspect of this value matters during generation after all:
whether or not it's a constant, which in this case it appears to be.
This results in incorrect values in insn-flags.h and broken bootstrap
for some configurations.
Fix by using a dummy value that is not a constant.
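A sketch of the idea (the names here are hypothetical, not the actual
s390.h change):
#ifdef GENERATOR_FILE
/* Must not fold to a compile-time constant, or the generators bake a
   wrong value into insn-flags.h; reading an extern is opaque enough.  */
extern int have_tf_dummy;
#define HAVE_TF (have_tf_dummy)
#else
#define HAVE_TF (HAVE_movtf || ...) /* the real HAVE_ conditions */
#endif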
gcc/ChangeLog:
2020-11-10 Ilya Leoshkevich <iii@linux.ibm.com>
* config/s390/s390.h (HAVE_TF): Use opaque value when
GENERATOR_FILE is defined.
|
Currently, when a static_assert fails, we only say "static assertion failed".
It would be more useful if we could also print the expression that
evaluated to false; this is especially useful when the condition uses
template parameters. Consider the motivating example, in which we have
this line:
static_assert(is_same<X, Y>::value);
if this fails, the user has to play dirty games to get the compiler to
print the template arguments. With this patch, we say:
error: static assertion failed
note: 'is_same<int*, int>::value' evaluates to false
which I think is much better. However, always printing the condition that
evaluated to 'false' wouldn't be very useful: e.g. noexcept(fn) is
always parsed to true/false, so we would say "'false' evaluates to false"
which doesn't help. So I wound up only printing the condition when it was
instantiation-dependent, that is, we called finish_static_assert from
tsubst_expr.
Moreover, this patch also improves the diagnostic when the condition
consists of a logical AND. Say you have something like this:
static_assert(fn1() && fn2() && fn3() && fn4() && fn5());
where fn4() evaluates to false and the other ones to true. Highlighting
the whole thing is not that helpful because it won't say which clause
evaluated to false. With the find_failing_clause tweak in this patch
we emit:
error: static assertion failed
6 | static_assert(fn1() && fn2() && fn3() && fn4() && fn5());
  |                                          ~~~^~
so you know right away what's going on. Unfortunately, when you combine
both things, that is, have an instantiation-dependent expr and && in
a static_assert, we can't yet quite point to the clause that failed. It
is because when we tsubstitute something like is_same<X, Y>::value, we
generate a VAR_DECL that doesn't have any location. It would be awesome
if we could wrap it with a location wrapper, but I didn't see anything
obvious.
In passing, I've cleaned up some things:
* use iloc_sentinel when appropriate,
* it's nicer to call contextual_conv_bool instead of the rather verbose
perform_implicit_conversion_flags,
* no need to check for INTEGER_CST before calling integer_zerop.
gcc/cp/ChangeLog:
PR c++/97518
* cp-tree.h (finish_static_assert): Adjust declaration.
* parser.c (cp_parser_static_assert): Pass false to
finish_static_assert.
* pt.c (tsubst_expr): Pass true to finish_static_assert.
* semantics.c (find_failing_clause_r): New function.
(find_failing_clause): New function.
(finish_static_assert): Add a bool parameter. Use
iloc_sentinel. Call contextual_conv_bool instead of
perform_implicit_conversion_flags. Don't check for INTEGER_CST before
calling integer_zerop. Call find_failing_clause and maybe use its
location. Print the original condition or the failing clause if
SHOW_EXPR_P.
gcc/testsuite/ChangeLog:
PR c++/97518
* g++.dg/diagnostic/pr87386.C: Adjust expected output.
* g++.dg/diagnostic/static_assert1.C: New test.
* g++.dg/diagnostic/static_assert2.C: New test.
libcc1/ChangeLog:
PR c++/97518
* libcp1plugin.cc (plugin_add_static_assert): Pass false to
finish_static_assert.
|
A few dg-ice tests.
gcc/testsuite/ChangeLog:
PR c++/52830
PR c++/88982
PR c++/90799
PR c++/87765
PR c++/89565
* g++.dg/cpp0x/constexpr-52830.C: New test.
* g++.dg/cpp0x/vt-88982.C: New test.
* g++.dg/cpp1z/class-deduction76.C: New test.
* g++.dg/cpp1z/constexpr-lambda26.C: New test.
* g++.dg/cpp2a/nontype-class39.C: New test.
|
gcc/
* cgraph.h (symtab_node::get_section): Constify.
(symtab_node::set_section): Declare new overload.
* symtab.c (symtab_node::set_section): Define new overload.
(symtab_node::copy_visibility_from): Use new overload of
symtab_node::set_section.
(symtab_node::resolve_alias): Same.
* tree.h (set_decl_section_name): Declare new overload.
* tree.c (set_decl_section_name): Define new overload.
* tree-emutls.c (get_emutls_init_templ_addr): Same.
* cgraphclones.c (cgraph_node::create_virtual_clone): Use new
overload of symtab_node::set_section.
(cgraph_node::create_version_clone_with_body): Same.
* trans-mem.c (ipa_tm_create_version): Same.
gcc/c
* c-decl.c (merge_decls): Use new overload of
set_decl_section_name.
gcc/cp
* decl.c (duplicate_decls): Use new overload of
set_decl_section_name.
* method.c (use_thunk): Same.
* optimize.c (maybe_clone_body): Same.
* coroutines.cc (act_des_fn): Same.
gcc/d
* decl.cc (finish_thunk): Use new overload of
set_decl_section_name.
|
My previous cleanups to irange::set moved the early exit when
VARYING. This caused poly int varyings to be created with
incorrect min/max.
We can just set varying and exit for all poly ints.
gcc/ChangeLog:
* value-range.cc (irange::set): Early exit for poly ints.
|