2023-12-17c++: Seed namespaces for bindings [PR106363]Nathaniel Shead3-3/+20
Currently the first depset for an EK_BINDING is not seeded. This breaks the attached testcase as then the namespace is not considered referenced yet during streaming, but we've already finished importing. There doesn't seem to be any particular reason I could find for skipping the first depset for bindings, and removing the condition doesn't appear to cause any test failures, so this patch removes that check. PR c++/106363 gcc/cp/ChangeLog: * module.cc (module_state::write_cluster): Don't skip first depset for bindings. gcc/testsuite/ChangeLog: * g++.dg/modules/pr106363_a.C: New test. * g++.dg/modules/pr106363_b.C: New test. Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
2023-12-16analyzer: add sarif properties for bounds checking diagnosticsDavid Malcolm7-0/+208
As a followup to r14-6057-g12b67d1e13b3cf, add SARIF property bags for -Wanalyzer-out-of-bounds, to help with debugging these warnings. This was very helpful with PR analyzer/112792. gcc/analyzer/ChangeLog: * analyzer.cc: Include "tree-pretty-print.h" and "diagnostic-event-id.h". (tree_to_json): New. (diagnostic_event_id_to_json): New. (bit_offset_to_json): New. (byte_offset_to_json): New. * analyzer.h (tree_to_json): New decl. (diagnostic_event_id_to_json): New decl. (bit_offset_to_json): New decl. (byte_offset_to_json): New decl. * bounds-checking.cc: Include "diagnostic-format-sarif.h". (out_of_bounds::maybe_add_sarif_properties): New. (concrete_out_of_bounds::maybe_add_sarif_properties): New. (concrete_past_the_end::maybe_add_sarif_properties): New. (symbolic_past_the_end::maybe_add_sarif_properties): New. * region-model.cc (region_to_value_map::to_json): New. (region_model::to_json): New. * region-model.h (region_to_value_map::to_json): New decl. (region_model::to_json): New decl. * store.cc (bit_range::to_json): New. (byte_range::to_json): New. * store.h (bit_range::to_json): New decl. (byte_range::to_json): New decl. Signed-off-by: David Malcolm <dmalcolm@redhat.com>
2023-12-16json: fix escaping of object keysDavid Malcolm1-40/+54
gcc/ChangeLog: * json.cc (print_escaped_json_string): New, taken from string::print. (object::print): Use it for printing keys. (string::print): Move implementation to print_escaped_json_string. (selftest::test_writing_objects): Add a key containing quote, backslash, and control characters. Signed-off-by: David Malcolm <dmalcolm@redhat.com>
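As an illustration of the escaping being centralised here, a minimal sketch of such a helper, assuming a plain FILE-based printer for brevity (the real gcc/json.cc emits through a pretty_printer and its exact escape set may differ):

    #include <cstdio>

    /* Sketch only: write STR as a double-quoted JSON string, escaping
       quotes, backslashes and control characters.  */
    static void
    print_escaped_json_string (FILE *out, const char *str)
    {
      fputc ('"', out);
      for (const char *p = str; *p; ++p)
        {
          unsigned char c = *p;
          if (c == '"' || c == '\\')
            fprintf (out, "\\%c", c);
          else if (c < 0x20)
            fprintf (out, "\\u%04x", (unsigned) c);  /* control characters */
          else
            fputc (c, out);
        }
      fputc ('"', out);
    }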
2023-12-16libstdc++: Update some baseline_symbols.txt (x32)H.J. Lu1-1/+110
* config/abi/post/x86_64-linux-gnu/x32/baseline_symbols.txt: Updated.
2023-12-16libstdc++: Optimize std::remove_pointer compilation performanceKen Matsui1-1/+7
This patch optimizes the compilation performance of std::remove_pointer by dispatching to the new remove_pointer built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (remove_pointer): Use __remove_pointer built-in trait. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
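The dispatch pattern these commits use looks roughly like the following sketch, assuming the _GLIBCXX_USE_BUILTIN_TRAIT guard already used elsewhere in <type_traits>; the fallback shown is only an approximation of the existing partial specialisations:

    #if _GLIBCXX_USE_BUILTIN_TRAIT(__remove_pointer)
      template<typename _Tp>
        struct remove_pointer
        { using type = __remove_pointer(_Tp); };
    #else
      // Fallback: strip one level of (possibly cv-qualified) pointer.
      template<typename _Tp>
        struct remove_pointer
        { using type = _Tp; };
      template<typename _Tp>
        struct remove_pointer<_Tp*>
        { using type = _Tp; };
      template<typename _Tp>
        struct remove_pointer<_Tp* const>
        { using type = _Tp; };
      // ... likewise for volatile and const volatile pointers.
    #endif

The built-in avoids instantiating the fallback partial specialisations, which is the source of the compile-time saving.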
2023-12-16libstdc++: Optimize std::is_object compilation performanceKen Matsui1-0/+14
This patch optimizes the compilation performance of std::is_object by dispatching to the new __is_object built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (is_object): Use __is_object built-in trait. (is_object_v): Likewise. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
2023-12-16libstdc++: Optimize std::is_function compilation performanceKen Matsui1-2/+21
This patch optimizes the compilation performance of std::is_function by dispatching to the new __is_function built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (is_function): Use __is_function built-in trait. (is_function_v): Likewise. Optimize its implementation. Move this under is_const_v as this depends on is_const_v. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
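The is_const_v dependency mentioned above refers to a known trick: after adding const, only function types and reference types are still not const-qualified. A hedged sketch of that style of fallback (the exact form in the patch may differ):

    // Function types and reference types are the only types for which
    // 'const _Tp' is not const-qualified; exclude references explicitly.
    template<typename _Tp>
      inline constexpr bool is_function_v = !is_const_v<const _Tp>;
    template<typename _Tp>
      inline constexpr bool is_function_v<_Tp&> = false;
    template<typename _Tp>
      inline constexpr bool is_function_v<_Tp&&> = false;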
2023-12-16libstdc++: Optimize std::is_reference compilation performanceKen Matsui1-0/+14
This patch optimizes the compilation performance of std::is_reference by dispatching to the new __is_reference built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (is_reference): Use __is_reference built-in trait. (is_reference_v): Likewise. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
2023-12-16libstdc++: Optimize std::is_member_object_pointer compilation performanceKen Matsui1-1/+16
This patch optimizes the compilation performance of std::is_member_object_pointer by dispatching to the new __is_member_object_pointer built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (is_member_object_pointer): Use __is_member_object_pointer built-in trait. (is_member_object_pointer_v): Likewise. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
2023-12-16libstdc++: Optimize std::is_member_function_pointer compilation performanceKen Matsui1-0/+16
This patch optimizes the compilation performance of std::is_member_function_pointer by dispatching to the new __is_member_function_pointer built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (is_member_function_pointer): Use __is_member_function_pointer built-in trait. (is_member_function_pointer_v): Likewise. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
2023-12-16libstdc++: Optimize std::is_member_pointer compilation performanceKen Matsui1-1/+15
This patch optimizes the compilation performance of std::is_member_pointer by dispatching to the new __is_member_pointer built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (is_member_pointer): Use __is_member_pointer built-in trait. (is_member_pointer_v): Likewise. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
2023-12-16libstdc++: Optimize std::is_scoped_enum compilation performanceKen Matsui1-0/+12
This patch optimizes the compilation performance of std::is_scoped_enum by dispatching to the new __is_scoped_enum built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (is_scoped_enum): Use __is_scoped_enum built-in trait. (is_scoped_enum_v): Likewise. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
2023-12-16libstdc++: Optimize std::is_bounded_array compilation performanceKen Matsui1-0/+5
This patch optimizes the compilation performance of std::is_bounded_array by dispatching to the new __is_bounded_array built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (is_bounded_array_v): Use __is_bounded_array built-in trait. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
2023-12-16libstdc++: Optimize std::is_array compilation performanceKen Matsui1-0/+12
This patch optimizes the compilation performance of std::is_array by dispatching to the new __is_array built-in trait. libstdc++-v3/ChangeLog: * include/std/type_traits (is_array): Use __is_array built-in trait. (is_array_v): Likewise. Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org> Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
2023-12-16analyzer: use bit-level granularity for concrete bounds-checking [PR112792]David Malcolm6-183/+512
PR analyzer/112792 reports false positives from -fanalyzer's bounds-checking on certain packed structs containing bitfields e.g. in the Linux kernel's drivers/dma/idxd/device.c: union msix_perm { struct { u32 rsvd2 : 8; u32 pasid : 20; }; u32 bits; } __attribute__((__packed__)); The root cause is that the bounds-checking is done using byte offsets and ranges; in the above, an access of "pasid" is treated as a 32-bit access starting one byte inside the union, thus accessing byte offsets 1-4 when only offsets 0-3 are valid. This patch updates the bounds-checking to use bit offsets and ranges wherever possible - for concrete offsets and capacities. In the above accessing "pasid" is treated as bits 8-27 of a 32-bit region, fixing the false positive. Symbolic offsets and ranges are still handled at byte granularity. gcc/analyzer/ChangeLog: PR analyzer/112792 * bounds-checking.cc (out_of_bounds::oob_region_creation_event_capacity): Rename "capacity" to "byte_capacity". Layout fix. (out_of_bounds::::add_region_creation_events): Rename "capacity" to "byte_capacity". (class concrete_out_of_bounds): Rename m_out_of_bounds_range to m_out_of_bounds_bits and convert from a byte_range to a bit_range. (concrete_out_of_bounds::get_out_of_bounds_bytes): New. (concrete_past_the_end::concrete_past_the_end): Rename param "byte_bound" to "bit_bound". Initialize m_byte_bound. (concrete_past_the_end::subclass_equal_p): Update for renaming of m_byte_bound to m_bit_bound. (concrete_past_the_end::m_bit_bound): New field. (concrete_buffer_overflow::concrete_buffer_overflow): Convert param "range" from byte_range to bit_range. Rename param "byte_bound" to "bit_bound". (concrete_buffer_overflow::emit): Update for bits vs bytes. (concrete_buffer_overflow::describe_final_event): Split into... (concrete_buffer_overflow::describe_final_event_as_bytes): ...this (concrete_buffer_overflow::describe_final_event_as_bits): ...and this. (concrete_buffer_over_read::concrete_buffer_over_read): Convert param "range" from byte_range to bit_range. Rename param "byte_bound" to "bit_bound". (concrete_buffer_over_read::emit): Update for bits vs bytes. (concrete_buffer_over_read::describe_final_event): Split into... (concrete_buffer_over_read::describe_final_event_as_bytes): ...this (concrete_buffer_over_read::describe_final_event_as_bits): ...and this. (concrete_buffer_underwrite::concrete_buffer_underwrite): Convert param "range" from byte_range to bit_range. (concrete_buffer_underwrite::describe_final_event): Split into... (concrete_buffer_underwrite::describe_final_event_as_bytes): ...this (concrete_buffer_underwrite::describe_final_event_as_bits): ...and this. (concrete_buffer_under_read::concrete_buffer_under_read): Convert param "range" from byte_range to bit_range. (concrete_buffer_under_read::describe_final_event): Split into... (concrete_buffer_under_read::describe_final_event_as_bytes): ...this (concrete_buffer_under_read::describe_final_event_as_bits): ...and this. (region_model::check_region_bounds): Use bits for concrete values, and rename locals to indicate whether we're dealing with bits or bytes. Specifically, replace "num_bytes_sval" with "num_bits_sval", and get it from reg's "get_bit_size_sval". Replace "num_bytes_tree" with "num_bits_tree". Rename "capacity" to "byte_capacity". Rename "cst_capacity_tree" to "cst_byte_capacity_tree". Replace "offset" and "num_bytes_unsigned" with "bit_offset" and "num_bits_unsigned" respectively, converting from byte_offset_t to bit_offset_t. 
Replace "out" and "read_bytes" with "bits_outside" and "read_bits" respectively, converting from byte_range to bit_range. Convert "buffer" from byte_range to bit_range. Replace "byte_bound" with "bit_bound". * region.cc (region::get_bit_size_sval): New. (offset_region::get_bit_offset): New. (offset_region::get_bit_size_sval): New. (sized_region::get_bit_size_sval): New. (bit_range_region::get_bit_size_sval): New. * region.h (region::get_bit_size_sval): New vfunc. (offset_region::get_bit_offset): New decl. (offset_region::get_bit_size_sval): New decl. (sized_region::get_bit_size_sval): New decl. (bit_range_region::get_bit_size_sval): New decl. * store.cc (bit_range::intersects_p): New, based on byte_range::intersects_p. (bit_range::exceeds_p): New, based on byte_range::exceeds_p. (bit_range::falls_short_of_p): New, based on byte_range::falls_short_of_p. (byte_range::intersects_p): Delete. (byte_range::exceeds_p): Delete. (byte_range::falls_short_of_p): Delete. * store.h (bit_range::intersects_p): New overload. (bit_range::exceeds_p): New. (bit_range::falls_short_of_p): New. (byte_range::intersects_p): Delete. (byte_range::exceeds_p): Delete. (byte_range::falls_short_of_p): Delete. gcc/testsuite/ChangeLog: PR analyzer/112792 * c-c++-common/analyzer/out-of-bounds-pr112792.c: New test. Signed-off-by: David Malcolm <dmalcolm@redhat.com>
2023-12-16Fortran: Prevent unwanted finalization with -w option [PR112459]Paul Thomas3-2/+43
2023-12-16 Paul Thomas <pault@gcc.gnu.org> gcc/fortran PR fortran/112459 * trans-array.cc (gfc_trans_array_constructor_value): Replace gfc_notification_std with explicit logical expression that selects F2003/2008 and excludes -std=default/gnu. * trans-expr.cc (gfc_conv_expr): Ditto. gcc/testsuite/ PR fortran/112459 * gfortran.dg/pr112459.f90: New test.
2023-12-16Fortran: Fix problems with class array function selectors [PR112834]Paul Thomas6-6/+109
2023-12-16 Paul Thomas <pault@gcc.gnu.org> gcc/fortran PR fortran/112834 * match.cc (build_associate_name): Fix whitespace issues. (select_type_set_tmp): If the selector is of unknown type, go to the SELECT TYPE selector to see if this is a function and, if the result is available, use its typespec. * parse.cc (parse_associate): Again, use the function result if the type of the selector result is unknown. * trans-stmt.cc (trans_associate_var): The expression has to be of type class, for class_target to be true. Convert and fix class functions. Pass the fixed expression. PR fortran/111853 * resolve.cc (gfc_expression_rank): Avoid null dereference. gcc/testsuite/ PR fortran/112834 * gfortran.dg/associate_63.f90 : New test. PR fortran/111853 * gfortran.dg/pr111853.f90 : New test.
2023-12-16c++: Fix unchecked use of CLASSTYPE_AS_BASE [PR113031]Nathaniel Shead2-1/+36
My previous commit (naively) assumed that a TREE_CODE of RECORD_TYPE or UNION_TYPE was sufficient for optype to be considered a "class type". However, this does not account for e.g. template type parameters of record or union type. This patch corrects to check for CLASS_TYPE_P before checking for as-base conversion. PR c++/113031 gcc/cp/ChangeLog: * constexpr.cc (cxx_fold_indirect_ref_1): Check for CLASS_TYPE before using CLASSTYPE_AS_BASE. gcc/testsuite/ChangeLog: * g++.dg/cpp0x/pr113031.C: New test. Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
2023-12-16[aarch64] Add function multiversioning supportAndrew Carlotti19-124/+1141
This adds initial support for function multiversioning on aarch64 using the target_version and target_clones attributes. This loosely follows the Beta specification in the ACLE [1], although with some differences that still need to be resolved (possibly as follow-up patches). Existing function multiversioning implementations are broken in various ways when used across translation units. This includes placing resolvers in the wrong translation units, and using symbol mangling that allows callers to unintentionally bypass the resolver in some circumstances. Fixing these issues for aarch64 will require modifications to our ACLE specification. It will also require further adjustments to existing middle end code, to facilitate different mangling and resolver placement while preserving existing target behaviours. The list of function multiversioning features specified in the ACLE is also inconsistent with the list of features supported in target option extensions. I intend to resolve some or all of these inconsistencies at a later stage. The target_version attribute is currently only supported in C++, since this is the only frontend with existing support for multiversioning using the target attribute. On the other hand, this patch happens to enable multiversioning with the target_clones attribute in Ada and D, as well as the entire C family, using their existing frontend support. This patch also does not support the following aspects of the Beta specification: - The target_clones attribute should allow an implicit unlisted "default" version. - There should be an option to disable function multiversioning at compile time. - Unrecognised target names in a target_clones attribute should be ignored (with an optional warning). This current patch raises an error instead. [1] https://github.com/ARM-software/acle/blob/main/main/acle.md#function-multi-versioning gcc/ChangeLog: * config/aarch64/aarch64-feature-deps.h (fmv_deps_<FEAT_NAME>): Define aarch64_feature_flags mask for each FMV feature. * config/aarch64/aarch64-option-extensions.def: Use new macros to define FMV feature extensions. * config/aarch64/aarch64.cc (aarch64_option_valid_attribute_p): Check for target_version attribute after processing target attribute. (aarch64_fmv_feature_data): New. (aarch64_parse_fmv_features): New. (aarch64_process_target_version_attr): New. (aarch64_option_valid_version_attribute_p): New. (get_feature_mask_for_version): New. (compare_feature_masks): New. (aarch64_compare_version_priority): New. (build_ifunc_arg_type): New. (make_resolver_func): New. (add_condition_to_bb): New. (dispatch_function_versions): New. (aarch64_generate_version_dispatcher_body): New. (aarch64_get_function_versions_dispatcher): New. (aarch64_common_function_versions): New. (aarch64_mangle_decl_assembler_name): New. (TARGET_OPTION_VALID_VERSION_ATTRIBUTE_P): New implementation. (TARGET_OPTION_EXPANDED_CLONES_ATTRIBUTE): New implementation. (TARGET_OPTION_FUNCTION_VERSIONS): New implementation. (TARGET_COMPARE_VERSION_PRIORITY): New implementation. (TARGET_GENERATE_VERSION_DISPATCHER_BODY): New implementation. (TARGET_GET_FUNCTION_VERSIONS_DISPATCHER): New implementation. (TARGET_MANGLE_DECL_ASSEMBLER_NAME): New implementation. * config/aarch64/aarch64.h (TARGET_HAS_FMV_TARGET_ATTRIBUTE): Set target macro. * config/arm/aarch-common.h (enum aarch_parse_opt_result): Add new value to report duplicate FMV feature. * common/config/aarch64/cpuinfo.h: New file.
libgcc/ChangeLog: * config/aarch64/cpuinfo.c (enum CPUFeatures): Move to shared copy in gcc/common gcc/testsuite/ChangeLog: * gcc.target/aarch64/options_set_17.c: Reorder expected flags. * gcc.target/aarch64/cpunative/native_cpu_0.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_13.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_16.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_17.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_18.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_19.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_20.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_21.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_22.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_6.c: Ditto. * gcc.target/aarch64/cpunative/native_cpu_7.c: Ditto.
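An illustrative use of the two attributes on aarch64 (the feature names are taken from the ACLE Beta specification and are assumptions here; as noted above, the accepted names and semantics may still change):

    // target_version: C++ only for now, one definition per version.
    __attribute__ ((target_version ("default"))) int impl () { return 0; }
    __attribute__ ((target_version ("sve"))) int impl () { return 1; }
    __attribute__ ((target_version ("dotprod"))) int impl () { return 2; }

    // target_clones: also usable from C, Ada and D with this patch.  Note
    // the explicit "default", since an implicit default is not yet
    // supported.
    __attribute__ ((target_clones ("sve+sve2", "dotprod", "default")))
    int cloned (int a, int b) { return a + b; }

    int main () { return impl () + cloned (1, 2); }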
2023-12-16Add support for target_version attributeAndrew Carlotti14-23/+124
This patch adds support for the "target_version" attribute to the middle end and the C++ frontend, which will be used to implement function multiversioning in the aarch64 backend. On targets that don't use the "target" attribute for multiversioning, there is no conflict between the "target" and "target_clones" attributes. This patch therefore makes the mutual exclusion in C-family, D and Ada conditional upon the value of the expanded_clones_attribute target hook. The "target_version" attribute is only added to C++ in this patch, because this is currently the only frontend which supports multiversioning using the "target" attribute. Support for the "target_version" attribute will be extended to C at a later date. Targets that currently use the "target" attribute for function multiversioning (i.e. i386 and rs6000) are not affected by this patch. gcc/ChangeLog: * attribs.cc (decl_attributes): Pass attribute name to target. (is_function_default_version): Update comment to specify incompatibility with target_version attributes. * cgraphclones.cc (cgraph_node::create_version_clone_with_body): Call valid_version_attribute_p for target_version attributes. * defaults.h (TARGET_HAS_FMV_TARGET_ATTRIBUTE): New macro. * target.def (valid_version_attribute_p): New hook. * doc/tm.texi.in: Add new hook. * doc/tm.texi: Regenerate. * multiple_target.cc (create_dispatcher_calls): Remove redundant is_function_default_version check. (expand_target_clones): Use target macro to pick attribute name. * targhooks.cc (default_target_option_valid_version_attribute_p): New. * targhooks.h (default_target_option_valid_version_attribute_p): New. * tree.h (DECL_FUNCTION_VERSIONED): Update comment to include target_version attributes. gcc/c-family/ChangeLog: * c-attribs.cc (attr_target_exclusions): Make target/target_clones exclusion target-dependent. (attr_target_clones_exclusions): Ditto, and add target_version. (attr_target_version_exclusions): New. (c_common_attribute_table): Add target_version. (handle_target_version_attribute): New. (handle_target_attribute): Amend comment. (handle_target_clones_attribute): Ditto. gcc/ada/ChangeLog: * gcc-interface/utils.cc (attr_target_exclusions): Make target/target_clones exclusion target-dependent. (attr_target_clones_exclusions): Ditto. gcc/d/ChangeLog: * d-attribs.cc (attr_target_exclusions): Make target/target_clones exclusion target-dependent. (attr_target_clones_exclusions): Ditto. gcc/cp/ChangeLog: * decl2.cc (check_classfn): Update comment to include target_version attributes.
2023-12-16ada: Improve attribute exclusion handlingAndrew Carlotti1-37/+33
Change the handling of some attribute mutual exclusions to use the generic attribute exclusion lists, and fix some asymmetric exclusions by adding the exclusions for always_inline after noinline or target_clones. Aside from the new always_inline exclusions, the only change in functionality is the choice of warning message displayed. All warnings about attribute mutual exclusions now use the same message. gcc/ada/ChangeLog: * gcc-interface/utils.cc (attr_noinline_exclusions): New. (attr_always_inline_exclusions): Ditto. (attr_target_exclusions): Ditto. (attr_target_clones_exclusions): Ditto. (gnat_internal_attribute_table): Add new exclusion lists. (handle_noinline_attribute): Remove custom exclusion handling. (handle_target_attribute): Ditto. (handle_target_clones_attribute): Ditto.
2023-12-16c-family: Simplify attribute exclusion handlingAndrew Carlotti3-52/+34
This patch changes the handling of mutual exclusions involving the target and target_clones attributes to use the generic attribute exclusion lists. Additionally, the duplicate handling for the always_inline and noinline attribute exclusion is removed. The only change in functionality is the choice of warning message displayed - due to either a change in the wording for mutual exclusion warnings, or a change in the order in which different checks occur. gcc/c-family/ChangeLog: * c-attribs.cc (attr_always_inline_exclusions): New. (attr_target_exclusions): Ditto. (attr_target_clones_exclusions): Ditto. (c_common_attribute_table): Add new exclusion lists. (handle_noinline_attribute): Remove custom exclusion handling. (handle_always_inline_attribute): Ditto. (handle_target_attribute): Ditto. (handle_target_clones_attribute): Ditto. gcc/testsuite/ChangeLog: * g++.target/i386/mvc2.C: * g++.target/i386/mvc3.C:
2023-12-16aarch64: Add cpu feature detection to libgccAndrew Carlotti2-0/+501
This is added to enable function multiversioning, but can also be used directly. The interface is chosen to match that used in LLVM's compiler-rt, to facilitate cross-compiler compatibility. The content of the patch is derived almost entirely from Pavel's prior contributions to compiler-rt/lib/builtins/cpu_model.c. I have made minor changes to align more closely with GCC coding style, and to exclude any code from other LLVM contributors, and am adding this to GCC with Pavel's approval. libgcc/ChangeLog: * config/aarch64/t-aarch64: Include cpuinfo.c * config/aarch64/cpuinfo.c: New file (__init_cpu_features_constructor) New. (__init_cpu_features_resolver) New. (__init_cpu_features) New. Co-authored-by: Pavel Iliin <Pavel.Iliin@arm.com>
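A hedged sketch of the general shape of such a detection routine on Linux (names and feature bits here are simplified stand-ins; the real libgcc/config/aarch64/cpuinfo.c follows the compiler-rt interface and covers many more features):

    #include <stdbool.h>
    #include <sys/auxv.h>    /* getauxval */
    #include <asm/hwcap.h>   /* HWCAP_* bits on aarch64 Linux */

    /* Simplified stand-in for the shared feature structure.  */
    struct {
      unsigned long long features;
      bool initialized;
    } cpu_features;

    #define FEAT_FP  (1ULL << 0)   /* illustrative bit assignments only */
    #define FEAT_SVE (1ULL << 1)

    static void __attribute__ ((constructor))
    init_cpu_features (void)
    {
      unsigned long hwcap = getauxval (AT_HWCAP);
      if (hwcap & HWCAP_FP)
        cpu_features.features |= FEAT_FP;
      if (hwcap & HWCAP_SVE)
        cpu_features.features |= FEAT_SVE;
      cpu_features.initialized = true;
    }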
2023-12-16aarch64: Fix +nopredres, +nols64 and +nomopsAndrew Carlotti3-10/+23
For native cpu feature detection, certain features have no entry in /proc/cpuinfo, so have to be assumed to be present whenever the detected cpu is supposed to support that feature. However, the logic for this was mistakenly implemented by excluding these features from part of aarch64_get_extension_string_for_isa_flags. This function is also used elsewhere when canonicalising explicit feature sets, which may require removing features that are normally implied by the specified architecture version. This change reenables generation of +nopredres, +nols64 and +nomops during canonicalisation, by relocating the misplaced native cpu detection logic. gcc/ChangeLog: * common/config/aarch64/aarch64-common.cc (struct aarch64_option_extension): Remove unused field. (all_extensions): Ditto. (aarch64_get_extension_string_for_isa_flags): Remove filtering of features without native detection. * config/aarch64/driver-aarch64.cc (host_detect_local_cpu): Explicitly add expected features that lack cpuinfo detection. gcc/testsuite/ChangeLog: * gcc.target/aarch64/options_set_28.c: New test.
2023-12-16aarch64: Fix +nocrypto handlingAndrew Carlotti5-15/+43
Additionally, replace all checks for the AARCH64_FL_CRYPTO bit with checks for (AARCH64_FL_AES | AARCH64_FL_SHA2) instead. The value of the AARCH64_FL_CRYPTO bit within isa_flags is now ignored, but it is retained because removing it would make processing the data in option-extensions.def significantly more complex. This bug should have been picked up by an existing test, but a missing newline meant that the pattern incorrectly allowed "+crypto+nocrypto". gcc/ChangeLog: * common/config/aarch64/aarch64-common.cc (aarch64_get_extension_string_for_isa_flags): Fix generation of the "+nocrypto" extension. * config/aarch64/aarch64.h (AARCH64_ISA_CRYPTO): Remove. (TARGET_CRYPTO): Remove. * config/aarch64/aarch64-c.cc (aarch64_update_cpp_builtins): Don't use TARGET_CRYPTO. gcc/testsuite/ChangeLog: * gcc.target/aarch64/options_set_4.c: Add terminating newline. * gcc.target/aarch64/options_set_27.c: New test.
2023-12-16Daily bump.GCC Administrator10-1/+510
2023-12-15[PATCH v4 2/3] RISC-V: Update XCValu constraints to match other vendorsMary Bennett2-9/+10
gcc/ChangeLog: * config/riscv/constraints.md: CVP2 -> CV_alu_pow2. * config/riscv/corev.md: Likewise.
2023-12-15[PATCH v4 1/3] RISC-V: Add support for XCVelw extension in CV32E40PMary Bennett10-0/+60
Spec: github.com/openhwgroup/core-v-sw/blob/master/specifications/corev-builtin-spec.md Contributors: Mary Bennett <mary.bennett@embecosm.com> Nandni Jamnadas <nandni.jamnadas@embecosm.com> Pietra Ferreira <pietra.ferreira@embecosm.com> Charlie Keaney Jessica Mills Craig Blackmore <craig.blackmore@embecosm.com> Simon Cook <simon.cook@embecosm.com> Jeremy Bennett <jeremy.bennett@embecosm.com> Helene Chelin <helene.chelin@embecosm.com> gcc/ChangeLog: * common/config/riscv/riscv-common.cc: Add XCVelw. * config/riscv/corev.def: Likewise. * config/riscv/corev.md: Likewise. * config/riscv/riscv-builtins.cc (AVAIL): Likewise. * config/riscv/riscv-ftypes.def: Likewise. * config/riscv/riscv.opt: Likewise. * doc/extend.texi: Add XCVelw builtin documentation. * doc/sourcebuild.texi: Likewise. gcc/testsuite/ChangeLog: * gcc.target/riscv/cv-elw-elw-compile-1.c: Create test for cv.elw. * lib/target-supports.exp: Add proc for the XCVelw extension.
2023-12-15[PATCH] RISC-V: Add -fno-vect-cost-model to pr112773 testcasePatrick O'Neill1-1/+1
The testcase for pr112773 started passing after r14-6472-g8501edba91e which was before the actual fix. This patch adds -fno-vect-cost-model which prevents the testcase from passing due to the vls change. gcc/testsuite/ChangeLog: * gcc.target/riscv/rvv/autovec/partial/pr112773.c: Add -fno-vect-cost-model. Signed-off-by: Patrick O'Neill <patrick@rivosinc.com>
2023-12-15Re: [PATCH] RISC-V: fix scalar crypto patternsJeff Law15-67/+298
A handful of the scalar crypto instructions are supposed to take a constant integer argument 0..3 inclusive and one should accept 0..10. A suitable constraint was created and used for this purpose (D03 and DsA), but the operand's predicate is "register_operand". That's just wrong. This patch adds a new predicates "const_0_3_operand" and "const_0_10_operand" and fixes the relevant insns to use the appropriate predicate. It drops the now unnecessary constraints. The testsuite was broken in a way that made it consistent with the compiler, so the tests passed, when they really should have been issuing errors all along. This patch adjusts the existing tests so that they all expect a diagnostic on the invalid operand usage (including out of range constants). It adds new tests with proper constants, testing the extremes of valid values. PR target/110201 gcc/ * config/riscv/constraints.md (D03, DsA): Remove unused constraints. * config/riscv/predicates.md (const_0_3_operand): New predicate. (const_0_10_operand): Likewise. * config/riscv/crypto.md (riscv_aes32dsi): Use new predicate. Drop unnecessary constraint. (riscv_aes32dsmi, riscv_aes64im, riscv_aes32esi): Likewise. (riscv_aes32esmi, *riscv_<sm4_op>_si): Likewise. (riscv_<sm4_op>_di_extend, riscv_<sm4_op>_si): Likewise. gcc/testsuite * gcc.target/riscv/zknd32.c: Verify diagnostics are issued for invalid builtin arguments. * gcc.target/riscv/zknd64.c: Likewise. * gcc.target/riscv/zkne32.c: Likewise. * gcc.target/riscv/zkne64.c: Likewise. * gcc.target/riscv/zksed32.c: Likewise. * gcc.target/riscv/zksed64.c: Likewise. * gcc.target/riscv/zknd32-2.c: New test * gcc.target/riscv/zknd64-2.c: Likewise. * gcc.target/riscv/zkne32-2.c: Likewise. * gcc.target/riscv/zkne64-2.c: Likewise. * gcc.target/riscv/zksed32-2.c: Likewise. * gcc.target/riscv/zksed64-2.c: Likewise. Co-authored-by: Liao Shihua <shihua@iscas.ac.cn>
2023-12-15fortran: Update degree trigs documentation.Jerry DeLisle2-22/+19
This is only some cleanup. gcc/fortran/ChangeLog: PR fortran/112783 * intrinsic.texi: Fix where no COMPLEX allowed. * invoke.texi: Clarify -fdec-math.
2023-12-15aarch64: Add new load/store pair fusion pass.Alex Coplan7-2/+2763
This adds a new aarch64-specific RTL-SSA pass dedicated to forming load and store pairs (LDPs and STPs). As a motivating example for the kind of thing this improves, take the following testcase: extern double c[20]; double f(double x) { double y = x*x; y += c[16]; y += c[17]; y += c[18]; y += c[19]; return y; } for which we currently generate (at -O2): f: adrp x0, c add x0, x0, :lo12:c ldp d31, d29, [x0, 128] ldr d30, [x0, 144] fmadd d0, d0, d0, d31 ldr d31, [x0, 152] fadd d0, d0, d29 fadd d0, d0, d30 fadd d0, d0, d31 ret but with the pass, we generate: f: .LFB0: adrp x0, c add x0, x0, :lo12:c ldp d31, d29, [x0, 128] fmadd d0, d0, d0, d31 ldp d30, d31, [x0, 144] fadd d0, d0, d29 fadd d0, d0, d30 fadd d0, d0, d31 ret The pass is local (only considers a BB at a time). In theory, it should be possible to extend it to run over EBBs, at least in the case of pure (MEM_READONLY_P) loads, but this is left for future work. The pass works by identifying two kinds of bases: tree decls obtained via MEM_EXPR, and RTL register bases in the form of RTL-SSA def_infos. If a candidate memory access has a MEM_EXPR base, then we track it via this base, and otherwise if it is of a simple reg + <imm> form, we track it via the RTL-SSA def_info for the register. For each BB, for a given kind of base, we build up a hash table mapping the base to an access_group. The access_group data structure holds a list of accesses at each offset relative to the same base. It uses a splay tree to support efficient insertion (while walking the bb), and the nodes are chained using a linked list to support efficient iteration (while doing the transformation). For each base, we then iterate over the access_group to identify adjacent accesses, and try to form load/store pairs for those insns that access adjacent memory. The pass is currently run twice, both before and after register allocation. The first copy of the pass is run late in the pre-RA RTL pipeline, immediately after sched1, since it was found that sched1 was increasing register pressure when the pass was run before. The second copy of the pass runs immediately before peephole2, so as to get any opportunities that the existing ldp/stp peepholes can handle. There are some cases that we punt on before RA, e.g. accesses relative to eliminable regs (such as the soft frame pointer). We do this since we can't know the elimination offset before RA, and we want to avoid the RA reloading the offset (due to being out of ldp/stp immediate range) as this can generate worse code. The post-RA copy of the pass is there to pick up the crumbs that were left behind / things we punted on in the pre-RA pass. Among other things, it's needed to handle accesses relative to the stack pointer. It can also handle code that didn't exist at the time the pre-RA pass was run (spill code, prologue/epilogue code). This is an initial implementation, and there are (among other possible improvements) the following notable caveats / missing features that are left for future work, but could give further improvements: - Moving accesses between BBs within in an EBB, see above. - Out-of-range opportunities: currently the pass refuses to form pairs if there isn't a suitable base register with an immediate in range for ldp/stp, but it can be profitable to emit anchor addresses in the case that there are four or more out-of-range nearby accesses that can be formed into pairs. This is handled by the current ldp/stp peepholes, so it would be good to support this in the future. 
- Discovery: currently we prioritize MEM_EXPR bases over RTL bases, which can lead to us missing opportunities in the case that two accesses have distinct MEM_EXPR bases (i.e. different DECLs) but they are still adjacent in memory (e.g. adjacent variables on the stack). I hope to address this for GCC 15, hopefully getting to the point where we can remove the ldp/stp peepholes and scheduling hooks. Furthermore it would be nice to make the pass aware of section anchors (adding these as a third kind of base) allowing merging accesses to adjacent variables within the same section. gcc/ChangeLog: * config.gcc: Add aarch64-ldp-fusion.o to extra_objs for aarch64. * config/aarch64/aarch64-passes.def: Add copies of pass_ldp_fusion before and after RA. * config/aarch64/aarch64-protos.h (make_pass_ldp_fusion): Declare. * config/aarch64/aarch64.opt (-mearly-ldp-fusion): New. (-mlate-ldp-fusion): New. (--param=aarch64-ldp-alias-check-limit): New. (--param=aarch64-ldp-writeback): New. * config/aarch64/t-aarch64: Add rule for aarch64-ldp-fusion.o. * config/aarch64/aarch64-ldp-fusion.cc: New file. * doc/invoke.texi (AArch64 Options): Document new -m{early,late}-ldp-fusion options.
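A rough sketch of the per-base grouping described above, with a std::map standing in for the splay tree plus chained list used by the pass (all names here are invented for illustration):

    #include <cstdint>
    #include <map>
    #include <vector>

    // One candidate access: its byte offset from the shared base, plus an
    // opaque handle for the insn (rtl-ssa insn_info * in the real pass).
    struct access_record
    {
      int64_t offset;
      void *insn;
    };

    // All accesses in a BB sharing one base (a MEM_EXPR decl or an RTL reg
    // def), keyed by offset so adjacent accesses are cheap to find when
    // looking for pairable loads/stores.
    struct access_group
    {
      std::map<int64_t, std::vector<access_record>> by_offset;

      void add (const access_record &r) { by_offset[r.offset].push_back (r); }
    };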
2023-12-15aarch64: Rewrite non-writeback ldp/stp patternsAlex Coplan8-334/+293
This patch overhauls the load/store pair patterns with two main goals: 1. Fixing a correctness issue (the current patterns are not RA-friendly). 2. Allowing more flexibility in which operand modes are supported, and which combinations of modes are allowed in the two arms of the load/store pair, while reducing the number of patterns required both in the source and in the generated code. The correctness issue (1) is due to the fact that the current patterns have two independent memory operands tied together only by a predicate on the insns. Since LRA only looks at the constraints, one of the memory operands can get reloaded without the other one being changed, leading to the insn becoming unrecognizable after reload. We fix this issue by changing the patterns such that they only ever have one memory operand representing the entire pair. For the store case, we use an unspec to logically concatenate the register operands before storing them. For the load case, we use unspecs to extract the "lanes" from the pair mem, with the second occurrence of the mem matched using a match_dup (such that there is still really only one memory operand as far as the RA is concerned). In terms of the modes used for the pair memory operands, we canonicalize these to V2x4QImode, V2x8QImode, and V2x16QImode. These modes have not only the correct size but also correct alignment requirement for a memory operand representing an entire load/store pair. Unlike the other two, V2x4QImode didn't previously exist, so had to be added with the patch. As with the previous patch generalizing the writeback patterns, this patch aims to be flexible in the combinations of modes supported by the patterns without requiring a large number of generated patterns by using distinct mode iterators. The new scheme means we only need a single (generated) pattern for each load/store operation of a given operand size. For the 4-byte and 8-byte operand cases, we use the GPI iterator to synthesize the two patterns. The 16-byte case is implemented as a separate pattern in the source (due to only having a single possible alternative). Since the UNSPEC patterns can't be interpreted by the dwarf2cfi code, we add REG_CFA_OFFSET notes to the store pair insns emitted by aarch64_save_callee_saves, so that correct CFI information can still be generated. Furthermore, we now unconditionally generate these CFA notes on frame-related insns emitted by aarch64_save_callee_saves. This is done in case that the load/store pair pass forms these into pairs, in which case the CFA notes would be needed. We also adjust the ldp/stp peepholes to generate the new form. This is done by switching the generation to use the aarch64_gen_{load,store}_pair interface, making it easier to change the form in the future if needed. (Likewise, the upcoming aarch64 load/store pair pass also makes use of this interface). This patch also adds an "ldpstp" attribute to the non-writeback load/store pair patterns, which is used by the post-RA load/store pair pass to identify existing patterns and see if they can be promoted to writeback variants. One potential concern with using unspecs for the patterns is that it can block optimization by the generic RTL passes. This patch series tries to mitigate this in two ways: 1. The pre-RA load/store pair pass runs very late in the pre-RA pipeline. 2. A later patch in the series adjusts the aarch64 mem{cpy,set} expansion to emit individual loads/stores instead of ldp/stp. 
These should then be formed back into load/store pairs much later in the RTL pipeline by the new load/store pair pass. gcc/ChangeLog: * config/aarch64/aarch64-ldpstp.md: Abstract ldp/stp representation from peepholes, allowing use of new form. * config/aarch64/aarch64-modes.def (V2x4QImode): Define. * config/aarch64/aarch64-protos.h (aarch64_finish_ldpstp_peephole): Declare. (aarch64_swap_ldrstr_operands): Delete declaration. (aarch64_gen_load_pair): Adjust parameters. (aarch64_gen_store_pair): Likewise. * config/aarch64/aarch64-simd.md (load_pair<DREG:mode><DREG2:mode>): Delete. (vec_store_pair<DREG:mode><DREG2:mode>): Delete. (load_pair<VQ:mode><VQ2:mode>): Delete. (vec_store_pair<VQ:mode><VQ2:mode>): Delete. * config/aarch64/aarch64.cc (aarch64_pair_mode_for_mode): New. (aarch64_gen_store_pair): Adjust to use new unspec form of stp. Drop second mem from parameters. (aarch64_gen_load_pair): Likewise. (aarch64_pair_mem_from_base): New. (aarch64_save_callee_saves): Emit REG_CFA_OFFSET notes for frame-related saves. Adjust call to aarch64_gen_store_pair (aarch64_restore_callee_saves): Adjust calls to aarch64_gen_load_pair to account for change in interface. (aarch64_process_components): Likewise. (aarch64_classify_address): Handle 32-byte pair mems in LDP_STP_N case. (aarch64_print_operand): Likewise. (aarch64_copy_one_block_and_progress_pointers): Adjust calls to account for change in aarch64_gen_{load,store}_pair interface. (aarch64_set_one_block_and_progress_pointer): Likewise. (aarch64_finish_ldpstp_peephole): New. (aarch64_gen_adjusted_ldpstp): Adjust to use generation helper. * config/aarch64/aarch64.md (ldpstp): New attribute. (load_pair_sw_<SX:mode><SX2:mode>): Delete. (load_pair_dw_<DX:mode><DX2:mode>): Delete. (load_pair_dw_<TX:mode><TX2:mode>): Delete. (*load_pair_<ldst_sz>): New. (*load_pair_16): New. (store_pair_sw_<SX:mode><SX2:mode>): Delete. (store_pair_dw_<DX:mode><DX2:mode>): Delete. (store_pair_dw_<TX:mode><TX2:mode>): Delete. (*store_pair_<ldst_sz>): New. (*store_pair_16): New. (*load_pair_extendsidi2_aarch64): Adjust to use new form. (*zero_extendsidi2_aarch64): Likewise. * config/aarch64/iterators.md (VPAIR): New. * config/aarch64/predicates.md (aarch64_mem_pair_operand): Change to a special predicate derived from aarch64_mem_pair_operator.
2023-12-15aarch64: Generalize writeback ldp/stp patternsAlex Coplan4-118/+261
Thus far the writeback forms of ldp/stp have been exclusively used in prologue and epilogue code for saving/restoring of registers to/from the stack. As such, forms of ldp/stp that weren't needed for prologue/epilogue code weren't supported by the aarch64 backend. This patch generalizes the load/store pair writeback patterns to allow: - Base registers other than the stack pointer. - Modes that weren't previously supported. - Combinations of distinct modes provided they have the same size. - Pre/post variants that weren't previously needed in prologue/epilogue code. We make quite some effort to avoid a combinatorial explosion in the number of patterns generated (and those in the source) by making extensive use of special predicates. An updated version of the upcoming ldp/stp pass can generate the writeback forms, so this patch is motivated by that. This patch doesn't add zero-extending or sign-extending forms of the writeback patterns; that is left for future work. gcc/ChangeLog: * config/aarch64/aarch64-protos.h (aarch64_ldpstp_operand_mode_p): Declare. * config/aarch64/aarch64.cc (aarch64_gen_storewb_pair): Build RTL directly instead of invoking named pattern. (aarch64_gen_loadwb_pair): Likewise. (aarch64_ldpstp_operand_mode_p): New. * config/aarch64/aarch64.md (loadwb_pair<GPI:mode>_<P:mode>): Replace with ... (*loadwb_post_pair_<ldst_sz>): ... this. Generalize as described in cover letter. (loadwb_pair<GPF:mode>_<P:mode>): Delete (superseded by the above). (*loadwb_post_pair_16): New. (*loadwb_pre_pair_<ldst_sz>): New. (loadwb_pair<TX:mode>_<P:mode>): Delete. (*loadwb_pre_pair_16): New. (storewb_pair<GPI:mode>_<P:mode>): Replace with ... (*storewb_pre_pair_<ldst_sz>): ... this. Generalize as described in cover letter. (*storewb_pre_pair_16): New. (storewb_pair<GPF:mode>_<P:mode>): Delete. (*storewb_post_pair_<ldst_sz>): New. (storewb_pair<TX:mode>_<P:mode>): Delete. (*storewb_post_pair_16): New. * config/aarch64/predicates.md (aarch64_mem_pair_operator): New. (pmode_plus_operator): New. (aarch64_ldp_reg_operand): New. (aarch64_stp_reg_operand): New.
2023-12-15aarch64: Fix up printing of ldp/stp with -msve-vector-bits=128Alex Coplan1-1/+7
Later patches allow using SVE modes in ldp/stp with -msve-vector-bits=128, so we need to make sure that we don't use SVE addressing modes when printing the address for the ldp/stp. This patch does that. gcc/ChangeLog: * config/aarch64/aarch64.cc (aarch64_print_address_internal): Handle SVE modes when printing ldp/stp addresses.
2023-12-15aarch64: Fix up aarch64_print_operand xzr/wzr caseAlex Coplan2-2/+11
This adjusts aarch64_print_operand to recognize zero rtxes in modes other than VOIDmode. This allows us to use xzr/wzr for zero vectors, for example. We extract the test into a helper function, aarch64_const_zero_rtx_p, since this predicate is needed by later patches. gcc/ChangeLog: * config/aarch64/aarch64-protos.h (aarch64_const_zero_rtx_p): New. * config/aarch64/aarch64.cc (aarch64_const_zero_rtx_p): New. Use it ... (aarch64_print_operand): ... here. Recognize CONST0_RTXes in modes other than VOIDmode.
2023-12-15aarch64, testsuite: Fix up pr103147-10.[cC]Alex Coplan2-2/+2
This disables scheduling in the pr103147-10 tests. The tests use check-function-bodies, and upcoming changes lead to a different schedule. gcc/testsuite/ChangeLog: * g++.target/aarch64/pr103147-10.C: Add -fno-schedule-insns{,2} to dg-options. * gcc.target/aarch64/pr103147-10.c: Likewise.
2023-12-15aarch64, testsuite: Allow ldp/stp on SVE regs with -msve-vector-bits=128Alex Coplan2-0/+61
Later patches in the series allow ldp and stp to use SVE modes if -msve-vector-bits=128 is provided. This patch therefore adjusts tests that pass -msve-vector-bits=128 to allow ldp/stp to save/restore SVE registers. gcc/testsuite/ChangeLog: * gcc.target/aarch64/sve/pcs/stack_clash_1_128.c: Allow ldp/stp saves of SVE registers. * gcc.target/aarch64/sve/pcs/struct_3_128.c: Likewise.
2023-12-15aarch64, testsuite: Fix up auto-init-padding testsAlex Coplan5-12/+16
The tests currently depend on memcpy lowering forming stps at -O0, but we no longer want to form stps during memcpy lowering, but instead in the upcoming load/store pair fusion pass. This patch therefore tweaks affected tests to enable optimizations (-O1), and adjusts the tests to avoid parts of the structures being optimized away where necessary. gcc/testsuite/ChangeLog: * gcc.target/aarch64/auto-init-padding-1.c: Add -O to options, adjust test to work with optimizations enabled. * gcc.target/aarch64/auto-init-padding-2.c: Add -O to options. * gcc.target/aarch64/auto-init-padding-3.c: Add -O to options, adjust test to work with optimizations enabled. * gcc.target/aarch64/auto-init-padding-4.c: Likewise. * gcc.target/aarch64/auto-init-padding-9.c: Likewise.
2023-12-15[PATCH] RISC-V: Add Zvfbfmin extension to the -march= optionXiao Zeng6-0/+104
This patch adds a new sub-extension (aka Zvfbfmin) to the -march= option. It introduces a new data type, BF16. Depending on the usage scenario, the Zvfbfmin extension may depend on 'V' or 'Zve32f'. This patch only implements the dependency for the Embedded Processor scenario; for the Application Processor scenario, the dependent 'V' extension must be specified explicitly. More information about Zvfbfmin can be found in the spec document below. https://github.com/riscv/riscv-bfloat16/releases/download/20231027/riscv-bfloat16.pdf gcc/ChangeLog: * common/config/riscv/riscv-common.cc: (riscv_implied_info): Add zvfbfmin item. (riscv_ext_version_table): Ditto. (riscv_ext_flag_table): Ditto. * config/riscv/riscv.opt: (MASK_ZVFBFMIN): New macro. (MASK_VECTOR_ELEN_BF_16): Ditto. (TARGET_ZVFBFMIN): Ditto. gcc/testsuite/ChangeLog: * gcc.target/riscv/arch-31.c: New test. * gcc.target/riscv/arch-32.c: New test. * gcc.target/riscv/predef-32.c: New test. * gcc.target/riscv/predef-33.c: New test.
2023-12-15PR modula2/112946 ICE assignment of string to enumeration or setGaius Mulley7-107/+324
This patch introduces type checking during FoldBecomes and also adds set/string/enum checking to the type checker. FoldBecomes has been re-written, tidied up and re-factored. gcc/m2/ChangeLog: PR modula2/112946 * gm2-compiler/M2Check.mod (checkConstMeta): New procedure function. (checkConstEquivalence): New procedure function. (doCheckPair): Add call to checkConstEquivalence. * gm2-compiler/M2GenGCC.mod (ResolveConstantExpressions): Call FoldBecomes with reduced parameters. (FoldBecomes): Re-write. (TryDeclareConst): New procedure. (RemoveQuads): New procedure. (DeclaredOperandsBecomes): New procedure function. (TypeCheckBecomes): New procedure function. (PerformFoldBecomes): New procedure. * gm2-compiler/M2Range.mod (FoldAssignment): Call AssignmentTypeCompatible to check des expr compatibility. * gm2-compiler/M2SymInit.mod (CheckReadBeforeInitQuad): Remove parameter lst. (FilterCheckReadBeforeInitQuad): Remove parameter lst. (CheckReadBeforeInitFirstBasicBlock): Remove parameter lst. Call FilterCheckReadBeforeInitQuad without lst. gcc/testsuite/ChangeLog: PR modula2/112946 * gm2/iso/fail/badassignment.mod: New test. * gm2/iso/fail/badexpression.mod: New test. * gm2/iso/fail/badexpression2.mod: New test. Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
2023-12-15c++: section attribute on templates [PR70435, PR88061]Patrick Palka6-0/+59
The section attribute currently has no effect on templates because the call to set_decl_section_name only happens at parse time (on the dependent decl) and not also at instantiation time. This patch fixes this by propagating the section name from the template to the instantiation. PR c++/70435 PR c++/88061 gcc/cp/ChangeLog: * pt.cc (tsubst_function_decl): Propagate DECL_SECTION_NAME via set_decl_section_name. (tsubst_decl) <case VAR_DECL>: Likewise. gcc/testsuite/ChangeLog: * g++.dg/ext/attr-section1.C: New test. * g++.dg/ext/attr-section1a.C: New test. * g++.dg/ext/attr-section2.C: New test. * g++.dg/ext/attr-section2a.C: New test. * g++.dg/ext/attr-section2b.C: New test.
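A small example of the kind of code this affects (illustrative; presumably similar to the new tests): before the fix, the attribute was dropped on instantiation, so f<int> ended up in the default text section rather than .mysection.

    template<typename T>
    __attribute__ ((section (".mysection")))
    void f () { }

    // With the fix, this instantiation is emitted in .mysection.
    template void f<int> ();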
2023-12-15c++: abi_tag attribute on templates [PR109715]Patrick Palka3-0/+38
We need to look through TEMPLATE_DECL when looking up the abi_tag attribute (as with other function/variable declaration attributes). PR c++/109715 gcc/cp/ChangeLog: * mangle.cc (get_abi_tags): Strip TEMPLATE_DECL before looking up the abi_tag attribute. gcc/testsuite/ChangeLog: * g++.dg/abi/abi-tag25.C: New test. * g++.dg/abi/abi-tag25a.C: New test.
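For example (illustrative only):

    template<typename T>
    __attribute__ ((abi_tag ("mytag")))
    T g (T x) { return x; }

    // With the fix, the mangled name of g<int> carries the [abi:mytag]
    // annotation, just as it would for a non-template function.
    int use () { return g (42); }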
2023-12-15Fix tests for gompAndre Vieira5-20/+9
This is to fix testisms initially introduced by: commit f5fc001a84a7dbb942a6252b3162dd38b4aae311 Author: Andre Vieira <andre.simoesdiasvieira@arm.com> Date: Mon Dec 11 14:24:41 2023 +0000 aarch64: enable mixed-types for aarch64 simdclones gcc/testsuite/ChangeLog: * gcc.dg/gomp/pr87887-1.c: Fixed test. * gcc.dg/gomp/pr89246-1.c: Likewise. * gcc.dg/gomp/simd-clones-2.c: Likewise. libgomp/ChangeLog: * testsuite/libgomp.c/declare-variant-1.c: Fixed test. * testsuite/libgomp.fortran/declare-simd-1.f90: Likewise.
2023-12-15libstdc++: Fix std::print test case for WindowsJonathan Wakely2-2/+18
libstdc++-v3/ChangeLog: * src/c++23/print.cc (__write_to_terminal) [_WIN32]: If handle does not refer to the console then just write to it using normal file I/O. * testsuite/27_io/print/2.cc (as_printed_to_terminal): Print error message on failure. (test_utf16_transcoding): Adjust for as_printed_to_terminal modifying its argument.
2023-12-15libstdc++: Simplify std::vprint_unicode for non-Windows targetsJonathan Wakely2-11/+30
Since we don't need to do anything special to print Unicode on non-Windows targets, we might as well just use std::vprint_nonunicode to implement std::vprint_unicode. Removing the duplicated code should reduce code size in cases where those calls aren't inlined. Also use an RAII type for the unused case where a non-Windows target calls __open_terminal(streambuf*) and needs to fclose the result. This makes the code futureproof in case we ever start using the __write_terminal function for non-Windows targets. libstdc++-v3/ChangeLog: * include/std/ostream (vprint_unicode) [_WIN32]: Use RAII guard. (vprint_unicode) [!_WIN32]: Just call vprint_nonunicode. * include/std/print (vprint_unicode) [!_WIN32]: Likewise.
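A minimal sketch of the non-Windows simplification, assuming the standard signatures from <ostream> (the real libstdc++ definition also deals with stream state and error handling):

    #include <format>
    #include <ostream>
    #include <string_view>

    void
    vprint_unicode_sketch (std::ostream &os, std::string_view fmt,
                           std::format_args args)
    {
    #ifdef _WIN32
      // ... transcode and write to the console if os refers to a terminal ...
    #else
      // Nothing Unicode-specific is needed, so reuse the non-unicode path.
      std::vprint_nonunicode (os, fmt, args);
    #endif
    }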
2023-12-15libstdc++: Do not add padding for std::print to std::ostreamJonathan Wakely2-48/+8
Tim Song pointed out that although std::print behaves as a formatted output function, it does "determine padding" using the stream's flags. libstdc++-v3/ChangeLog: * include/std/ostream (vprint_nonunicode, vprint_unicode): Do not insert padding. * testsuite/27_io/basic_ostream/print/1.cc: Adjust expected behaviour.
2023-12-15libatomic: Enable lock-free 128-bit atomics on AArch64Wilco Dijkstra2-50/+161
Enable lock-free 128-bit atomics on AArch64. This is backwards compatible with existing binaries (as for these GCC always calls into libatomic, so all 128-bit atomic uses in a process are switched), gives better performance than locking atomics and is what most users expect. 128-bit atomic loads use a load/store exclusive loop if LSE2 is not supported. This results in an implicit store which is invisible to software as long as the given address is writeable (which will be true when using atomics in real code). This doesn't yet change __atomic_is_lock_free even though all atomics are finally lock-free on AArch64. libatomic: * config/linux/aarch64/atomic_16.S: Implement lock-free ARMv8.0 atomics. (libat_exchange_16): Merge RELEASE and ACQ_REL/SEQ_CST cases. * config/linux/aarch64/host-config.h: Use atomic_16.S for baseline v8.0.
2023-12-15AArch64: Add inline memmove expansionWilco Dijkstra6-113/+123
Add support for inline memmove expansions. The generated code is the same as for memcpy, except that all loads are emitted before stores rather than being interleaved. The maximum size is 256 bytes, which requires at most 16 registers. gcc/ChangeLog: * config/aarch64/aarch64.opt (aarch64_mops_memmove_size_threshold): Change default. * config/aarch64/aarch64.md (cpymemdi): Add a parameter. (movmemdi): Call aarch64_expand_cpymem. * config/aarch64/aarch64.cc (aarch64_copy_one_block): Rename function, simplify, support storing generated loads/stores. (aarch64_expand_cpymem): Support expansion of memmove. * config/aarch64/aarch64-protos.h (aarch64_expand_cpymem): Add bool arg. gcc/testsuite/ChangeLog: * gcc.target/aarch64/memmove.c: Add new test. * gcc.target/aarch64/memmove2.c: Likewise.
2023-12-15RISC-V: Fix vmerge optimization bug in vec_perm vectorizationJuzhe-Zhong2-8/+90
This patch fixes the following FAILs in "full coverage" testing: Running target riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax FAIL: gcc.dg/vect/vect-strided-mult-char-ls.c -flto -ffat-lto-objects execution test FAIL: gcc.dg/vect/vect-strided-mult-char-ls.c execution test FAIL: gcc.dg/vect/vect-strided-u8-i2.c -flto -ffat-lto-objects execution test FAIL: gcc.dg/vect/vect-strided-u8-i2.c execution test The root cause is vmerge optimization on this following IR: _45 = VEC_PERM_EXPR <vect__3.13_47, vect__4.14_46, { 0, 257, 2, 259, 4, 261, 6, 263, 8, 265, 10, 267, 12, 269, 14, 271, 16, 273, 18, 275, 20, 277, 22, 279, 24, 281, 26, 283, 28, 285, 30, 287, 32, 289, 34, 291, 36, 293, 38, 295, 40, 297, 42, 299, 44, 301, 46, 303, 48, 305, 50, 307, 52, 309, 54, 311, 56, 313, 58, 315, 60, 317, 62, 319, 64, 321, 66, 323, 68, 325, 70, 327, 72, 329, 74, 331, 76, 333, 78, 335, 80, 337, 82, 339, 84, 341, 86, 343, 88, 345, 90, 347, 92, 349, 94, 351, 96, 353, 98, 355, 100, 357, 102, 359, 104, 361, 106, 363, 108, 365, 110, 367, 112, 369, 114, 371, 116, 373, 118, 375, 120, 377, 122, 379, 124, 381, 126, 383, 128, 385, 130, 387, 132, 389, 134, 391, 136, 393, 138, 395, 140, 397, 142, 399, 144, 401, 146, 403, 148, 405, 150, 407, 152, 409, 154, 411, 156, 413, 158, 415, 160, 417, 162, 419, 164, 421, 166, 423, 168, 425, 170, 427, 172, 429, 174, 431, 176, 433, 178, 435, 180, 437, 182, 439, 184, 441, 186, 443, 188, 445, 190, 447, 192, 449, 194, 451, 196, 453, 198, 455, 200, 457, 202, 459, 204, 461, 206, 463, 208, 465, 210, 467, 212, 469, 214, 471, 216, 473, 218, 475, 220, 477, 222, 479, 224, 481, 226, 483, 228, 485, 230, 487, 232, 489, 234, 491, 236, 493, 238, 495, 240, 497, 242, 499, 244, 501, 246, 503, 248, 505, 250, 507, 252, 509, 254, 511 }>; It's obvious we have many index > 255 in shuffle indice. Here we use vmerge optimizaiton which is available but incorrect codgen cause run fail. The bug codegen: vsetvli zero,a4,e8,m8,ta,ma vmsltu.vi v0,v0,0 -> it should be 256 instead of 0, but since it is EEW8 vector, 256 is not a available value that 8bit register can hold it. vmerge.vvm v8,v8,v16,v0 After this patch: vmv.v.x v0,a6 vmerge.vvm v8,v8,v16,v0 gcc/ChangeLog: * config/riscv/riscv-v.cc (shuffle_merge_patterns): Fix bug. gcc/testsuite/ChangeLog: * gcc.target/riscv/rvv/autovec/bug-1.c: New test.