path: root/gcc
Age | Commit message | Author | Files | Lines
2021-10-13  Refactor HIR to use new Mutability enum  [David Faust, 19 files, -111/+163]
Introduce a new header rust/util/rust-common.h and move the enum previously named Rust::TyTy::TypeMutability there, as Rust::Mutability. Update the following objects to use the Mutability enum rather than a bool: - HIR::IdentifierPattern - HIR::ReferencePattern - HIR::StructPatternFieldIdent - HIR::BorrowExpr - HIR::RawPointerType - HIR::ReferenceType - HIR::StaticItem - HIR::ExternalStaticItem Also add a HIR::SelfParam::get_mut () helper, mapping its internal custom mutability to the common Rust::Mutability. Fixes: #731
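A minimal sketch of the shared enum this refactor describes, and of why it reads better than a bare bool at call sites. The enumerator names and the helper below are illustrative assumptions, not the exact gccrs code:

```cpp
#include <cassert>

namespace Rust {

// Shared mutability enum as described in the commit; the enumerator
// names Imm/Mut are assumptions for illustration.
enum class Mutability
{
  Imm,
  Mut
};

// Before: a bare flag, so a call site reads 'ReferenceTypeBool{true}'
// and the reader must remember what 'true' means.
struct ReferenceTypeBool { bool is_mut; };

// After: 'ReferenceTypeEnum{Mutability::Mut}' is self-documenting.
struct ReferenceTypeEnum { Mutability mut; };

// Helper mapping the enum back to a bool where one is still needed.
inline bool is_mutable (Mutability m) { return m == Mutability::Mut; }

} // namespace Rust
```

The same enum can then back HIR::SelfParam::get_mut () and the other listed nodes instead of each carrying its own bool.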
2021-10-13  Merge #728  [bors[bot], 3 files, -23/+13]
728: Remove `AST::BlockExpr` lambda and add `get_statements` r=philberty a=wan-nyan-wan This PR fixes #724. This patch removes the lambda iterators in `AST::BlockExpr` and replaces them with `get_statements`. These lambda iterators need to be removed because they make working with the IRs more complex for static analysis. Co-authored-by: wan-nyan-wan <distributed.system.love@gmail.com>
2021-10-13  remove AST::BlockExpr lambda and add get_statements  [wan-nyan-wan, 3 files, -23/+13]
Signed-off-by: Kazuki Hanai <distributed.system.love@gmail.com>
2021-10-13  Merge #710 #727  [bors[bot], 17 files, -300/+430]
710: Ensure for Coercion Sites we emit the code necessary r=philberty a=philberty Coercion sites in Rust can require extra code generation, for CallExpression arguments for example. This ensures we detect those cases and emit the extra code necessary. Please read the individual commit messages for more detail on how this works. Fixes #700 #708 #709 727: Remove lambda iterators in various HIR classes r=philberty a=dafaust (This is a revision of #726 with formatting fixes) This patch removes the lambda iterators used in various HIR objects. These iterators make interacting with the IR for static analysis more difficult. Instead, get_X () helpers are added for accessing elements, and uses of the iterators are replaced with for loops. The following objects are adjusted in this patch: - HIR::ArrayElemsValues - HIR::TupleExpr - HIR::StructExprField - HIR::StructStruct - HIR::TupleStruct Fixes: #703 Fixes: #704 Fixes: #705 Fixes: #706 Fixes: #707 Co-authored-by: Philip Herron <philip.herron@embecosm.com> Co-authored-by: David Faust <david.faust@oracle.com>
2021-10-12  Refactor TyTy with new Mutability enum  [David Faust, 9 files, -36/+51]
Add a new TyTy::TypeMutability enum for describing mutability. Use it for ReferenceType and PointerType, rather than a boolean is_mut, to make the code more readable. TyTy::TypeMutability can be used in several other places in the future. Fixes: #677
2021-10-11  Remove lambda iterators in various HIR classes  [David Faust, 10 files, -148/+107]
This patch removes the lambda iterators used in various HIR objects. These iterators make interacting with the IR for static analysis more difficult. Instead, get_X () helpers are added for accessing elements, and uses of the iterators replaced with for loops. The following objects are adjusted in this patch: - HIR::ArrayElemsValues - HIR::TupleExpr - HIR::StructExprField - HIR::StructStruct - HIR::TupleStruct Fixes: #703, #704, #705, #706, #707
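The motivation for replacing callback iterators with get_X () helpers can be sketched as follows. The class and helper names are hypothetical stand-ins, not the gccrs HIR classes: the point is that a plain for loop over a getter allows early `break`/`return`, which a visitor lambda cannot do on behalf of its caller.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical minimal stand-in for an HIR node holding elements.
class TupleExprDemo
{
  std::vector<int> elems_ = {1, 2, 3, 4};

public:
  // Old style: callback iterator. The visitor cannot 'break' out of
  // the caller's walk or 'return' from the caller; it can only signal
  // through its return value or captured state.
  void iterate (std::function<bool (int)> cb)
  {
    for (int e : elems_)
      if (!cb (e))
        return;
  }

  // New style: expose the elements and let callers write normal loops.
  std::vector<int> &get_elems () { return elems_; }
};

// With get_elems a static-analysis walk can stop early naturally.
inline int find_first_even (TupleExprDemo &t)
{
  for (int e : t.get_elems ())
    if (e % 2 == 0)
      return e; // plain early return, impossible from inside a lambda
  return -1;
}
```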
2021-10-05  Ensure we emit the code for coercion sites on CallExpr and MethodCallExpr  [Philip Herron, 4 files, -18/+199]
When we coerce the types of arguments to the parameters of functions, for example, we must store the actual type of the argument at that HIR ID, not the coerced one. This gives the backend a chance to figure out when to actually implement any coercion site code, such as computing the dynamic objects. Fixes: #700
2021-10-05  Coercion site type checking in CallExprs must hold onto the argument type  [Philip Herron, 2 files, -10/+13]
Coercion sites like CallExpr arguments must coerce the arguments for type checking, but we must still insert the type of the actual argument for that mapping, not the coerced type. For example we might have:

```rust
fn dynamic_dispatch(t: &dyn Bar) {
    t.baz();
}

fn main() {
    let a = &Foo(123);
    dynamic_dispatch(a);
}
```

Here the argument 'a' has a type of (&ADT{Foo}), but this is coercible to (&dyn{Bar}), which is fine. The backend needs to be able to detect the coercion from the two types in order to generate the vtable code. This patch fixes the type checking such that we store the actual type of (&ADT{Foo}) at that argument mapping instead of the coerced one. Addresses: #700
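The detection scheme described here can be sketched with a toy type-check context. Everything below is a hypothetical simplification (the real compiler stores TyTy::BaseType pointers, not strings): a coercion site needs extra codegen exactly when the stored actual type of an argument differs from the type the parameter expects.

```cpp
#include <cassert>
#include <map>
#include <string>

using HirId = unsigned;

// Hypothetical per-HIR-id records of the argument's actual type and
// the parameter type it is coerced to during type checking.
struct TypeCheckDemo
{
  std::map<HirId, std::string> actual;   // type as written, e.g. &ADT{Foo}
  std::map<HirId, std::string> expected; // parameter type, e.g. &dyn{Bar}
};

// The backend must emit coercion-site code (such as building a vtable
// for &Foo -> &dyn Bar) exactly when the two types differ.
inline bool needs_coercion_code (const TypeCheckDemo &ctx, HirId id)
{
  return ctx.actual.at (id) != ctx.expected.at (id);
}
```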
2021-10-05  Remove lambda iterator from HIR::MethodCallExpr  [Philip Herron, 4 files, -48/+39]
This removes the bad-code-style lambda iterators for arguments. They are a bad design choice for static analysis code, since the user of the API loses the ability to break or return out to the calling scope. This will need to be added to a style guide in the future. Fixes: #709
2021-10-05  Remove lambda iterator from HIR::CallExpr  [Philip Herron, 5 files, -91/+87]
This removes the bad-code-style lambda iterators for arguments. They are a bad design choice for static analysis code, since the user of the API loses the ability to break or return out to the calling scope. This will need to be added to a style guide in the future. Fixes: #708
2021-10-04  Merge #698 #701  [bors[bot], 5 files, -23/+216]
698: Implement Byte Strings r=philberty a=philberty Byte strings are not strs; they are arrays of [u8; capacity], and this preserves their type guarantees as a byte string. This patch merges work from Mark to implement the correct typing; the missing piece was that each implicit type needed its own implicit id, otherwise there is a loop in looking up the covariant types. Fixes #697 Co-authored-by: Mark Wielaard <mark@klomp.org> 701: Fix lexer to not produce bad unicode escape values r=philberty a=CohenArthur There were a couple of issues in the lexer unicode escape code. Unicode escape sequences must always start with an opening curly bracket (and end with a closing one). Underscores are not allowed as the starting character. And the produced values must be unicode scalar values, which excludes surrogate values (D800 to DFFF) and values larger than 10FFFF. Also try to recover more gracefully from errors by trying to skip past any bad characters to the end of the escape sequence. Test all of the above in a new testcase unicode_escape.rs. Patch: https://git.sr.ht/~mjw/gccrs/commit/unicode_escape Mail: https://gcc.gnu.org/pipermail/gcc-rust/2021-October/000231.html Co-authored-by: Philip Herron <philip.herron@embecosm.com> Co-authored-by: Mark Wielaard <mark@klomp.org>
2021-10-02  Fix lexer to not produce bad unicode escape values  [Mark Wielaard, 2 files, -16/+132]
There were a couple of issues in the lexer unicode escape code. Unicode escape sequences must always start with an opening curly bracket (and end with a closing one). Underscores are not allowed as the starting character. And the produced values must be unicode scalar values, which excludes surrogate values (D800 to DFFF) and values larger than 10FFFF. Also try to recover more gracefully from errors by trying to skip past any bad characters to the end of the escape sequence. Test all of the above in a new testcase unicode_escape.rs.
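The rules listed above can be collected into a small validator. This is a sketch of the checks the commit describes, not the gccrs lexer code itself; the function takes the text following `\u` (braces included):

```cpp
#include <cassert>
#include <cctype>
#include <cstdint>
#include <string>

// Scalar values exclude the surrogate range and anything past 10FFFF.
inline bool is_unicode_scalar_value (std::uint32_t v)
{
  return !(v >= 0xD800 && v <= 0xDFFF) && v <= 0x10FFFF;
}

// Validate the text after "\u": it must be "{hexdigits}", the first
// character inside the braces must not be '_', and the value must be
// a unicode scalar value.
inline bool valid_unicode_escape (const std::string &s)
{
  if (s.size () < 3 || s.front () != '{' || s.back () != '}')
    return false;
  if (s[1] == '_')
    return false; // no leading underscore
  std::uint32_t v = 0;
  bool seen_digit = false;
  for (std::size_t i = 1; i + 1 < s.size (); i++)
    {
      char c = s[i];
      if (c == '_')
        continue; // interior underscores are digit separators
      if (!std::isxdigit ((unsigned char) c))
        return false;
      int d = std::isdigit ((unsigned char) c)
                ? c - '0'
                : std::tolower ((unsigned char) c) - 'a' + 10;
      v = v * 16 + (std::uint32_t) d;
      seen_digit = true;
      if (v > 0x10FFFF)
        return false; // too large; also guards against overflow
    }
  return seen_digit && is_unicode_scalar_value (v);
}
```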
2021-09-30  Remove raw string and raw byte string references from ast and hir  [Mark Wielaard, 5 files, -14/+1]
Raw strings and raw byte strings are simply different ways to create string and byte string literals. Only the lexer cares how those literals are constructed and which escapes are used to construct them. The parser and hir simply see strings or byte strings.
2021-09-30  Implement Byte Strings  [Philip Herron, 3 files, -7/+84]
Byte strings are not strs; they are arrays of [u8; capacity], and this preserves their type guarantees as a byte string. This patch merges work from Mark to implement the correct typing; the missing piece was that each implicit type needed its own implicit id, otherwise there is a loop in looking up the covariant types. Fixes #697 Co-authored-by: Mark Wielaard <mark@klomp.org>
2021-09-29  Fix raw byte string parsing of zero and out of range bytes  [Mark Wielaard, 2 files, -5/+15]
Allow \0 escape in raw byte string and reject non-ascii byte values. Change parse_partial_hex_escapes to not skip bad characters to provide better error messages. Add rawbytestring.rs testcase to check string, raw string, byte string and raw byte string parsing.
2021-09-24  Merge commit '2961ac45b9e19523958757e607d11c5893d6368b' [#247]  [Thomas Schwinge, 7286 files, -284586/+514694]
2021-09-24  Merge #689  [bors[bot], 2 files, -101/+101]
689: x86: Instead of 'TARGET_ISA_[...]', 'TARGET_ISA2_[...]', use 'TARGET_[...]' [#247] r=philberty a=tschwinge ... in preparation for a merge from GCC upstream, where the former disappear. Co-authored-by: Thomas Schwinge <thomas@codesourcery.com>
2021-09-24  A bit of 'RichLocation' C++ tuning [#247], [#97, #374]  [Thomas Schwinge, 4 files, -6/+7]
... in preparation for a merge from GCC upstream, where we otherwise run into several different build errors. Follow-up to commit ed651fcdec170456f7460703edbd0ca5901f0026 "Add basic wrapper over gcc rich_location".
2021-09-23  x86: Instead of 'TARGET_ISA_[...]', 'TARGET_ISA2_[...]', use 'TARGET_[...]' [#247]  [Thomas Schwinge, 2 files, -101/+101]
... in preparation for a merge from GCC upstream, where the former disappear.
2021-09-23  Merge #688  [bors[bot], 1 file, -2/+6]
688: Remove warnings from v0_mangle functions in rust-mangle.cc r=CohenArthur a=philberty With this patch and the related warning fixes applied, there are no more warnings when building the rust frontend, so an --enable-bootstrap (-Werror) build completes successfully. Fixes #336 Co-authored-by: Mark Wielaard <mark@klomp.org>
2021-09-23  Remove warnings from v0_mangle functions in rust-mangle.cc  [Mark Wielaard, 1 file, -2/+6]
There were two warnings in rust-mangle.cc: rust-mangle.cc: In function ‘std::string Rust::Compile::v0_mangle_item (const Rust::TyTy::BaseType*, const Rust::Resolver::CanonicalPath&, const string&)’: rust-mangle.cc:198:1: warning: no return statement in function returning non-void rust-mangle.cc: At global scope: rust-mangle.cc:201:1: warning: ‘std::string Rust::Compile::v0_mangle_impl_item (const Rust::TyTy::BaseType*, const Rust::TyTy::BaseType*, const string&, const string&)’ declared ‘static’ but never defined [-Wunused-function] The first results in undefined behaviour; the second points out that the function isn't ever called/used. Fix the first by adding a gcc_unreachable () to turn the calling of the function into an abort (). Fix the second by adding the call in Mangler::mangle_impl_item, and by adding an implementation that simply calls gcc_unreachable (). This turns the warnings and undefined behaviour into explicit runtime aborts when these functions are actually called.
2021-09-22  Fix byte char and byte string lexing code  [Mark Wielaard, 2 files, -15/+8]
There were two warnings in the lexer parse_byte_char and parse_byte_string code for arches with signed chars: rust-lex.cc: In member function ‘Rust::TokenPtr Rust::Lexer::parse_byte_char(Location)’: rust-lex.cc:1564:21: warning: comparison is always false due to limited range of data type [-Wtype-limits] 1564 | if (byte_char > 127) | ~~~~~~~~~~^~~~~ rust-lex.cc: In member function ‘Rust::TokenPtr Rust::Lexer::parse_byte_string(Location)’: rust-lex.cc:1639:27: warning: comparison is always false due to limited range of data type [-Wtype-limits] 1639 | if (output_char > 127) | ~~~~~~~~~~~~^~~~~ The obvious fix would be to cast to an unsigned char before the comparison. But that is actually wrong, and would produce the following errors when parsing a byte char or byte string: bytecharstring.rs:3:14: error: ‘byte char’ ‘�’ out of range 3 | let _bc = b'\x80'; | ^ bytecharstring.rs:4:14: error: character ‘�’ in byte string out of range 4 | let _bs = b"foo\x80bar"; | ^ Both byte chars and byte strings may contain up to \xFF (255) characters; it is utf-8 chars and strings that can only contain hex escapes up to \x7F (127). Remove the faulty check and add a new testcase bytecharstring.rs that checks that byte chars and strings do accept > 127 hex char escapes, but utf-8 chars and strings reject such hex char escapes.
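The range rule at the heart of this fix can be stated as a tiny predicate. This is a sketch of the rule, not the gccrs lexer; the enum names are illustrative:

```cpp
#include <cassert>
#include <cstdint>

// \xNN escapes in byte chars and byte strings may use the full
// 0x00..0xFF range, while in (UTF-8) chars and strings they are
// limited to ASCII, 0x00..0x7F.
enum class EscapeContext
{
  ByteLiteral, // b'..' and b".."
  Utf8Literal  // '..' and ".."
};

inline bool hex_escape_in_range (std::uint32_t value, EscapeContext ctx)
{
  if (ctx == EscapeContext::ByteLiteral)
    return value <= 0xFF; // b'\x80' and b"\x80" are fine
  return value <= 0x7F;   // '\x80' and "\x80" must be rejected
}
```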
2021-09-19  Merge #685  [bors[bot], 1 file, -12/+97]
685: Add v0 type mangling prefixing for simple types r=philberty a=CohenArthur This PR adds the generation of type prefixes for simple types, which are numeric types, booleans, chars, strings, empty tuples/unit types and placeholder types. I'm unsure as to how to test this, even in the long run. There might be some shenanigans we can pull using an ELF reader and regexes in order to compare ABI names with rustc. The entire implementation of v0 name mangling is very large, so I thought I'd split it up into multiple PRs. Co-authored-by: CohenArthur <arthur.cohen@epita.fr>
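For reference, the v0 scheme (Rust RFC 2603) assigns a single-character tag to each of these simple types. The lookup below is an illustrative table based on that RFC, not the gccrs implementation:

```cpp
#include <cassert>
#include <map>
#include <string>

// Basic-type tags from the v0 symbol-mangling scheme (Rust RFC 2603).
// "_" stands for a placeholder/inferred type and "!" for never.
inline std::string v0_simple_type_prefix (const std::string &type)
{
  static const std::map<std::string, std::string> tags = {
    { "i8", "a" },    { "bool", "b" }, { "char", "c" }, { "f64", "d" },
    { "str", "e" },   { "f32", "f" },  { "u8", "h" },   { "isize", "i" },
    { "usize", "j" }, { "i32", "l" },  { "u32", "m" },  { "i128", "n" },
    { "u128", "o" },  { "i16", "s" },  { "u16", "t" },  { "()", "u" },
    { "i64", "x" },   { "u64", "y" },  { "_", "p" },    { "!", "z" },
  };
  auto it = tags.find (type);
  return it == tags.end () ? "" : it->second; // "" = not a simple type
}
```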
2021-09-18  v0-mangling: Add type prefixing for simple types  [CohenArthur, 1 file, -12/+97]
2021-09-17  Initial Dynamic dispatch support  [Philip Herron, 9 files, -29/+427]
This is the first pass at implementing dynamic dispatch: it creates a vtable object and a trait object to store the vtable and receiver. The method resolution during type checking acts the same as if it was a generic type bound method call. Code generation detects this case and accesses the dynamic object appropriately to get the fnptr and call it with the stored receiver. Fixes: #197
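What "store the vtable and receiver, then fetch the fnptr and call it" lowers to can be sketched by hand. All names here are illustrative (a one-slot "vtable" for a hypothetical `impl Bar for Foo`), not the gccrs codegen:

```cpp
#include <cassert>

struct Foo { int value; };

// One vtable slot for 'fn baz(&self) -> i32' on trait Bar.
using BazFn = int (*) (const void *receiver);

// A trait object pairs the receiver pointer with its vtable slot(s).
struct TraitObjectDemo
{
  const void *receiver; // the &Foo
  BazFn baz;            // resolved at the coercion site
};

inline int foo_baz (const void *receiver)
{
  return static_cast<const Foo *> (receiver)->value * 2;
}

// Built where '&Foo' coerces to '&dyn Bar'.
inline TraitObjectDemo make_dyn_bar (const Foo &f)
{
  return TraitObjectDemo { &f, foo_baz };
}

// A dynamic call loads the fn pointer from the table and passes the
// stored receiver, which is the shape of code this commit emits.
inline int call_baz (const TraitObjectDemo &d)
{
  return d.baz (d.receiver);
}
```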
2021-09-17  openacc: Remove unnecessary barriers (gimple worker partitioning/broadcast)  [Julian Brown, 2 files, -29/+94]
This is an optimisation for middle-end worker-partitioning support (used to support multiple workers on AMD GCN). At present, barriers may be emitted in cases where they aren't needed and cannot be optimised away. This patch stops the extraneous barriers from being emitted in the first place. One exception to the above (where the barrier is still needed) is for predicated blocks of code that perform a write to gang-private shared memory from one worker. We must execute a barrier before other workers read that shared memory location. gcc/ * config/gcn/gcn.c (gimple.h): Include. (gcn_fork_join): Emit barrier for worker-level joins. * omp-oacc-neuter-broadcast.cc (find_local_vars_to_propagate): Add writes_gang_private bitmap parameter. Set bit for blocks containing gang-private variable writes. (worker_single_simple): Don't emit barrier after predicated block. (worker_single_copy): Don't emit barrier if we're not broadcasting anything and the block contains no gang-private writes. (neuter_worker_single): Don't predicate blocks that only contain NOPs or internal marker functions. Pass has_gang_private_write argument to worker_single_copy. (oacc_do_neutering): Add writes_gang_private bitmap handling.
2021-09-17  openacc: Shared memory layout optimisation  [Julian Brown, 9 files, -89/+534]
This patch implements an algorithm to lay out local data-share (LDS) space. It currently works for AMD GCN. At the moment, LDS is used for three things: 1. Gang-private variables 2. Reduction temporaries (accumulators) 3. Broadcasting for worker partitioning. After the patch is applied, (2) and (3) are placed at preallocated locations in LDS, and (1) continues to be handled by the backend (as it is at present prior to this patch being applied). LDS now looks like this:

+--------------+ (gang-private size + 1024, = 1536)
| free space   |
|     ...      |
| - - - - - - -|
| worker bcast |
+--------------+
| reductions   |
+--------------+ <<< -mgang-private-size=<number> (def. 512)
| gang-private |
|     vars     |
+--------------+ (32)
| low LDS vars |
+--------------+ LDS base

So, gang-private space is fixed at a constant amount at compile time (which can be increased with a command-line switch if necessary for some given code). The layout algorithm takes out a slice of the remainder of usable space for reduction vars, and uses the rest for worker partitioning. The partitioning algorithm works as follows.

1. An "adjacency" set is built up for each basic block that might do a broadcast. This is calculated by starting at each such block, and doing a recursive DFS walk over successors to find the next block (or blocks) that *also* does a broadcast (dfs_broadcast_reachable_1).
2. The adjacency set is inverted to get adjacent predecessor blocks also.
3. Blocks that will perform a broadcast are sorted by size of that broadcast: the biggest blocks are handled first.
4. A splay tree structure is used to calculate the spans of LDS memory that are already allocated by the blocks adjacent to this one (merge_ranges{,_1}).
5. The current block's broadcast space is allocated from the first free span not allocated in the splay tree structure calculated above (first_fit_range). This seems to work quite nicely and efficiently with the splay tree structure.
6. Continue with the next-biggest broadcast block until we're done.

In this way, "adjacent" broadcasts will not use the same piece of LDS memory. PR96334 "openacc: Unshare reduction temporaries for GCN" got merged in: The GCN backend uses tree nodes like MEM((__lds TYPE *) <constant>) for reduction temporaries. Unlike e.g. var decls and SSA names, these nodes cannot be shared during gimplification, but are so in some circumstances. This is detected when appropriate --enable-checking options are used. This patch unshares such nodes when they are reused more than once.

gcc/
* config/gcn/gcn-protos.h (gcn_goacc_create_worker_broadcast_record): Update prototype.
* config/gcn/gcn-tree.c (gcn_goacc_get_worker_red_decl): Use preallocated block of LDS memory. Do not cache/share decls for reduction temporaries between invocations. (gcn_goacc_reduction_teardown): Unshare VAR on second use. (gcn_goacc_create_worker_broadcast_record): Add OFFSET parameter and return temporary LDS space at that offset. Return pointer in "sender" case.
* config/gcn/gcn.c (acc_lds_size, gang_private_hwm, lds_allocs): New global vars. (ACC_LDS_SIZE): Define as acc_lds_size. (gcn_init_machine_status): Don't initialise lds_allocated, lds_allocs, reduc_decls fields of machine function struct. (gcn_option_override): Handle default size for gang-private variables and -mgang-private-size option. (gcn_expand_prologue): Use LDS_SIZE instead of LDS_SIZE-1 when initialising M0_REG. (gcn_shared_mem_layout): New function. (gcn_print_lds_decl): Update comment. Use global lds_allocs map and gang_private_hwm variable. (TARGET_GOACC_SHARED_MEM_LAYOUT): Define target hook.
* config/gcn/gcn.h (machine_function): Remove lds_allocated, lds_allocs, reduc_decls. Add reduction_base, reduction_limit.
* config/gcn/gcn.opt (gang_private_size_opt): New global. (mgang-private-size=): New option.
* doc/tm.texi.in (TARGET_GOACC_SHARED_MEM_LAYOUT): Place documentation hook.
* doc/tm.texi: Regenerate.
* omp-oacc-neuter-broadcast.cc (targhooks.h, diagnostic-core.h): Add includes. (build_sender_ref): Handle sender_decl being pointer. (worker_single_copy): Add PLACEMENT and ISOLATE_BROADCASTS parameters. Pass placement argument to create_worker_broadcast_record hook invocations. Handle sender_decl being pointer and isolate_broadcasts inserting extra barriers. (blk_offset_map_t): Add typedef. (neuter_worker_single): Add BLK_OFFSET_MAP parameter. Pass preallocated range to worker_single_copy call. (dfs_broadcast_reachable_1): New function. (idx_decl_pair_t, used_range_vec_t): New typedefs. (sort_size_descending): New function. (addr_range): New class. (splay_tree_compare_addr_range, splay_tree_free_key) (first_fit_range, merge_ranges_1, merge_ranges): New functions. (execute_omp_oacc_neuter_broadcast): Rename to... (oacc_do_neutering): ... this. Add BOUNDS_LO, BOUNDS_HI parameters. Arrange layout of shared memory for broadcast operations. (execute_omp_oacc_neuter_broadcast): New function. (pass_omp_oacc_neuter_broadcast::gate): Remove num_workers==1 handling from here. Enable pass for all OpenACC routines in order to call shared memory-layout hook.
* target.def (create_worker_broadcast_record): Add OFFSET parameter. (shared_mem_layout): New hook.
libgomp/
* testsuite/libgomp.oacc-c-c++-common/broadcast-many.c: Update.
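The first-fit placement the message describes (first_fit_range) can be sketched as follows. This is a simplified stand-in using a sorted vector of half-open ranges rather than a splay tree, with hypothetical names:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

using Range = std::pair<unsigned, unsigned>; // [start, end) in bytes

// Given the LDS spans already claimed by adjacent broadcast blocks,
// place a new broadcast area of 'size' bytes at the first gap that
// fits below 'limit'. Returns ~0u on failure.
inline unsigned first_fit (std::vector<Range> used, unsigned size,
                           unsigned limit)
{
  std::sort (used.begin (), used.end ());
  unsigned pos = 0; // lowest candidate start so far
  for (const Range &r : used)
    {
      if (r.first >= pos + size)
        return pos; // the gap before this allocation fits
      pos = std::max (pos, r.second);
    }
  // No interior gap fit: try the tail space.
  return pos + size <= limit ? pos : ~0u;
}
```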
2021-09-17  openacc: Turn off worker partitioning if num_workers==1  [Julian Brown, 1 file, -16/+31]
This patch turns off the middle-end worker-partitioning support if the number of workers for an outlined offload function is one. In that case, we do not need to perform the broadcasting/neutering code transformation. gcc/ * omp-oacc-neuter-broadcast.cc (pass_omp_oacc_neuter_broadcast::gate): Disable if num_workers is 1. (execute_omp_oacc_neuter_broadcast): Adjust. Co-Authored-By: Thomas Schwinge <thomas@codesourcery.com>
2021-09-17  Provide a relation oracle for paths.  [Andrew MacLeod, 2 files, -15/+224]
This provides a path_oracle class which can optionally be used in conjunction with another oracle to track relations on a path as it is walked. * value-relation.cc (class equiv_chain): Move to header file. (path_oracle::path_oracle): New. (path_oracle::~path_oracle): New. (path_oracle::register_relation): New. (path_oracle::query_relation): New. (path_oracle::reset_path): New. (path_oracle::dump): New. * value-relation.h (class equiv_chain): Move to here. (class path_oracle): New.
2021-09-17  Virtualize relation oracle and various cleanups.  [Andrew MacLeod, 4 files, -178/+206]
Standardize equiv_oracle API onto the new relation_oracle virtual base, and then have dom_oracle inherit from that. equiv_set always returns an equivalency set now, never NULL. EQ_EXPR requires symmetry now. Each SSA name must be in the other equiv set. Shuffle some routines around, simplify. * gimple-range-cache.cc (ranger_cache::ranger_cache): Create a DOM based oracle. * gimple-range-fold.cc (fur_depend::register_relation): Use register_stmt/edge routines. * value-relation.cc (equiv_chain::find): Relocate from equiv_oracle. (equiv_oracle::equiv_oracle): Create self equivalence cache. (equiv_oracle::~equiv_oracle): Release same. (equiv_oracle::equiv_set): Return entry from self equiv cache if there are no equivalences. (equiv_oracle::find_equiv_block): Move list find to equiv_chain. (equiv_oracle::register_relation): Rename from register_equiv. (relation_chain_head::find_relation): Relocate from dom_oracle. (relation_oracle::register_stmt): New. (relation_oracle::register_edge): New. (dom_oracle::*): Rename from relation_oracle. (dom_oracle::register_relation): Adjust to call equiv_oracle. (dom_oracle::set_one_relation): Split from register_relation. (dom_oracle::register_transitives): Consolidate 2 methods. (dom_oracle::find_relation_block): Move core to relation_chain. (dom_oracle::query_relation): Rename from find_relation_dom and adjust. * value-relation.h (class relation_oracle): New pure virtual base. (class equiv_oracle): Inherit from relation_oracle and adjust. (class dom_oracle): Rename from old relation_oracle and adjust.
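The shape of this refactor (a pure virtual base the ranger can hold, with concrete oracles inheriting, and EQ_EXPR registered symmetrically) can be sketched as below. This is a hypothetical simplification; the real classes operate on SSA names and relation codes, not strings:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Pure virtual base, analogous in role to relation_oracle.
class relation_oracle_demo
{
public:
  virtual ~relation_oracle_demo () {}
  virtual void register_relation (const std::string &a,
                                  const std::string &b) = 0;
  virtual bool query_relation (const std::string &a,
                               const std::string &b) = 0;
};

// An equivalence oracle: EQ_EXPR is symmetric, so registering (a, b)
// must place each name in the other's equivalence set.
class equiv_oracle_demo : public relation_oracle_demo
{
  std::map<std::string, std::set<std::string>> equiv_;

public:
  void register_relation (const std::string &a,
                          const std::string &b) override
  {
    equiv_[a].insert (b);
    equiv_[b].insert (a); // symmetry in both directions
  }

  bool query_relation (const std::string &a, const std::string &b) override
  {
    auto it = equiv_.find (a);
    return it != equiv_.end () && it->second.count (b) > 0;
  }
};
```

A dom-based oracle would then inherit from the same base, so callers only ever see the virtual interface.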
2021-09-17  testsuite: Fix gcc.target/i386/auto-init-* tests.  [qing zhao, 20 files, -34/+45]
This set of tests failed on many different combinations of -march and -mtune; some of them failed with -fstack-protector-all or -mno-sse. And the pattern matches are also different on lp64 and ia32. The reason for these failures is that the RTL or assembly level pattern matches are only valid for -march=x86-64 -mtune=generic. We restrict the testing only for -march=x86-64 and -mtune=generic. Also add -fno-stack-protector or -msse for some of the testing cases. gcc/testsuite/ChangeLog: 2021-09-17 qing zhao <qing.zhao@oracle.com> * gcc.target/i386/auto-init-1.c: Restrict the testing only for -march=x86-64 and -mtune=generic. Add -fno-stack-protector. * gcc.target/i386/auto-init-2.c: Restrict the testing only for -march=x86-64 and -mtune=generic -msse. * gcc.target/i386/auto-init-3.c: Likewise. * gcc.target/i386/auto-init-4.c: Likewise. * gcc.target/i386/auto-init-5.c: Different pattern match for lp64 and ia32. * gcc.target/i386/auto-init-6.c: Restrict the testing only for -march=x86-64 and -mtune=generic -msse. Add -fno-stack-protector. * gcc.target/i386/auto-init-7.c: Likewise. * gcc.target/i386/auto-init-8.c: Restrict the testing only for -march=x86-64 and -mtune=generic -msse. * gcc.target/i386/auto-init-padding-1.c: Likewise. * gcc.target/i386/auto-init-padding-10.c: Likewise. * gcc.target/i386/auto-init-padding-11.c: Likewise. * gcc.target/i386/auto-init-padding-12.c: Likewise. * gcc.target/i386/auto-init-padding-2.c: Likewise. * gcc.target/i386/auto-init-padding-3.c: Restrict the testing only for -march=x86-64. Different pattern match for lp64 and ia32. * gcc.target/i386/auto-init-padding-4.c: Restrict the testing only for -march=x86-64 and -mtune=generic -msse. * gcc.target/i386/auto-init-padding-5.c: Likewise. * gcc.target/i386/auto-init-padding-6.c: Likewise. * gcc.target/i386/auto-init-padding-7.c: Restrict the testing only for -march=x86-64 and -mtune=generic -msse. Add -fno-stack-protector. * gcc.target/i386/auto-init-padding-8.c: Likewise.
* gcc.target/i386/auto-init-padding-9.c: Restrict the testing only for -march=x86-64. Different pattern match for lp64 and ia32.
2021-09-17  Add method resolution to Dynamic objects  [Philip Herron, 1 file, -4/+7]
Support method resolution via a probe of the type bound on the dynamic object. This acts the same way as when we probe for methods like this:

```rust
trait Foo {
    fn bar(&self);
}

fn test<T: Foo>(a: T) {
    a.bar();
}
```

Addresses: #197
2021-09-17  Add object safety checks for dynamic objects  [Philip Herron, 9 files, -8/+137]
You cannot create dynamic objects that contain non-object-safe trait items. This adds checks to ensure that all items are object safe, so code generation does not need to care. See: https://doc.rust-lang.org/reference/items/traits.html#object-safety Addresses: #197
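The check this commit adds has the shape of a predicate over all trait items. The struct below is a hypothetical simplification of the real object-safety rules (see the Rust reference link above for the full set); it only illustrates that one unsafe item disqualifies the whole trait from `dyn` use:

```cpp
#include <cassert>
#include <vector>

// Hypothetical simplification of a trait item's object-safety facts.
struct TraitItemDemo
{
  bool has_receiver;       // dispatchable methods must take self
  bool has_generic_params; // generic methods are not object safe
  bool returns_self;       // returning Self by value is not object safe
};

inline bool item_is_object_safe (const TraitItemDemo &item)
{
  return item.has_receiver && !item.has_generic_params
         && !item.returns_self;
}

// A trait may back a dynamic object only if every item passes.
inline bool trait_is_object_safe (const std::vector<TraitItemDemo> &items)
{
  for (const TraitItemDemo &item : items)
    if (!item_is_object_safe (item))
      return false; // one unsafe item poisons the whole dyn candidate
  return true;
}
```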
2021-09-17  Better handle MIN/MAX_EXPR of unrelated objects [PR102200].  [Martin Sebor, 6 files, -10/+496]
Resolves: PR middle-end/102200 - ICE on a min of a decl and pointer in a loop gcc/ChangeLog: PR middle-end/102200 * pointer-query.cc (access_ref::inform_access): Handle MIN/MAX_EXPR. (handle_min_max_size): Change argument. Store original SSA_NAME for operands to potentially distinct (sub)objects. (compute_objsize_r): Adjust call to the above. gcc/testsuite/ChangeLog: PR middle-end/102200 * gcc.dg/Wstringop-overflow-62.c: Adjust text of an expected note. * gcc.dg/Warray-bounds-89.c: New test. * gcc.dg/Wstringop-overflow-74.c: New test. * gcc.dg/Wstringop-overflow-75.c: New test. * gcc.dg/Wstringop-overflow-76.c: New test.
2021-09-17  Default to TyTy::Error node on TypePath resolution failure  [Philip Herron, 1 file, -1/+2]
We should insert the error node into the type context when we have a type error, so that covariant types using TyVar can still work, and so we avoid tripping the assertion that something exists within the context for that id upon creation.
2021-09-17  rs6000: Support for vectorizing built-in functions  [Bill Schmidt, 1 file, -0/+257]
This patch just duplicates a couple of functions and adjusts them to use the new builtin names. There's no logical change otherwise. 2021-09-17 Bill Schmidt <wschmidt@linux.ibm.com> gcc/ * config/rs6000/rs6000.c (rs6000-builtins.h): New include. (rs6000_new_builtin_vectorized_function): New function. (rs6000_new_builtin_md_vectorized_function): Likewise. (rs6000_builtin_vectorized_function): Call rs6000_new_builtin_vectorized_function. (rs6000_builtin_md_vectorized_function): Call rs6000_new_builtin_md_vectorized_function.
2021-09-17  rs6000: Handle some recent MMA builtin changes  [Bill Schmidt, 3 files, -86/+138]
Peter Bergner recently added two new builtins __builtin_vsx_lxvp and __builtin_vsx_stxvp. These happened to break a pattern in MMA builtins that I had been using to automate gimple folding of MMA builtins. Previously, every MMA function that could be folded had an associated internal function that it was folded into. The LXVP/STXVP builtins are just folded directly into memory operations. Instead of relying on this pattern, this patch adds a new attribute to builtins called "mmaint," which is set for all MMA builtins that have an associated internal builtin. The naming convention that adds _INTERNAL to the builtin index name remains. The rest of the patch is just duplicating Peter's patch, using the new builtin infrastructure. 2021-09-17 Bill Schmidt <wschmidt@linux.ibm.com> gcc/ * config/rs6000/rs6000-builtin-new.def (ASSEMBLE_ACC): Add mmaint flag. (ASSEMBLE_PAIR): Likewise. (BUILD_ACC): Likewise. (DISASSEMBLE_ACC): Likewise. (DISASSEMBLE_PAIR): Likewise. (PMXVBF16GER2): Likewise. (PMXVBF16GER2NN): Likewise. (PMXVBF16GER2NP): Likewise. (PMXVBF16GER2PN): Likewise. (PMXVBF16GER2PP): Likewise. (PMXVF16GER2): Likewise. (PMXVF16GER2NN): Likewise. (PMXVF16GER2NP): Likewise. (PMXVF16GER2PN): Likewise. (PMXVF16GER2PP): Likewise. (PMXVF32GER): Likewise. (PMXVF32GERNN): Likewise. (PMXVF32GERNP): Likewise. (PMXVF32GERPN): Likewise. (PMXVF32GERPP): Likewise. (PMXVF64GER): Likewise. (PMXVF64GERNN): Likewise. (PMXVF64GERNP): Likewise. (PMXVF64GERPN): Likewise. (PMXVF64GERPP): Likewise. (PMXVI16GER2): Likewise. (PMXVI16GER2PP): Likewise. (PMXVI16GER2S): Likewise. (PMXVI16GER2SPP): Likewise. (PMXVI4GER8): Likewise. (PMXVI4GER8PP): Likewise. (PMXVI8GER4): Likewise. (PMXVI8GER4PP): Likewise. (PMXVI8GER4SPP): Likewise. (XVBF16GER2): Likewise. (XVBF16GER2NN): Likewise. (XVBF16GER2NP): Likewise. (XVBF16GER2PN): Likewise. (XVBF16GER2PP): Likewise. (XVF16GER2): Likewise. (XVF16GER2NN): Likewise. (XVF16GER2NP): Likewise. (XVF16GER2PN): Likewise. (XVF16GER2PP): Likewise. 
(XVF32GER): Likewise. (XVF32GERNN): Likewise. (XVF32GERNP): Likewise. (XVF32GERPN): Likewise. (XVF32GERPP): Likewise. (XVF64GER): Likewise. (XVF64GERNN): Likewise. (XVF64GERNP): Likewise. (XVF64GERPN): Likewise. (XVF64GERPP): Likewise. (XVI16GER2): Likewise. (XVI16GER2PP): Likewise. (XVI16GER2S): Likewise. (XVI16GER2SPP): Likewise. (XVI4GER8): Likewise. (XVI4GER8PP): Likewise. (XVI8GER4): Likewise. (XVI8GER4PP): Likewise. (XVI8GER4SPP): Likewise. (XXMFACC): Likewise. (XXMTACC): Likewise. (XXSETACCZ): Likewise. (ASSEMBLE_PAIR_V): Likewise. (BUILD_PAIR): Likewise. (DISASSEMBLE_PAIR_V): Likewise. (LXVP): New. (STXVP): New. * config/rs6000/rs6000-call.c (rs6000_gimple_fold_new_mma_builtin): Handle RS6000_BIF_LXVP and RS6000_BIF_STXVP. * config/rs6000/rs6000-gen-builtins.c (attrinfo): Add ismmaint. (parse_bif_attrs): Handle ismmaint. (write_decls): Add bif_mmaint_bit and bif_is_mmaint. (write_bif_static_init): Handle ismmaint.
2021-09-17  rs6000: Handle gimple folding of target built-ins  [Bill Schmidt, 1 file, -0/+1165]
This is another patch that looks bigger than it really is. Because we have a new namespace for the builtins, allowing us to have both the old and new builtin infrastructure supported at once, we need versions of these functions that use the new builtin namespace. Otherwise the code is unchanged. 2021-09-17 Bill Schmidt <wschmidt@linux.ibm.com> gcc/ * config/rs6000/rs6000-call.c (rs6000_gimple_fold_new_builtin): New forward decl. (rs6000_gimple_fold_builtin): Call rs6000_gimple_fold_new_builtin. (rs6000_new_builtin_valid_without_lhs): New function. (rs6000_gimple_fold_new_mma_builtin): Likewise. (rs6000_gimple_fold_new_builtin): Likewise.
2021-09-17  Allow for coercion of structures over to dynamic objects in type system  [Philip Herron, 2 files, -0/+24]
This is the initial support for allowing a coercion of something like:

```rust
let a = Foo(123);
let b: &dyn Bound = &a;
```

The coercion will need to ensure that 'a' meets the specified bounds of the dynamic object. Addresses #197
2021-09-17  Fix 'hash_table::expand' to destruct stale Value objects  [Thomas Schwinge, 2 files, -4/+9]
Thus plugging potential memory leaks if these have a non-trivial constructor/destructor.

See <https://stackoverflow.com/questions/6730403/how-to-delete-object-constructed-via-placement-new-operator> and others.

As one example, compilation of 'g++.dg/warn/Wmismatched-tags.C' per 'valgrind --leak-check=full' improves as follows:

    [...]
    -104 bytes in 1 blocks are definitely lost in loss record 399 of 519
    -   at 0x483DFAF: realloc (vg_replace_malloc.c:836)
    -   by 0x223B62C: xrealloc (xmalloc.c:179)
    -   by 0xA8D848: void va_heap::reserve<class_decl_loc_t::class_key_loc_t>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:290)
    -   by 0xA8B373: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::reserve(unsigned int, bool) (vec.h:1858)
    -   by 0xA8B277: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::safe_push(class_decl_loc_t::class_key_loc_t const&) (vec.h:1967)
    -   by 0xA57481: class_decl_loc_t::add_or_diag_mismatched_tag(tree_node*, tag_types, bool, bool) (parser.c:32967)
    -   by 0xA573E1: class_decl_loc_t::add(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32941)
    -   by 0xA56C52: cp_parser_check_class_key(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32819)
    -   by 0xA3AD12: cp_parser_elaborated_type_specifier(cp_parser*, bool, bool) (parser.c:20227)
    -   by 0xA37EF2: cp_parser_type_specifier(cp_parser*, int, cp_decl_specifier_seq*, bool, int*, bool*) (parser.c:18942)
    -   by 0xA31CDD: cp_parser_decl_specifier_seq(cp_parser*, int, cp_decl_specifier_seq*, int*) (parser.c:15517)
    -   by 0xA43C71: cp_parser_parameter_declaration(cp_parser*, int, bool, bool*) (parser.c:24242)
    -
    -168 bytes in 3 blocks are definitely lost in loss record 422 of 519
    -   at 0x483DFAF: realloc (vg_replace_malloc.c:836)
    -   by 0x223B62C: xrealloc (xmalloc.c:179)
    -   by 0xA8D848: void va_heap::reserve<class_decl_loc_t::class_key_loc_t>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:290)
    -   by 0xA8B373: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::reserve(unsigned int, bool) (vec.h:1858)
    -   by 0xA8B277: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::safe_push(class_decl_loc_t::class_key_loc_t const&) (vec.h:1967)
    -   by 0xA57481: class_decl_loc_t::add_or_diag_mismatched_tag(tree_node*, tag_types, bool, bool) (parser.c:32967)
    -   by 0xA573E1: class_decl_loc_t::add(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32941)
    -   by 0xA56C52: cp_parser_check_class_key(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32819)
    -   by 0xA3AD12: cp_parser_elaborated_type_specifier(cp_parser*, bool, bool) (parser.c:20227)
    -   by 0xA37EF2: cp_parser_type_specifier(cp_parser*, int, cp_decl_specifier_seq*, bool, int*, bool*) (parser.c:18942)
    -   by 0xA31CDD: cp_parser_decl_specifier_seq(cp_parser*, int, cp_decl_specifier_seq*, int*) (parser.c:15517)
    -   by 0xA53385: cp_parser_single_declaration(cp_parser*, vec<deferred_access_check, va_gc, vl_embed>*, bool, bool, bool*) (parser.c:31072)
    -
    -488 bytes in 7 blocks are definitely lost in loss record 449 of 519
    -   at 0x483DFAF: realloc (vg_replace_malloc.c:836)
    -   by 0x223B62C: xrealloc (xmalloc.c:179)
    -   by 0xA8D848: void va_heap::reserve<class_decl_loc_t::class_key_loc_t>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:290)
    -   by 0xA8B373: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::reserve(unsigned int, bool) (vec.h:1858)
    -   by 0xA8B277: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::safe_push(class_decl_loc_t::class_key_loc_t const&) (vec.h:1967)
    -   by 0xA57481: class_decl_loc_t::add_or_diag_mismatched_tag(tree_node*, tag_types, bool, bool) (parser.c:32967)
    -   by 0xA573E1: class_decl_loc_t::add(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32941)
    -   by 0xA56C52: cp_parser_check_class_key(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32819)
    -   by 0xA3AD12: cp_parser_elaborated_type_specifier(cp_parser*, bool, bool) (parser.c:20227)
    -   by 0xA37EF2: cp_parser_type_specifier(cp_parser*, int, cp_decl_specifier_seq*, bool, int*, bool*) (parser.c:18942)
    -   by 0xA31CDD: cp_parser_decl_specifier_seq(cp_parser*, int, cp_decl_specifier_seq*, int*) (parser.c:15517)
    -   by 0xA49508: cp_parser_member_declaration(cp_parser*) (parser.c:26440)
    -
    -728 bytes in 7 blocks are definitely lost in loss record 455 of 519
    -   at 0x483B7F3: malloc (vg_replace_malloc.c:309)
    -   by 0x223B63F: xrealloc (xmalloc.c:177)
    -   by 0xA8D848: void va_heap::reserve<class_decl_loc_t::class_key_loc_t>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:290)
    -   by 0xA8B373: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::reserve(unsigned int, bool) (vec.h:1858)
    -   by 0xA57508: class_decl_loc_t::add_or_diag_mismatched_tag(tree_node*, tag_types, bool, bool) (parser.c:32980)
    -   by 0xA573E1: class_decl_loc_t::add(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32941)
    -   by 0xA56C52: cp_parser_check_class_key(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32819)
    -   by 0xA48BC6: cp_parser_class_head(cp_parser*, bool*) (parser.c:26090)
    -   by 0xA4674B: cp_parser_class_specifier_1(cp_parser*) (parser.c:25302)
    -   by 0xA47D76: cp_parser_class_specifier(cp_parser*) (parser.c:25680)
    -   by 0xA37E27: cp_parser_type_specifier(cp_parser*, int, cp_decl_specifier_seq*, bool, int*, bool*) (parser.c:18912)
    -   by 0xA31CDD: cp_parser_decl_specifier_seq(cp_parser*, int, cp_decl_specifier_seq*, int*) (parser.c:15517)
    -
    -832 bytes in 8 blocks are definitely lost in loss record 458 of 519
    -   at 0x483B7F3: malloc (vg_replace_malloc.c:309)
    -   by 0x223B63F: xrealloc (xmalloc.c:177)
    -   by 0xA8D848: void va_heap::reserve<class_decl_loc_t::class_key_loc_t>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:290)
    -   by 0xA901ED: bool vec_safe_reserve<class_decl_loc_t::class_key_loc_t, va_heap>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:697)
    -   by 0xA8F161: void vec_alloc<class_decl_loc_t::class_key_loc_t, va_heap>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int) (vec.h:718)
    -   by 0xA8D18D: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>::copy() const (vec.h:979)
    -   by 0xA8B0C3: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::copy() const (vec.h:1824)
    -   by 0xA896B1: class_decl_loc_t::operator=(class_decl_loc_t const&) (parser.c:32697)
    -   by 0xA571FD: class_decl_loc_t::add(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32899)
    -   by 0xA56C52: cp_parser_check_class_key(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32819)
    -   by 0xA3AD12: cp_parser_elaborated_type_specifier(cp_parser*, bool, bool) (parser.c:20227)
    -   by 0xA37EF2: cp_parser_type_specifier(cp_parser*, int, cp_decl_specifier_seq*, bool, int*, bool*) (parser.c:18942)
    -
    -1,144 bytes in 11 blocks are definitely lost in loss record 466 of 519
    -   at 0x483B7F3: malloc (vg_replace_malloc.c:309)
    -   by 0x223B63F: xrealloc (xmalloc.c:177)
    -   by 0xA8D848: void va_heap::reserve<class_decl_loc_t::class_key_loc_t>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:290)
    -   by 0xA901ED: bool vec_safe_reserve<class_decl_loc_t::class_key_loc_t, va_heap>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:697)
    -   by 0xA8F161: void vec_alloc<class_decl_loc_t::class_key_loc_t, va_heap>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int) (vec.h:718)
    -   by 0xA8D18D: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>::copy() const (vec.h:979)
    -   by 0xA8B0C3: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::copy() const (vec.h:1824)
    -   by 0xA896B1: class_decl_loc_t::operator=(class_decl_loc_t const&) (parser.c:32697)
    -   by 0xA571FD: class_decl_loc_t::add(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32899)
    -   by 0xA56C52: cp_parser_check_class_key(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32819)
    -   by 0xA48BC6: cp_parser_class_head(cp_parser*, bool*) (parser.c:26090)
    -   by 0xA4674B: cp_parser_class_specifier_1(cp_parser*) (parser.c:25302)
    -
    -1,376 bytes in 10 blocks are definitely lost in loss record 467 of 519
    -   at 0x483DFAF: realloc (vg_replace_malloc.c:836)
    -   by 0x223B62C: xrealloc (xmalloc.c:179)
    -   by 0xA8D848: void va_heap::reserve<class_decl_loc_t::class_key_loc_t>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:290)
    -   by 0xA8B373: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::reserve(unsigned int, bool) (vec.h:1858)
    -   by 0xA8B277: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::safe_push(class_decl_loc_t::class_key_loc_t const&) (vec.h:1967)
    -   by 0xA57481: class_decl_loc_t::add_or_diag_mismatched_tag(tree_node*, tag_types, bool, bool) (parser.c:32967)
    -   by 0xA573E1: class_decl_loc_t::add(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32941)
    -   by 0xA56C52: cp_parser_check_class_key(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32819)
    -   by 0xA3AD12: cp_parser_elaborated_type_specifier(cp_parser*, bool, bool) (parser.c:20227)
    -   by 0xA37EF2: cp_parser_type_specifier(cp_parser*, int, cp_decl_specifier_seq*, bool, int*, bool*) (parser.c:18942)
    -   by 0xA31CDD: cp_parser_decl_specifier_seq(cp_parser*, int, cp_decl_specifier_seq*, int*) (parser.c:15517)
    -   by 0xA301E0: cp_parser_simple_declaration(cp_parser*, bool, tree_node**) (parser.c:14772)
    -
    -3,552 bytes in 33 blocks are definitely lost in loss record 483 of 519
    -   at 0x483B7F3: malloc (vg_replace_malloc.c:309)
    -   by 0x223B63F: xrealloc (xmalloc.c:177)
    -   by 0xA8D848: void va_heap::reserve<class_decl_loc_t::class_key_loc_t>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:290)
    -   by 0xA901ED: bool vec_safe_reserve<class_decl_loc_t::class_key_loc_t, va_heap>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int, bool) (vec.h:697)
    -   by 0xA8F161: void vec_alloc<class_decl_loc_t::class_key_loc_t, va_heap>(vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>*&, unsigned int) (vec.h:718)
    -   by 0xA8D18D: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_embed>::copy() const (vec.h:979)
    -   by 0xA8B0C3: vec<class_decl_loc_t::class_key_loc_t, va_heap, vl_ptr>::copy() const (vec.h:1824)
    -   by 0xA8964A: class_decl_loc_t::class_decl_loc_t(class_decl_loc_t const&) (parser.c:32689)
    -   by 0xA8F515: hash_table<hash_map<tree_decl_hash, class_decl_loc_t, simple_hashmap_traits<default_hash_traits<tree_decl_hash>, class_decl_loc_t> >::hash_entry, false, xcallocator>::expand() (hash-table.h:839)
    -   by 0xA8D4B3: hash_table<hash_map<tree_decl_hash, class_decl_loc_t, simple_hashmap_traits<default_hash_traits<tree_decl_hash>, class_decl_loc_t> >::hash_entry, false, xcallocator>::find_slot_with_hash(tree_node* const&, unsigned int, insert_option) (hash-table.h:1008)
    -   by 0xA8B1DC: hash_map<tree_decl_hash, class_decl_loc_t, simple_hashmap_traits<default_hash_traits<tree_decl_hash>, class_decl_loc_t> >::get_or_insert(tree_node* const&, bool*) (hash-map.h:200)
    -   by 0xA57128: class_decl_loc_t::add(cp_parser*, unsigned int, tag_types, tree_node*, bool, bool) (parser.c:32888)
    [...]
    LEAK SUMMARY:
    -   definitely lost: 8,440 bytes in 81 blocks
    +   definitely lost: 48 bytes in 1 blocks
        indirectly lost: 12,529 bytes in 329 blocks
        possibly lost: 0 bytes in 0 blocks
        still reachable: 1,644,376 bytes in 768 blocks

gcc/
	* hash-table.h (hash_table<Descriptor, Lazy, Allocator>::expand):
	Destruct stale Value objects.
	* hash-map-tests.c (test_map_of_type_with_ctor_and_dtor_expand):
	Update.
2021-09-17Fortran: Use _Float128 rather than __float128 for c_float128 kind.Sandra Loosemore10-22/+21
The GNU Fortran manual documents that the c_float128 kind corresponds to __float128, but in fact the implementation uses float128_type_node, which is _Float128. Both refer to the 128-bit IEEE/ISO encoding, but some targets, including aarch64, define only _Float128 and not __float128, and do not provide quadmath.h. This caused errors in some test cases referring to __float128. This patch changes the documentation (including code comments) and test cases to use _Float128 to match the implementation.

2021-09-16  Sandra Loosemore  <sandra@codesourcery.com>

gcc/fortran/
	* intrinsic.texi (ISO_C_BINDING): Change C_FLOAT128 to correspond to _Float128 rather than __float128.
	* iso-c-binding.def (c_float128): Update comments.
	* trans-intrinsic.c (gfc_builtin_decl_for_float_kind): Likewise.
	(build_round_expr): Likewise.
	(gfc_build_intrinsic_lib_fndecls): Likewise.
	* trans-types.h (gfc_real16_is_float128): Likewise.

gcc/testsuite/
	* gfortran.dg/PR100914.c: Do not include quadmath.h.  Use _Float128 _Complex instead of __complex128.
	* gfortran.dg/PR100914.f90: Add -Wno-pedantic to suppress error about use of _Float128.
	* gfortran.dg/c-interop/typecodes-array-float128-c.c: Use _Float128 instead of __float128.
	* gfortran.dg/c-interop/typecodes-sanity-c.c: Likewise.
	* gfortran.dg/c-interop/typecodes-scalar-float128-c.c: Likewise.
	* lib/target-supports.exp (check_effective_target_fortran_real_c_float128): Update comments.

libgfortran/
	* ISO_Fortran_binding.h: Update comments.
	* runtime/ISO_Fortran_binding.c: Likewise.
2021-09-17Type coercions are recursivePhilip Herron1-1/+1
Reference types are covariant, like arrays or pointers, and thus this needs to be recursive to support all possible coercions. Addresses: #197
2021-09-17PR c/102245: Disable sign-changing optimization for shifts by zero.Roger Sayle2-2/+39
Respecting Jakub's suggestion that it may be better to warn-on-valid for "if (x << 0)" as the author might have intended "if (x < 0)" [which will also warn when x is _Bool], the simplest way to resolve this regression is to disable the recently added fold transformation for shifts by zero; these will be optimized later (elsewhere). Guarding against integer_zerop is the simplest of three alternatives; the second being to only apply this transformation to GIMPLE and not GENERIC, and the third (potentially) being to explicitly handle shifts by zero here, with an (if cond then else), optimizing the expression to a convert, but awkwardly duplicating a more general transformation earlier in match.pd's shift simplifications.

2021-09-17  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	PR c/102245
	* match.pd (shift optimizations): Disable recent sign-changing optimization for shifts by zero, these will be folded later.

gcc/testsuite/ChangeLog
	PR c/102245
	* gcc.dg/Wint-in-bool-context-4.c: New test case.
2021-09-17remove some debugPhilip Herron1-3/+0
2021-09-17When calling functions the arguments are a coercion sitePhilip Herron2-4/+35
This changes all type checking of arguments to function calls to be coercion sites instead of unifications. This allows for cases such as mutable pointers being coerced to immutable references.
2021-09-17Cleanup error handling for CallExprPhilip Herron2-9/+13
Call expressions need to type check the argument passing, but the type system will return TyTy::Error nodes; it used to return nullptr about a year ago. Returning error nodes is safer and more flexible for detailed error handling and diagnostics. Addresses: #539
2021-09-17Add building blocks for Dynamic object typesPhilip Herron14-35/+394
This is the stub implementation for dynamic object types within the type system. More work is needed to actually support dynamic trait objects. The next change requires us to support type coercions for arguments to functions, such as a fat reference to a type being coerced into this dynamic trait object for dynamic dispatch. Addresses: #197
2021-09-17rs6000: Move __builtin_mffsl to the [always] stanzaBill Schmidt1-3/+6
I over-restricted use of __builtin_mffsl, since I was unaware that it automatically uses mffs when mffsl is not available. Paul Clarke pointed this out in discussion of his SSE 4.1 compatibility patches.

2021-08-31  Bill Schmidt  <wschmidt@linux.ibm.com>

gcc/
	* config/rs6000/rs6000-builtin-new.def (__builtin_mffsl): Move from [power9] to [always].
2021-09-17Revert no longer needed fix for PR95539Richard Biener1-12/+1
The workaround is no longer necessary since we maintain alignment info on the DR group leader only.

2021-09-17  Richard Biener  <rguenther@suse.de>

	* tree-vect-stmts.c (vectorizable_load): Do not frob stmt_info for SLP.
2021-09-17openmp: Add support for OpenMP 5.1 atomics for C++Jakub Jelinek15-101/+704
Besides the C++ FE changes, I've noticed that the C FE didn't reject #pragma omp atomic capture compare { v = x; x = y; } and other forms of atomic swap; this patch fixes that too. And the c-family/ routine needed quite a few changes so that the new code in it works fine with both FEs.

2021-09-17  Jakub Jelinek  <jakub@redhat.com>

gcc/c-family/
	* c-omp.c (c_finish_omp_atomic): Avoid creating TARGET_EXPR if test is true, use create_tmp_var_raw instead of create_tmp_var and add a zero initializer to TARGET_EXPRs that had NULL initializer.  When omitting operands after v = x, use type of v rather than type of x.  Fix type of vtmp TARGET_EXPR.

gcc/c/
	* c-parser.c (c_parser_omp_atomic): Reject atomic swap if capture is true.

gcc/cp/
	* cp-tree.h (finish_omp_atomic): Add r and weak arguments.
	* parser.c (cp_parser_omp_atomic): Update function comment for OpenMP 5.1 atomics, parse OpenMP 5.1 atomics and fail, compare and weak clauses.
	* semantics.c (finish_omp_atomic): Add r and weak arguments, handle them, handle COND_EXPRs.
	* pt.c (tsubst_expr): Adjust for COND_EXPR forms that finish_omp_atomic can now produce.

gcc/testsuite/
	* c-c++-common/gomp/atomic-18.c: Expect same diagnostics in C++ as in C.
	* c-c++-common/gomp/atomic-25.c: Drop c effective target.
	* c-c++-common/gomp/atomic-26.c: Likewise.
	* c-c++-common/gomp/atomic-27.c: Likewise.
	* c-c++-common/gomp/atomic-28.c: Likewise.
	* c-c++-common/gomp/atomic-29.c: Likewise.
	* c-c++-common/gomp/atomic-30.c: Likewise.  Adjust expected diagnostics for C++ when it differs from C.
	(foo): Change return type from double to void.
	* g++.dg/gomp/atomic-5.C: Adjust expected diagnostics wording.
	* g++.dg/gomp/atomic-20.C: New test.

libgomp/
	* testsuite/libgomp.c-c++-common/atomic-19.c: Drop c effective target.  Use /* */ comments instead of //.
	* testsuite/libgomp.c-c++-common/atomic-20.c: Likewise.
	* testsuite/libgomp.c-c++-common/atomic-21.c: Likewise.
	* testsuite/libgomp.c++/atomic-16.C: New test.
	* testsuite/libgomp.c++/atomic-17.C: New test.