path: root/gcc/tree-ssa-sccvn.c
2020-11-16  tree-optimization/97830 - fix compare of incomplete type size in VN  (Richard Biener, 1 file changed, -1/+4)

This avoids passing NULL to expressions_equal_p.

2020-11-16  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/97830
  * tree-ssa-sccvn.c (vn_reference_eq): Check for incomplete types before comparing TYPE_SIZE.
  * gcc.dg/pr97830.c: New testcase.
2020-11-13  improve VN PHI hashing  (Richard Biener, 1 file changed, -6/+21)

This reduces the number of collisions for PHIs in the VN hashtable by always hashing the number of predecessors and by separately hashing the block number when we never merge PHIs from different blocks.  This dramatically reduces the collisions seen for the PR69609 testcase.

2020-11-13  Richard Biener  <rguenther@suse.de>

  * tree-ssa-sccvn.c (vn_phi_compute_hash): Always hash the number of predecessors.  Hash the block number also for loop header PHIs.
  (expressions_equal_p): Short-cut SSA name compares, remove test for NULL operands.
  (vn_phi_eq): Cache number of predecessors, change inlined test from expressions_equal_p.
2020-11-10  sccvn: Fix up push_partial_def little-endian bitfield handling [PR97764]  (Jakub Jelinek, 1 file changed, -1/+4)

This patch fixes a thinko in the little-endian push_partial_def path.  As the testcase shows, we have 3 bitfields in the struct:

  bitoff  bitsize
       0        3
       3       28
      31        1

The corresponding read is the byte at offset 3 (i.e. 24 bits), and push_partial_def first handles the full store ({}) to all bits and then processes the store to the middle bitfield with a value of -1.  Here are the interesting spots:

  pd.offset -= offseti;

This adjusts the pd to { -21, 28 }; the (for little-endian lowest) 21 bits aren't interesting to us, we only care about the upper 7.

  len = native_encode_expr (pd.rhs, this_buffer, bufsize,
                            MAX (0, -pd.offset) / BITS_PER_UNIT);

native_encode_expr has the offset parameter in bytes and we tell it that we aren't interested in the first (lowest) two bytes of the number.  It encodes 0xff, 0xff with len == 2 then.

  HOST_WIDE_INT size = pd.size;
  if (pd.offset < 0)
    size -= ROUND_DOWN (-pd.offset, BITS_PER_UNIT);

We get 28 - 16, i.e. 12 - the 16 is subtracting those 2 bytes that we omitted in native_encode_expr.

  size = MIN (size, (HOST_WIDE_INT) needed_len * BITS_PER_UNIT);

needed_len is how many bytes the read at most needs, and that is 1, so we get size 8 and copy all 8 bits (i.e. a single byte and nothing more) from the this_buffer that native_encode_expr filled; this incorrectly sets the byte to 0xff when we want 0x7f.  The above line is correct for the pd.offset >= 0 case when we don't skip anything, but for the pd.offset < 0 case we also need to subtract the remainder of the bits we aren't interested in (the code shifts the bytes by that number of bits).  If it weren't for the big-endian path, we could as well do

  if (pd.offset < 0)
    size += pd.offset;

but the big-endian path needs it differently.  With the following patch, amnt is 3 and we subtract from 12 the (8 - 3) bits and thus get the 7 which is the value we want.

2020-11-10  Jakub Jelinek  <jakub@redhat.com>

  PR tree-optimization/97764
  * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): For little-endian stores with negative pd.offset, subtract BITS_PER_UNIT - amnt from size if amnt is non-zero.
  * gcc.c-torture/execute/pr97764.c: New test.
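As an editorial illustration only (a hedged sketch, not necessarily the actual pr97764.c testcase), a struct with the bitfield layout described above could look like this:

  /* Bitfields at bit offsets 0, 3 and 31 with widths 3, 28 and 1.  */
  struct S { unsigned a : 3; unsigned b : 28; unsigned c : 1; } s;

  int f (void)
  {
    s = (struct S) {};                    /* the {} full store              */
    s.b = -1;                             /* store to the middle bitfield   */
    return ((unsigned char *) &s)[3];     /* byte at offset 3; on the
                                             little-endian layout described
                                             above, 0x7f is expected, not
                                             0xff                           */
  }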
2020-11-09  CSE VN_INFO calls in PRE and VN  (Richard Biener, 1 file changed, -6/+10)

The following CSEs VN_INFO calls, which nowadays are hashtable queries.

2020-11-09  Richard Biener  <rguenther@suse.de>

  * tree-ssa-pre.c (get_representative_for): CSE VN_INFO calls.
  (create_expression_by_pieces): Likewise.
  (insert_into_preds_of_block): Likewise.
  (do_pre_regular_insertion): Likewise.
  * tree-ssa-sccvn.c (eliminate_dom_walker::eliminate_insert): Likewise.
  (eliminate_dom_walker::eliminate_stmt): Likewise.
2020-11-06  make PRE constant value IDs negative  (Richard Biener, 1 file changed, -13/+21)

This separates constant and non-constant value-ids to allow for a more efficient constant_value_id_p and for more efficient bit-packing inside the bitmap sets, which never contain any constant values.  There are further optimization opportunities, but at this stage I'll do small refactorings.

2020-11-06  Richard Biener  <rguenther@suse.de>

  * tree-ssa-sccvn.h (get_max_constant_value_id): Declare.
  (get_next_constant_value_id): Likewise.
  (value_id_constant_p): Inline and simplify.
  * tree-ssa-sccvn.c (constant_value_ids): Remove.
  (next_constant_value_id): Add.
  (get_or_alloc_constant_value_id): Adjust.
  (value_id_constant_p): Remove definition.
  (get_max_constant_value_id): Define.
  (get_next_value_id): Add assert for overflow.
  (get_next_constant_value_id): Define.
  (run_rpo_vn): Adjust.
  (free_rpo_vn): Likewise.
  (do_rpo_vn): Initialize next_constant_value_id.
  * tree-ssa-pre.c (constant_value_expressions): New.
  (add_to_value): Split into constant/non-constant value handling.  Avoid exact re-allocation.
  (vn_valnum_from_value_id): Adjust.
  (phi_translate_1): Remove spurious exact re-allocation.
  (bitmap_find_leader): Adjust.  Make sure we return a CONSTANT value for a constant value id.
  (do_pre_regular_insertion): Use 2 auto-elements for avail.
  (do_pre_partial_partial_insertion): Likewise.
  (init_pre): Allocate constant_value_expressions.
  (fini_pre): Release constant_value_expressions.
2020-10-08  Disable TBAA in some uses of call_may_clobber_ref_p  (Jan Hubicka, 1 file changed, -1/+1)

  * tree-nrv.c (dest_safe_for_nrv_p): Disable tbaa in call_may_clobber_ref_p and ref_maybe_used_by_stmt_p.
  * tree-tailcall.c (find_tail_calls): Likewise.
  * tree-ssa-alias.c (call_may_clobber_ref_p): Add tbaa_p parameter.
  * tree-ssa-alias.h (call_may_clobber_ref_p): Update prototype.
  * tree-ssa-sccvn.c (vn_reference_lookup_3): Pass data->tbaa_p to call_may_clobber_ref_p_1.
2020-09-18  tree-optimization/97089 - fix bogus unsigned division replacement  (Richard Biener, 1 file changed, -1/+4)

This fixes bogus replacing of an unsigned (-x)/y division by -(x/y).

2020-09-18  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/97089
  * tree-ssa-sccvn.c (visit_nary_op): Do not replace unsigned divisions.
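A small worked example of why the replacement is invalid for unsigned types (an editorial illustration, not the PR97089 testcase): with 32-bit unsigned x = 1 and y = 2, (-x) / y is 0xffffffff / 2 = 2147483647, while -(x / y) is -(0) = 0.

  unsigned f1 (unsigned x, unsigned y) { return (-x) / y; }  /* 2147483647 for (1, 2) */
  unsigned f2 (unsigned x, unsigned y) { return -(x / y); }  /* 0 for (1, 2)          */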
2020-09-17  CSE negated multiplications and divisions  (Richard Biener, 1 file changed, -0/+35)

This adds the capability to look for available negated multiplications and divisions, replacing them with cheaper negates.

2020-09-17  Richard Biener  <rguenther@suse.de>

  * tree-ssa-sccvn.c (visit_nary_op): Value-number multiplications and divisions to negates of available negated forms.
  * gcc.dg/tree-ssa/ssa-fre-88.c: New testcase.
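A hedged illustration of the idea (not the ssa-fre-88.c testcase; the actual transform has its own applicability checks):

  /* If the negated product -x * y is already available, x * y can be
     value-numbered to a negation of that available value instead of a
     second multiplication, and likewise for divisions.  */
  double f (double x, double y, double *tmp)
  {
    *tmp = -x * y;   /* makes the negated form available               */
    return x * y;    /* candidate to become a negate of *tmp under FRE */
  }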
2020-08-27  tree-optimization/96522 - transfer of flow-sensitive info in copy_ref_info  (Richard Biener, 1 file changed, -2/+1)

This removes the bogus transfer of flow-sensitive info in copy_ref_info plus fixes one oversight in FRE when flow-sensitive non-NULLness was added to points-to info.

2020-08-27  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/96522
  * tree-ssa-address.c (copy_ref_info): Reset flow-sensitive info of the copied points-to.  Transfer bigger alignment via the access type.
  * tree-ssa-sccvn.c (eliminate_dom_walker::eliminate_stmt): Reset all flow-sensitive info.
  * gcc.dg/torture/pr96522.c: New testcase.
2020-08-27  vec: add exact argument for various grow functions.  (Martin Liska, 1 file changed, -4/+4)
gcc/ada/ChangeLog: * gcc-interface/trans.c (gigi): Set exact argument of a vector growth function to true. (Attribute_to_gnu): Likewise. gcc/ChangeLog: * alias.c (init_alias_analysis): Set exact argument of a vector growth function to true. * calls.c (internal_arg_pointer_based_exp_scan): Likewise. * cfgbuild.c (find_many_sub_basic_blocks): Likewise. * cfgexpand.c (expand_asm_stmt): Likewise. * cfgrtl.c (rtl_create_basic_block): Likewise. * combine.c (combine_split_insns): Likewise. (combine_instructions): Likewise. * config/aarch64/aarch64-sve-builtins.cc (function_expander::add_output_operand): Likewise. (function_expander::add_input_operand): Likewise. (function_expander::add_integer_operand): Likewise. (function_expander::add_address_operand): Likewise. (function_expander::add_fixed_operand): Likewise. * df-core.c (df_worklist_dataflow_doublequeue): Likewise. * dwarf2cfi.c (update_row_reg_save): Likewise. * early-remat.c (early_remat::init_block_info): Likewise. (early_remat::finalize_candidate_indices): Likewise. * except.c (sjlj_build_landing_pads): Likewise. * final.c (compute_alignments): Likewise. (grow_label_align): Likewise. * function.c (temp_slots_at_level): Likewise. * fwprop.c (build_single_def_use_links): Likewise. (update_uses): Likewise. * gcc.c (insert_wrapper): Likewise. * genautomata.c (create_state_ainsn_table): Likewise. (add_vect): Likewise. (output_dead_lock_vect): Likewise. * genmatch.c (capture_info::capture_info): Likewise. (parser::finish_match_operand): Likewise. * genrecog.c (optimize_subroutine_group): Likewise. (merge_pattern_info::merge_pattern_info): Likewise. (merge_into_decision): Likewise. (print_subroutine_start): Likewise. (main): Likewise. * gimple-loop-versioning.cc (loop_versioning::loop_versioning): Likewise. * gimple.c (gimple_set_bb): Likewise. * graphite-isl-ast-to-gimple.c (translate_isl_ast_node_user): Likewise. * haifa-sched.c (sched_extend_luids): Likewise. (extend_h_i_d): Likewise. * insn-addr.h (insn_addresses_new): Likewise. * ipa-cp.c (gather_context_independent_values): Likewise. (find_more_contexts_for_caller_subset): Likewise. * ipa-devirt.c (final_warning_record::grow_type_warnings): Likewise. (ipa_odr_read_section): Likewise. * ipa-fnsummary.c (evaluate_properties_for_edge): Likewise. (ipa_fn_summary_t::duplicate): Likewise. (analyze_function_body): Likewise. (ipa_merge_fn_summary_after_inlining): Likewise. (read_ipa_call_summary): Likewise. * ipa-icf.c (sem_function::bb_dict_test): Likewise. * ipa-prop.c (ipa_alloc_node_params): Likewise. (parm_bb_aa_status_for_bb): Likewise. (ipa_compute_jump_functions_for_edge): Likewise. (ipa_analyze_node): Likewise. (update_jump_functions_after_inlining): Likewise. (ipa_read_edge_info): Likewise. (read_ipcp_transformation_info): Likewise. (ipcp_transform_function): Likewise. * ipa-reference.c (ipa_reference_write_optimization_summary): Likewise. * ipa-split.c (execute_split_functions): Likewise. * ira.c (find_moveable_pseudos): Likewise. * lower-subreg.c (decompose_multiword_subregs): Likewise. * lto-streamer-in.c (input_eh_regions): Likewise. (input_cfg): Likewise. (input_struct_function_base): Likewise. (input_function): Likewise. * modulo-sched.c (set_node_sched_params): Likewise. (extend_node_sched_params): Likewise. (schedule_reg_moves): Likewise. * omp-general.c (omp_construct_simd_compare): Likewise. * passes.c (pass_manager::create_pass_tab): Likewise. (enable_disable_pass): Likewise. * predict.c (determine_unlikely_bbs): Likewise. * profile.c (compute_branch_probabilities): Likewise. 
* read-rtl-function.c (function_reader::parse_block): Likewise. * read-rtl.c (rtx_reader::read_rtx_code): Likewise. * reg-stack.c (stack_regs_mentioned): Likewise. * regrename.c (regrename_init): Likewise. * rtlanal.c (T>::add_single_to_queue): Likewise. * sched-deps.c (init_deps_data_vector): Likewise. * sel-sched-ir.c (sel_extend_global_bb_info): Likewise. (extend_region_bb_info): Likewise. (extend_insn_data): Likewise. * symtab.c (symtab_node::create_reference): Likewise. * tracer.c (tail_duplicate): Likewise. * trans-mem.c (tm_region_init): Likewise. (get_bb_regions_instrumented): Likewise. * tree-cfg.c (init_empty_tree_cfg_for_function): Likewise. (build_gimple_cfg): Likewise. (create_bb): Likewise. (move_block_to_fn): Likewise. * tree-complex.c (tree_lower_complex): Likewise. * tree-if-conv.c (predicate_rhs_code): Likewise. * tree-inline.c (copy_bb): Likewise. * tree-into-ssa.c (get_ssa_name_ann): Likewise. (mark_phi_for_rewrite): Likewise. * tree-object-size.c (compute_builtin_object_size): Likewise. (init_object_sizes): Likewise. * tree-predcom.c (initialize_root_vars_store_elim_1): Likewise. (initialize_root_vars_store_elim_2): Likewise. (prepare_initializers_chain_store_elim): Likewise. * tree-ssa-address.c (addr_for_mem_ref): Likewise. (multiplier_allowed_in_address_p): Likewise. * tree-ssa-coalesce.c (ssa_conflicts_new): Likewise. * tree-ssa-forwprop.c (simplify_vector_constructor): Likewise. * tree-ssa-loop-ivopts.c (addr_offset_valid_p): Likewise. (get_address_cost_ainc): Likewise. * tree-ssa-loop-niter.c (discover_iteration_bound_by_body_walk): Likewise. * tree-ssa-pre.c (add_to_value): Likewise. (phi_translate_1): Likewise. (do_pre_regular_insertion): Likewise. (do_pre_partial_partial_insertion): Likewise. (init_pre): Likewise. * tree-ssa-propagate.c (ssa_prop_init): Likewise. (update_call_from_tree): Likewise. * tree-ssa-reassoc.c (optimize_range_tests_cmp_bitwise): Likewise. * tree-ssa-sccvn.c (vn_reference_lookup_3): Likewise. (vn_reference_lookup_pieces): Likewise. (eliminate_dom_walker::eliminate_push_avail): Likewise. * tree-ssa-strlen.c (set_strinfo): Likewise. (get_stridx_plus_constant): Likewise. (zero_length_string): Likewise. (find_equal_ptrs): Likewise. (printf_strlen_execute): Likewise. * tree-ssa-threadedge.c (set_ssa_name_value): Likewise. * tree-ssanames.c (make_ssa_name_fn): Likewise. * tree-streamer-in.c (streamer_read_tree_bitfields): Likewise. * tree-vect-loop.c (vect_record_loop_mask): Likewise. (vect_get_loop_mask): Likewise. (vect_record_loop_len): Likewise. (vect_get_loop_len): Likewise. * tree-vect-patterns.c (vect_recog_mask_conversion_pattern): Likewise. * tree-vect-slp.c (vect_slp_convert_to_external): Likewise. (vect_bb_slp_scalar_cost): Likewise. (vect_bb_vectorization_profitable_p): Likewise. (vectorizable_slp_permutation): Likewise. * tree-vect-stmts.c (vectorizable_call): Likewise. (vectorizable_simd_clone_call): Likewise. (scan_store_can_perm_p): Likewise. (vectorizable_store): Likewise. * expr.c: Likewise. * vec.c (test_safe_grow_cleared): Likewise. * vec.h (vec_safe_grow): Likewise. (vec_safe_grow_cleared): Likewise. (vl_ptr>::safe_grow): Likewise. (vl_ptr>::safe_grow_cleared): Likewise. * config/c6x/c6x.c (insn_set_clock): Likewise. gcc/c/ChangeLog: * gimple-parser.c (c_parser_gimple_compound_statement): Set exact argument of a vector growth function to true. gcc/cp/ChangeLog: * class.c (build_vtbl_initializer): Set exact argument of a vector growth function to true. * constraint.cc (get_mapped_args): Likewise. 
* decl.c (cp_maybe_mangle_decomp): Likewise. (cp_finish_decomp): Likewise. * parser.c (cp_parser_omp_for_loop): Likewise. * pt.c (canonical_type_parameter): Likewise. * rtti.c (get_pseudo_ti_init): Likewise. gcc/fortran/ChangeLog: * trans-openmp.c (gfc_trans_omp_do): Set exact argument of a vector growth function to true. gcc/lto/ChangeLog: * lto-common.c (lto_file_finalize): Set exact argument of a vector growth function to true.
2020-08-24  Add missing vn_reference_t::punned initialization  (Martin Liska, 1 file changed, -2/+3)

gcc/ChangeLog:

  PR tree-optimization/96597
  * tree-ssa-sccvn.c (vn_reference_lookup_call): Add missing initialization of ::punned.
  (vn_reference_insert): Use consistently false instead of 0.
  (vn_reference_insert_pieces): Likewise.
2020-08-04  tree-optimization/88240 - stopgap for floating point code-hoisting issues  (Richard Biener, 1 file changed, -1/+12)

This adds a stopgap measure to avoid performing code-hoisting on mixed-type loads when the load we'd insert in the hoisting position would be a floating point one.  This is because certain targets (hello x87) cannot perform floating point loads without possibly altering the bit representation, so such loads cannot be used in place of integral loads.

2020-08-04  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/88240
  * tree-ssa-sccvn.h (vn_reference_s::punned): New flag.
  * tree-ssa-sccvn.c (vn_reference_insert): Initialize punned.
  (vn_reference_insert_pieces): Likewise.
  (visit_reference_op_call): Likewise.
  (visit_reference_op_load): Track whether a ref was punned.
  * tree-ssa-pre.c (do_hoist_insertion): Refuse to perform hoist insertion on punned floating point loads.
  * gcc.target/i386/pr88240.c: New testcase.
2020-07-31  Compute RPO with adjacent SCC members, expose toplevel SCC extents  (Richard Biener, 1 file changed, -44/+11)

This produces a more optimal RPO order for iteration processing by making sure that SCC exits are processed before SCC members themselves.  This avoids iterating blocks unrelated to the current iteration for RPO VN and has the chance to improve code generation for the non-iterative mode of RPO VN.  The patch also exposes toplevel SCCs and gets rid of the ad-hoc max_rpo computation in RPO VN.  For simplicity it also removes the odd reverse ordering of the RPO array returned from rev_post_order_and_mark_dfs_back_seme.  The overall reduction in the number of visited blocks isn't spectacular for bootstrap (~2.5%), but single cases see up to a 10% reduction.  The same function can be used to optimize var-tracking iteration order, as seen in the followup.

2020-07-28  Richard Biener  <rguenther@suse.de>

  * cfganal.h (rev_post_order_and_mark_dfs_back_seme): Adjust prototype.
  * cfganal.c (rpoamdbs_bb_data): New struct with pre BB data.
  (tag_header): New helper.
  (cmp_edge_dest_pre): Likewise.
  (rev_post_order_and_mark_dfs_back_seme): Compute SCCs, find SCC exits and perform a DFS walk with extra edges to compute a RPO with adjacent SCC members when requesting an iteration optimized order and populate the toplevel SCC array.
  * tree-ssa-sccvn.c (do_rpo_vn): Remove ad-hoc computation of max_rpo and fill it in from SCC extent info instead.
  * gcc.dg/torture/20200727-0.c: New testcase.
2020-07-09  Make memory copy functions scalar storage order barriers  (Eric Botcazou, 1 file changed, -2/+10)

This addresses the issue raised about the usage of memory copy functions to toggle the scalar storage order.  Recall that you cannot (the compiler errors out) take the address of a scalar which is stored in reverse order, but you can do it for the enclosing aggregate type, which means that you can also pass it to the memory copy functions.  In this case, the optimizer may rewrite the copy into a scalar copy, which is a no-no.

gcc/c-family/ChangeLog:
  * c.opt (Wscalar-storage-order): Add explicit variable.

gcc/c/ChangeLog:
  * c-typeck.c (convert_for_assignment): If -Wscalar-storage-order is set, warn for conversion between pointers that point to incompatible scalar storage orders.

gcc/ChangeLog:
  * gimple-fold.c (gimple_fold_builtin_memory_op): Do not fold if either type has reverse scalar storage order.
  * tree-ssa-sccvn.c (vn_reference_lookup_3): Do not propagate through a memory copy if either type has reverse scalar storage order.

gcc/testsuite/ChangeLog:
  * gcc.dg/sso-11.c: New test.
  * gcc.dg/sso/sso.exp: Pass -Wno-scalar-storage-order.
  * gcc.dg/sso/memcpy-1.c: New test.
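A hedged sketch of the situation (an editorial illustration, not the sso-11.c testcase): a memcpy of an aggregate with reverse scalar storage order must stay a byte copy and must not be turned into a scalar copy, since that would reorder the bytes.

  struct __attribute__((scalar_storage_order("big-endian"))) be_wrapper { int x; };
  struct le_wrapper { int x; };

  void copy (struct le_wrapper *d, struct be_wrapper *s)
  {
    /* Must remain a byte-wise copy; folding it to d->x = s->x would be wrong.  */
    __builtin_memcpy (d, s, sizeof *d);
  }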
2020-05-11  tree-optimization/95049 - fix not terminating RPO VN iteration  (Richard Biener, 1 file changed, -5/+22)

This rejects lattice changes from one constant to another.

2020-05-11  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/95049
  * tree-ssa-sccvn.c (set_ssa_val_to): Reject lattice transition between different constants.
  * gcc.dg/torture/pr95049.c: New testcase.
2020-05-08  Fix availability compute during VN DOM elimination  (Richard Biener, 1 file changed, -5/+10)

This fixes an issue with redundant store elimination in FRE/PRE which, when invoked by the DOM elimination walk, ends up using possibly stale availability data from the RPO walk.  It also fixes a missed optimization during valueization of addresses by making sure to use get_addr_base_and_unit_offset_1, which can valueize, and adjusting that to also valueize the low bound of ARRAY_REFs.

2020-05-08  Richard Biener  <rguenther@suse.de>

  * tree-ssa-sccvn.c (rpo_avail): Change type to eliminate_dom_walker *.
  (eliminate_with_rpo_vn): Adjust rpo_avail to make vn_valueize use the DOM walker availability.
  (vn_reference_fold_indirect): Use get_addr_base_and_unit_offset_1 with vn_valueize as valueization callback.
  (vn_reference_maybe_forwprop_address): Likewise.
  * tree-dfa.c (get_addr_base_and_unit_offset_1): Also valueize array_ref_low_bound.
  * gnat.dg/opt83.adb: New testcase.
2020-05-04  tree-optimization/93891 - improve same-store disambiguation  (Richard Biener, 1 file changed, -2/+4)

We need a reference to assess alignment, so fall back to the original reference tree if available.

2020-05-04  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/93891
  * tree-ssa-sccvn.c (vn_reference_lookup_3): Fall back to the original reference tree for assessing access alignment.
2020-04-17  fix PVS-Studio reported bugs  (Richard Biener, 1 file changed, -1/+1)

2020-04-17  Richard Biener  <rguenther@suse.de>

  PR other/94629
  * cgraphclones.c (cgraph_node::create_clone): Remove duplicate initialization.
  * dwarf2out.c (dw_val_equal_p): Fix pasto in dw_val_class_vms_delta comparison.
  * optabs.c (expand_binop_directly): Fix pasto in commutation check.
  * tree-ssa-sccvn.c (vn_reference_lookup_pieces): Fix pasto in initialization.
2020-03-25  sccvn: Fix buffer overflow in push_partial_def [PR94300]  (Jakub Jelinek, 1 file changed, -0/+2)

The following testcase is miscompiled, because there is a buffer overflow in push_partial_def in the little-endian case when working with 64-byte vectors.  The code computes the number of bytes we need in the BUFFER: NEEDED_LEN, which is the rounded-up number of bits we need.  Then the code uses native_encode_expr to encode each (partially overlapping) pd into THIS_BUFFER.  If pd.offset < 0, i.e. the pd.rhs store starts at some bits before the window we are interested in, we pass -pd.offset to native_encode_expr and shrink the size already earlier:

  HOST_WIDE_INT size = pd.size;
  if (pd.offset < 0)
    size -= ROUND_DOWN (-pd.offset, BITS_PER_UNIT);

On this testcase, the problem is with a store with pd.offset > 0, in particular pd.offset 256, pd.size 512, i.e. a 64-byte store which doesn't fit entirely into BUFFER.  We have just:

  size = MIN (size, (HOST_WIDE_INT) needed_len * BITS_PER_UNIT);

in this case for little-endian, which isn't sufficient, because needed_len is 64, the entire BUFFER (except for the last extra byte used for shifting).  native_encode_expr fills the whole THIS_BUFFER (again, except the last extra byte), and the code then performs

  memcpy (BUFFER + 32, THIS_BUFFER, 64);

which overflows BUFFER and, as THIS_BUFFER is usually laid out after it, overflows into THIS_BUFFER.  The following patch fixes it by making sure size is reduced too for pd.offset > 0.  For big-endian the code does things differently and already handles this right.

2020-03-25  Jakub Jelinek  <jakub@redhat.com>

  PR tree-optimization/94300
  * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): If pd.offset is positive, make sure that off + size isn't larger than needed_len.
  * gcc.target/i386/avx512f-pr94300.c: New test.
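A hedged sketch of the clamp described in the ChangeLog entry (assumed form, not the literal patch): with a positive pd.offset the destination starts some bytes into BUFFER, so the remaining space, not the whole needed_len, has to bound the copy.

  /* off is pd.offset in bytes; keep off + size within needed_len.  */
  HOST_WIDE_INT off = pd.offset / BITS_PER_UNIT;
  size = MIN (size, (HOST_WIDE_INT) (needed_len - off) * BITS_PER_UNIT);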
2020-03-12  tree-optimization/94103 avoid CSE of loads with padding  (Richard Biener, 1 file changed, -7/+16)

VN currently replaces a load of a 16-byte entity with 128 bits of precision (TImode) with the result of a load of a 16-byte entity with 80 bits of mode precision (XFmode).  That goes wrong since the padding bits are then not actually filled with memory contents, so those bits are missing.

2020-03-12  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/94103
  * tree-ssa-sccvn.c (visit_reference_op_load): Avoid type punning when the mode precision is not sufficient.
  * gcc.target/i386/pr94103.c: New testcase.
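A hedged sketch of the problem (an editorial illustration, not the pr94103.c testcase): on x86, long double occupies 16 bytes but only 80 bits are mode precision, so reusing the XFmode load's value for a later 128-bit integer read would drop the padding bytes.

  union U { long double ld; unsigned __int128 i; } u;
  long double sink;

  unsigned __int128 f (void)
  {
    sink = u.ld;   /* XFmode load: only 80 of the 128 bits carry memory contents */
    return u.i;    /* must stay a TImode load, not be CSEd from the XFmode value */
  }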
2020-03-05  sccvn: Fix handling of POINTER_PLUS_EXPR in memset offset [PR93582]  (Jakub Jelinek, 1 file changed, -4/+8)

> > where POINTER_PLUS_EXPR last operand has sizetype type, thus unsigned,
> > and in the testcase gimple_assign_rhs2 (def) is thus 0xf000000000000001ULL
> > which multiplied by 8 doesn't fit into signed HWI.  If it would be treated
> > as signed offset instead, it would fit (-0xfffffffffffffffLL, multiplied
> > by 8 is -0x7ffffffffffffff8LL).  Unfortunately with the poly_int obfuscation
> > I'm not sure how to convert it from unsigned to signed poly_int.
>
> mem_ref_offset provides a boiler-plate for this:
>
> poly_offset_int::from (wi::to_poly_wide (TREE_OPERAND (t, 1)), SIGNED);

Thanks, that seems to work.  The test now works on both big-endian and little-endian.

2020-03-05  Richard Biener  <rguenther@suse.de>
            Jakub Jelinek  <jakub@redhat.com>

  PR tree-optimization/93582
  * tree-ssa-sccvn.c (vn_reference_lookup_3): Treat POINTER_PLUS_EXPR last operand as signed when looking for memset offset.  Formatting fix.
  * gcc.dg/tree-ssa/pr93582-11.c: New test.

Co-authored-by: Richard Biener <rguenther@suse.de>
2020-03-04  sccvn: Avoid overflows in push_partial_def  (Jakub Jelinek, 1 file changed, -13/+36)

The following patch attempts to avoid dangerous overflows in the various push_partial_def HOST_WIDE_INT computations.  This is achieved by performing the subtraction offset2i - offseti in the push_partial_def function and doing some tweaks before that.  If a constant store (non-CONSTRUCTOR) is too large (perhaps just a hypothetical case), native_encode_expr would fail for it, but we don't necessarily need to fail right away; instead we can treat it like a non-constant store and, if it is already shadowed, ignore it.  Otherwise, if it is at most 64 bytes, the caller ensured that there is a range overlap, and push_partial_def ensures the load is at most 64 bytes, I think we should be fine: the offset (relative to the load) can only be from -64*8+1 to 64*8-1 and the size at most 64*8, so there is no risk of overflowing HOST_WIDE_INT computations.

For CONSTRUCTOR (or non-constant) stores, those can indeed be arbitrarily large; the caller just checks that both the absolute offset and size fit into signed HWI.  But we store the same bytes in that case over and over (both in the {} case where it is all 0, and in the hypothetical future case where we handle in push_partial_def also memset (, 123, )), so we can tweak the write range for our purposes.  For a {} store we could just cap it at the start offset and/or offset+size because all the bits are 0, but I wrote it in anticipation of the memset case, and so the relative offset can now be down to -7 and similarly the size can grow up to 64 bytes + 14 bits, all this trying to preserve the offset difference % BITS_PER_UNIT or the end as well.

2020-03-04  Jakub Jelinek  <jakub@redhat.com>

  * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Add offseti argument.  Change pd argument so that it can be modified.  Turn constant non-CONSTRUCTOR store into non-constant if it is too large.  Adjust offset and size of CONSTRUCTOR or non-constant store to avoid overflows.
  (vn_walk_cb_data::vn_walk_cb_data, vn_reference_lookup_3): Adjust callers.
2020-03-03  sccvn: Improve handling of load masked with integer constant [PR93582]  (Jakub Jelinek, 1 file changed, -39/+129)

As mentioned in the PR and discussed on IRC, the following patch is the patch that fixes the originally reported issue.  Because of the premature bitfield comparison -> BIT_FIELD_REF optimization, we have there:

  s$s4_19 = 0;
  s.s4 = s$s4_19;
  _10 = BIT_FIELD_REF <s, 8, 0>;
  _13 = _10 & 8;

and no other s fields are initialized.  If they were all initialized with constants, then my earlier PR93582 bitfield handling patches would handle it already, but if at least one bit we ignore after the BIT_AND_EXPR masking is not initialized or is initialized earlier to something non-constant, we aren't able to look through it until combine, which is too late for the warnings on the dead code.

This patch handles BIT_AND_EXPR where the first operand is an SSA_NAME initialized with a memory load and the second operand is an INTEGER_CST, by trying a partial def lookup after pushing the ranges of 0 bits in the mask as artificial initializers.  In the above case on little-endian, we push an offset 0, size 3 {} partial def and an offset 4, size 4 one (the result is unsigned char), and then perform normal partial def handling.

My initial version of the patch failed miserably during bootstrap, because data->finish (...) called vn_reference_lookup_or_insert_for_pieces, which I believe tried to remember the masked value rather than the real one for the reference, or for a failed lookup visit_reference_op_load called vn_reference_insert.  The following version makes sure we aren't calling either of those functions in the masked case, as we don't know anything better about the reference than whatever has been discovered when the load stmt has been visited; the patch just calls vn_nary_op_insert_stmt on failure with the lhs (apparently calling it with the INTEGER_CST doesn't work).

2020-03-03  Jakub Jelinek  <jakub@redhat.com>

  PR tree-optimization/93582
  * tree-ssa-sccvn.h (vn_reference_lookup): Add mask argument.
  * tree-ssa-sccvn.c (struct vn_walk_cb_data): Add mask and masked_result members, initialize them in the constructor and if mask is non-NULL, artificially push_partial_def {} for the portions of the mask that contain zeros.
  (vn_walk_cb_data::finish): If mask is non-NULL, set masked_result to val and return (void *)-1.  Formatting fix.
  (vn_reference_lookup_pieces): Adjust vn_walk_cb_data initialization.  Formatting fix.
  (vn_reference_lookup): Add mask argument.  If non-NULL, don't call fully_constant_vn_reference_p nor vn_reference_lookup_1 and return data.mask_result.
  (visit_nary_op): Handle BIT_AND_EXPR of a memory load and INTEGER_CST mask.
  (visit_stmt): Formatting fix.
  * gcc.dg/tree-ssa/pr93582-10.c: New test.
  * gcc.dg/pr93582.c: New test.
  * gcc.c-torture/execute/pr93582.c: New test.
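A hedged C-level sketch of the pattern (an editorial illustration, not one of the added testcases): only one bitfield is written, and the whole containing byte is then loaded and masked so that only the initialized bit matters.

  struct S { int s1 : 1, s2 : 1, s3 : 1, s4 : 1; } s;

  int f (void)
  {
    s.s4 = 0;
    /* After the premature lowering shown above this becomes a byte load
       masked with 8; with the patch VN can see the result is 0.  */
    return s.s4 != 0;
  }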
2020-03-03  tree-optimization/93946 - fix bogus redundant store removal in FRE, DSE and DOM  (Richard Biener, 1 file changed, -100/+122)

This fixes a common mistake in removing a store that looks redundant but is not, because it changes the dynamic type of the memory and thus makes a difference for following loads with TBAA.

2020-03-03  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/93946
  * alias.h (refs_same_for_tbaa_p): Declare.
  * alias.c (refs_same_for_tbaa_p): New function.
  * tree-ssa-alias.c (ao_ref_alias_set): For a NULL ref return zero.
  * tree-ssa-scopedtables.h (avail_exprs_stack::lookup_avail_expr): Add output argument giving access to the hashtable entry.
  * tree-ssa-scopedtables.c (avail_exprs_stack::lookup_avail_expr): Likewise.
  * tree-ssa-dom.c: Include alias.h.
  (dom_opt_dom_walker::optimize_stmt): Validate TBAA state before removing redundant store.
  * tree-ssa-sccvn.h (vn_reference_s::base_set): New member.
  (ao_ref_init_from_vn_reference): Adjust prototype.
  (vn_reference_lookup_pieces): Likewise.
  (vn_reference_insert_pieces): Likewise.
  * tree-ssa-sccvn.c: Track base alias set in addition to alias set everywhere.
  (eliminate_dom_walker::eliminate_stmt): Also check base alias set when removing redundant stores.
  (visit_reference_op_store): Likewise.
  * dse.c (record_store): Adjust validity check for redundant store removal.
  * gcc.dg/torture/pr93946-1.c: New testcase.
  * gcc.dg/torture/pr93946-2.c: Likewise.
2020-02-27  tree-optimization/93508 - make VN translate through _chk and valueize length  (Richard Biener, 1 file changed, -4/+15)

Value-numbering failed to handle __builtin_{memcpy,memset,...}_chk variants when removing abstraction and also failed to use the value-numbering lattice when requiring the length argument of the call to be constant.

2020-02-27  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/93508
  * tree-ssa-sccvn.c (vn_reference_lookup_3): Handle _CHK like non-_CHK variants.  Valueize their length arguments.
  * gcc.dg/tree-ssa/ssa-fre-85.c: New testcase.
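A hedged example of the kind of call now handled (an editorial illustration, not the ssa-fre-85.c testcase): FRE can look through the fortified copy just like a plain memcpy and forward the stored value, even when the length reaches the call through an SSA name.

  static const int a[4] = { 1, 2, 3, 4 };

  int f (int *dst)
  {
    unsigned long n = sizeof a;
    __builtin___memcpy_chk (dst, a, n, __builtin_object_size (dst, 0));
    return dst[0];   /* can be value-numbered through the copy to a[0], i.e. 1 */
  }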
2020-02-27  sccvn: Handle non-byte aligned offset or size for memset (, 123, ) [PR93945]  (Jakub Jelinek, 1 file changed, -6/+36)

The following is the last spot in vn_reference_lookup_3 that didn't allow non-byte-aligned offsets or sizes.  To be precise, it did allow a size that wasn't a multiple of the byte size, and that caused a wrong-code issue on big-endian, as the pr93945.c testcase shows, so for GCC 9 we should add a && multiple_p (ref->size, BITS_PER_UNIT) check instead.  For the memset with an SSA_NAME middle argument, it still requires a byte-aligned offset, as we'd otherwise need to rotate the value at runtime.

2020-02-27  Jakub Jelinek  <jakub@redhat.com>

  PR tree-optimization/93582
  PR tree-optimization/93945
  * tree-ssa-sccvn.c (vn_reference_lookup_3): Handle memset with non-zero INTEGER_CST second argument and ref->offset or ref->size not a multiple of BITS_PER_UNIT.
  * gcc.dg/tree-ssa/pr93582-9.c: New test.
  * gcc.c-torture/execute/pr93945.c: New test.
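A hedged illustration of the newly handled case (an editorial sketch, not the pr93582-9.c testcase): a constant non-zero memset followed by a read of a bitfield whose bit range is not byte-aligned.

  struct S { int a : 3; int b : 7; } s;

  int f (void)
  {
    __builtin_memset (&s, 123, sizeof s);
    /* On a typical little-endian target b occupies bits 3..9, so both the
       offset and the size are non-byte-aligned; VN can now compute the
       value from the repeated 0x7b byte pattern.  */
    return s.b;
  }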
2020-02-24  sccvn: Handle bitfields in push_partial_def [PR93582]  (Jakub Jelinek, 1 file changed, -55/+184)

The following patch adds support for bitfields to push_partial_def.  Previously pd.offset and pd.size were counted in bytes and maxsizei in bits; now everything is counted in bits.

I'm not really sure how much of the further code can be outlined and moved; e.g. the full def and partial def code don't have pretty much anything in common (the partial defs case basically has some load bit range and a set of store bit ranges that at least partially overlap, and we need to handle all the different cases, like negative or non-negative pd.offset, little vs. big endian, size so small that we need to preserve original bits on both sides of the byte, size that fits or is too large).  Perhaps the storing of some value into the middle of an existing buffer (i.e. what push_partial_def now does in the loop) could, but the candidate for sharing would most likely be store-merging rather than the other spots in sccvn, and I think it is better not to touch store-merging at this stage.

Yes, I've thought about trying to do everything in place, but the code is quite hard to understand and get right already now, and if we tried to do the optimization on the fly, it would need more special cases and would, for gcov coverage, need more testcases to cover it.  Most of the time the sizes will be small.  Furthermore, for bitfields native_encode_expr actually stores the number of bytes in the mode and not, say, the actual bitsize rounded up to bytes, so it wouldn't be just a matter of saving/restoring bytes at the start and end; we might need even 7 further bytes e.g. for __int128 bitfields.  Perhaps we could have just a fast path for the case where everything is byte-aligned (and for integral types the mode bitsize is equal to the size too)?

2020-02-24  Jakub Jelinek  <jakub@redhat.com>

  PR tree-optimization/93582
  * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Consider pd.offset and pd.size to be counted in bits rather than bytes, add support for maxsizei that is not a multiple of BITS_PER_UNIT and handle bitfield stores and loads.
  (vn_reference_lookup_3): Don't call ranges_known_overlap_p with uncomparable quantities - bytes vs. bits.  Allow push_partial_def on offsets/sizes that aren't multiple of BITS_PER_UNIT and adjust pd.offset/pd.size to be counted in bits rather than bytes.  Formatting fix.  Rename shadowed len variable to buflen.
  * gcc.dg/tree-ssa/pr93582-4.c: New test.
  * gcc.dg/tree-ssa/pr93582-5.c: New test.
  * gcc.dg/tree-ssa/pr93582-6.c: New test.
  * gcc.dg/tree-ssa/pr93582-7.c: New test.
  * gcc.dg/tree-ssa/pr93582-8.c: New test.
2020-02-13  sccvn: Handle bitfields in vn_reference_lookup_3 [PR93582]  (Jakub Jelinek, 1 file changed, -22/+68)

The following patch is a first step towards fixing PR93582.  vn_reference_lookup_3 right now punts on anything that isn't byte aligned, so to be able to look up a constant bitfield store, one needs to use the exact same COMPONENT_REF, otherwise it isn't found.  This patch lifts that restriction if the bits to be loaded are covered by a single store of a constant (it keeps the restriction so far for the multiple store case; that can be tweaked incrementally, but I think for bisection etc. it is worth to do it one step at a time).

2020-02-13  Jakub Jelinek  <jakub@redhat.com>

  PR tree-optimization/93582
  * fold-const.h (shift_bytes_in_array_left, shift_bytes_in_array_right): Declare.
  * fold-const.c (shift_bytes_in_array_left, shift_bytes_in_array_right): New functions, moved from gimple-ssa-store-merging.c, no longer static.
  * gimple-ssa-store-merging.c (shift_bytes_in_array): Move to fold-const.c and rename to shift_bytes_in_array_left.
  (shift_bytes_in_array_right): Move to fold-const.c.
  (encode_tree_to_bitpos): Use shift_bytes_in_array_left instead of shift_bytes_in_array.
  (verify_shift_bytes_in_array): Rename to ...
  (verify_shift_bytes_in_array_left): ... this.  Use shift_bytes_in_array_left instead of shift_bytes_in_array.
  (store_merging_c_tests): Call verify_shift_bytes_in_array_left instead of verify_shift_bytes_in_array.
  * tree-ssa-sccvn.c (vn_reference_lookup_3): For native_encode_expr / native_interpret_expr where the store covers all needed bits, punt on PDP-endian, otherwise allow all involved offsets and sizes not to be byte-aligned.
  * gcc.dg/tree-ssa/pr93582-1.c: New test.
  * gcc.dg/tree-ssa/pr93582-2.c: New test.
  * gcc.dg/tree-ssa/pr93582-3.c: New test.
2020-02-11  tree-optimization/93661 properly guard tree_to_poly_int64  (Richard Biener, 1 file changed, -0/+1)

2020-02-11  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/93661
  PR tree-optimization/93662
  * tree-ssa-sccvn.c (vn_reference_lookup_3): Properly guard tree_to_poly_int64.
  * tree-sra.c (get_access_for_expr): Likewise.
  * gcc.dg/pr93661.c: New testcase.
2020-02-04  tree-optimization/91123 - restore redundant store removal  (Richard Biener, 1 file changed, -35/+60)

Redundant store removal in FRE was restricted for correctness reasons.  The following extends the correctness fixes required to the memcpy/aggregate copy translation.  The main change is that we no longer insert references rewritten to cover such aggregate copies into the hashtable but the original one.

2020-02-04  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/91123
  * tree-ssa-sccvn.c (vn_walk_cb_data::finish): New method.
  (vn_walk_cb_data::last_vuse): New member.
  (vn_walk_cb_data::saved_operands): Likewise.
  (vn_walk_cb_data::~vn_walk_cb_data): Release saved_operands.
  (vn_walk_cb_data::push_partial_def): Use finish.
  (vn_reference_lookup_2): Update last_vuse and use finish if we've saved operands.
  (vn_reference_lookup_3): Use finish and update calls to push_partial_defs everywhere.  When translating through memcpy or aggregate copies save off operands and alias-set.
  (eliminate_dom_walker::eliminate_stmt): Restore VN_WALKREWRITE operation for redundant store removal.
  * gcc.dg/tree-ssa/ssa-fre-85.c: New testcase.
2020-01-23  tree-optimization/93354 FRE redundant store removal validity fix  (Richard Biener, 1 file changed, -12/+22)

This fixes tracking of the alias-set of partial defs for use by redundant store removal.

2020-01-23  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/93381
  * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Take alias-set of the def as argument and record the first one.
  (vn_walk_cb_data::first_set): New member.
  (vn_reference_lookup_3): Pass the alias-set of the current def to push_partial_def.  Fix alias-set used in the aggregate copy case.
  (vn_reference_lookup): Consistently set *last_vuse_ptr.
  * real.c (clear_significand_below): Fix out-of-bound access.
  * gcc.dg/torture/pr93354.c: New testcase.
2020-01-21  tree-optimization/92328 fix value-number with bogus type  (Richard Biener, 1 file changed, -16/+27)

We were actually value-numbering two entities with different types the same, rather than just having the same representation in the hashtable.  The following fixes that by wrapping the value in a to-be-inserted VIEW_CONVERT_EXPR.

2020-01-21  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/92328
  * tree-ssa-sccvn.c (vn_reference_lookup_3): Preserve type when value-numbering same-sized store by inserting a VIEW_CONVERT_EXPR.
  (eliminate_dom_walker::eliminate_stmt): When eliminating a redundant store handle bit-reinterpretation of the same value.
  * gcc.dg/torture/pr92328.c: New testcase.
2020-01-16  Fix value numbering dealing with reverse byte order  (Andrew Pinski, 1 file changed, -0/+2)

Hi, while working on the bit-field lowering pass, I came across this bug.  The IR looks like:

  VIEW_CONVERT_EXPR<unsigned long>(var1) = _12;
  _1 = BIT_FIELD_REF <var1, 64, 0>;

where the BIT_FIELD_REF has REF_REVERSE_STORAGE_ORDER set on it and var1's type has TYPE_REVERSE_STORAGE_ORDER set on it.  PRE/FRE would decide to propagate _12 into the BFR statement, which would produce wrong code.  And yes, _12 already has the correct byte order; the bit-field lowering removes the implicit byte swaps in the IR and makes them explicit, to make it easier to optimize later on.

This patch adds a check for storage_order_barrier_p on the lhs tree, which returns true in the case where we have a reverse order with a VCE.

ChangeLog:

  * tree-ssa-sccvn.c (vn_reference_lookup_3): Check lhs for !storage_order_barrier_p.
2020-01-14  hash-table.h: support non-zero empty values in empty_slow (v2)  (David Malcolm, 1 file changed, -0/+1)
gcc/cp/ChangeLog: * cp-gimplify.c (source_location_table_entry_hash::empty_zero_p): New static constant. * cp-tree.h (named_decl_hash::empty_zero_p): Likewise. (struct named_label_hash::empty_zero_p): Likewise. * decl2.c (mangled_decl_hash::empty_zero_p): Likewise. gcc/ChangeLog: * attribs.c (excl_hash_traits::empty_zero_p): New static constant. * gcov.c (function_start_pair_hash::empty_zero_p): Likewise. * graphite.c (struct sese_scev_hash::empty_zero_p): Likewise. * hash-map-tests.c (selftest::test_nonzero_empty_key): New selftest. (selftest::hash_map_tests_c_tests): Call it. * hash-map-traits.h (simple_hashmap_traits::empty_zero_p): New static constant, using the value of = H::empty_zero_p. (unbounded_hashmap_traits::empty_zero_p): Likewise, using the value from default_hash_traits <Value>. * hash-map.h (hash_map::empty_zero_p): Likewise, using the value from Traits. * hash-set-tests.c (value_hash_traits::empty_zero_p): Likewise. * hash-table.h (hash_table::alloc_entries): Guard the loop of calls to mark_empty with !Descriptor::empty_zero_p. (hash_table::empty_slow): Conditionalize the memset call with a check that Descriptor::empty_zero_p; otherwise, loop through the entries calling mark_empty on them. * hash-traits.h (int_hash::empty_zero_p): New static constant. (pointer_hash::empty_zero_p): Likewise. (pair_hash::empty_zero_p): Likewise. * ipa-devirt.c (default_hash_traits <type_pair>::empty_zero_p): Likewise. * ipa-prop.c (ipa_bit_ggc_hash_traits::empty_zero_p): Likewise. (ipa_vr_ggc_hash_traits::empty_zero_p): Likewise. * profile.c (location_triplet_hash::empty_zero_p): Likewise. * sanopt.c (sanopt_tree_triplet_hash::empty_zero_p): Likewise. (sanopt_tree_couple_hash::empty_zero_p): Likewise. * tree-hasher.h (int_tree_hasher::empty_zero_p): Likewise. * tree-ssa-sccvn.c (vn_ssa_aux_hasher::empty_zero_p): Likewise. * tree-vect-slp.c (bst_traits::empty_zero_p): Likewise. * tree-vectorizer.h (default_hash_traits<scalar_cond_masked_key>::empty_zero_p): Likewise.
2020-01-01  Update copyright years.  (Jakub Jelinek, 1 file changed, -1/+1)
From-SVN: r279813
2019-12-04  tree-ssa-sccvn.c (vn_reference_lookup_3): Properly guard empty CTOR and memset partial-def registering.  (Richard Biener, 1 file changed, -2/+11)

2019-12-04  Richard Biener  <rguenther@suse.de>

  * tree-ssa-sccvn.c (vn_reference_lookup_3): Properly guard empty CTOR and memset partial-def registering.  Take advantage of fancy offset analysis in memset handling.

From-SVN: r278965
2019-12-04  tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Handle non-constant defs in the most trivial way.  (Richard Biener, 1 file changed, -32/+55)

2019-12-04  Richard Biener  <rguenther@suse.de>

  * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Handle non-constant defs in the most trivial way.
  (vn_reference_lookup_3): Also push down SSA partial defs.
  * gcc.dg/tree-ssa/ssa-fre-84.c: New testcase.

From-SVN: r278963
2019-12-03  re PR tree-optimization/92751 (VN partial def support confused about clobbers)  (Richard Biener, 1 file changed, -5/+15)

2019-12-03  Richard Biener  <rguenther@suse.de>

  PR tree-optimization/92751
  * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Fail when a clobber ends up in the partial-def vector.
  (vn_reference_lookup_3): Let clobbers be handled by the assignment from CTOR handling.
  * g++.dg/tree-ssa/pr92751.C: New testcase.

From-SVN: r278931
2019-11-29  tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Bail out early for too large objects.  (Richard Biener, 1 file changed, -3/+10)

2019-11-29  Richard Biener  <rguenther@suse.de>

  * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Bail out early for too large objects.

From-SVN: r278844
2019-11-12  Remove gcc/params.* files.  (Martin Liska, 1 file changed, -1/+0)
2019-11-12 Martin Liska <mliska@suse.cz> * Makefile.in: Remove PARAMS_H and params.list and params.options. * params-enum.h: Remove. * params-list.h: Remove. * params-options.h: Remove. * params.c: Remove. * params.def: Remove. * params.h: Remove. * asan.c: Do not include params.h. * auto-profile.c: Likewise. * bb-reorder.c: Likewise. * builtins.c: Likewise. * cfgcleanup.c: Likewise. * cfgexpand.c: Likewise. * cfgloopanal.c: Likewise. * cgraph.c: Likewise. * combine.c: Likewise. * common/config/aarch64/aarch64-common.c: Likewise. * common/config/gcn/gcn-common.c: Likewise. * common/config/ia64/ia64-common.c: Likewise. * common/config/powerpcspe/powerpcspe-common.c: Likewise. * common/config/rs6000/rs6000-common.c: Likewise. * common/config/sh/sh-common.c: Likewise. * config/aarch64/aarch64.c: Likewise. * config/alpha/alpha.c: Likewise. * config/arm/arm.c: Likewise. * config/avr/avr.c: Likewise. * config/csky/csky.c: Likewise. * config/i386/i386-builtins.c: Likewise. * config/i386/i386-expand.c: Likewise. * config/i386/i386-features.c: Likewise. * config/i386/i386-options.c: Likewise. * config/i386/i386.c: Likewise. * config/ia64/ia64.c: Likewise. * config/rs6000/rs6000-logue.c: Likewise. * config/rs6000/rs6000.c: Likewise. * config/s390/s390.c: Likewise. * config/sparc/sparc.c: Likewise. * config/visium/visium.c: Likewise. * coverage.c: Likewise. * cprop.c: Likewise. * cse.c: Likewise. * cselib.c: Likewise. * dse.c: Likewise. * emit-rtl.c: Likewise. * explow.c: Likewise. * final.c: Likewise. * fold-const.c: Likewise. * gcc.c: Likewise. * gcse.c: Likewise. * ggc-common.c: Likewise. * ggc-page.c: Likewise. * gimple-loop-interchange.cc: Likewise. * gimple-loop-jam.c: Likewise. * gimple-loop-versioning.cc: Likewise. * gimple-ssa-split-paths.c: Likewise. * gimple-ssa-sprintf.c: Likewise. * gimple-ssa-store-merging.c: Likewise. * gimple-ssa-strength-reduction.c: Likewise. * gimple-ssa-warn-alloca.c: Likewise. * gimple-ssa-warn-restrict.c: Likewise. * graphite-isl-ast-to-gimple.c: Likewise. * graphite-optimize-isl.c: Likewise. * graphite-scop-detection.c: Likewise. * graphite-sese-to-poly.c: Likewise. * graphite.c: Likewise. * haifa-sched.c: Likewise. * hsa-gen.c: Likewise. * ifcvt.c: Likewise. * ipa-cp.c: Likewise. * ipa-fnsummary.c: Likewise. * ipa-inline-analysis.c: Likewise. * ipa-inline.c: Likewise. * ipa-polymorphic-call.c: Likewise. * ipa-profile.c: Likewise. * ipa-prop.c: Likewise. * ipa-split.c: Likewise. * ipa-sra.c: Likewise. * ira-build.c: Likewise. * ira-conflicts.c: Likewise. * loop-doloop.c: Likewise. * loop-invariant.c: Likewise. * loop-unroll.c: Likewise. * lra-assigns.c: Likewise. * lra-constraints.c: Likewise. * modulo-sched.c: Likewise. * opt-suggestions.c: Likewise. * opts.c: Likewise. * postreload-gcse.c: Likewise. * predict.c: Likewise. * reload.c: Likewise. * reorg.c: Likewise. * resource.c: Likewise. * sanopt.c: Likewise. * sched-deps.c: Likewise. * sched-ebb.c: Likewise. * sched-rgn.c: Likewise. * sel-sched-ir.c: Likewise. * sel-sched.c: Likewise. * shrink-wrap.c: Likewise. * stmt.c: Likewise. * targhooks.c: Likewise. * toplev.c: Likewise. * tracer.c: Likewise. * trans-mem.c: Likewise. * tree-chrec.c: Likewise. * tree-data-ref.c: Likewise. * tree-if-conv.c: Likewise. * tree-inline.c: Likewise. * tree-loop-distribution.c: Likewise. * tree-parloops.c: Likewise. * tree-predcom.c: Likewise. * tree-profile.c: Likewise. * tree-scalar-evolution.c: Likewise. * tree-sra.c: Likewise. * tree-ssa-ccp.c: Likewise. * tree-ssa-dom.c: Likewise. * tree-ssa-dse.c: Likewise. 
* tree-ssa-ifcombine.c: Likewise. * tree-ssa-loop-ch.c: Likewise. * tree-ssa-loop-im.c: Likewise. * tree-ssa-loop-ivcanon.c: Likewise. * tree-ssa-loop-ivopts.c: Likewise. * tree-ssa-loop-manip.c: Likewise. * tree-ssa-loop-niter.c: Likewise. * tree-ssa-loop-prefetch.c: Likewise. * tree-ssa-loop-unswitch.c: Likewise. * tree-ssa-math-opts.c: Likewise. * tree-ssa-phiopt.c: Likewise. * tree-ssa-pre.c: Likewise. * tree-ssa-reassoc.c: Likewise. * tree-ssa-sccvn.c: Likewise. * tree-ssa-scopedtables.c: Likewise. * tree-ssa-sink.c: Likewise. * tree-ssa-strlen.c: Likewise. * tree-ssa-structalias.c: Likewise. * tree-ssa-tail-merge.c: Likewise. * tree-ssa-threadbackward.c: Likewise. * tree-ssa-threadedge.c: Likewise. * tree-ssa-uninit.c: Likewise. * tree-switch-conversion.c: Likewise. * tree-vect-data-refs.c: Likewise. * tree-vect-loop.c: Likewise. * tree-vect-slp.c: Likewise. * tree-vrp.c: Likewise. * tree.c: Likewise. * value-prof.c: Likewise. * var-tracking.c: Likewise. 2019-11-12 Martin Liska <mliska@suse.cz> * gimple-parser.c: Do not include params.h. 2019-11-12 Martin Liska <mliska@suse.cz> * name-lookup.c: Do not include params.h. * typeck.c: Likewise. 2019-11-12 Martin Liska <mliska@suse.cz> * lto-common.c: Do not include params.h. * lto-partition.c: Likewise. * lto.c: Likewise. From-SVN: r278086
2019-11-12  Apply mechanical replacement (generated patch).  (Martin Liska, 1 file changed, -3/+3)
2019-11-12 Martin Liska <mliska@suse.cz> * asan.c (asan_sanitize_stack_p): Replace old parameter syntax with the new one, include opts.h if needed. Use SET_OPTION_IF_UNSET macro. (asan_sanitize_allocas_p): Likewise. (asan_emit_stack_protection): Likewise. (asan_protect_global): Likewise. (instrument_derefs): Likewise. (instrument_builtin_call): Likewise. (asan_expand_mark_ifn): Likewise. * auto-profile.c (auto_profile): Likewise. * bb-reorder.c (copy_bb_p): Likewise. (duplicate_computed_gotos): Likewise. * builtins.c (inline_expand_builtin_string_cmp): Likewise. * cfgcleanup.c (try_crossjump_to_edge): Likewise. (try_crossjump_bb): Likewise. * cfgexpand.c (defer_stack_allocation): Likewise. (stack_protect_classify_type): Likewise. (pass_expand::execute): Likewise. * cfgloopanal.c (expected_loop_iterations_unbounded): Likewise. (estimate_reg_pressure_cost): Likewise. * cgraph.c (cgraph_edge::maybe_hot_p): Likewise. * combine.c (combine_instructions): Likewise. (record_value_for_reg): Likewise. * common/config/aarch64/aarch64-common.c (aarch64_option_validate_param): Likewise. (aarch64_option_default_params): Likewise. * common/config/ia64/ia64-common.c (ia64_option_default_params): Likewise. * common/config/powerpcspe/powerpcspe-common.c (rs6000_option_default_params): Likewise. * common/config/rs6000/rs6000-common.c (rs6000_option_default_params): Likewise. * common/config/sh/sh-common.c (sh_option_default_params): Likewise. * config/aarch64/aarch64.c (aarch64_output_probe_stack_range): Likewise. (aarch64_allocate_and_probe_stack_space): Likewise. (aarch64_expand_epilogue): Likewise. (aarch64_override_options_internal): Likewise. * config/alpha/alpha.c (alpha_option_override): Likewise. * config/arm/arm.c (arm_option_override): Likewise. (arm_valid_target_attribute_p): Likewise. * config/i386/i386-options.c (ix86_option_override_internal): Likewise. * config/i386/i386.c (get_probe_interval): Likewise. (ix86_adjust_stack_and_probe_stack_clash): Likewise. (ix86_max_noce_ifcvt_seq_cost): Likewise. * config/ia64/ia64.c (ia64_adjust_cost): Likewise. * config/rs6000/rs6000-logue.c (get_stack_clash_protection_probe_interval): Likewise. (get_stack_clash_protection_guard_size): Likewise. * config/rs6000/rs6000.c (rs6000_option_override_internal): Likewise. * config/s390/s390.c (allocate_stack_space): Likewise. (s390_emit_prologue): Likewise. (s390_option_override_internal): Likewise. * config/sparc/sparc.c (sparc_option_override): Likewise. * config/visium/visium.c (visium_option_override): Likewise. * coverage.c (get_coverage_counts): Likewise. (coverage_compute_profile_id): Likewise. (coverage_begin_function): Likewise. (coverage_end_function): Likewise. * cse.c (cse_find_path): Likewise. (cse_extended_basic_block): Likewise. (cse_main): Likewise. * cselib.c (cselib_invalidate_mem): Likewise. * dse.c (dse_step1): Likewise. * emit-rtl.c (set_new_first_and_last_insn): Likewise. (get_max_insn_count): Likewise. (make_debug_insn_raw): Likewise. (init_emit): Likewise. * explow.c (compute_stack_clash_protection_loop_data): Likewise. * final.c (compute_alignments): Likewise. * fold-const.c (fold_range_test): Likewise. (fold_truth_andor): Likewise. (tree_single_nonnegative_warnv_p): Likewise. (integer_valued_real_single_p): Likewise. * gcse.c (want_to_gcse_p): Likewise. (prune_insertions_deletions): Likewise. (hoist_code): Likewise. (gcse_or_cprop_is_too_expensive): Likewise. * ggc-common.c: Likewise. * ggc-page.c (ggc_collect): Likewise. * gimple-loop-interchange.cc (MAX_NUM_STMT): Likewise. 
(MAX_DATAREFS): Likewise. (OUTER_STRIDE_RATIO): Likewise. * gimple-loop-jam.c (tree_loop_unroll_and_jam): Likewise. * gimple-loop-versioning.cc (loop_versioning::max_insns_for_loop): Likewise. * gimple-ssa-split-paths.c (is_feasible_trace): Likewise. * gimple-ssa-store-merging.c (imm_store_chain_info::try_coalesce_bswap): Likewise. (imm_store_chain_info::coalesce_immediate_stores): Likewise. (imm_store_chain_info::output_merged_store): Likewise. (pass_store_merging::process_store): Likewise. * gimple-ssa-strength-reduction.c (find_basis_for_base_expr): Likewise. * graphite-isl-ast-to-gimple.c (class translate_isl_ast_to_gimple): Likewise. (scop_to_isl_ast): Likewise. * graphite-optimize-isl.c (get_schedule_for_node_st): Likewise. (optimize_isl): Likewise. * graphite-scop-detection.c (build_scops): Likewise. * haifa-sched.c (set_modulo_params): Likewise. (rank_for_schedule): Likewise. (model_add_to_worklist): Likewise. (model_promote_insn): Likewise. (model_choose_insn): Likewise. (queue_to_ready): Likewise. (autopref_multipass_dfa_lookahead_guard): Likewise. (schedule_block): Likewise. (sched_init): Likewise. * hsa-gen.c (init_prologue): Likewise. * ifcvt.c (bb_ok_for_noce_convert_multiple_sets): Likewise. (cond_move_process_if_block): Likewise. * ipa-cp.c (ipcp_lattice::add_value): Likewise. (merge_agg_lats_step): Likewise. (devirtualization_time_bonus): Likewise. (hint_time_bonus): Likewise. (incorporate_penalties): Likewise. (good_cloning_opportunity_p): Likewise. (ipcp_propagate_stage): Likewise. * ipa-fnsummary.c (decompose_param_expr): Likewise. (set_switch_stmt_execution_predicate): Likewise. (analyze_function_body): Likewise. (compute_fn_summary): Likewise. * ipa-inline-analysis.c (estimate_growth): Likewise. * ipa-inline.c (caller_growth_limits): Likewise. (inline_insns_single): Likewise. (inline_insns_auto): Likewise. (can_inline_edge_by_limits_p): Likewise. (want_early_inline_function_p): Likewise. (big_speedup_p): Likewise. (want_inline_small_function_p): Likewise. (want_inline_self_recursive_call_p): Likewise. (edge_badness): Likewise. (recursive_inlining): Likewise. (compute_max_insns): Likewise. (early_inliner): Likewise. * ipa-polymorphic-call.c (csftc_abort_walking_p): Likewise. * ipa-profile.c (ipa_profile): Likewise. * ipa-prop.c (determine_known_aggregate_parts): Likewise. (ipa_analyze_node): Likewise. (ipcp_transform_function): Likewise. * ipa-split.c (consider_split): Likewise. * ipa-sra.c (allocate_access): Likewise. (process_scan_results): Likewise. (ipa_sra_summarize_function): Likewise. (pull_accesses_from_callee): Likewise. * ira-build.c (loop_compare_func): Likewise. (mark_loops_for_removal): Likewise. * ira-conflicts.c (build_conflict_bit_table): Likewise. * loop-doloop.c (doloop_optimize): Likewise. * loop-invariant.c (gain_for_invariant): Likewise. (move_loop_invariants): Likewise. * loop-unroll.c (decide_unroll_constant_iterations): Likewise. (decide_unroll_runtime_iterations): Likewise. (decide_unroll_stupid): Likewise. (expand_var_during_unrolling): Likewise. * lra-assigns.c (spill_for): Likewise. * lra-constraints.c (EBB_PROBABILITY_CUTOFF): Likewise. * modulo-sched.c (sms_schedule): Likewise. (DFA_HISTORY): Likewise. * opts.c (default_options_optimization): Likewise. (finish_options): Likewise. (common_handle_option): Likewise. * postreload-gcse.c (eliminate_partially_redundant_load): Likewise. (if): Likewise. * predict.c (get_hot_bb_threshold): Likewise. (maybe_hot_count_p): Likewise. (probably_never_executed): Likewise. (predictable_edge_p): Likewise. 
(predict_loops): Likewise. (expr_expected_value_1): Likewise. (tree_predict_by_opcode): Likewise. (handle_missing_profiles): Likewise. * reload.c (find_equiv_reg): Likewise. * reorg.c (redundant_insn): Likewise. * resource.c (mark_target_live_regs): Likewise. (incr_ticks_for_insn): Likewise. * sanopt.c (pass_sanopt::execute): Likewise. * sched-deps.c (sched_analyze_1): Likewise. (sched_analyze_2): Likewise. (sched_analyze_insn): Likewise. (deps_analyze_insn): Likewise. * sched-ebb.c (schedule_ebbs): Likewise. * sched-rgn.c (find_single_block_region): Likewise. (too_large): Likewise. (haifa_find_rgns): Likewise. (extend_rgns): Likewise. (new_ready): Likewise. (schedule_region): Likewise. (sched_rgn_init): Likewise. * sel-sched-ir.c (make_region_from_loop): Likewise. * sel-sched-ir.h (MAX_WS): Likewise. * sel-sched.c (process_pipelined_exprs): Likewise. (sel_setup_region_sched_flags): Likewise. * shrink-wrap.c (try_shrink_wrapping): Likewise. * targhooks.c (default_max_noce_ifcvt_seq_cost): Likewise. * toplev.c (print_version): Likewise. (process_options): Likewise. * tracer.c (tail_duplicate): Likewise. * trans-mem.c (tm_log_add): Likewise. * tree-chrec.c (chrec_fold_plus_1): Likewise. * tree-data-ref.c (split_constant_offset): Likewise. (compute_all_dependences): Likewise. * tree-if-conv.c (MAX_PHI_ARG_NUM): Likewise. * tree-inline.c (remap_gimple_stmt): Likewise. * tree-loop-distribution.c (MAX_DATAREFS_NUM): Likewise. * tree-parloops.c (MIN_PER_THREAD): Likewise. (create_parallel_loop): Likewise. * tree-predcom.c (determine_unroll_factor): Likewise. * tree-scalar-evolution.c (instantiate_scev_r): Likewise. * tree-sra.c (analyze_all_variable_accesses): Likewise. * tree-ssa-ccp.c (fold_builtin_alloca_with_align): Likewise. * tree-ssa-dse.c (setup_live_bytes_from_ref): Likewise. (dse_optimize_redundant_stores): Likewise. (dse_classify_store): Likewise. * tree-ssa-ifcombine.c (ifcombine_ifandif): Likewise. * tree-ssa-loop-ch.c (ch_base::copy_headers): Likewise. * tree-ssa-loop-im.c (LIM_EXPENSIVE): Likewise. * tree-ssa-loop-ivcanon.c (try_unroll_loop_completely): Likewise. (try_peel_loop): Likewise. (tree_unroll_loops_completely): Likewise. * tree-ssa-loop-ivopts.c (avg_loop_niter): Likewise. (CONSIDER_ALL_CANDIDATES_BOUND): Likewise. (MAX_CONSIDERED_GROUPS): Likewise. (ALWAYS_PRUNE_CAND_SET_BOUND): Likewise. * tree-ssa-loop-manip.c (can_unroll_loop_p): Likewise. * tree-ssa-loop-niter.c (MAX_ITERATIONS_TO_TRACK): Likewise. * tree-ssa-loop-prefetch.c (PREFETCH_BLOCK): Likewise. (L1_CACHE_SIZE_BYTES): Likewise. (L2_CACHE_SIZE_BYTES): Likewise. (should_issue_prefetch_p): Likewise. (schedule_prefetches): Likewise. (determine_unroll_factor): Likewise. (volume_of_references): Likewise. (add_subscript_strides): Likewise. (self_reuse_distance): Likewise. (mem_ref_count_reasonable_p): Likewise. (insn_to_prefetch_ratio_too_small_p): Likewise. (loop_prefetch_arrays): Likewise. (tree_ssa_prefetch_arrays): Likewise. * tree-ssa-loop-unswitch.c (tree_unswitch_single_loop): Likewise. * tree-ssa-math-opts.c (gimple_expand_builtin_pow): Likewise. (convert_mult_to_fma): Likewise. (math_opts_dom_walker::after_dom_children): Likewise. * tree-ssa-phiopt.c (cond_if_else_store_replacement): Likewise. (hoist_adjacent_loads): Likewise. (gate_hoist_loads): Likewise. * tree-ssa-pre.c (translate_vuse_through_block): Likewise. (compute_partial_antic_aux): Likewise. * tree-ssa-reassoc.c (get_reassociation_width): Likewise. * tree-ssa-sccvn.c (vn_reference_lookup_pieces): Likewise. (vn_reference_lookup): Likewise. 
(do_rpo_vn): Likewise. * tree-ssa-scopedtables.c (avail_exprs_stack::lookup_avail_expr): Likewise. * tree-ssa-sink.c (select_best_block): Likewise. * tree-ssa-strlen.c (new_stridx): Likewise. (new_addr_stridx): Likewise. (get_range_strlen_dynamic): Likewise. (class ssa_name_limit_t): Likewise. * tree-ssa-structalias.c (push_fields_onto_fieldstack): Likewise. (create_variable_info_for_1): Likewise. (init_alias_vars): Likewise. * tree-ssa-tail-merge.c (find_clusters_1): Likewise. (tail_merge_optimize): Likewise. * tree-ssa-threadbackward.c (thread_jumps::profitable_jump_thread_path): Likewise. (thread_jumps::fsm_find_control_statement_thread_paths): Likewise. (thread_jumps::find_jump_threads_backwards): Likewise. * tree-ssa-threadedge.c (record_temporary_equivalences_from_stmts_at_dest): Likewise. * tree-ssa-uninit.c (compute_control_dep_chain): Likewise. * tree-switch-conversion.c (switch_conversion::check_range): Likewise. (jump_table_cluster::can_be_handled): Likewise. * tree-switch-conversion.h (jump_table_cluster::case_values_threshold): Likewise. (SWITCH_CONVERSION_BRANCH_RATIO): Likewise. (param_switch_conversion_branch_ratio): Likewise. * tree-vect-data-refs.c (vect_mark_for_runtime_alias_test): Likewise. (vect_enhance_data_refs_alignment): Likewise. (vect_prune_runtime_alias_test_list): Likewise. * tree-vect-loop.c (vect_analyze_loop_costing): Likewise. (vect_get_datarefs_in_loop): Likewise. (vect_analyze_loop): Likewise. * tree-vect-slp.c (vect_slp_bb): Likewise. * tree-vectorizer.h: Likewise. * tree-vrp.c (find_switch_asserts): Likewise. (vrp_prop::check_mem_ref): Likewise. * tree.c (wide_int_to_tree_1): Likewise. (cache_integer_cst): Likewise. * var-tracking.c (EXPR_USE_DEPTH): Likewise. (reverse_op): Likewise. (vt_find_locations): Likewise. 2019-11-12 Martin Liska <mliska@suse.cz> * gimple-parser.c (c_parser_parse_gimple_body): Replace old parameter syntax with the new one, include opts.h if needed. Use SET_OPTION_IF_UNSET macro. 2019-11-12 Martin Liska <mliska@suse.cz> * name-lookup.c (namespace_hints::namespace_hints): Replace old parameter syntax with the new one, include opts.h if needed. Use SET_OPTION_IF_UNSET macro. * typeck.c (comptypes): Likewise. 2019-11-12 Martin Liska <mliska@suse.cz> * lto-partition.c (lto_balanced_map): Replace old parameter syntax with the new one, include opts.h if needed. Use SET_OPTION_IF_UNSET macro. * lto.c (do_whole_program_analysis): Likewise. From-SVN: r278085
2019-11-08Handle POLY_INT_CST in copy_reference_ops_from_refRichard Sandiford1-0/+1
2019-11-08 Richard Sandiford <richard.sandiford@arm.com> gcc/ * tree-ssa-sccvn.c (copy_reference_ops_from_ref): Handle POLY_INT_CST. gcc/testsuite/ * gcc.target/aarch64/sve/acle/general/deref_2.c: New test. * gcc.target/aarch64/sve/acle/general/whilele_8.c: Likewise. * gcc.target/aarch64/sve/acle/general/whilelt_4.c: Likewise. From-SVN: r277959
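For a concrete flavour of where such constants appear, here is a hypothetical SVE snippet (an illustration only, not the committed deref_2.c or whilele tests); on a vector-length-agnostic target svcntw () folds to a POLY_INT_CST, so the address computation feeds a POLY_INT_CST operand into copy_reference_ops_from_ref. This assumes an AArch64 compiler with SVE enabled.

#include <arm_sve.h>
#include <stdint.h>

/* Hypothetical illustration: the offset "one vector of ints" is a
   poly_int, not a compile-time constant, so value numbering this load
   exercises the new POLY_INT_CST handling.  */
svint32_t
load_second_vector (int32_t *base)
{
  svbool_t pg = svptrue_b32 ();
  return svld1 (pg, base + svcntw ());
}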
2019-10-29[vect]PR 88915: Vectorize epilogues when versioning loopsAndre Vieira1-1/+5
gcc/ChangeLog: 2019-10-29 Andre Vieira <andre.simoesdiasvieira@arm.com> PR 88915 * tree-ssa-loop-niter.h (simplify_replace_tree): Change declaration. * tree-ssa-loop-niter.c (simplify_replace_tree): Add context parameter and make the valueize function pointer also take a void pointer. * tree-ssa-sccvn.c (vn_valueize_wrapper): New function to wrap around vn_valueize, to call it without a context. (process_bb): Use vn_valueize_wrapper instead of vn_valueize. * tree-vect-loop.c (_loop_vec_info): Initialize epilogue_vinfos. (~_loop_vec_info): Release epilogue_vinfos. (vect_analyze_loop_costing): Use knowledge of main VF to estimate number of iterations of epilogue. (vect_analyze_loop_2): Adapt to analyse main loop for all supported vector sizes when vect-epilogues-nomask=1. Also keep track of lowest versioning threshold needed for main loop. (vect_analyze_loop): Likewise. (find_in_mapping): New helper function. (update_epilogue_loop_vinfo): New function. (vect_transform_loop): When vectorizing epilogues re-use analysis done on main loop and call update_epilogue_loop_vinfo to update it. * tree-vect-loop-manip.c (vect_update_inits_of_drs): No longer insert stmts on loop preheader edge. (vect_do_peeling): Enable skip-vectors when doing loop versioning if we decided to vectorize epilogues. Update epilogues NITERS and construct ADVANCE to update epilogues data references where needed. * tree-vectorizer.h (_loop_vec_info): Add epilogue_vinfos. (vect_do_peeling, vect_update_inits_of_drs, determine_peel_for_niter, vect_analyze_loop): Add or update declarations. * tree-vectorizer.c (try_vectorize_loop_1): Make sure to use already created loop_vec_info's for epilogues when available. Otherwise analyse epilogue separately. From-SVN: r277569
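A minimal sketch of the wrapper described above, assuming GCC's internal tree and value-numbering declarations are in scope (not the verbatim committed code):

/* Adapt vn_valueize to the valueizer signature that also receives a
   context pointer; the context is simply ignored.  */
static tree
vn_valueize_wrapper (tree t, void *context ATTRIBUTE_UNUSED)
{
  return vn_valueize (t);
}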
2019-10-24re PR tree-optimization/92203 (ICE in eliminate_stmt, at tree-ssa-sccvn.c:5492)Richard Biener1-2/+7
2019-10-24 Richard Biener <rguenther@suse.de> PR tree-optimization/92203 * tree-ssa-sccvn.c (eliminate_dom_walker::eliminate_stmt): Skip eliminating conversion stmts inserted by insertion. * gcc.dg/torture/pr92203.c: New testcase. From-SVN: r277374
2019-10-12re PR middle-end/92063 (ICE in operation_could_trap_p, at tree-eh.c:2528 when compiling Python's Python/_warnings.c)Jakub Jelinek1-7/+4
PR middle-end/92063 * tree-eh.c (operation_could_trap_helper_p) <case COND_EXPR> <case VEC_COND_EXPR>: Return false with *handled = false. (tree_could_trap_p): For {,VEC_}COND_EXPR return false instead of recursing on the first operand. * fold-const.c (simple_operand_p_2): Use generic_expr_could_trap_p instead of tree_could_trap_p. * tree-ssa-sccvn.c (vn_nary_may_trap): Formatting fixes. * gcc.c-torture/compile/pr92063.c: New test. From-SVN: r276915
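A hypothetical reduced example of the kind of expression involved (not the committed pr92063.c): whether the COND_EXPR traps depends on which arm is selected, so the helper now reports it as unhandled rather than recursing into the first operand.

/* The division can only trap when c is nonzero, so trappingness of the
   conditional expression cannot be decided from the condition alone.  */
int
maybe_divide (int c, int a, int b)
{
  return c ? a / b : 0;
}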
2019-10-11re PR tree-optimization/90883 (Generated code is worse if returned struct is unnamed)Richard Biener1-29/+74
2019-10-11 Richard Biener <rguenther@suse.de> PR tree-optimization/90883 PR tree-optimization/91091 * tree-ssa-sccvn.c (vn_reference_lookup_3): Use correct alias-sets both for recording VN table entries and continuing walking after translating through copies. Handle same-sized reads from SSA names by returning the plain SSA name. (eliminate_dom_walker::eliminate_stmt): Properly handle non-size precision stores in redundant store elimination. * gcc.dg/torture/20191011-1.c: New testcase. * gcc.dg/tree-ssa/ssa-fre-82.c: Likewise. * gcc.dg/tree-ssa/ssa-fre-83.c: Likewise. * gcc.dg/tree-ssa/redundant-assign-zero-1.c: Disable FRE. * gcc.dg/tree-ssa/redundant-assign-zero-2.c: Likewise. From-SVN: r276882
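Illustrative only (not one of the committed testcases): a read that covers exactly the bytes stored from an SSA name can now be value-numbered to that SSA name directly.

struct wrap { int v; };

/* The initializer stores the SSA name x into w.v; the same-sized read
   of w.v can return the plain SSA name x instead of re-loading from
   memory.  */
int
roundtrip (int x)
{
  struct wrap w = { x };
  return w.v;
}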
2019-09-24tree-ssa-sccvn.c (vn_reference_lookup_3): Valueize MEM_REF base.Richard Biener1-2/+3
2019-09-24 Richard Biener <rguenther@suse.de> * tree-ssa-sccvn.c (vn_reference_lookup_3): Valueize MEM_REF base. * gcc.dg/torture/20190924-1.c: New testcase. From-SVN: r276092
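A small hypothetical example of the effect (not the committed 20190924-1.c): once the MEM_REF base is valueized, the load through the copied pointer is recognised as reading the value just stored through the original.

/* q is value-numbered to p, so *q reads the 1 stored through *p.  */
int
through_copy (int *p)
{
  int *q = p;
  *p = 1;
  return *q;
}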
2019-09-16re PR tree-optimization/91756 (g++.dg/lto/alias-3 FAILs)Richard Biener1-8/+14
2019-09-16 Richard Biener <rguenther@suse.de> PR tree-optimization/91756 PR tree-optimization/87132 * tree-ssa-alias.h (enum translate_flags): New. (get_continuation_for_phi): Use it instead of simple bool flag. (walk_non_aliased_vuses): Likewise. * tree-ssa-alias.c (maybe_skip_until): Adjust. (get_continuation_for_phi): When looking across backedges only disallow valueization. (walk_non_aliased_vuses): Adjust. * tree-ssa-sccvn.c (vn_reference_lookup_3): Avoid valueization if requested. * gcc.dg/tree-ssa/ssa-fre-81.c: New testcase. From-SVN: r275747
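Illustrative only, under the assumption that the virtual use-def walk for the loop-carried load crosses the backedge: with the new translate_flags the walk may still translate through the loop PHI as long as it does not valueize, so FRE can continue to treat the load as invariant.

int a[16], b[16];

int
loop_invariant_load (void)
{
  int s = 0;
  for (int i = 0; i < 16; ++i)
    {
      b[i] = i;   /* stores the walk must disambiguate against */
      s += a[0];  /* walking this load's VUSEs crosses the backedge */
    }
  return s;
}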
2019-09-03tree-ssa-sccvn.h (vn_nary_op_lookup): Remove.Richard Biener1-45/+0
2019-09-03 Richard Biener <rguenther@suse.de> * tree-ssa-sccvn.h (vn_nary_op_lookup): Remove. (vn_nary_op_insert): Likewise. * tree-ssa-sccvn.c (init_vn_nary_op_from_op): Remove. (vn_nary_op_lookup): Likewise. (vn_nary_op_insert): Likewise. From-SVN: r275338
2019-08-26re PR c/91526 (Unnecessary SSE and other instructions generated when compiling in C mode (vs. C++ mode))Richard Biener1-0/+5
2019-08-26 Richard Biener <rguenther@suse.de> PR tree-optimization/91526 * passes.def: Note that after late FRE we do TODO_update_address_taken. * tree-ssa-sccvn.c (pass_fre::execute): In late mode schedule TODO_update_address_taken. From-SVN: r274922
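Illustrative only (not the PR's original testcase): once late FRE has folded the loads through the pointer back to the stored values, the local aggregate no longer needs its address, and the newly scheduled TODO_update_address_taken can rewrite it into registers instead of keeping it on the stack.

struct quad { int a, b, c, d; };

/* The pointer takes q's address; after late FRE folds the reads to x,
   q need not live in memory and update_address_taken can turn it into
   SSA registers.  */
int
sum_init (int x)
{
  struct quad q = { x, x, x, x };
  struct quad *p = &q;
  return p->a + p->b + p->c + p->d;
}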