author     Richard Guenther <rguenther@suse.de>    2010-07-01 08:49:19 +0000
committer  Richard Biener <rguenth@gcc.gnu.org>    2010-07-01 08:49:19 +0000
commit     70f348148c09468b05aa09fcfa91b61611003c27 (patch)
tree       4cc8d9c35ed3127dbf885a1f08a83776819bed41 /gcc/tree-ssa-forwprop.c
parent     952b984e86f884d08d2e1ae5675ce518381692c5 (diff)
re PR middle-end/42834 (memcpy folding overeager)
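The folding bug behind PR 42834 is that a memcpy call folded into a plain assignment inherited a TBAA-visible access type; the change below instead uses alias-set-zero (ref-all) accesses for all memcpy foldings. What follows is a minimal sketch of the kind of type punning that relies on memcpy acting as a byte copy. It is illustrative only, not one of the committed testcases; the names, the 0x3f800000 constant and the 32-bit int / IEEE single-precision assumptions are made up for the example.

/* Illustrative sketch only, not a committed testcase.  The folded copy
   must behave as a byte copy; if it carried an int-typed, TBAA-visible
   access, the float store to "tmp" could be reordered past the load of
   its bytes.  Assumes 32-bit int and IEEE single-precision float.  */
#include <string.h>
#include <stdlib.h>

static int __attribute__ ((noinline))
float_bits (float x)
{
  float tmp = x;
  int i;
  memcpy (&i, &tmp, sizeof (int));   /* byte copy, not a typed load */
  return i;
}

int
main (void)
{
  if (sizeof (int) != sizeof (float))
    return 0;
  if (float_bits (1.0f) != 0x3f800000)
    abort ();
  return 0;
}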
2010-07-01 Richard Guenther <rguenther@suse.de>
PR middle-end/42834
PR middle-end/44468
* doc/gimple.texi (is_gimple_mem_ref_addr): Document.
* doc/generic.texi (References to storage): Document MEM_REF.
* tree-pretty-print.c (dump_generic_node): Handle MEM_REF.
(print_call_name): Likewise.
* tree.c (recompute_tree_invariant_for_addr_expr): Handle MEM_REF.
(build_simple_mem_ref_loc): New function.
(mem_ref_offset): Likewise.
* tree.h (build_simple_mem_ref_loc): Declare.
(build_simple_mem_ref): Define.
(mem_ref_offset): Declare.
* fold-const.c: Include tree-flow.h.
(operand_equal_p): Handle MEM_REF.
(build_fold_addr_expr_with_type_loc): Likewise.
(fold_comparison): Likewise.
(fold_unary_loc): Fold
VIEW_CONVERT_EXPR <T1, MEM_REF <T2, ...>> to MEM_REF <T1, ...>.
(fold_binary_loc): Fold MEM[&MEM[p, CST1], CST2] to MEM[p, CST1 + CST2],
fold MEM[&a.b, CST2] to MEM[&a, offsetof (a, b) + CST2].
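As a source-level illustration of the fold_binary_loc rules above (a hedged sketch with made-up names, not code from the patch): a constant byte offset applied to the address of a struct member can now be folded into a single MEM_REF based on the enclosing object, with the member offset and the extra offset summed.

/* Illustrative sketch (made-up names).  The load below is a constant
   offset from &s.b; the new folding represents it as one MEM_REF based
   on &s with the offsets summed, i.e. a direct read of s.c, assuming
   the usual layout with no padding between b and c.  */
struct S { int a; int b; int c; };
struct S s;

int
read_past_b (void)
{
  return *(int *) ((char *) &s.b + sizeof (int));
}

int
main (void)
{
  s.c = 42;
  return read_past_b () == 42 ? 0 : 1;
}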
* tree-ssa-alias.c (ptr_deref_may_alias_decl_p): Handle MEM_REF.
(ptr_deref_may_alias_ref_p_1): Likewise.
(ao_ref_base_alias_set): Properly differentiate base object for
offset and TBAA.
(ao_ref_init_from_ptr_and_size): Use MEM_REF.
(indirect_ref_may_alias_decl_p): Handle MEM_REFs properly.
(indirect_refs_may_alias_p): Likewise.
(refs_may_alias_p_1): Likewise. Remove pointer SSA name def
chasing code.
(ref_maybe_used_by_call_p_1): Handle MEM_REF.
(call_may_clobber_ref_p_1): Likewise.
* dwarf2out.c (loc_list_from_tree): Handle MEM_REF.
* expr.c (expand_assignment): Handle MEM_REF.
(store_expr): Handle MEM_REFs from STRING_CSTs.
(store_field): If expanding a MEM_REF of a non-addressable
decl use bitfield operations.
(get_inner_reference): Handle MEM_REF.
(expand_expr_addr_expr_1): Likewise.
(expand_expr_real_1): Likewise.
* tree-eh.c (tree_could_trap_p): Handle MEM_REF.
* alias.c (ao_ref_from_mem): Handle MEM_REF.
(get_alias_set): Likewise. Properly handle VIEW_CONVERT_EXPRs.
* tree-data-ref.c (dr_analyze_innermost): Handle MEM_REF.
(dr_analyze_indices): Likewise.
(dr_analyze_alias): Likewise.
(object_address_invariant_in_loop_p): Likewise.
* gimplify.c (mark_addressable): Handle MEM_REF.
(gimplify_cond_expr): Build MEM_REFs.
(gimplify_modify_expr_to_memcpy): Likewise.
(gimplify_init_ctor_preeval_1): Handle MEM_REF.
(gimple_fold_indirect_ref): Adjust.
(gimplify_expr): Handle MEM_REF. Gimplify INDIRECT_REF to MEM_REF.
* tree.def (MEM_REF): New tree code.
* tree-dfa.c: Include toplev.h.
(get_ref_base_and_extent): Handle MEM_REF.
(get_addr_base_and_unit_offset): New function.
* emit-rtl.c (set_mem_attributes_minus_bitpos): Handle MEM_REF.
* gimple-fold.c (may_propagate_address_into_dereference): Handle
MEM_REF.
(maybe_fold_offset_to_array_ref): Allow possibly out-of-bounds
accesses if the array has just one dimension. Remove always true
parameter. Do not require type compatibility here.
(maybe_fold_offset_to_component_ref): Remove.
(maybe_fold_stmt_indirect): Remove.
(maybe_fold_reference): Remove INDIRECT_REF handling.
Fold back to non-MEM_REF.
(maybe_fold_offset_to_address): Simplify. Deal with type
mismatches here.
(maybe_fold_reference): Likewise.
(maybe_fold_stmt_addition): Likewise. Also handle
&ARRAY + I in addition to &ARRAY[0] + I.
(fold_gimple_assign): Handle ADDR_EXPR of MEM_REFs.
(gimple_get_relevant_ref_binfo): Handle MEM_REF.
* cfgexpand.c (expand_debug_expr): Handle MEM_REF.
* tree-ssa.c (useless_type_conversion_p): Make most pointer
conversions useless.
(warn_uninitialized_var): Handle MEM_REF.
(maybe_rewrite_mem_ref_base): New function.
(execute_update_addresses_taken): Implement re-writing of MEM_REFs
to SSA form.
* tree-inline.c (remap_gimple_op_r): Handle MEM_REF, remove
INDIRECT_REF handling.
(copy_tree_body_r): Handle MEM_REF.
* gimple.c (is_gimple_addressable): Adjust.
(is_gimple_address): Likewise.
(is_gimple_invariant_address): ADDR_EXPRs of MEM_REFs with
invariant base are invariant.
(is_gimple_min_lval): Adjust.
(is_gimple_mem_ref_addr): New function.
(get_base_address): Handle MEM_REF.
(count_ptr_derefs): Likewise.
(get_base_loadstore): Likewise.
* gimple.h (is_gimple_mem_ref_addr): Declare.
(gimple_call_fndecl): Handle invariant MEM_REF addresses.
* tree-cfg.c (verify_address): New function, split out from ...
(verify_expr): ... here. Use for verifying ADDR_EXPRs and
the address operand of MEM_REFs. Verify MEM_REFs. Reject
INDIRECT_REFs.
(verify_types_in_gimple_min_lval): Handle MEM_REF. Disallow
INDIRECT_REF. Allow conversions.
(verify_types_in_gimple_reference): Verify VIEW_CONVERT_EXPR of
a register does not change its size.
(verify_types_in_gimple_reference): Verify MEM_REF.
(verify_gimple_assign_single): Disallow INDIRECT_REF.
Handle MEM_REF.
* tree-ssa-operands.c (opf_non_addressable, opf_not_non_addressable):
New.
(mark_address_taken): Handle MEM_REF.
(get_indirect_ref_operands): Pass through opf_not_non_addressable.
(get_asm_expr_operands): Pass opf_not_non_addressable.
(get_expr_operands): Handle opf_[not_]non_addressable.
Handle MEM_REF. Remove INDIRECT_REF handling.
* tree-vrp.c (check_array_ref): Handle MEM_REF.
(search_for_addr_array): Likewise.
(check_array_bounds): Likewise.
(vrp_stmt_computes_nonzero): Adjust for MEM_REF.
* tree-ssa-loop-im.c (for_each_index): Handle MEM_REF.
(ref_always_accessed_p): Likewise.
(gen_lsm_tmp_name): Likewise. Handle ADDR_EXPR.
* tree-complex.c (extract_component): Do not handle INDIRECT_REF.
Handle MEM_REF.
* cgraphbuild.c (mark_load): Properly check for NULL result
from get_base_address.
(mark_store): Likewise.
* tree-ssa-loop-niter.c (array_at_struct_end_p): Handle MEM_REF.
* tree-loop-distribution.c (generate_builtin): Exchange INDIRECT_REF
handling for MEM_REF.
* tree-scalar-evolution.c (follow_ssa_edge_expr): Handle
&MEM[ptr + CST] similar to POINTER_PLUS_EXPR.
* builtins.c (stabilize_va_list_loc): Use the function ABI
valist type if we couldn't canonicalize the argument type.
Always dereference with the canonical va-list type.
(maybe_emit_free_warning): Handle MEM_REF.
(fold_builtin_memory_op): Simplify and handle MEM_REFs in folding
memmove to memcpy.
* builtins.c (fold_builtin_memory_op): Use ref-all types
for all memcpy foldings.
* omp-low.c (build_receiver_ref): Adjust for MEM_REF.
(build_outer_var_ref): Likewise.
(scan_omp_1_op): Likewise.
(lower_rec_input_clauses): Likewise.
(lower_lastprivate_clauses): Likewise.
(lower_reduction_clauses): Likewise.
(lower_copyprivate_clauses): Likewise.
(expand_omp_atomic_pipeline): Likewise.
(expand_omp_atomic_mutex): Likewise.
(create_task_copyfn): Likewise.
* tree-ssa-sccvn.c (copy_reference_ops_from_ref): Handle MEM_REF.
Remove old union trick. Initialize constant offsets.
(ao_ref_init_from_vn_reference): Likewise. Do not handle
INDIRECT_REF. Init base_alias_set properly.
(vn_reference_lookup_3): Replace INDIRECT_REF handling with
MEM_REF.
(vn_reference_fold_indirect): Adjust for MEM_REFs.
(valueize_refs): Fold MEM_REFs. Re-evaluate constant offset
for ARRAY_REFs.
(may_insert): Remove.
(visit_reference_op_load): Do not test may_insert.
(run_scc_vn): Remove parameter, do not fiddle with may_insert.
* tree-ssa-sccvn.h (struct vn_reference_op_struct): Add
a field to store the constant offset this op applies.
(run_scc_vn): Adjust prototype.
* cgraphunit.c (thunk_adjust): Adjust for MEM_REF.
* tree-ssa-ccp.c (ccp_fold): Replace INDIRECT_REF folding with
MEM_REF. Propagate &foo + CST as &MEM[&foo, CST]. Do not
bother about volatile qualifiers on pointers.
(fold_const_aggregate_ref): Handle MEM_REF, do not handle INDIRECT_REF.
* tree-ssa-loop-ivopts.c (determine_base_object): Adjust
for MEM_REF.
(strip_offset_1): Likewise.
(find_interesting_uses_address): Replace INDIRECT_REF handling with
MEM_REF handling.
(get_computation_cost_at): Likewise.
* ipa-pure-const.c (check_op): Handle MEM_REF.
* tree-stdarg.c (check_all_va_list_escapes): Adjust for MEM_REF.
* tree-ssa-sink.c (is_hidden_global_store): Handle MEM_REF
and constants.
* ipa-inline.c (likely_eliminated_by_inlining_p): Handle MEM_REF.
* tree-parloops.c (take_address_of): Adjust for MEM_REF.
(eliminate_local_variables_1): Likewise.
(create_call_for_reduction_1): Likewise.
(create_loads_for_reductions): Likewise.
(create_loads_and_stores_for_name): Likewise.
* matrix-reorg.c (may_flatten_matrices_1): Sanitize.
(ssa_accessed_in_tree): Handle MEM_REF.
(ssa_accessed_in_assign_rhs): Likewise.
(update_type_size): Likewise.
(analyze_accesses_for_call_stmt): Likewise.
(analyze_accesses_for_assign_stmt): Likewise.
(transform_access_sites): Likewise.
(transform_allocation_sites): Likewise.
* tree-affine.c (tree_to_aff_combination): Handle MEM_REF.
* tree-vect-data-refs.c (vect_create_addr_base_for_vector_ref): Do
not handle INDIRECT_REF.
* tree-ssa-phiopt.c (add_or_mark_expr): Handle MEM_REF.
(cond_store_replacement): Likewise.
* tree-ssa-pre.c (create_component_ref_by_pieces_1): Handle
MEM_REF, do not handle INDIRECT_REFs.
(insert_into_preds_of_block): Properly initialize avail.
(phi_translate_1): Fold MEM_REFs. Re-evaluate constant offset
for ARRAY_REFs. Properly handle reference lookups that
require a bit re-interpretation.
(can_PRE_operation): Do not handle INDIRECT_REF. Handle MEM_REF.
* tree-sra.c (build_access_from_expr_1): Handle MEM_REF.
(build_ref_for_offset_1): Remove.
(build_ref_for_offset): Build MEM_REFs.
(gate_intra_sra): Disable for now.
(sra_ipa_modify_expr): Handle MEM_REF.
(ipa_early_sra_gate): Disable for now.
* tree-sra.c (create_access): Swap INDIRECT_REF handling for
MEM_REF handling.
(disqualify_base_of_expr): Likewise.
(ptr_parm_has_direct_uses): Swap INDIRECT_REF handling for
MEM_REF handling.
(sra_ipa_modify_expr): Remove INDIRECT_REF handling.
Use mem_ref_offset. Remove bogus folding.
(build_access_from_expr_1): Properly handle MEM_REF for
non IPA-SRA.
(make_fancy_name_1): Add support for MEM_REF.
* tree-predcom.c (ref_at_iteration): Handle MEM_REFs.
* tree-mudflap.c (mf_xform_derefs_1): Adjust for MEM_REF.
* ipa-prop.c (compute_complex_assign_jump_func): Handle MEM_REF.
(compute_complex_ancestor_jump_func): Likewise.
(ipa_analyze_virtual_call_uses): Likewise.
* tree-ssa-forwprop.c (forward_propagate_addr_expr_1): Replace
INDIRECT_REF folding with more generalized MEM_REF folding.
(tree_ssa_forward_propagate_single_use_vars): Adjust accordingly.
(forward_propagate_addr_into_variable_array_index): Also handle
&ARRAY + I in addition to &ARRAY[0] + I.
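A source-level sketch of the &ARRAY + I form mentioned above (illustrative only; names are made up, not from the patch): the pointer is defined from the address of the whole array rather than of its first element, and the dereference of the offsetted pointer is still folded back into array indexing.

/* Illustrative sketch (made-up names).  "p" is defined from &a, the
   address of the array object itself, so the POINTER_PLUS_EXPR feeding
   the dereference has the &ARRAY + I shape; it is now folded back to
   a[i] just like the &a[0] + I form.  */
int a[16];

int
read_elt (int i)
{
  int *p = (int *) &a;
  return *(p + i);
}

int
main (void)
{
  a[3] = 7;
  return read_elt (3) == 7 ? 0 : 1;
}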
* tree-ssa-dce.c (ref_may_be_aliased): Handle MEM_REF.
* tree-ssa-ter.c (find_replaceable_in_bb): Avoid TER if that
creates assignments with overlap.
* tree-nested.c (get_static_chain): Adjust for MEM_REF.
(get_frame_field): Likewise.
(get_nonlocal_debug_decl): Likewise.
(convert_nonlocal_reference_op): Likewise.
(struct nesting_info): Add mem_refs pointer-set.
(create_nesting_tree): Allocate it.
(convert_local_reference_op): Insert to-be-folded mem-refs.
(fold_mem_refs): New function.
(finalize_nesting_tree_1): Perform deferred folding of mem-refs.
(free_nesting_tree): Free the pointer-set.
* tree-vect-stmts.c (vectorizable_store): Adjust for MEM_REF.
(vectorizable_load): Likewise.
* tree-ssa-phiprop.c (phiprop_insert_phi): Adjust for MEM_REF.
(propagate_with_phi): Likewise.
* tree-object-size.c (addr_object_size): Handle MEM_REFs
instead of INDIRECT_REFs.
(compute_object_offset): Handle MEM_REF.
(plus_stmt_object_size): Handle MEM_REF.
(collect_object_sizes_for): Dispatch to plus_stmt_object_size
for &MEM_REF.
* tree-flow.h (get_addr_base_and_unit_offset): Declare.
(symbol_marked_for_renaming): Likewise.
* Makefile.in (tree-dfa.o): Add $(TOPLEV_H).
(fold-const.o): Add $(TREE_FLOW_H).
* tree-ssa-structalias.c (get_constraint_for_1): Handle MEM_REF.
(find_func_clobbers): Likewise.
* ipa-struct-reorg.c (decompose_indirect_ref_acc): Handle MEM_REF.
(decompose_access): Likewise.
(replace_field_acc): Likewise.
(replace_field_access_stmt): Likewise.
(insert_new_var_in_stmt): Likewise.
(get_stmt_accesses): Likewise.
(reorg_structs_drive): Disable.
* config/i386/i386.c (ix86_va_start): Adjust for MEM_REF.
(ix86_canonical_va_list_type): Likewise.
cp/
* cp-gimplify.c (cp_gimplify_expr): Open-code the rhs
predicate we are looking for, allow non-gimplified
INDIRECT_REFs.
testsuite/
* gcc.c-torture/execute/20100316-1.c: New testcase.
* gcc.c-torture/execute/pr44468.c: Likewise.
* gcc.c-torture/compile/20100609-1.c: Likewise.
* gcc.dg/volatile2.c: Adjust.
* gcc.dg/plugin/selfassign.c: Likewise.
* gcc.dg/pr36902.c: Likewise.
* gcc.dg/tree-ssa/foldaddr-2.c: Remove.
* gcc.dg/tree-ssa/foldaddr-3.c: Likewise.
* gcc.dg/tree-ssa/forwprop-8.c: Adjust.
* gcc.dg/tree-ssa/pr17141-1.c: Likewise.
* gcc.dg/tree-ssa/ssa-fre-13.c: Likewise.
* gcc.dg/tree-ssa/ssa-fre-14.c: Likewise.
* gcc.dg/tree-ssa/ssa-ccp-21.c: Likewise.
* gcc.dg/tree-ssa/pta-ptrarith-1.c: Likewise.
* gcc.dg/tree-ssa/20030807-7.c: Likewise.
* gcc.dg/tree-ssa/forwprop-10.c: Likewise.
* gcc.dg/tree-ssa/ssa-fre-1.c: Likewise.
* gcc.dg/tree-ssa/pta-ptrarith-2.c: Likewise.
* gcc.dg/tree-ssa/ssa-ccp-23.c: Likewise.
* gcc.dg/tree-ssa/forwprop-1.c: Likewise.
* gcc.dg/tree-ssa/forwprop-2.c: Likewise.
* gcc.dg/tree-ssa/struct-aliasing-1.c: Likewise.
* gcc.dg/tree-ssa/ssa-ccp-25.c: Likewise.
* gcc.dg/tree-ssa/ssa-pre-26.c: Likewise.
* gcc.dg/tree-ssa/struct-aliasing-2.c: Likewise.
* gcc.dg/tree-ssa/ssa-ccp-26.c: Likewise.
* gcc.dg/tree-ssa/ssa-sccvn-4.c: Likewise.
* gcc.dg/tree-ssa/ssa-pre-7.c: Likewise.
* gcc.dg/tree-ssa/forwprop-5.c: Likewise.
* gcc.dg/struct/w_prof_two_strs.c: XFAIL.
* gcc.dg/struct/wo_prof_escape_arg_to_local.c: Likewise.
* gcc.dg/struct/wo_prof_global_var.c: Likewise.
* gcc.dg/struct/wo_prof_malloc_size_var.c: Likewise.
* gcc.dg/struct/w_prof_local_array.c: Likewise.
* gcc.dg/struct/w_prof_single_str_global.c: Likewise.
* gcc.dg/struct/wo_prof_escape_str_init.c: Likewise.
* gcc.dg/struct/wo_prof_array_through_pointer.c: Likewise.
* gcc.dg/struct/w_prof_global_array.c: Likewise.
* gcc.dg/struct/wo_prof_array_field.c: Likewise.
* gcc.dg/struct/wo_prof_single_str_local.c: Likewise.
* gcc.dg/struct/w_prof_local_var.c: Likewise.
* gcc.dg/struct/wo_prof_two_strs.c: Likewise.
* gcc.dg/struct/wo_prof_empty_str.c: Likewise.
* gcc.dg/struct/wo_prof_local_array.c: Likewise.
* gcc.dg/struct/w_prof_global_var.c: Likewise.
* gcc.dg/struct/wo_prof_single_str_global.c: Likewise.
* gcc.dg/struct/wo_prof_escape_substr_value.c: Likewise.
* gcc.dg/struct/wo_prof_global_array.c: Likewise.
* gcc.dg/struct/wo_prof_escape_return.c: Likewise.
* gcc.dg/struct/wo_prof_escape_substr_array.c: Likewise.
* gcc.dg/struct/wo_prof_double_malloc.c: Likewise.
* gcc.dg/struct/w_ratio_cold_str.c: Likewise.
* gcc.dg/struct/wo_prof_escape_substr_pointer.c: Likewise.
* gcc.dg/struct/wo_prof_local_var.c: Likewise.
* gcc.dg/tree-prof/stringop-1.c: Adjust.
* g++.dg/tree-ssa/pr31146.C: Likewise.
* g++.dg/tree-ssa/copyprop-1.C: Likewise.
* g++.dg/tree-ssa/pr33604.C: Likewise.
* g++.dg/plugin/selfassign.c: Likewise.
* gfortran.dg/array_memcpy_3.f90: Likewise.
* gfortran.dg/array_memcpy_4.f90: Likewise.
* c-c++-common/torture/pr42834.c: New testcase.
From-SVN: r161655
Diffstat (limited to 'gcc/tree-ssa-forwprop.c')
-rw-r--r--  gcc/tree-ssa-forwprop.c | 327
1 file changed, 229 insertions, 98 deletions
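Before the diff itself, a hedged source-level sketch (made-up names, not from the patch) of what the rewritten forwprop code below handles: an SSA pointer defined as an invariant address plus a constant offset is propagated as the invariant &MEM[&decl, CST], and a MEM_REF load or store through it is then rewritten into a direct access to the underlying object.

/* Illustrative sketch (made-up names).  "q" is a constant adjustment
   of a pointer that is itself an invariant address; the new code
   rewrites q's definition to the invariant &MEM[&buf, 8] (8 bytes on
   a typical 4-byte-int target) and the store through q then becomes a
   direct store into buf.  */
int buf[8];

void
set_third (int v)
{
  int *p = buf;        /* an ADDR_EXPR, &buf[0] */
  int *q = p + 2;      /* constant pointer adjustment */
  *q = v;              /* rewritten to a store at the invariant address */
}

int
main (void)
{
  set_third (5);
  return buf[2] == 5 ? 0 : 1;
}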
diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c
index eb6c831..5044aff 100644
--- a/gcc/tree-ssa-forwprop.c
+++ b/gcc/tree-ssa-forwprop.c
@@ -628,9 +628,14 @@ forward_propagate_addr_into_variable_array_index (tree offset,
 {
   tree index, tunit;
   gimple offset_def, use_stmt = gsi_stmt (*use_stmt_gsi);
-  tree tmp;
+  tree new_rhs, tmp;
 
-  tunit = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (def_rhs)));
+  if (TREE_CODE (TREE_OPERAND (def_rhs, 0)) == ARRAY_REF)
+    tunit = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (def_rhs)));
+  else if (TREE_CODE (TREE_TYPE (TREE_OPERAND (def_rhs, 0))) == ARRAY_TYPE)
+    tunit = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (TREE_TYPE (def_rhs))));
+  else
+    return false;
   if (!host_integerp (tunit, 1))
     return false;
 
@@ -697,10 +702,28 @@ forward_propagate_addr_into_variable_array_index (tree offset,
   /* Replace the pointer addition with array indexing.  */
   index = force_gimple_operand_gsi (use_stmt_gsi, index, true, NULL_TREE,
                                     true, GSI_SAME_STMT);
-  gimple_assign_set_rhs_from_tree (use_stmt_gsi, unshare_expr (def_rhs));
+  if (TREE_CODE (TREE_OPERAND (def_rhs, 0)) == ARRAY_REF)
+    {
+      new_rhs = unshare_expr (def_rhs);
+      TREE_OPERAND (TREE_OPERAND (new_rhs, 0), 1) = index;
+    }
+  else
+    {
+      new_rhs = build4 (ARRAY_REF, TREE_TYPE (TREE_TYPE (TREE_TYPE (def_rhs))),
+                        unshare_expr (TREE_OPERAND (def_rhs, 0)),
+                        index, integer_zero_node, NULL_TREE);
+      new_rhs = build_fold_addr_expr (new_rhs);
+      if (!useless_type_conversion_p (TREE_TYPE (gimple_assign_lhs (use_stmt)),
+                                      TREE_TYPE (new_rhs)))
+        {
+          new_rhs = force_gimple_operand_gsi (use_stmt_gsi, new_rhs, true,
+                                              NULL_TREE, true, GSI_SAME_STMT);
+          new_rhs = fold_convert (TREE_TYPE (gimple_assign_lhs (use_stmt)),
+                                  new_rhs);
+        }
+    }
+  gimple_assign_set_rhs_from_tree (use_stmt_gsi, new_rhs);
   use_stmt = gsi_stmt (*use_stmt_gsi);
-  TREE_OPERAND (TREE_OPERAND (gimple_assign_rhs1 (use_stmt), 0), 1)
-    = index;
 
   /* That should have created gimple, so there is no need to
      record information to undo the propagation.  */
@@ -725,11 +748,9 @@ forward_propagate_addr_expr_1 (tree name, tree def_rhs,
                                bool single_use_p)
 {
   tree lhs, rhs, rhs2, array_ref;
-  tree *rhsp, *lhsp;
   gimple use_stmt = gsi_stmt (*use_stmt_gsi);
   enum tree_code rhs_code;
   bool res = true;
-  bool addr_p = false;
 
   gcc_assert (TREE_CODE (def_rhs) == ADDR_EXPR);
 
@@ -767,31 +788,120 @@ forward_propagate_addr_expr_1 (tree name, tree def_rhs,
       return true;
     }
 
+  /* Propagate through constant pointer adjustments.  */
+  if (TREE_CODE (lhs) == SSA_NAME
+      && rhs_code == POINTER_PLUS_EXPR
+      && rhs == name
+      && TREE_CODE (gimple_assign_rhs2 (use_stmt)) == INTEGER_CST)
+    {
+      tree new_def_rhs;
+      /* As we come here with non-invariant addresses in def_rhs we need
+         to make sure we can build a valid constant offsetted address
+         for further propagation.  Simply rely on fold building that
+         and check after the fact.  */
+      new_def_rhs = fold_build2 (MEM_REF, TREE_TYPE (TREE_TYPE (rhs)),
+                                 def_rhs,
+                                 fold_convert (ptr_type_node,
+                                               gimple_assign_rhs2 (use_stmt)));
+      if (TREE_CODE (new_def_rhs) == MEM_REF
+          && TREE_CODE (TREE_OPERAND (new_def_rhs, 0)) == ADDR_EXPR
+          && !DECL_P (TREE_OPERAND (TREE_OPERAND (new_def_rhs, 0), 0))
+          && !CONSTANT_CLASS_P (TREE_OPERAND (TREE_OPERAND (new_def_rhs, 0), 0)))
+        return false;
+      new_def_rhs = build_fold_addr_expr_with_type (new_def_rhs,
+                                                    TREE_TYPE (rhs));
+
+      /* Recurse.  If we could propagate into all uses of lhs do not
+         bother to replace into the current use but just pretend we did.  */
+      if (TREE_CODE (new_def_rhs) == ADDR_EXPR
+          && forward_propagate_addr_expr (lhs, new_def_rhs))
+        return true;
+
+      if (useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (new_def_rhs)))
+        gimple_assign_set_rhs_with_ops (use_stmt_gsi, TREE_CODE (new_def_rhs),
+                                        new_def_rhs, NULL_TREE);
+      else if (is_gimple_min_invariant (new_def_rhs))
+        gimple_assign_set_rhs_with_ops (use_stmt_gsi, NOP_EXPR,
+                                        new_def_rhs, NULL_TREE);
+      else
+        return false;
+      gcc_assert (gsi_stmt (*use_stmt_gsi) == use_stmt);
+      update_stmt (use_stmt);
+      return true;
+    }
+
   /* Now strip away any outer COMPONENT_REF/ARRAY_REF nodes from the
      LHS.  ADDR_EXPR will not appear on the LHS.  */
-  lhsp = gimple_assign_lhs_ptr (use_stmt);
-  while (handled_component_p (*lhsp))
-    lhsp = &TREE_OPERAND (*lhsp, 0);
-  lhs = *lhsp;
+  lhs = gimple_assign_lhs (use_stmt);
+  while (handled_component_p (lhs))
+    lhs = TREE_OPERAND (lhs, 0);
 
-  /* Now see if the LHS node is an INDIRECT_REF using NAME.  If so,
+  /* Now see if the LHS node is a MEM_REF using NAME.  If so,
      propagate the ADDR_EXPR into the use of NAME and fold the result.  */
-  if (TREE_CODE (lhs) == INDIRECT_REF
+  if (TREE_CODE (lhs) == MEM_REF
       && TREE_OPERAND (lhs, 0) == name)
     {
-      if (may_propagate_address_into_dereference (def_rhs, lhs)
-          && (lhsp != gimple_assign_lhs_ptr (use_stmt)
-              || useless_type_conversion_p
-                   (TREE_TYPE (TREE_OPERAND (def_rhs, 0)), TREE_TYPE (rhs))))
+      tree def_rhs_base;
+      HOST_WIDE_INT def_rhs_offset;
+      /* If the address is invariant we can always fold it.  */
+      if ((def_rhs_base = get_addr_base_and_unit_offset (TREE_OPERAND (def_rhs, 0),
+                                                         &def_rhs_offset)))
        {
-          *lhsp = unshare_expr (TREE_OPERAND (def_rhs, 0));
-          fold_stmt_inplace (use_stmt);
+          double_int off = mem_ref_offset (lhs);
+          tree new_ptr;
+          off = double_int_add (off,
+                                shwi_to_double_int (def_rhs_offset));
+          if (TREE_CODE (def_rhs_base) == MEM_REF)
+            {
+              off = double_int_add (off, mem_ref_offset (def_rhs_base));
+              new_ptr = TREE_OPERAND (def_rhs_base, 0);
+            }
+          else
+            new_ptr = build_fold_addr_expr (def_rhs_base);
+          TREE_OPERAND (lhs, 0) = new_ptr;
+          TREE_OPERAND (lhs, 1)
+            = double_int_to_tree (TREE_TYPE (TREE_OPERAND (lhs, 1)), off);
           tidy_after_forward_propagate_addr (use_stmt);
-          /* Continue propagating into the RHS if this was not the only use.  */
           if (single_use_p)
            return true;
        }
+      /* If the LHS is a plain dereference and the value type is the same as
+         that of the pointed-to type of the address we can put the
+         dereferenced address on the LHS preserving the original alias-type.  */
+      else if (gimple_assign_lhs (use_stmt) == lhs
+               && useless_type_conversion_p
+                    (TREE_TYPE (TREE_OPERAND (def_rhs, 0)),
+                     TREE_TYPE (gimple_assign_rhs1 (use_stmt))))
+        {
+          tree *def_rhs_basep = &TREE_OPERAND (def_rhs, 0);
+          tree new_offset, new_base, saved;
+          while (handled_component_p (*def_rhs_basep))
+            def_rhs_basep = &TREE_OPERAND (*def_rhs_basep, 0);
+          saved = *def_rhs_basep;
+          if (TREE_CODE (*def_rhs_basep) == MEM_REF)
+            {
+              new_base = TREE_OPERAND (*def_rhs_basep, 0);
+              new_offset
+                = int_const_binop (PLUS_EXPR, TREE_OPERAND (lhs, 1),
+                                   TREE_OPERAND (*def_rhs_basep, 1), 0);
+            }
+          else
+            {
+              new_base = build_fold_addr_expr (*def_rhs_basep);
+              new_offset = TREE_OPERAND (lhs, 1);
+            }
+          *def_rhs_basep = build2 (MEM_REF, TREE_TYPE (*def_rhs_basep),
+                                   new_base, new_offset);
+          gimple_assign_set_lhs (use_stmt,
+                                 unshare_expr (TREE_OPERAND (def_rhs, 0)));
+          *def_rhs_basep = saved;
+          tidy_after_forward_propagate_addr (use_stmt);
+          /* Continue propagating into the RHS if this was not the
+             only use.  */
+          if (single_use_p)
+            return true;
+        }
       else
        /* We can have a struct assignment dereferencing our name twice.
           Note that we didn't propagate into the lhs to not falsely
@@ -801,79 +911,76 @@ forward_propagate_addr_expr_1 (tree name, tree def_rhs,
 
   /* Strip away any outer COMPONENT_REF, ARRAY_REF or ADDR_EXPR
      nodes from the RHS.  */
-  rhsp = gimple_assign_rhs1_ptr (use_stmt);
-  if (TREE_CODE (*rhsp) == ADDR_EXPR)
-    {
-      rhsp = &TREE_OPERAND (*rhsp, 0);
-      addr_p = true;
-    }
-  while (handled_component_p (*rhsp))
-    rhsp = &TREE_OPERAND (*rhsp, 0);
-  rhs = *rhsp;
+  rhs = gimple_assign_rhs1 (use_stmt);
+  if (TREE_CODE (rhs) == ADDR_EXPR)
+    rhs = TREE_OPERAND (rhs, 0);
+  while (handled_component_p (rhs))
+    rhs = TREE_OPERAND (rhs, 0);
 
-  /* Now see if the RHS node is an INDIRECT_REF using NAME.  If so,
+  /* Now see if the RHS node is a MEM_REF using NAME.  If so,
      propagate the ADDR_EXPR into the use of NAME and fold the result.  */
-  if (TREE_CODE (rhs) == INDIRECT_REF
-      && TREE_OPERAND (rhs, 0) == name
-      && may_propagate_address_into_dereference (def_rhs, rhs))
+  if (TREE_CODE (rhs) == MEM_REF
+      && TREE_OPERAND (rhs, 0) == name)
     {
-      *rhsp = unshare_expr (TREE_OPERAND (def_rhs, 0));
-      fold_stmt_inplace (use_stmt);
-      tidy_after_forward_propagate_addr (use_stmt);
-      return res;
+      tree def_rhs_base;
+      HOST_WIDE_INT def_rhs_offset;
+      if ((def_rhs_base = get_addr_base_and_unit_offset (TREE_OPERAND (def_rhs, 0),
+                                                         &def_rhs_offset)))
+        {
+          double_int off = mem_ref_offset (rhs);
+          tree new_ptr;
+          off = double_int_add (off,
+                                shwi_to_double_int (def_rhs_offset));
+          if (TREE_CODE (def_rhs_base) == MEM_REF)
+            {
+              off = double_int_add (off, mem_ref_offset (def_rhs_base));
+              new_ptr = TREE_OPERAND (def_rhs_base, 0);
+            }
+          else
+            new_ptr = build_fold_addr_expr (def_rhs_base);
+          TREE_OPERAND (rhs, 0) = new_ptr;
+          TREE_OPERAND (rhs, 1)
+            = double_int_to_tree (TREE_TYPE (TREE_OPERAND (rhs, 1)), off);
+          fold_stmt_inplace (use_stmt);
+          tidy_after_forward_propagate_addr (use_stmt);
+          return res;
+        }
+      /* If the LHS is a plain dereference and the value type is the same as
+         that of the pointed-to type of the address we can put the
+         dereferenced address on the LHS preserving the original alias-type.  */
+      else if (gimple_assign_rhs1 (use_stmt) == rhs
+               && useless_type_conversion_p
+                    (TREE_TYPE (gimple_assign_lhs (use_stmt)),
+                     TREE_TYPE (TREE_OPERAND (def_rhs, 0))))
+        {
+          tree *def_rhs_basep = &TREE_OPERAND (def_rhs, 0);
+          tree new_offset, new_base, saved;
+          while (handled_component_p (*def_rhs_basep))
+            def_rhs_basep = &TREE_OPERAND (*def_rhs_basep, 0);
+          saved = *def_rhs_basep;
+          if (TREE_CODE (*def_rhs_basep) == MEM_REF)
+            {
+              new_base = TREE_OPERAND (*def_rhs_basep, 0);
+              new_offset
+                = int_const_binop (PLUS_EXPR, TREE_OPERAND (rhs, 1),
+                                   TREE_OPERAND (*def_rhs_basep, 1), 0);
+            }
+          else
+            {
+              new_base = build_fold_addr_expr (*def_rhs_basep);
+              new_offset = TREE_OPERAND (rhs, 1);
+            }
+          *def_rhs_basep = build2 (MEM_REF, TREE_TYPE (*def_rhs_basep),
+                                   new_base, new_offset);
+          gimple_assign_set_rhs1 (use_stmt,
+                                  unshare_expr (TREE_OPERAND (def_rhs, 0)));
+          *def_rhs_basep = saved;
+          fold_stmt_inplace (use_stmt);
+          tidy_after_forward_propagate_addr (use_stmt);
+          return res;
+        }
     }
 
-  /* Now see if the RHS node is an INDIRECT_REF using NAME.  If so,
-     propagate the ADDR_EXPR into the use of NAME and try to
-     create a VCE and fold the result.  */
-  if (TREE_CODE (rhs) == INDIRECT_REF
-      && TREE_OPERAND (rhs, 0) == name
-      && TYPE_SIZE (TREE_TYPE (rhs))
-      && TYPE_SIZE (TREE_TYPE (TREE_OPERAND (def_rhs, 0)))
-      /* Function decls should not be used for VCE either as it could be a
-         function descriptor that we want and not the actual function code.  */
-      && TREE_CODE (TREE_OPERAND (def_rhs, 0)) != FUNCTION_DECL
-      /* We should not convert volatile loads to non volatile loads.  */
-      && !TYPE_VOLATILE (TREE_TYPE (rhs))
-      && !TYPE_VOLATILE (TREE_TYPE (TREE_OPERAND (def_rhs, 0)))
-      && operand_equal_p (TYPE_SIZE (TREE_TYPE (rhs)),
-                          TYPE_SIZE (TREE_TYPE (TREE_OPERAND (def_rhs, 0))), 0)
-      /* Make sure we only do TBAA compatible replacements.  */
-      && get_alias_set (TREE_OPERAND (def_rhs, 0)) == get_alias_set (rhs))
-    {
-      tree def_rhs_base, new_rhs = unshare_expr (TREE_OPERAND (def_rhs, 0));
-      new_rhs = fold_build1 (VIEW_CONVERT_EXPR, TREE_TYPE (rhs), new_rhs);
-      if (TREE_CODE (new_rhs) != VIEW_CONVERT_EXPR)
-        {
-          /* If we have folded the VIEW_CONVERT_EXPR then the result is only
-             valid if we can replace the whole rhs of the use statement.  */
-          if (rhs != gimple_assign_rhs1 (use_stmt))
-            return false;
-          new_rhs = force_gimple_operand_gsi (use_stmt_gsi, new_rhs, true, NULL,
-                                              true, GSI_NEW_STMT);
-          gimple_assign_set_rhs1 (use_stmt, new_rhs);
-          tidy_after_forward_propagate_addr (use_stmt);
-          return res;
-        }
-      /* If the defining rhs comes from an indirect reference, then do not
-         convert into a VIEW_CONVERT_EXPR.  Likewise if we'll end up taking
-         the address of a V_C_E of a constant.  */
-      def_rhs_base = TREE_OPERAND (def_rhs, 0);
-      while (handled_component_p (def_rhs_base))
-        def_rhs_base = TREE_OPERAND (def_rhs_base, 0);
-      if (!INDIRECT_REF_P (def_rhs_base)
-          && (!addr_p
-              || !is_gimple_min_invariant (def_rhs)))
-        {
-          /* We may have arbitrary VIEW_CONVERT_EXPRs in a nested component
-             reference.  Place it there and fold the thing.  */
-          *rhsp = new_rhs;
-          fold_stmt_inplace (use_stmt);
-          tidy_after_forward_propagate_addr (use_stmt);
-          return res;
-        }
-    }
-
   /* If the use of the ADDR_EXPR is not a POINTER_PLUS_EXPR, there
      is nothing to do.  */
   if (gimple_assign_rhs_code (use_stmt) != POINTER_PLUS_EXPR
@@ -885,9 +992,10 @@ forward_propagate_addr_expr_1 (tree name, tree def_rhs,
      element zero in an array.  If that is not the case then there
      is nothing to do.  */
   array_ref = TREE_OPERAND (def_rhs, 0);
-  if (TREE_CODE (array_ref) != ARRAY_REF
-      || TREE_CODE (TREE_TYPE (TREE_OPERAND (array_ref, 0))) != ARRAY_TYPE
-      || TREE_CODE (TREE_OPERAND (array_ref, 1)) != INTEGER_CST)
+  if ((TREE_CODE (array_ref) != ARRAY_REF
+       || TREE_CODE (TREE_TYPE (TREE_OPERAND (array_ref, 0))) != ARRAY_TYPE
+       || TREE_CODE (TREE_OPERAND (array_ref, 1)) != INTEGER_CST)
+      && TREE_CODE (TREE_TYPE (array_ref)) != ARRAY_TYPE)
     return false;
 
   rhs2 = gimple_assign_rhs2 (use_stmt);
@@ -923,7 +1031,8 @@ forward_propagate_addr_expr_1 (tree name, tree def_rhs,
      array elements, then the result is converted into the proper
      type for the arithmetic.  */
   if (TREE_CODE (rhs2) == SSA_NAME
-      && integer_zerop (TREE_OPERAND (array_ref, 1))
+      && (TREE_CODE (array_ref) != ARRAY_REF
+          || integer_zerop (TREE_OPERAND (array_ref, 1)))
       && useless_type_conversion_p (TREE_TYPE (name), TREE_TYPE (def_rhs))
      /* Avoid problems with IVopts creating PLUS_EXPRs with a
         different type than their operands.  */
@@ -1300,13 +1409,35 @@ tree_ssa_forward_propagate_single_use_vars (void)
                  else
                    gsi_next (&gsi);
                }
-             else if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR
-                      && is_gimple_min_invariant (rhs))
+             else if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR)
                {
-                 /* Make sure to fold &a[0] + off_1 here.  */
-                 fold_stmt_inplace (stmt);
-                 update_stmt (stmt);
-                 if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR)
+                 if (TREE_CODE (gimple_assign_rhs2 (stmt)) == INTEGER_CST
+                     /* ???  Better adjust the interface to that function
+                        instead of building new trees here.  */
+                     && forward_propagate_addr_expr
+                          (lhs,
+                           build1 (ADDR_EXPR,
+                                   TREE_TYPE (rhs),
+                                   fold_build2 (MEM_REF,
+                                                TREE_TYPE (TREE_TYPE (rhs)),
+                                                rhs,
+                                                fold_convert
+                                                  (ptr_type_node,
+                                                   gimple_assign_rhs2 (stmt))))))
+                   {
+                     release_defs (stmt);
+                     todoflags |= TODO_remove_unused_locals;
+                     gsi_remove (&gsi, true);
+                   }
+                 else if (is_gimple_min_invariant (rhs))
+                   {
+                     /* Make sure to fold &a[0] + off_1 here.  */
+                     fold_stmt_inplace (stmt);
+                     update_stmt (stmt);
+                     if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR)
+                       gsi_next (&gsi);
+                   }
+                 else
                    gsi_next (&gsi);
                }
              else if ((gimple_assign_rhs_code (stmt) == BIT_NOT_EXPR