author     Teresa Johnson <tejohnson@google.com>   2013-04-08 17:39:10 +0000
committer  Teresa Johnson <tejohnson@gcc.gnu.org>  2013-04-08 17:39:10 +0000
commit     8ddb5a296eea999c6376f43a643b2baf79cf886a (patch)
tree       ffb943825496db918422c6ebe357dcdcc5207a4b /gcc/lto-cgraph.c
parent     d6222d4ef011f80c08fb9619c29619b47daf4feb (diff)
First phase of unifying the computation of profile scale factors/probabilities
and the actual scaling to use rounding divides:
- Add new macro GCOV_COMPUTE_SCALE to basic-block.h to compute the scale
factor/probability via a rounding divide.
- Change all locations that already perform rounding divides (inline or via RDIV)
to use the appropriate helper: GCOV_COMPUTE_SCALE, apply_probability or
combine_probabilities.
- Change ipa-cp.c truncating divides to use rounding divides.
- Add comments at all other locations (currently using truncating divides)
suggesting they be switched to one of the helpers so they use a rounding divide.
The next phase will convert the locations still using truncating divides, marked
with a comment here, to rounding divides via the helper methods.
2013-04-08 Teresa Johnson <tejohnson@google.com>
* basic-block.h (GCOV_COMPUTE_SCALE): Define.
* ipa-inline-analysis.c (param_change_prob): Use helper rounding divide
methods.
(estimate_edge_size_and_time): Add comment to suggest using rounding
methods.
(estimate_node_size_and_time): Ditto.
(remap_edge_change_prob): Use helper rounding divide methods.
* value-prof.c (gimple_divmod_fixed_value_transform): Ditto.
(gimple_mod_pow2_value_transform): Ditto.
(gimple_mod_subtract_transform): Ditto.
(gimple_ic_transform): Ditto.
(gimple_stringops_transform): Ditto.
* stmt.c (conditional_probability): Ditto.
(emit_case_dispatch_table): Ditto.
* lto-cgraph.c (merge_profile_summaries): Ditto.
* tree-optimize.c (execute_fixup_cfg): Ditto.
* cfgcleanup.c (try_forward_edges): Ditto.
* cfgloopmanip.c (scale_loop_profile): Ditto.
(loopify): Ditto.
(duplicate_loop_to_header_edge): Ditto.
(lv_adjust_loop_entry_edge): Ditto.
* tree-vect-loop.c (vect_transform_loop): Ditto.
* profile.c (compute_branch_probabilities): Ditto.
* cfgbuild.c (compute_outgoing_frequencies): Ditto.
* lto-streamer-in.c (input_cfg): Ditto.
* gimple-streamer-in.c (input_bb): Ditto.
* ipa-cp.c (update_profiling_info): Ditto.
(update_specialized_profile): Ditto.
* tree-vect-loop-manip.c (slpeel_tree_peel_loop_to_edge): Ditto.
* cfg.c (update_bb_profile_for_threading): Add comment to suggest using
rounding methods.
* sched-rgn.c (compute_dom_prob_ps): Ditto.
(compute_trg_info): Ditto.
* cfgrtl.c (force_nonfallthru_and_redirect): Ditto.
(purge_dead_edges): Ditto.
* loop-unswitch.c (unswitch_loop): Ditto.
* cgraphclones.c (cgraph_clone_edge): Ditto.
(cgraph_clone_node): Ditto.
* tree-inline.c (copy_bb): Ditto.
(copy_edges_for_bb): Ditto.
(initialize_cfun): Ditto.
(copy_cfg_body): Ditto.
(expand_call_inline): Ditto.
From-SVN: r197595
Diffstat (limited to 'gcc/lto-cgraph.c')
-rw-r--r-- | gcc/lto-cgraph.c | 29 |
1 file changed, 15 insertions, 14 deletions
diff --git a/gcc/lto-cgraph.c b/gcc/lto-cgraph.c
index ac92e90..69f5e3a 100644
--- a/gcc/lto-cgraph.c
+++ b/gcc/lto-cgraph.c
@@ -1343,14 +1343,14 @@ merge_profile_summaries (struct lto_file_decl_data **file_data_vec)
   for (j = 0; (file_data = file_data_vec[j]) != NULL; j++)
     if (file_data->profile_info.runs)
       {
-	int scale = RDIV (REG_BR_PROB_BASE * max_runs,
-			  file_data->profile_info.runs);
-	lto_gcov_summary.sum_max = MAX (lto_gcov_summary.sum_max,
-					RDIV (file_data->profile_info.sum_max
-					      * scale, REG_BR_PROB_BASE));
-	lto_gcov_summary.sum_all = MAX (lto_gcov_summary.sum_all,
-					RDIV (file_data->profile_info.sum_all
-					      * scale, REG_BR_PROB_BASE));
+	int scale = GCOV_COMPUTE_SCALE (max_runs,
+					file_data->profile_info.runs);
+	lto_gcov_summary.sum_max
+	    = MAX (lto_gcov_summary.sum_max,
+		   apply_probability (file_data->profile_info.sum_max, scale));
+	lto_gcov_summary.sum_all
+	    = MAX (lto_gcov_summary.sum_all,
+		   apply_probability (file_data->profile_info.sum_all, scale));
 	/* Save a pointer to the profile_info with the largest
 	   scaled sum_all and the scale for use in merging the
 	   histogram.  */
@@ -1371,8 +1371,9 @@ merge_profile_summaries (struct lto_file_decl_data **file_data_vec)
 	{
 	  /* Scale up the min value as we did the corresponding
 	     sum_all above.  Use that to find the new histogram index.  */
-	  gcov_type scaled_min = RDIV (saved_profile_info->histogram[h_ix].min_value
-				       * saved_scale, REG_BR_PROB_BASE);
+	  gcov_type scaled_min
+	      = apply_probability (saved_profile_info->histogram[h_ix].min_value,
+				   saved_scale);
 	  /* The new index may be shared with another scaled histogram entry,
 	     so we need to account for a non-zero histogram entry at new_ix.  */
 	  unsigned new_ix = gcov_histo_index (scaled_min);
@@ -1385,8 +1386,8 @@ merge_profile_summaries (struct lto_file_decl_data **file_data_vec)
 	     here and place the scaled cumulative counter value in the bucket
 	     corresponding to the scaled minimum counter value.  */
 	  lto_gcov_summary.histogram[new_ix].cum_value
-	      += RDIV (saved_profile_info->histogram[h_ix].cum_value
-		       * saved_scale, REG_BR_PROB_BASE);
+	      += apply_probability (saved_profile_info->histogram[h_ix].cum_value,
+				    saved_scale);
 	  lto_gcov_summary.histogram[new_ix].num_counters
 	      += saved_profile_info->histogram[h_ix].num_counters;
 	}
@@ -1418,8 +1419,8 @@ merge_profile_summaries (struct lto_file_decl_data **file_data_vec)
       if (scale == REG_BR_PROB_BASE)
 	continue;
       for (edge = node->callees; edge; edge = edge->next_callee)
-	edge->count = RDIV (edge->count * scale, REG_BR_PROB_BASE);
-      node->count = RDIV (node->count * scale, REG_BR_PROB_BASE);
+	edge->count = apply_probability (edge->count, scale);
+      node->count = apply_probability (node->count, scale);
     }
 }