author    Teresa Johnson <tejohnson@google.com>    2013-04-29 13:22:46 +0000
committer Teresa Johnson <tejohnson@gcc.gnu.org>  2013-04-29 13:22:46 +0000
commit    f41f80f90846d26a89ed5a6440bc283b745235ac
tree      9daf54908866e15052bfcfc011bc4753b4b363fb /gcc/lto-cgraph.c
parent    315bbd2e3c15dea3528259be2aee2876dec33843
This patch fixes PR bootstrap/57077. Certain new uses of apply_probability
are actually scaling the counts up, and the scale factor should not
be treated as a probability, since the value may exceed REG_BR_PROB_BASE.
One example (from the PR) is scaling counts up in LTO when merging
profiles. Another example, found while preparing the patch to use
the rounding divide in more places, is the scaling done when inlining
COMDAT functions. Add a new helper function, apply_scale, that does
the scaling without the probability range check. I audited the new
uses of apply_probability and changed the calls as appropriate.
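For reference, the new helper in basic-block.h is essentially the
following (a sketch, not the verbatim patch; RDIV is GCC's existing
rounding-divide macro and check_probability its existing range check):

/* Apply scale factor SCALE on frequency or count FREQ.  Use this
   interface when potentially scaling up, i.e. when SCALE may
   legitimately exceed REG_BR_PROB_BASE, so no range check.  */
static inline gcov_type
apply_scale (gcov_type freq, gcov_type scale)
{
  return RDIV (freq * scale, REG_BR_PROB_BASE);
}

/* Apply probability PROB on frequency or count FREQ.  PROB must be
   in [0, REG_BR_PROB_BASE], enforced by check_probability.  */
static inline gcov_type
apply_probability (gcov_type freq, int prob)
{
  check_probability (prob);
  return apply_scale (freq, prob);
}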
2013-04-29 Teresa Johnson <tejohnson@google.com>
PR bootstrap/57077
* basic-block.h (apply_scale): New function.
(apply_probability): Use apply_scale.
* gimple-streamer-in.c (input_bb): Ditto.
* lto-streamer-in.c (input_cfg): Ditto.
* lto-cgraph.c (merge_profile_summaries): Ditto.
* tree-optimize.c (execute_fixup_cfg): Ditto.
* tree-inline.c (copy_bb): Update comment to use
apply_scale.
(copy_edges_for_bb): Ditto.
(copy_cfg_body): Ditto.
From-SVN: r198416
Diffstat (limited to 'gcc/lto-cgraph.c')
-rw-r--r--  gcc/lto-cgraph.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/gcc/lto-cgraph.c b/gcc/lto-cgraph.c
index 69f5e3a..cead76b 100644
--- a/gcc/lto-cgraph.c
+++ b/gcc/lto-cgraph.c
@@ -1347,10 +1347,10 @@ merge_profile_summaries (struct lto_file_decl_data **file_data_vec)
 			  file_data->profile_info.runs);
       lto_gcov_summary.sum_max = MAX (lto_gcov_summary.sum_max,
-			  apply_probability (file_data->profile_info.sum_max, scale));
+			  apply_scale (file_data->profile_info.sum_max, scale));
       lto_gcov_summary.sum_all = MAX (lto_gcov_summary.sum_all,
-			  apply_probability (file_data->profile_info.sum_all, scale));
+			  apply_scale (file_data->profile_info.sum_all, scale));
       /* Save a pointer to the profile_info with the largest scaled
	 sum_all and the scale for use in merging the histogram.  */
@@ -1372,8 +1372,8 @@ merge_profile_summaries (struct lto_file_decl_data **file_data_vec)
	  /* Scale up the min value as we did the corresponding sum_all
	     above.  Use that to find the new histogram index.  */
	  gcov_type scaled_min
-	    = apply_probability (saved_profile_info->histogram[h_ix].min_value,
-				 saved_scale);
+	    = apply_scale (saved_profile_info->histogram[h_ix].min_value,
+			   saved_scale);
	  /* The new index may be shared with another scaled histogram entry,
	     so we need to account for a non-zero histogram entry at new_ix.  */
	  unsigned new_ix = gcov_histo_index (scaled_min);
@@ -1386,8 +1386,8 @@ merge_profile_summaries (struct lto_file_decl_data **file_data_vec)
	     here and place the scaled cumulative counter value in the bucket
	     corresponding to the scaled minimum counter value.  */
	  lto_gcov_summary.histogram[new_ix].cum_value
-	    += apply_probability (saved_profile_info->histogram[h_ix].cum_value,
-				  saved_scale);
+	    += apply_scale (saved_profile_info->histogram[h_ix].cum_value,
+			    saved_scale);
	  lto_gcov_summary.histogram[new_ix].num_counters
	    += saved_profile_info->histogram[h_ix].num_counters;
	}
@@ -1419,8 +1419,8 @@ merge_profile_summaries (struct lto_file_decl_data **file_data_vec)
       if (scale == REG_BR_PROB_BASE)
	 continue;
       for (edge = node->callees; edge; edge = edge->next_callee)
-	edge->count = apply_probability (edge->count, scale);
-      node->count = apply_probability (node->count, scale);
+	edge->count = apply_scale (edge->count, scale);
+      node->count = apply_scale (node->count, scale);
     }
 }
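To see why the range check had to go, consider merging profiles with
mismatched run counts, as in merge_profile_summaries above. The
standalone sketch below uses hypothetical run counts (3 runs merged
with 1) and a hypothetical starting count of 1000; REG_BR_PROB_BASE
and RDIV mirror GCC's definitions in basic-block.h:

#include <stdio.h>
#include <inttypes.h>

typedef int64_t gcov_type;

/* Mirror GCC's definitions: REG_BR_PROB_BASE is 10000 and RDIV is
   the rounding divide used by apply_scale.  */
#define REG_BR_PROB_BASE 10000
#define RDIV(X, Y) (((X) + (Y) / 2) / (Y))

static gcov_type
apply_scale (gcov_type freq, gcov_type scale)
{
  return RDIV (freq * scale, REG_BR_PROB_BASE);
}

int
main (void)
{
  /* Hypothetical merge: the combined summary has 3 runs, this file's
     profile has 1, so its counts must be scaled up by 3x.  */
  gcov_type summary_runs = 3, file_runs = 1;
  gcov_type scale = RDIV (summary_runs * REG_BR_PROB_BASE, file_runs);

  /* scale is 30000, which exceeds REG_BR_PROB_BASE, so treating it
     as a probability (via check_probability) would abort; apply_scale
     simply scales the count.  */
  printf ("scale = %" PRId64 "\n", scale);                       /* 30000 */
  printf ("scaled count = %" PRId64 "\n", apply_scale (1000, scale)); /* 3000 */
  return 0;
}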