author    Aldy Hernandez <aldyh@redhat.com>  2021-09-09 20:30:28 +0200
committer Aldy Hernandez <aldyh@redhat.com>  2021-09-10 18:41:51 +0200
commit    01b5038718056b024b370b74a874fbd92c5bbab3 (patch)
tree      327cf6de3914007cedd4483b4b248c25a448fc79  /gcc/tree-ssa-threadbackward.c
parent    fb88bf9931f17d137eb50c001e1c924aa1e34e83 (diff)
Disable threading through latches until after loop optimizations.
The motivation for this patch was enabling the use of global ranges in the
path solver, but this destroyed certain loop properties, which made
subsequent loop optimizations fail.  Consequently, this patch's main goal
is to disable jump threading involving the latch until after loop
optimizations have run.

As can be seen in the test adjustments, we mostly shift the threading from
the early threaders (ethread, thread[12]) to the late threaders
(thread[34]).  I have nuked some of the early notes in the testcases that
came as part of the jump threader rewrite.  They're mostly noise now.

Note that we could probably relax some other restrictions in
profitable_path_p when loop optimizations have completed, but it would
require more testing, and I'm hesitant to touch more things than needed
at this point.  I have added a reminder to the function to keep this in
mind.

Finally, perhaps as a follow-up, we should apply the same restrictions to
the forward threader.  At some point I'd like to combine the cost models.

Tested on x86-64 Linux.

p.s. There is a thorough discussion of the limitations of jump threading
involving loops here:
https://gcc.gnu.org/pipermail/gcc/2021-September/237247.html

gcc/ChangeLog:

	* tree-pass.h (PROP_loop_opts_done): New.
	* gimple-range-path.cc (path_range_query::internal_range_of_expr):
	Intersect with global range.
	* tree-ssa-loop.c (tree_ssa_loop_done): Set PROP_loop_opts_done.
	* tree-ssa-threadbackward.c
	(back_threader_profitability::profitable_path_p): Disable
	threading through latches until after loop optimizations have run.

gcc/testsuite/ChangeLog:

	* gcc.dg/tree-ssa/ssa-dom-thread-2b.c: Adjust for disabling of
	threading through latches.
	* gcc.dg/tree-ssa/ssa-dom-thread-6.c: Same.
	* gcc.dg/tree-ssa/ssa-dom-thread-7.c: Same.

Co-authored-by: Michael Matz <matz@suse.de>
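To make the affected loop shape concrete, here is a minimal, hypothetical C
example (not taken from the patch or its testsuite): the loop below has an
empty latch, and threading the loop-invariant "flag" test through the back
edge would copy statements into that latch.  This is exactly the kind of
candidate the patch now defers until loop optimizations are done.

	/* Hypothetical illustration only.  The loop's latch is empty;
	   jump threading the invariant "flag" test across the back edge
	   would duplicate statements into the latch, altering the simple
	   loop form that later loop passes expect.  */
	int
	sum_or_diff (const int *a, int n, int flag)
	{
	  int sum = 0;
	  for (int i = 0; i < n; i++)
	    {
	      if (flag)
		sum += a[i];
	      else
		sum -= a[i];
	    }
	  return sum;
	}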
Diffstat (limited to 'gcc/tree-ssa-threadbackward.c')
-rw-r--r--	gcc/tree-ssa-threadbackward.c | 28
1 file changed, 26 insertions, 2 deletions
diff --git a/gcc/tree-ssa-threadbackward.c b/gcc/tree-ssa-threadbackward.c
index 449232c..e729923 100644
--- a/gcc/tree-ssa-threadbackward.c
+++ b/gcc/tree-ssa-threadbackward.c
@@ -43,6 +43,7 @@ along with GCC; see the file COPYING3. If not see
#include "ssa.h"
#include "tree-cfgcleanup.h"
#include "tree-pretty-print.h"
+#include "cfghooks.h"
// Path registry for the backwards threader. After all paths have been
// registered with register_path(), thread_through_all_blocks() is called
@@ -564,7 +565,10 @@ back_threader_registry::thread_through_all_blocks (bool may_peel_loop_headers)
TAKEN_EDGE, otherwise it is NULL.
CREATES_IRREDUCIBLE_LOOP, if non-null is set to TRUE if threading this path
- would create an irreducible loop. */
+ would create an irreducible loop.
+
+ ?? It seems we should be able to loosen some of the restrictions in
+ this function after loop optimizations have run. */
bool
back_threader_profitability::profitable_path_p (const vec<basic_block> &m_path,
@@ -725,7 +729,11 @@ back_threader_profitability::profitable_path_p (const vec<basic_block> &m_path,
the last entry in the array when determining if we thread
through the loop latch. */
if (loop->latch == bb)
- threaded_through_latch = true;
+ {
+ threaded_through_latch = true;
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ fprintf (dump_file, " (latch)");
+ }
}
gimple *stmt = get_gimple_control_stmt (m_path[0]);
@@ -845,6 +853,22 @@ back_threader_profitability::profitable_path_p (const vec<basic_block> &m_path,
"a multiway branch.\n");
return false;
}
+
+ /* Threading through an empty latch would cause code to be added to
+ the latch. This could alter the loop form sufficiently to cause
+ loop optimizations to fail. Disable these threads until after
+ loop optimizations have run. */
+ if ((threaded_through_latch
+ || (taken_edge && taken_edge->dest == loop->latch))
+ && !(cfun->curr_properties & PROP_loop_opts_done)
+ && empty_block_p (loop->latch))
+ {
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ fprintf (dump_file,
+ " FAIL: FSM Thread through latch before loop opts would create non-empty latch\n");
+ return false;
+
+ }
return true;
}
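With this change, a loop like the hypothetical one sketched above should no
longer have its latch threaded before loop optimizations.  Assuming the
usual per-pass dump options for this GCC version (for example
-fdump-tree-thread2-details; the exact pass names are an assumption here),
the dump for a rejected candidate should show the new " (latch)" annotation
and the "FAIL: FSM Thread through latch before loop opts would create
non-empty latch" message added by this patch, while the late threaders
(after PROP_loop_opts_done is set) remain free to take the path.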