author    Richard Sandiford <richard.sandiford@arm.com>  2023-10-25 10:39:50 +0100
committer Richard Sandiford <richard.sandiford@arm.com>  2023-10-25 10:39:50 +0100
commit    60ef0d2cdc97b4325947941834f5d3590f0af062 (patch)
tree      74d7eefce6ec5dd3b9302e11b9f3b1b78fbe049c
parent    d5e0321c3f423e307f063c51a6e0795d9daa9309 (diff)
rtl-ssa: Fix ICE when deleting memory clobbers
Sometimes an optimisation can remove a clobber of scratch registers
or scratch memory.  We then need to update the DU chains to reflect
the removed clobber.

For registers this isn't a problem.  Clobbers of registers are just
momentary blips in the register's lifetime.  They act as a barrier for
moving uses later or defs earlier, but otherwise they have no effect on
the semantics of other instructions.  Removing a clobber is therefore a
cheap, local operation.

In contrast, clobbers of memory are modelled as full sets.  This is
because (a) a clobber of memory does not invalidate *all* memory and
(b) it's a common idiom to use (clobber (mem ...)) in stack barriers.
But removing a set and redirecting all uses to a different set is a
linear operation.  Doing it for potentially every optimisation could
lead to quadratic behaviour.

This patch therefore refrains from removing sets of memory that appear
to be redundant.  There's an opportunity to clean this up in linear time
at the end of the pass, but as things stand, nothing would benefit from
that.

This is also a very rare event.  Usually we should try to optimise the
insn before the scratch memory has been allocated.

gcc/
	* rtl-ssa/changes.cc (function_info::finalize_new_accesses): If
	a change describes a set of memory, ensure that that set is kept,
	regardless of the insn pattern.
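The cost argument above can be sketched in a few lines. The structures below are a deliberately simplified model of DU chains, not GCC's real rtl-ssa data structures: removing a def of memory means redirecting every use of that def to the previous def, which is linear in the number of uses; doing that for potentially every optimisation is what the patch avoids.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical, simplified DU-chain model (illustrative only, not
// GCC's actual rtl-ssa types): each memory def records the previous
// def and the insns that use its value.
struct def_node {
    def_node *prev = nullptr;   // previous definition of memory
    std::vector<int> uses;      // ids of insns that read this def
};

// Deleting a def requires visiting all of its uses and redirecting
// each one to the surviving def -- O(number of uses) per removal.
std::size_t remove_def(def_node &dead)
{
    def_node *live = dead.prev;
    std::size_t work = 0;
    for (int use : dead.uses) {
        live->uses.push_back(use);  // redirect each use individually
        ++work;
    }
    dead.uses.clear();
    return work;  // the linear cost the patch chooses not to pay
}
```

Because each removal touches every downstream use, performing it once per optimisation could sum to quadratic work over a pass, whereas keeping the apparently redundant set is O(1).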
-rw-r--r--	gcc/rtl-ssa/changes.cc	14
 1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/gcc/rtl-ssa/changes.cc b/gcc/rtl-ssa/changes.cc
index c73c23c..5800f9d 100644
--- a/gcc/rtl-ssa/changes.cc
+++ b/gcc/rtl-ssa/changes.cc
@@ -429,8 +429,18 @@ function_info::finalize_new_accesses (insn_change &change, insn_info *pos)
// Also keep any explicitly-recorded call clobbers, which are deliberately
// excluded from the vec_rtx_properties. Calls shouldn't move, so we can
// keep the definitions in their current position.
+ //
+ // If the change describes a set of memory, but the pattern doesn't
+ // reference memory, keep the set anyway. This can happen if the
+ // old pattern was a parallel that contained a memory clobber, and if
+ // the new pattern was recognized without that clobber. Keeping the
+ // set avoids a linear-complexity update to the set's users.
+ //
+ // ??? We could queue an update so that these bogus clobbers are
+ // removed later.
for (def_info *def : change.new_defs)
- if (def->m_has_been_superceded && def->is_call_clobber ())
+ if (def->m_has_been_superceded
+ && (def->is_call_clobber () || def->is_mem ()))
{
def->m_has_been_superceded = false;
def->set_insn (insn);
@@ -535,7 +545,7 @@ function_info::finalize_new_accesses (insn_change &change, insn_info *pos)
}
}
- // Install the new list of definitions in CHANGE.
+ // Install the new list of uses in CHANGE.
sort_accesses (m_temp_uses);
change.new_uses = use_array (temp_access_array (m_temp_uses));
m_temp_uses.truncate (0);
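The new comment in the hunk above describes an old parallel pattern that contained a memory clobber being re-recognized without it. A hypothetical RTL shape for that situation might look like this (an illustrative sketch, not taken from the patch):

```
;; Old pattern: a parallel whose side effects include a memory clobber.
(parallel [(set (reg:SI r0) (reg:SI r1))
           (clobber (mem:BLK (scratch:SI)))])

;; New pattern: recognized as just the set, without the clobber.
;; The rtl-ssa set of memory from the old pattern is kept anyway.
(set (reg:SI r0) (reg:SI r1))
```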