author		Alex Coplan <alex.coplan@arm.com>	2024-01-29 13:28:04 +0000
committer	Alex Coplan <alex.coplan@arm.com>	2024-01-29 13:29:54 +0000
commit		d41a1873f334cf29b9a595bb03c27bff2be17319 (patch)
tree		3bed35841550d7fb932c0631c15099b96479ac77 /gcc/testsuite/gcc.c-torture
parent		291f75fa1bc6a23c6184bb99c726074b13f2f18e (diff)
aarch64: Ensure iterator validity when updating debug uses [PR113616]
The fix for PR113089 introduced range-based for loops over the
debug_insn_uses of an RTL-SSA set_info, but in the case that we reset a
debug insn, the use would get removed from the use list, and thus we
would end up using an invalidated iterator in the next iteration of the
loop. In practice this means we end up terminating the loop prematurely,
and hence ICE as in PR113089 since there are debug uses that we failed
to fix up.

This patch fixes that by introducing a general mechanism to avoid this
sort of problem. We introduce a safe_iterator to iterator-utils.h which
wraps an iterator and also holds the end iterator value. It then
pre-computes the next iterator value at all iterations, so it doesn't
matter if the original iterator got invalidated during the loop body:
we can still move safely to the next iteration. We introduce an
iterate_safely helper which effectively adapts a container such as
iterator_range into a container of safe_iterators over the original
iterator type.

We then use iterate_safely around all loops over debug_insn_uses () in
the aarch64 ldp/stp pass to fix PR113616.

While doing this, I remembered that cleanup_tombstones () had the same
problem. I previously worked around this locally by manually maintaining
the next nondebug insn, so this patch also refactors that loop to use
the new iterate_safely helper. While doing that, I noticed that a couple
of cases in cleanup_tombstones () could be converted from using
dyn_cast<set_info *> to as_a<set_info *>, which should be safe because
there are no clobbers of mem in RTL-SSA, so all defs of memory should be
set_infos.

gcc/ChangeLog:

	PR target/113616
	* config/aarch64/aarch64-ldp-fusion.cc
	(fixup_debug_uses_trailing_add): Use iterate_safely when
	iterating over debug uses.
	(fixup_debug_uses): Likewise.
	(ldp_bb_info::cleanup_tombstones): Use iterate_safely to
	iterate over nondebug insns instead of manually maintaining
	the next insn.
	* iterator-utils.h (class safe_iterator): New.
	(iterate_safely): New.
gcc/testsuite/ChangeLog:

	PR target/113616
	* gcc.c-torture/compile/pr113616.c: New test.
Diffstat (limited to 'gcc/testsuite/gcc.c-torture')
-rw-r--r--	gcc/testsuite/gcc.c-torture/compile/pr113616.c	19
1 file changed, 19 insertions, 0 deletions
diff --git a/gcc/testsuite/gcc.c-torture/compile/pr113616.c b/gcc/testsuite/gcc.c-torture/compile/pr113616.c
new file mode 100644
index 0000000..04c38ea
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/compile/pr113616.c
@@ -0,0 +1,19 @@
+// { dg-do compile }
+// { dg-options "-g" }
+struct A { struct A *a; } foo ();
+struct B { long b; };
+struct C { struct B c; struct A d; } *e;
+
+void
+bar (void)
+{
+ int f;
+ struct C *g;
+ struct A *h;
+ for (g = 0, g = e ? (void *) e - (char) (__SIZE_TYPE__) &g->d : 0, h = g ? (&g->d)->a : 0; g;
+ g = 0, g = h ? (void *) h - (char) (__SIZE_TYPE__) &g->d : 0, h = h ? h->a : 0)
+ {
+ f = (int) (__SIZE_TYPE__) g;
+ foo (((struct B *) g)->b);
+ }
+}