path: root/gdb/breakpoint.c
author     Pedro Alves <palves@redhat.com>   2014-03-20 13:26:32 +0000
committer  Pedro Alves <palves@redhat.com>   2014-03-20 13:42:23 +0000
commit     2adfaa28b5ba2fb78ba5113977082c4d04752bd6 (patch)
tree       f590227ff8fa056cfbe079c76334a4cbbe68ec05 /gdb/breakpoint.c
parent     31e77af205cf6564c2bf4c18400b4ca16bdf92cd (diff)
Fix for even more missed events; eliminate thread-hop code.
Even with deferred_step_ptid out of the way, GDB can still lose
watchpoints.  If a watchpoint triggers and the PC points to an address
where a thread-specific breakpoint for another thread is set, the
thread-hop code triggers, and we lose the watchpoint:

      if (ecs->event_thread->suspend.stop_signal == GDB_SIGNAL_TRAP)
        {
          int thread_hop_needed = 0;
          struct address_space *aspace =
            get_regcache_aspace (get_thread_regcache (ecs->ptid));

          /* Check if a regular breakpoint has been hit before checking
             for a potential single step breakpoint.  Otherwise, GDB will
             not see this breakpoint hit when stepping onto breakpoints.  */
          if (regular_breakpoint_inserted_here_p (aspace, stop_pc))
            {
              if (!breakpoint_thread_match (aspace, stop_pc, ecs->ptid))
                thread_hop_needed = 1;
                ^^^^^^^^^^^^^^^^^^^^^
            }

And on software single-step targets, even without a thread-specific
breakpoint in the way, here in the thread-hop code:

  else if (singlestep_breakpoints_inserted_p)
    {
      ...
      if (!ptid_equal (singlestep_ptid, ecs->ptid)
          && in_thread_list (singlestep_ptid))
        {
          /* If the PC of the thread we were trying to single-step has
             changed, discard this event (which we were going to ignore
             anyway), and pretend we saw that thread trap.  This
             prevents us continuously moving the single-step breakpoint
             forward, one instruction at a time.  If the PC has changed,
             then the thread we were trying to single-step has trapped
             or been signalled, but the event has not been reported to
             GDB yet.

             There might be some cases where this loses signal
             information, if a signal has arrived at exactly the same
             time that the PC changed, but this is the best we can do
             with the information available.  Perhaps we should arrange
             to report all events for all threads when they stop, or to
             re-poll the remote looking for this particular thread
             (i.e. temporarily enable schedlock).  */

          CORE_ADDR new_singlestep_pc
            = regcache_read_pc (get_thread_regcache (singlestep_ptid));

          if (new_singlestep_pc != singlestep_pc)
            {
              enum gdb_signal stop_signal;

              if (debug_infrun)
                fprintf_unfiltered (gdb_stdlog, "infrun: unexpected thread,"
                                    " but expected thread advanced also\n");

              /* The current context still belongs to singlestep_ptid.
                 Don't swap here, since that's the context we want to
                 use.  Just fudge our state and continue.  */
              stop_signal = ecs->event_thread->suspend.stop_signal;
              ecs->event_thread->suspend.stop_signal = GDB_SIGNAL_0;
              ecs->ptid = singlestep_ptid;
              ecs->event_thread = find_thread_ptid (ecs->ptid);
              ecs->event_thread->suspend.stop_signal = stop_signal;
              stop_pc = new_singlestep_pc;
            }
          else
            {
              if (debug_infrun)
                fprintf_unfiltered (gdb_stdlog,
                                    "infrun: unexpected thread\n");

              thread_hop_needed = 1;
              stepping_past_singlestep_breakpoint = 1;
              saved_singlestep_ptid = singlestep_ptid;
            }
        }
    }

we either end up with thread_hop_needed, ignoring the watchpoint
SIGTRAP, or switch to the stepping thread, again ignoring that the
SIGTRAP could be for some other event.  The new test added by this
patch exercises both paths.

So the fix is similar to the deferred_step_ptid fix -- defer the
thread hop to _after_ the SIGTRAP had a chance of passing through the
regular bpstat handling.  If the wrong thread hits a breakpoint, we'll
just end up with BPSTAT_WHAT_SINGLE, and if nothing causes a stop,
keep_going starts a step-over.

Most of the stepping_past_singlestep_breakpoint mechanism is really
not necessary -- setting the thread to step over a breakpoint with
thread->trap_expected is sufficient to keep all other threads locked.
It's best to still keep the flag in some form though, because when we
get to keep_going, the software single-step breakpoint we need to step
over is already gone -- an optimization done by a follow-up patch will
check whether a step-over is still necessary by looking to see whether
the breakpoint is still there, and would find the thread no longer
needs a step-over, while we still want it.

Special care is still needed to handle the case of the PC of the
thread we were trying to single-step having changed, like in the old
code.  We can't just keep_going and re-step it, as in that case we can
over-step the thread (if it was already done with the step, but hasn't
reported it yet, we'd ask it to step even further).  That's now
handled in switch_back_to_stepped_thread.  As a bonus, we're now using
a technique that doesn't lose signals, unlike the old code -- we now
insert a breakpoint at PC, and resume, which either reports the
breakpoint immediately, or any pending signal.

Tested on x86_64 Fedora 17, against pristine mainline, and against a
branch that implements software single-step on x86.

gdb/
2014-03-20  Pedro Alves  <palves@redhat.com>

        * breakpoint.c (single_step_breakpoint_inserted_here_p): Make
        extern.
        * breakpoint.h (single_step_breakpoint_inserted_here_p): Declare.
        * infrun.c (saved_singlestep_ptid)
        (stepping_past_singlestep_breakpoint): Delete.
        (resume): Remove stepping_past_singlestep_breakpoint handling.
        (proceed): Store the prev_pc of the stepping thread too.
        (init_wait_for_inferior): Adjust.  Clear singlestep_ptid and
        singlestep_pc.
        (enum infwait_states): Delete infwait_thread_hop_state.
        (struct execution_control_state) <hit_singlestep_breakpoint>: New
        field.
        (handle_inferior_event): Adjust.
        (handle_signal_stop): Delete stepping_past_singlestep_breakpoint
        handling and the thread-hop code.  Before removing single-step
        breakpoints, check whether the thread hit a single-step
        breakpoint of another thread.  If it did, the trap is not a
        random signal.
        (switch_back_to_stepped_thread): If the event thread hit a
        single-step breakpoint, unblock it before switching to the
        stepping thread.  Handle the case of the stepped thread having
        advanced already.
        (keep_going): Handle the case of the current thread moving past
        a single-step breakpoint.

gdb/testsuite/
2014-03-20  Pedro Alves  <palves@redhat.com>

        * gdb.threads/step-over-trips-on-watchpoint.c: New file.
        * gdb.threads/step-over-trips-on-watchpoint.exp: New file.
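To make the handle_signal_stop entry above concrete: just before the
single-step breakpoints are removed, we note whether this thread
stopped at a single-step breakpoint that was really set for a
different thread, so the trap is not misread as a random signal.  A
minimal sketch of that check, assembled only from names that appear in
this commit (not the literal infrun.c code), could look like:

      /* Sketch: before removing single-step breakpoints in
         handle_signal_stop, record whether the event thread tripped on
         another thread's software single-step breakpoint.  */
      if (ecs->event_thread->suspend.stop_signal == GDB_SIGNAL_TRAP
          && singlestep_breakpoints_inserted_p
          && !ptid_equal (ecs->ptid, singlestep_ptid)
          && single_step_breakpoint_inserted_here_p (aspace, stop_pc))
        ecs->hit_singlestep_breakpoint = 1;

switch_back_to_stepped_thread then gets the event thread past that
(now removed) breakpoint before returning control to the stepping
thread, and, if the stepped thread's PC has already advanced, inserts
a breakpoint at that PC and resumes, so either the breakpoint or any
pending signal is reported instead of being lost.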
Diffstat (limited to 'gdb/breakpoint.c')
-rw-r--r--   gdb/breakpoint.c   5
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/gdb/breakpoint.c b/gdb/breakpoint.c
index 03b21cb..19af9df 100644
--- a/gdb/breakpoint.c
+++ b/gdb/breakpoint.c
@@ -228,9 +228,6 @@ static void tcatch_command (char *arg, int from_tty);
 
 static void detach_single_step_breakpoints (void);
 
-static int single_step_breakpoint_inserted_here_p (struct address_space *,
-                                                   CORE_ADDR pc);
-
 static void free_bp_location (struct bp_location *loc);
 static void incref_bp_location (struct bp_location *loc);
 static void decref_bp_location (struct bp_location **loc);
@@ -15148,7 +15145,7 @@
 
 /* Check whether a software single-step breakpoint is inserted at
    PC.  */
-static int
+int
 single_step_breakpoint_inserted_here_p (struct address_space *aspace,
                                         CORE_ADDR pc)
 {
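The hunk above only drops the static qualifier and the file-local
forward declaration; per the ChangeLog, the matching declaration goes
into breakpoint.h so that infrun.c can use the predicate.  A sketch of
that declaration, inferred from the definition shown here (exact
placement and comment in breakpoint.h are assumptions):

    /* Check whether a software single-step breakpoint is inserted at
       PC.  */
    extern int single_step_breakpoint_inserted_here_p
      (struct address_space *aspace, CORE_ADDR pc);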