author     Pedro Alves <palves@redhat.com>    2015-01-07 12:48:32 +0000
committer  Pedro Alves <palves@redhat.com>    2015-01-09 14:42:03 +0000
commit     9c02b52532ac7864e7e19c7df1fb2e63625f3131 (patch)
tree       43f7a50d10cf355ac443a2ee9eb71a30ff618bda /gdb/linux-nat.h
parent     8af756ef818acb875865a21131a30e52cbcf15ce (diff)
linux-nat.c: better starvation avoidance, handle non-stop mode too
Running the testsuite with a series that reimplements user-visible
all-stop behavior on top of a target running in non-stop mode revealed
problems related to event starvation avoidance.
For example, I see
gdb.threads/signal-while-stepping-over-bp-other-thread.exp failing.
What happens is that GDB core never gets to see the signal event. It
ends up processing the events for the same threads over and over,
because Linux's waitpid(-1, ...) returns the first task in the task
list that has an event, starving threads at the tail of the task list.
So I wrote a non-stop mode test originally inspired by
signal-while-stepping-over-bp-other-thread.exp, to stress this
independently of all-stop on top of non-stop. Fixing it required the
changes described below. The test will be added in a following
commit.
1) linux-nat.c has code in place that picks an event LWP at random out
of all that have had events. This is because on the kernel side,
"waitpid(-1, ...)" just walks the task list linearly looking for the
first that had an event. But, this code is currently only used in
all-stop mode. So with a multi-threaded program whose threads trigger
debug events in parallel, GDB ends up starving some threads.
To make the event randomization work in non-stop mode too, the patch
makes us pull out all the already pending events on the kernel side,
with waitpid, before deciding which LWP to report to the core.
There's some code in linux_nat_wait_1 that takes care of leaving events
pending if they were for LWPs the caller is not interested in. The
patch moves that to linux_nat_filter_event, so that we only have one
place that leaves events pending. With that in place, conceptually,
the flow is simpler and more normalized:
#1 - walk the LWP list looking for an LWP with a pending event to report.
#2 - if no pending event, pull events out of the kernel, and store
them in the LWP structures as pending.
#3 - goto #1.
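For illustration only, here is a simplified sketch of that flow. The
struct and helper names below are made up, not the actual linux-nat.c
symbols, and real GDB additionally passes __WALL and handles errors:

#include <stddef.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Simplified stand-in for GDB's per-LWP state.  */
struct lwp
{
  pid_t pid;
  int has_pending;       /* Non-zero if a wait status is stored below.  */
  int pending_status;    /* The stored waitpid status.  */
  struct lwp *next;
};

/* Step #1: walk the LWP list looking for one with a pending event.  */
static struct lwp *
find_lwp_with_pending_event (struct lwp *list)
{
  struct lwp *lp;

  for (lp = list; lp != NULL; lp = lp->next)
    if (lp->has_pending)
      return lp;
  return NULL;
}

/* Record STATUS as a pending event on the LWP with id PID.  */
static void
store_pending_event (struct lwp *list, pid_t pid, int status)
{
  struct lwp *lp;

  for (lp = list; lp != NULL; lp = lp->next)
    if (lp->pid == pid)
      {
        lp->has_pending = 1;
        lp->pending_status = status;
        return;
      }
}

/* Step #2: block for one event, then drain whatever else the kernel
   already has queued, storing each status on its LWP as pending.  */
static void
pull_events_from_kernel (struct lwp *list)
{
  int status;
  pid_t pid = waitpid (-1, &status, 0);             /* Block for one event.  */

  while (pid > 0)
    {
      store_pending_event (list, pid, status);
      pid = waitpid (-1, &status, WNOHANG);         /* Drain, non-blocking.  */
    }
}

/* Steps #1..#3: keep pulling until some LWP has a pending event.  */
static struct lwp *
wait_for_lwp_event (struct lwp *list)
{
  struct lwp *lp;

  while ((lp = find_lwp_with_pending_event (list)) == NULL)
    pull_events_from_kernel (list);
  return lp;
}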
2) Then, currently the event randomization code only considers SIGTRAP
(or trap-like) events. That means that if, e.g., we have multiple
threads stepping in parallel that hit a breakpoint that needs stepping
over, and one gets a signal, the signal may end up never getting
processed, because GDB will always be giving priority to the SIGTRAPs.
The patch fixes this by making the randomization code consider all
kinds of pending events.
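For illustration only, and reusing the simplified struct lwp from the
sketch above, "pick one LWP at random among all with any pending
event" can look roughly like this (hypothetical names, not the real
count_events_callback/select_event_lwp_callback):

#include <stdlib.h>

/* Count LWPs with any kind of pending event (signals included), not
   just SIGTRAP-like ones.  */
static int
count_pending_events (struct lwp *list)
{
  struct lwp *lp;
  int count = 0;

  for (lp = list; lp != NULL; lp = lp->next)
    if (lp->has_pending)
      count++;
  return count;
}

/* Pick the event LWP uniformly at random among all that have a
   pending event, so threads late in the kernel's task list are not
   starved.  */
static struct lwp *
select_event_lwp_at_random (struct lwp *list)
{
  int num_events = count_pending_events (list);
  int k;
  struct lwp *lp;

  if (num_events == 0)
    return NULL;

  k = rand () % num_events;
  for (lp = list; lp != NULL; lp = lp->next)
    if (lp->has_pending && k-- == 0)
      return lp;
  return NULL;
}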
3) If multiple threads hit a breakpoint, we report one of those, and
"cancel" the others. Cancelling means decrementing the PC, and
discarding the event. If the next time the LWP is resumed the
breakpoint is still installed, the LWP should hit it again, and we'll
report the hit then. The problem I found is that this delays the
other threads too much: the kernel can end up scheduling the same
threads over and over while others make no progress. So the patch
switches away from cancelling the breakpoints, and instead remembers
that the LWP had stopped for a breakpoint. If on
resume the breakpoint is still installed, we report it. If it's no
longer installed, we discard the pending event then. This is actually
how GDBserver used to handle this before d50171e4 (Teach linux
gdbserver to step-over-breakpoints), but with the difference that back
then we'd delay adjusting the PC until resuming, which made it so that
"info threads" could wrongly see threads with unadjusted PCs.
gdb/
2015-01-09 Pedro Alves <palves@redhat.com>
* breakpoint.c (hardware_breakpoint_inserted_here_p): New
function.
* breakpoint.h (hardware_breakpoint_inserted_here_p): New
declaration.
* linux-nat.c (linux_nat_status_is_event): Move higher up in file.
(linux_resume_one_lwp): Store the thread's PC. Adjust to clear
stop_reason.
(check_stopped_by_watchpoint): New function.
(save_sigtrap): Reimplement.
(linux_nat_stopped_by_watchpoint): Adjust.
(linux_nat_lp_status_is_event): Delete.
(stop_wait_callback): Only call save_sigtrap after storing the
pending status.
(status_callback): If the thread had been stopped for a breakpoint
that has since been removed, discard the event and resume the LWP.
(count_events_callback, select_event_lwp_callback): Use
lwp_status_pending_p instead of linux_nat_lp_status_is_event.
(cancel_breakpoint): Rename to ...
(check_stopped_by_breakpoint): ... this. Record whether the LWP
stopped for a software breakpoint or hardware breakpoint.
(select_event_lwp): Only give preference to the stepping LWP in
all-stop mode. Adjust comments.
(stop_and_resume_callback): Remove references to new_pending_p.
(linux_nat_filter_event): Likewise. Leave exit events of the
leader thread pending here. Handle signal short-circuiting here.
Only call save_sigtrap after storing the pending waitstatus.
(linux_nat_wait_1): Remove 'retry' label. Remove references to
new_pending. Don't handle leaving events the caller is not
interested in pending here, nor handle signal short-circuiting
here. Also give equal priority to all LWPs that have had events
in non-stop mode. If reporting a software breakpoint event,
unadjust the LWP's PC.
* linux-nat.h (enum lwp_stop_reason): New.
(struct lwp_info) <stop_pc>: New field.
(struct lwp_info) <stopped_by_watchpoint>: Delete field.
(struct lwp_info) <stop_reason>: New field.
* x86-linux-nat.c (x86_linux_prepare_to_resume): Adjust.
Diffstat (limited to 'gdb/linux-nat.h')
-rw-r--r--   gdb/linux-nat.h   31
1 file changed, 28 insertions, 3 deletions
diff --git a/gdb/linux-nat.h b/gdb/linux-nat.h
index 8a44324..669450d 100644
--- a/gdb/linux-nat.h
+++ b/gdb/linux-nat.h
@@ -23,6 +23,24 @@
 struct arch_lwp_info;
 
+/* Reasons an LWP last stopped.  */
+
+enum lwp_stop_reason
+{
+  /* Either not stopped, or stopped for a reason that doesn't require
+     special tracking.  */
+  LWP_STOPPED_BY_NO_REASON,
+
+  /* Stopped by a software breakpoint.  */
+  LWP_STOPPED_BY_SW_BREAKPOINT,
+
+  /* Stopped by a hardware breakpoint.  */
+  LWP_STOPPED_BY_HW_BREAKPOINT,
+
+  /* Stopped by a watchpoint.  */
+  LWP_STOPPED_BY_WATCHPOINT
+};
+
 /* Structure describing an LWP.  This is public only for the purposes
    of ALL_LWPS; target-specific code should generally not access it
    directly.  */
@@ -63,12 +81,19 @@ struct lwp_info
   /* If non-zero, a pending wait status.  */
   int status;
 
+  /* When 'stopped' is set, this is where the lwp last stopped, with
+     decr_pc_after_break already accounted for.  If the LWP is
+     running, and stepping, this is the address at which the lwp was
+     resumed (that is, it's the previous stop PC).  If the LWP is
+     running and not stepping, this is 0.  */
+  CORE_ADDR stop_pc;
+
   /* Non-zero if we were stepping this LWP.  */
   int step;
 
-  /* STOPPED_BY_WATCHPOINT is non-zero if this LWP stopped with a data
-     watchpoint trap.  */
-  int stopped_by_watchpoint;
+  /* The reason the LWP last stopped, if we need to track it
+     (breakpoint, watchpoint, etc.)  */
+  enum lwp_stop_reason stop_reason;
 
   /* On architectures where it is possible to know the data address of
      a triggered watchpoint, STOPPED_DATA_ADDRESS_P is non-zero, and
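As a usage illustration only (this snippet is not part of the patch,
and the function name is made up; the patch instead adjusts
linux_nat_stopped_by_watchpoint), the new field is meant to be
consulted like this:

/* With stop_reason in place, "did this LWP stop for a watchpoint?"
   becomes a simple comparison instead of a dedicated flag.  */
static int
lwp_stopped_by_watchpoint_p (struct lwp_info *lp)
{
  return lp->stop_reason == LWP_STOPPED_BY_WATCHPOINT;
}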