author     Pedro Alves <palves@redhat.com>   2015-02-20 20:21:59 +0000
committer  Pedro Alves <palves@redhat.com>   2015-02-20 21:40:31 +0000
commit     2db9a4275ceada4aad3443dc157b96dd2e23afc0 (patch)
tree       59a27d7c2e9734db5bbc2237a75c7c5f6402a385 /gdb/linux-nat.c
parent     3b27ef472df3b4cdcdd54629281610d594c99c97 (diff)
GNU/Linux: Stop using libthread_db/td_ta_thr_iter
TL;DR - GDB can hang if something refreshes the thread list out of the
target while the target is running. GDB hangs inside td_ta_thr_iter.
The fix is to not use that libthread_db function anymore.
Long version:
Running the testsuite against my all-stop-on-top-of-non-stop series is
still exposing latent non-stop bugs.
I was originally seeing this with the multi-create.exp test, back when
we were still using libthread_db thread event breakpoints. The
all-stop-on-top-of-non-stop series forces a thread list refresh each
time GDB needs to start stepping over a breakpoint (to pause all
threads). That test hits the thread event breakpoint often, resulting
in a bunch of step-over operations, thus a bunch of thread list
refreshes while some threads in the target are running.
The commit adds a real non-stop-mode test based on multi-create.exp
that triggers the issue: it does an explicit "info threads" when a
breakpoint is hit.  IOW, it does the same things the as-ns
(all-stop-on-top-of-non-stop) series was doing when testing
multi-create.exp.
The bug is a race, so it unfortunately takes several runs for the test
to trigger it. In fact, even when setting the test running in a loop,
it sometimes takes several minutes for it to trigger for me.
The race is related to libthread_db's td_ta_thr_iter. This is
libthread_db's entry point for walking the thread list of the
inferior.
Sometimes, when GDB refreshes the thread list from the target,
libthread_db's td_ta_thr_iter can somehow see glibc's thread list as a
cycle, and get stuck in an infinite loop.
The issue is that when a thread exits, its thread control structure in
glibc is moved from a "used" list to a "cache" list. These lists are
simply circular linked lists where the "next/prev" pointers are
embedded in the thread control structure itself. The "next" pointer
of the last element of the list points back to the list's sentinel
"head". There's only one set of "next/prev" pointers for both lists;
thus a thread can only be in one of the lists at a time, not in both
simultaneously.
So, simplifying, when thread C exits the following happens (A-C are
threads; stack_used and stack_cache are the lists' heads).
Before:
stack_used -> A -> B -> C -> (&stack_used)
stack_cache -> (&stack_cache)
After:
stack_used -> A -> B -> (&stack_used)
stack_cache -> C -> (&stack_cache)
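The list manipulation described above can be sketched in C.  This is a
hypothetical miniature of glibc's layout, not its actual code: the
next/prev pointers are embedded in the thread descriptor itself, so
moving a descriptor to the cache list necessarily unsplices it from the
used list.

```c
#include <stddef.h>

/* Embedded circular-list linkage, one set per thread descriptor,
   so a descriptor can be on exactly one list at a time.  */
struct list_node { struct list_node *next, *prev; };

struct pthread_desc {
  struct list_node link;   /* embedded next/prev pointers */
  int tid;
};

static void list_init (struct list_node *head)
{ head->next = head->prev = head; }

static void list_del (struct list_node *n)
{ n->prev->next = n->next; n->next->prev = n->prev; }

static void list_add (struct list_node *head, struct list_node *n)
{
  n->next = head->next; n->prev = head;
  head->next->prev = n; head->next = n;
}

/* What happens when a thread exits: its single set of link pointers
   is unspliced from stack_used and respliced into stack_cache.  */
static void thread_exited (struct pthread_desc *t,
                           struct list_node *stack_cache)
{
  list_del (&t->link);
  list_add (stack_cache, &t->link);
}
```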
td_ta_thr_iter starts by iterating at the list's head's next, and
iterates until it sees a thread whose next pointer points to the
list's head again. Thus in the before case above, C's next points to
stack_used, indicating end of list. In the same case, the stack_cache
list is empty.
For each thread being iterated, td_ta_thr_iter reads the whole thread
object out of the inferior. This includes the thread's "next"
pointer.
In the scenario above, it may happen that td_ta_thr_iter is iterating
thread B and has already read B's thread structure just before thread
C exits and its control structure moves to the cached list.
Now, recall that td_ta_thr_iter is running in the context of GDB, and
there's no locking between GDB and the inferior.  From its local copy
of B, td_ta_thr_iter believes that the next thread after B is thread
C, so it happily continues iterating to C, a thread that has already
exited, and is now in the stack cache list.
After iterating C, td_ta_thr_iter finds the stack_cache head; since
that is not stack_used, td_ta_thr_iter assumes it's just another
thread.  After this, unless the reverse race triggers, GDB gets stuck
in td_ta_thr_iter forever walking the stack_cache list, as no thread
in that list has a next pointer that points back to stack_used (the
terminating condition).
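The race can be demonstrated with a small simulation.  The walker below
mimics td_ta_thr_iter's termination test (stop only when a node's next
pointer equals the stack_used head), but gives up after a step limit
instead of spinning forever; names and the limit are illustrative, not
libthread_db's real code.

```c
/* Singly-linked view of the circular thread lists, as the iterator
   sees them when reading next pointers out of the inferior.  */
struct node { struct node *next; int tid; };

/* Walk from stack_used's head the way td_ta_thr_iter does; return the
   number of threads visited, or -1 if the terminator (a next pointer
   equal to the head) was never found within 'limit' steps -- the
   state in which GDB would hang.  */
static int walk (struct node *stack_used_head, int limit)
{
  int seen = 0;
  struct node *n = stack_used_head->next;
  while (n != stack_used_head)
    {
      if (seen++ >= limit)
        return -1;          /* stuck: would loop forever in GDB */
      n = n->next;          /* follow the (possibly stale) pointer */
    }
  return seen;
}
```

In the healthy case the walk terminates at stack_used; once a stale
next pointer leads into the stack_cache list, nothing there points back
at stack_used, so the walk cycles indefinitely.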
Before fully understanding the issue, I tried adding cycle detection
to GDB's td_ta_thr_iter callback. However, td_ta_thr_iter skips
calling the callback in some cases, which means that it's possible
that the callback isn't called at all, making it impossible for GDB to
break the loop. I did manage to get GDB stuck in that state more than
once.
Fortunately, we can avoid the issue altogether. We don't really need
td_ta_thr_iter for live debugging nowadays, given PTRACE_EVENT_CLONE.
We already know how to map an LWP id to a thread id without iterating
(thread_from_lwp), so use that more.
gdb/ChangeLog:
2015-02-20 Pedro Alves <palves@redhat.com>
* linux-nat.c (linux_handle_extended_wait): Call
thread_db_notice_clone whenever a new clone LWP is detected.
(linux_stop_and_wait_all_lwps, linux_unstop_all_lwps): New
functions.
* linux-nat.h (thread_db_attach_lwp): Delete declaration.
(thread_db_notice_clone, linux_stop_and_wait_all_lwps)
(linux_unstop_all_lwps): Declare.
* linux-thread-db.c (struct thread_get_info_inout): Delete.
(thread_get_info_callback): Delete.
(thread_from_lwp): Use td_thr_get_info and record_thread.
(thread_db_attach_lwp): Delete.
(thread_db_notice_clone): New function.
(try_thread_db_load_1): If /proc is mounted and shows the
process's task list, walk over all LWPs and call thread_from_lwp
instead of relying on td_ta_thr_iter.
(attach_thread): Don't call check_thread_signals here. Split the
tail part of the function (which adds the thread to the core GDB
thread list) to ...
(record_thread): ... this function. Call check_thread_signals
here.
(thread_db_wait): Don't call thread_db_find_new_threads_1. Always
call thread_from_lwp.
(thread_db_update_thread_list): Rename to ...
(thread_db_update_thread_list_org): ... this.
(thread_db_update_thread_list): New function.
(thread_db_find_thread_from_tid): Delete.
(thread_db_get_ada_task_ptid): Simplify.
* nat/linux-procfs.c: Include <sys/stat.h>.
(linux_proc_task_list_dir_exists): New function.
* nat/linux-procfs.h (linux_proc_task_list_dir_exists): Declare.
gdb/gdbserver/ChangeLog:
2015-02-20 Pedro Alves <palves@redhat.com>
* thread-db.c: Include "nat/linux-procfs.h".
(thread_db_init): Skip listing new threads if the kernel supports
PTRACE_EVENT_CLONE and /proc/PID/task/ is accessible.
gdb/testsuite/ChangeLog:
2015-02-20 Pedro Alves <palves@redhat.com>
* gdb.threads/multi-create-ns-info-thr.exp: New file.
Diffstat (limited to 'gdb/linux-nat.c')
-rw-r--r--  gdb/linux-nat.c | 65
1 file changed, 39 insertions(+), 26 deletions(-)
diff --git a/gdb/linux-nat.c b/gdb/linux-nat.c
index cfde1fd..2e1133d 100644
--- a/gdb/linux-nat.c
+++ b/gdb/linux-nat.c
@@ -264,6 +264,7 @@ async_file_mark (void)
 
 static int kill_lwp (int lwpid, int signo);
 static int stop_callback (struct lwp_info *lp, void *data);
+static int resume_stopped_resumed_lwps (struct lwp_info *lp, void *data);
 
 static void block_child_signals (sigset_t *prev_mask);
 static void restore_child_signals_mask (sigset_t *prev_mask);
@@ -2003,34 +2004,24 @@ linux_handle_extended_wait (struct lwp_info *lp, int status,
 	  status = 0;
 	}
 
-      if (non_stop)
+      /* If the thread_db layer is active, let it record the user
+	 level thread id and status, and add the thread to GDB's
+	 list.  */
+      if (!thread_db_notice_clone (lp->ptid, new_lp->ptid))
 	{
-	  /* Add the new thread to GDB's lists as soon as possible
-	     so that:
-
-	     1) the frontend doesn't have to wait for a stop to
-	     display them, and,
-
-	     2) we tag it with the correct running state.  */
-
-	  /* If the thread_db layer is active, let it know about
-	     this new thread, and add it to GDB's list.  */
-	  if (!thread_db_attach_lwp (new_lp->ptid))
-	    {
-	      /* We're not using thread_db.  Add it to GDB's
-		 list.  */
-	      target_post_attach (ptid_get_lwp (new_lp->ptid));
-	      add_thread (new_lp->ptid);
-	    }
+	  /* The process is not using thread_db.  Add the LWP to
+	     GDB's list.  */
+	  target_post_attach (ptid_get_lwp (new_lp->ptid));
+	  add_thread (new_lp->ptid);
+	}
 
-	  if (!stopping)
-	    {
-	      set_running (new_lp->ptid, 1);
-	      set_executing (new_lp->ptid, 1);
-	      /* thread_db_attach_lwp -> lin_lwp_attach_lwp forced
-		 resume_stop.  */
-	      new_lp->last_resume_kind = resume_continue;
-	    }
+      if (!stopping)
+	{
+	  set_running (new_lp->ptid, 1);
+	  set_executing (new_lp->ptid, 1);
+	  /* thread_db_attach_lwp -> lin_lwp_attach_lwp forced
+	     resume_stop.  */
+	  new_lp->last_resume_kind = resume_continue;
 	}
 
       if (status != 0)
@@ -2285,6 +2276,28 @@ linux_stop_lwp (struct lwp_info *lwp)
   stop_callback (lwp, NULL);
 }
 
+/* See linux-nat.h.  */
+
+void
+linux_stop_and_wait_all_lwps (void)
+{
+  /* Stop all LWP's ...  */
+  iterate_over_lwps (minus_one_ptid, stop_callback, NULL);
+
+  /* ... and wait until all of them have reported back that
+     they're no longer running.  */
+  iterate_over_lwps (minus_one_ptid, stop_wait_callback, NULL);
+}
+
+/* See linux-nat.h.  */
+
+void
+linux_unstop_all_lwps (void)
+{
+  iterate_over_lwps (minus_one_ptid,
+		     resume_stopped_resumed_lwps, &minus_one_ptid);
+}
+
 /* Return non-zero if LWP PID has a pending SIGINT.  */
 
 static int