When running test-case gdb.base/vfork-follow-parent.exp on powerpc64 (likewise
on s390x), I run into:
...
(gdb) PASS: gdb.base/vfork-follow-parent.exp: \
exec_file=vfork-follow-parent-exit: target-non-stop=on: non-stop=off: \
resolution_method=schedule-multiple: print unblock_parent = 1
continue^M
Continuing.^M
Reading symbols from vfork-follow-parent-exit...^M
^M
^M
Fatal signal: Segmentation fault^M
----- Backtrace -----^M
0x1027d3e7 gdb_internal_backtrace_1^M
src/gdb/bt-utils.c:122^M
0x1027d54f _Z22gdb_internal_backtracev^M
src/gdb/bt-utils.c:168^M
0x1057643f handle_fatal_signal^M
src/gdb/event-top.c:889^M
0x10576677 handle_sigsegv^M
src/gdb/event-top.c:962^M
0x3fffa7610477 ???^M
0x103f2144 for_each_block^M
src/gdb/dcache.c:199^M
0x103f235b _Z17dcache_invalidateP13dcache_struct^M
src/gdb/dcache.c:251^M
0x10bde8c7 _Z24target_dcache_invalidatev^M
src/gdb/target-dcache.c:50^M
...
or similar.
The root cause for the segmentation fault is that linux_is_uclinux gives an
incorrect result: it should always return false, given that we're running on a
regular Linux system, but instead it returns first true, then false.
In more detail, the segmentation fault happens as follows:
- a program space with an address space is created
- a second program space is about to be created. maybe_new_address_space
is called, and because linux_is_uclinux returns true, maybe_new_address_space
returns false, and no new address space is created
- a second program space with the same address space is created
- a program space is deleted. Because linux_is_uclinux now returns false,
gdbarch_has_shared_address_space (current_inferior ()->arch ()) returns
false, and the address space is deleted
- when gdb uses the address space of the remaining program space, we run into
the segfault, because the address space is deleted.
Hardcoding linux_is_uclinux to false makes the test-case pass.
We leave addressing the root cause for the following commit in this series.
For now, prevent the segmentation fault by making the address space a refcounted
object.
This was already suggested here [1]:
...
A better solution might be to have the address spaces be reference counted
...
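To illustrate the idea, here is a rough sketch (not the actual patch; the
exact type and member names are assumptions) of what refcounting the
address space looks like, using GDB's existing refcounted_object and
gdb::ref_ptr utilities:

  /* Sketch: let several program spaces share one address space safely.  */
  struct address_space : public refcounted_object
  {
    /* ... existing members ...  */
  };

  /* Hold the address space through a counting smart pointer, so it is
     only destroyed when the last program space drops its reference.  */
  using address_space_ref_ptr
    = gdb::ref_ptr<address_space, refcounted_object_ref_policy>;

  struct program_space
  {
    /* Deleting a program space now merely drops one reference.  */
    address_space_ref_ptr aspace;
  };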
Tested on top of trunk on x86_64-linux and ppc64le-linux.
Tested on top of gdb-14-branch on ppc64-linux.
Co-Authored-By: Simon Marchi <simon.marchi@polymtl.ca>
PR gdb/30547
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30547
[1] https://sourceware.org/pipermail/gdb-patches/2023-October/202928.html
|
While looking at the regcache code, I noticed that the address space
(passed to regcache when constructing it, and available through
regcache::aspace) wasn't relevant for the regcache itself. Callers of
regcache::aspace use that method because it appears to be a convenient
way of getting the address space for a thread, if you already have the
regcache. But there is always another way to get the address space, as
the callers pretty much always know which thread they are dealing with.
The regcache code itself doesn't use the address space.
This patch removes anything related to address_space from the regcache
code, and updates callers to get it from the thread in context. This
removes a bit of unnecessary complexity from the regcache code.
The current get_thread_arch_regcache function gets an address_space for
the given thread using the target_thread_address_space function (which
calls the target_ops::thread_address_space method). This suggests that
there might have been the intention of supporting per-thread address
spaces. But digging through the history, I did not find any such case.
Maybe this method was just added because we needed a way to get an
address space from a ptid (because constructing a regcache required an
address space), and this seemed like the right way to do it, I don't
know.
The only implementations of thread_address_space are
process_stratum_target::thread_address_space and
linux_nat_target::thread_address_space, which essentially just return
the inferior's address space. And thread_address_space is only used in
the current get_thread_arch_regcache, which gets removed. So, I think
that the thread_address_space target method can be removed, and we can
assume that it's fine to use the inferior's address space everywhere.
Callers of regcache::aspace are updated to get the address space from
the relevant inferior, either using some context they already know
about, or in last resort using the current global context.
So, to summarize:
- remove everything in regcache related to address spaces
- in particular, remove get_thread_arch_regcache, and rename
get_thread_arch_aspace_regcache to get_thread_arch_regcache
- remove target_ops::thread_address_space, and
target_thread_address_space
- adjust all users of regcache::aspace to get the address space another
way
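For illustration, a typical caller update looks roughly like this (a
sketch; the exact call sites vary):

  /* Before: the address space was fetched through the regcache.  */
  const address_space *aspace = regcache->aspace ();

  /* After: fetch it from the inferior in the context the caller already
     knows about (or, as a last resort, the current inferior).  */
  const address_space *aspace = current_inferior ()->aspace;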
Change-Id: I04fd41b22c83fe486522af7851c75bcfb31c88c7
|
GDB doesn't correctly handle the case where a thread steps over a
breakpoint (using either in-line or displaced stepping), and the
executed instruction causes the thread to exit.
Using the test program included later in the series, this is what it
looks like with displaced-stepping, on x86-64 Linux, where we have two
displaced-step buffers:
$ ./gdb -q -nx --data-directory=data-directory build/binutils-gdb/gdb/testsuite/outputs/gdb.threads/step-over-thread-exit/step-over-thread-exit -ex "b my_exit_syscall" -ex r
Reading symbols from build/binutils-gdb/gdb/testsuite/outputs/gdb.threads/step-over-thread-exit/step-over-thread-exit...
Breakpoint 1 at 0x123c: file src/binutils-gdb/gdb/testsuite/lib/my-syscalls.S, line 68.
Starting program: build/binutils-gdb/gdb/testsuite/outputs/gdb.threads/step-over-thread-exit/step-over-thread-exit
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/../lib/libthread_db.so.1".
[New Thread 0x7ffff7c5f640 (LWP 2915510)]
[Switching to Thread 0x7ffff7c5f640 (LWP 2915510)]
Thread 2 "step-over-threa" hit Breakpoint 1, my_exit_syscall () at src/binutils-gdb/gdb/testsuite/lib/my-syscalls.S:68
68 syscall
(gdb) c
Continuing.
[New Thread 0x7ffff7c5f640 (LWP 2915524)]
[Thread 0x7ffff7c5f640 (LWP 2915510) exited]
[Switching to Thread 0x7ffff7c5f640 (LWP 2915524)]
Thread 3 "step-over-threa" hit Breakpoint 1, my_exit_syscall () at src/binutils-gdb/gdb/testsuite/lib/my-syscalls.S:68
68 syscall
(gdb) c
Continuing.
[New Thread 0x7ffff7c5f640 (LWP 2915616)]
[Thread 0x7ffff7c5f640 (LWP 2915524) exited]
[Switching to Thread 0x7ffff7c5f640 (LWP 2915616)]
Thread 4 "step-over-threa" hit Breakpoint 1, my_exit_syscall () at src/binutils-gdb/gdb/testsuite/lib/my-syscalls.S:68
68 syscall
(gdb) c
Continuing.
... hangs ...
The first two times we do "continue", we displaced-step the syscall
instruction that causes the thread to exit. When the thread exits,
the main thread, waiting on pthread_join, is unblocked. It spawns a
new thread, which hits the breakpoint on the syscall again. However,
infrun was never notified that the displaced-stepping threads are done
using the displaced-step buffer, so now both buffers are marked as
used. So when we do the third continue, there are no buffers
available to displaced-step the syscall, so the thread waits forever
for its turn.
When trying the same but with in-line step over (displaced-stepping
disabled):
$ ./gdb -q -nx --data-directory=data-directory \
build/binutils-gdb/gdb/testsuite/outputs/gdb.threads/step-over-thread-exit/step-over-thread-exit \
-ex "b my_exit_syscall" -ex "set displaced-stepping off" -ex r
Reading symbols from build/binutils-gdb/gdb/testsuite/outputs/gdb.threads/step-over-thread-exit/step-over-thread-exit...
Breakpoint 1 at 0x123c: file src/binutils-gdb/gdb/testsuite/lib/my-syscalls.S, line 68.
Starting program: build/binutils-gdb/gdb/testsuite/outputs/gdb.threads/step-over-thread-exit/step-over-thread-exit
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/../lib/libthread_db.so.1".
[New Thread 0x7ffff7c5f640 (LWP 2928290)]
[Switching to Thread 0x7ffff7c5f640 (LWP 2928290)]
Thread 2 "step-over-threa" hit Breakpoint 1, my_exit_syscall () at src/binutils-gdb/gdb/testsuite/lib/my-syscalls.S:68
68 syscall
(gdb) c
Continuing.
[Thread 0x7ffff7c5f640 (LWP 2928290) exited]
No unwaited-for children left.
(gdb) i th
Id Target Id Frame
1 Thread 0x7ffff7c60740 (LWP 2928285) "step-over-threa" 0x00007ffff7f7c9b7 in __pthread_clockjoin_ex () from /usr/lib/libpthread.so.0
The current thread <Thread ID 2> has terminated. See `help thread'.
(gdb) thread 1
[Switching to thread 1 (Thread 0x7ffff7c60740 (LWP 2928285))]
#0 0x00007ffff7f7c9b7 in __pthread_clockjoin_ex () from /usr/lib/libpthread.so.0
(gdb) c
Continuing.
^C^C
... hangs ...
The "continue" causes an in-line step to occur, meaning the main
thread is stopped while we step the syscall. The stepped thread exits
when executing the syscall, the linux-nat target notices there are no
more resumed threads to be waited for, so returns
TARGET_WAITKIND_NO_RESUMED, which causes the prompt to return. But
infrun never clears the in-line step over info. So if we try
continuing the main thread, GDB doesn't resume it, because it thinks
there's an in-line step in progress that we need to wait for to
finish, and we are stuck there.
To fix this, infrun needs to be informed when a thread doing a
displaced or in-line step over exits. We can do that with the new
target_set_thread_options mechanism, which lets us enable exit events
for just the thread that needs them; or, if that is not supported, by
using target_thread_events, which enables thread exit events for all
threads. This is done by this commit.
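In rough terms, the enabling logic looks like this (a sketch, assuming
the thread-options names introduced by this series):

  /* Prefer a per-thread exit event for just the stepping thread.  */
  if (tp->inf->top_target ()->supports_set_thread_options
        (GDB_THREAD_OPTION_EXIT))
    tp->set_thread_options (GDB_THREAD_OPTION_EXIT);
  else
    /* Fall back to enabling thread lifetime events for all threads.  */
    target_thread_events (true);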
This patch then modifies handle_inferior_event in infrun.c to clean up
any step-over the exiting thread might have been doing at the time of
the exit. The cases to consider are:
- the exiting thread was doing an in-line step-over with an all-stop
target
- the exiting thread was doing an in-line step-over with a non-stop
target
- the exiting thread was doing a displaced step-over with a non-stop
target
The displaced-stepping buffer implementation in displaced-stepping.c
is modified to account for the fact that it's possible that we
"finish" a displaced step after a thread exit event. The buffer that
the exiting thread was using is marked as available again and the
original instructions under the scratch pad are restored. However,
applying the fixup is skipped, as that wouldn't make sense now that the
thread no longer exists.
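Concretely, the finish path becomes roughly the following (a sketch of
the logic; buffer_for_thread, restore_original_bytes and
apply_arch_fixup are illustrative helper names, not the real ones):

  displaced_step_finish_status
  displaced_step_buffers::finish (gdbarch *arch, thread_info *event_thread,
                                  const target_waitstatus &status)
  {
    displaced_step_buffer *buffer = buffer_for_thread (event_thread);

    /* Mark the buffer free and put the original bytes back, even if
       the stepping thread is gone.  */
    buffer->current_thread = nullptr;
    restore_original_bytes (buffer);

    /* Skip the fixup for an exited thread: there is no thread state
       left to adjust.  */
    if (status.kind () == TARGET_WAITKIND_THREAD_EXITED)
      return DISPLACED_STEP_FINISH_STATUS_OK;

    return apply_arch_fixup (arch, event_thread, buffer);
  }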
Another case that needs handling is if a displaced-stepping thread
exits, and the event is reported while we are in stop_all_threads. We
should call displaced_step_finish in the handle_one function, in that
case. It was already called in other code paths, just not the "thread
exited" path.
This commit doesn't make infrun ask the target to report the
TARGET_WAITKIND_THREAD_EXITED events yet, that'll be done later in the
series.
Note that "stop_print_frame = false;" line is moved to normal_stop,
because TARGET_WAITKIND_THREAD_EXITED can also end up with the event
transmorphed into TARGET_WAITKIND_NO_RESUMED. Moving it to
normal_stop keeps it centralized.
Co-authored-by: Simon Marchi <simon.marchi@efficios.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=27338
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: I745c6955d7ef90beb83bcf0ff1d1ac8b9b6285a5
|
Since commit 1e5ccb9c5ff4fd8ade4a8694676f99f4abf2d679, we have an assertion in
displaced_step_buffers::copy_insn_closure_by_addr that makes sure a closure
is available whenever we have a match between the provided address argument and
the buffer address.
That is fine, but the report in PR30872 shows this assertion triggering when
it really shouldn't. After some investigation, here's what I found out.
The 32-bit Arm architecture is the only one that calls
gdbarch_displaced_step_copy_insn_closure_by_addr directly, and that's because
32-bit Arm needs to figure out the thumb state of the original instruction
that we displaced-stepped through the displaced-step buffer.
Before the assertion was put in place by commit
1e5ccb9c5ff4fd8ade4a8694676f99f4abf2d679, there was the possibility of
getting nullptr back, which meant we were not doing a displaced-stepping
operation.
Now, with the assertion in place, this is running into issues.
It looks like displaced_step_buffers::copy_insn_closure_by_addr is
being used to return a couple different answers depending on the
state we're in:
1 - If we are actively displaced-stepping, then copy_insn_closure_by_addr
is supposed to return a valid closure for us, so we can determine the
thumb mode.
2 - If we are not actively displaced-stepping, then copy_insn_closure_by_addr
should return nullptr, signaling that no displaced-step buffer is in
use; in that case we don't have a valid closure (whereas a buffer in
use should always have one).
Since the displaced-step buffers are always allocated, but not always
used, the buffers always contain data, and the addr field is always set
whether or not a thread is currently using the buffer. The addr field
therefore cannot be used to determine if a buffer is active. For
instance, we cannot reserve an addr value of 0x0 to mean "unused", as
0x0 can be a valid PC in some cases.
My understanding is that the current_thread field should be a good candidate
to signal that a particular displaced-step buffer is active or not. If it is
nullptr, we have no threads using that buffer to displaced-step. Otherwise,
it is an active buffer in use by a particular thread.
The following fix modifies the displaced_step_buffers::copy_insn_closure_by_addr
function so we only attempt to return a closure if the buffer has an assigned
current_thread and if the buffer address matches the address argument.
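In pseudo-code, the fixed lookup is roughly this (a sketch; the exact
field names may differ from the tree):

  const displaced_step_copy_insn_closure *
  displaced_step_buffers::copy_insn_closure_by_addr (CORE_ADDR addr)
  {
    for (displaced_step_buffer &buffer : m_buffers)
      {
        /* An unused buffer still holds stale data from a previous step,
           so a bare address match is not enough: the buffer must also
           be assigned to a thread.  */
        if (buffer.current_thread != nullptr && addr == buffer.addr)
          {
            /* An active buffer must have a closure.  */
            gdb_assert (buffer.copy_insn_closure != nullptr);
            return buffer.copy_insn_closure.get ();
          }
      }
    return nullptr;
  }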
Alternatively, I think we could use a function to answer the question of
whether we're actively displaced-stepping (so we have an active buffer) or
not.
I've also added a testcase that exercises the problem. It should reproduce
reliably on Arm, as that is the only architecture that faces this problem
at the moment.
Regression-tested on Ubuntu 20.04.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30872
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
This commit aims to address a problem that exists with the current
approach to displaced stepping, and was identified in PR gdb/22921.
Displaced stepping is currently supported on AArch64, ARM, amd64,
i386, rs6000 (ppc), and s390. Of these, I believe there is a problem
with the current approach which will impact amd64 and ARM, and can
lead to random register corruption when the inferior makes use of
asynchronous signals and GDB is using displaced stepping.
The problem can be found in displaced_step_buffers::finish in
displaced-stepping.c, and is this: after GDB tries to perform a
displaced step and the inferior stops, GDB classifies the stop into
one of two states: either the displaced step succeeded, or the
displaced step failed.
If the displaced step succeeded then gdbarch_displaced_step_fixup is
called, which has the job of fixing up the state of the current
inferior as if the step had not been performed in a displaced manner.
This all seems just fine.
However, if the displaced step is considered to have not completed
then GDB doesn't call gdbarch_displaced_step_fixup, instead GDB
remains in displaced_step_buffers::finish and just performs a minimal
fixup which involves adjusting the program counter back to its
original value.
The problem here is that for amd64 and ARM setting up for a displaced
step can involve changing the values in some temporary registers. If
the displaced step succeeds then this is fine; after the step the
temporary registers are restored to their original values in the
architecture specific code.
But if the displaced step does not succeed then the temporary
registers are never restored, and they retain their modified values.
In this context a temporary register is simply any register that is
not otherwise used by the instruction being stepped that the
architecture specific code considers safe to borrow for the lifetime
of the instruction being stepped.
In the bug PR gdb/22921, the amd64 instruction being stepped is
an rip-relative instruction like this:
jmp *0x2fe2(%rip)
When we displaced step this instruction we borrow a register, and
modify the instruction to something like:
jmp *0x2fe2(%rcx)
with %rcx having its value adjusted to contain the original %rip
value.
Now if the displaced step does not succeed, then %rcx will be left
with a corrupted value. Obviously corrupting any register is bad; in
the bug report this problem was spotted because %rcx is used as a
function argument register.
And finally, why might a displaced step not succeed? Asynchronous
signals provide one reason. GDB sets up for the displaced step and,
at that precise moment, the OS delivers a signal (SIGALRM in the bug
report), the signal stops the inferior at the address of the displaced
instruction. GDB cancels the displaced instruction, handles the
signal, and then tries again with the displaced step. But it is that
first cancellation of the displaced step that causes the problem; in
that case GDB (correctly) sees the displaced step as having not
completed, and so does not perform the architecture specific fixup,
leaving the register corrupted.
The reason why I think AArch64, rs6000, i386, and s390 are not affected
by this problem is that I don't believe these architectures make use
of any temporary registers, so when a displaced step is not completed
successfully, the minimal fix up is sufficient.
On amd64 we use at most one temporary register.
On ARM, looking at arm_displaced_step_copy_insn_closure, we could
modify up to 16 temporary registers, and the instruction being
displaced stepped could be expanded to multiple replacement
instructions, which increases the chances of this bug triggering.
This commit only aims to address the issue on amd64 for now, though I
believe that the approach I'm proposing here might be applicable for
ARM too.
What I propose is that we always call gdbarch_displaced_step_fixup.
We will now pass an extra argument to gdbarch_displaced_step_fixup: a
boolean that indicates whether GDB thinks the displaced step completed
successfully or not.
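For amd64, the shape of the change is roughly this (a sketch;
restore_tmp_register and apply_full_fixup are illustrative helpers
standing in for the existing fixup logic):

  void
  amd64_displaced_step_fixup (struct gdbarch *gdbarch,
                              displaced_step_copy_insn_closure *closure,
                              CORE_ADDR from, CORE_ADDR to,
                              struct regcache *regs, bool completed_p)
  {
    if (!completed_p)
      {
        /* The step didn't complete: restore the register we borrowed
           for the rip-relative rewrite, and reset the PC.  */
        restore_tmp_register (regs, closure);
        regcache_write_pc (regs, from);
        return;
      }

    /* The step completed: apply the usual full fixup.  */
    apply_full_fixup (gdbarch, closure, from, to, regs);
  }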
When this flag is false this indicates that the displaced step halted
for some "other" reason. On ARM GDB can potentially read the
inferior's program counter in order to figure out how far through the
sequence of replacement instructions we got, and from that GDB can
figure out what fixup needs to be performed.
On targets like amd64 the problem is slightly easier as displaced
stepping only uses a single replacement instruction. If the displaced
step didn't complete, then GDB knows that the single instruction didn't
execute.
The point is that by always calling gdbarch_displaced_step_fixup, each
architecture can now ensure that the inferior state is fixed up
correctly in all cases, not just the success case.
On amd64 this ensures that we always restore the temporary register
value, and so bug PR gdb/22921 is resolved.
In order to move all architectures to this new API, I have moved the
minimal roll-back version of the code inside the architecture specific
fixup functions for AArch64, rs6000, s390, and ARM. For all of these
except ARM I think this is good enough, as no temporaries are used and
all that's needed is the program counter restore anyway.
For ARM the minimal code is no worse than what we had before, though I
do consider this architecture's displaced-stepping broken.
I've updated the gdb.arch/amd64-disp-step.exp test to cover the
'jmpq*' instruction that was causing problems in the original bug, and
also added support for testing the displaced step in the presence of
asynchronous signal delivery.
I've also added two new tests (for amd64 and i386) that check that GDB
can correctly handle displaced stepping over a single instruction that
branches to itself. I added these tests after a first version of this
patch relied too much on checking the program-counter value in order
to see if the displaced instruction had executed. This works fine in
almost all cases, but when an instruction branches to itself a pure
program counter check is not sufficient. The new tests expose this
problem.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=22921
Approved-By: Pedro Alves <pedro@palves.net>
|
It was pointed out during review of another patch that the function
displaced_step_dump_bytes really isn't specific to displaced stepping,
and should really get a more generic name and move into gdbsupport/.
This commit does just that. The function is renamed to
bytes_to_string and is moved into gdbsupport/common-utils.{cc,h}. The
function implementation doesn't really change. Much...
... I have updated the function to take an array view, which makes it
slightly easier to call in a couple of places where we already have a
gdb::bytes_vector. I've then added an inline wrapper to convert a raw
pointer and length into an array view, which is used in places where
we don't easily have a gdb::bytes_vector (or similar).
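The result looks roughly like this (a sketch of the new declarations in
gdbsupport/common-utils.h):

  /* Convert BYTES into a human-readable hex string such as "01 02 03".  */
  extern std::string bytes_to_string (gdb::array_view<const gdb_byte> bytes);

  /* Inline wrapper for callers that only have a raw pointer and a
     length, rather than something convertible to an array view.  */
  static inline std::string
  bytes_to_string (const gdb_byte *start, size_t length)
  {
    return bytes_to_string (gdb::array_view<const gdb_byte> (start, length));
  }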
Updated all users of displaced_step_dump_bytes.
There should be no user visible changes after this commit.
Finally, I ended up having to add an include of gdb_assert.h into
array-view.h. When I included array-view.h into common-utils.h, I ran
into build problems because array-view.h calls gdb_assert.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
This commit tweaks displaced_step_finish & friends to pass down a
target_waitstatus instead of a gdb_signal. This is needed because a
patch later in the step-over-{thread-exit,clone} series will want to
make displaced_step_buffers::finish handle
TARGET_WAITKIND_THREAD_EXITED. It also helps with the
TARGET_WAITKIND_THREAD_CLONED patch later in that same series.
It's also a bit more logical this way, as we don't have to pass down
signals when the thread didn't actually stop for a signal. So we can
also think of it as a clean up.
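In terms of signatures, the change is roughly (a sketch):

  /* Before: callers had to come up with a signal value.  */
  displaced_step_finish_status
  displaced_step_finish (thread_info *event_thread, gdb_signal sig);

  /* After: callers hand down the wait status they already have.  */
  displaced_step_finish_status
  displaced_step_finish (thread_info *event_thread,
                         const target_waitstatus &event_status);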
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=27338
Change-Id: I4c5d338647b028071bc498c4e47063795a2db4c0
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
The gdbarch::max_insn_length field is used mostly to support displaced
stepping; it controls the size of the buffers allocated for the
displaced-step instruction, and is also used when first copying the
instruction, and later, when fixing up the instruction, in order to
read in and parse the instruction being stepped.
However, it has started to be used in other places in GDB, for
example, it's used in the Python disassembler API, and it is used on
amd64 as part of branch-tracing instruction classification.
The problem is that the value assigned to max_insn_length is not
always the maximum instruction length, but sometimes is a multiple of
that length, as required to support displaced stepping; see rs6000,
ARM, and AArch64 for examples of this.
It seems to me that we are overloading the meaning of the
max_insn_length field, and I think that could potentially lead to
confusion.
I propose that we add a new gdbarch field,
gdbarch::displaced_step_buffer_length, this new field will do
exactly what it says on the tin; represent the required displaced step
buffer size. The max_insn_length field can then do exactly what it
claims to do; represent the maximum length of a single instruction.
As some architectures (e.g. i386 and amd64) only require their
displaced step buffers to be a single instruction in size, I propose
that the default for displaced_step_buffer_length will be the value of
max_insn_length. Architectures that need more buffer space can then
override this default as needed.
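In a gdbarch init function this looks roughly like the following (a
sketch; the constant name and counts are illustrative):

  /* The longest single instruction is 4 bytes...  */
  set_gdbarch_max_insn_length (gdbarch, 4);
  /* ... but a displaced instruction may be rewritten into up to
     N_MODIFIED_INSNS replacement instructions, so the scratch buffer
     must be larger.  */
  set_gdbarch_displaced_step_buffer_length (gdbarch, 4 * N_MODIFIED_INSNS);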
I've updated all architectures to setup the new field if appropriate,
and I've audited all calls to gdbarch_max_insn_length and switched to
gdbarch_displaced_step_buffer_length where appropriate.
There should be no user visible changes after this commit.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
This commit is the result of running the gdb/copyright.py script,
which automates the update of the copyright year range for all source
files managed by the GDB project, so that it now includes year 2023.
|
copy_insn_closure_by_addr
PR gdb/29272
While investigating PR29272, it was mentioned that a particular test
used to work on GDB 10 but started failing from GDB 11 onwards. I
tracked it down to
some displaced stepping improvements on commit
187b041e2514827b9d86190ed2471c4c7a352874.
In particular, one of the corner cases using copy_insn_closure_by_addr got
silently broken. It is hard to spot because it doesn't have any good tests
for it, and the situation is quite specific to the Arm target.
Essentially, after the displaced stepping improvements, we could still
invoke copy_insn_closure_by_addr to fetch the pointer to a
copy_insn_closure, but it always returned nullptr due to the order of
the statements in displaced_step_buffer::prepare.
The way it is now, we first write the address of the displaced step buffer
to PC and then save the copy_insn_closure pointer.
The problem is that writing to PC for the Arm target requires figuring
out if the new PC is thumb mode or not.
With no copy_insn_closure data, the logic to determine the thumb mode
during displaced stepping doesn't work, and gives random results that
are difficult to track (SIGILL, SIGSEGV etc).
Fix this by reordering the PC write in displaced_step_buffer::prepare
and, for safety, add an assertion to
displaced_step_buffer::copy_insn_closure_by_addr so GDB stops right
when it sees this invalid situation. If this gets broken again in the
future, it will be easier to spot.
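The reordering amounts to this (a sketch of the tail of
displaced_step_buffer::prepare; exact field names may differ):

  /* Record the closure (and owning thread) first...  */
  buffer->copy_insn_closure = std::move (closure);
  buffer->current_thread = thread;

  /* ... because writing the PC may itself consult
     copy_insn_closure_by_addr (on Arm, to decide whether the new PC is
     in thumb mode), so it has to come after the closure is saved.  */
  regcache_write_pc (regcache, buffer->addr);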
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29272
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
Now that filtered and unfiltered output can be treated identically, we
can unify the printf family of functions. This is done under the name
"gdb_printf". Most of this patch was written by script.
|
Same idea as 0fab79556484 ("gdb: use ptid_t::to_string in infrun debug
messages"), but throughout GDB.
Change-Id: I62ba36eaef29935316d7187b9b13d7b88491acc1
|
This commit brings all the changes made by running gdb/copyright.py
as per GDB's Start of New Year Procedure.
For the avoidance of doubt, all changes in this commit were
performed by the script.
|
This commits the result of running gdb/copyright.py as per our Start
of New Year procedure...
gdb/ChangeLog
Update copyright year range in copyright header of all GDB files.
|
The displaced_step_buffer class, introduced in the previous patch,
manages access to a single displaced step buffer. Change it into
displaced_step_buffers (note the plural), which manages access to
multiple displaced step buffers.
When preparing a displaced step for a thread, it looks for an unused
buffer.
For now, all users still pass a single displaced step buffer, so no real
behavior change is expected here. The following patch makes a user pass
more than one buffer, so the functionality introduced by this patch is
going to be useful in the next one.
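In pseudo-code, the buffer search in displaced_step_buffers::prepare is
roughly (a sketch; exact field names may differ):

  displaced_step_buffer *buffer = nullptr;
  for (displaced_step_buffer &candidate : m_buffers)
    if (candidate.current_thread == nullptr)
      {
        buffer = &candidate;
        break;
      }

  /* All buffers are in use; the thread has to wait for its turn.  */
  if (buffer == nullptr)
    return DISPLACED_STEP_PREPARE_STATUS_UNAVAILABLE;

  buffer->current_thread = thread;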
gdb/ChangeLog:
* displaced-stepping.h (struct displaced_step_buffer): Rename
to...
(struct displaced_step_buffers): ... this.
<m_addr, m_current_thread, m_copy_insn_closure>: Remove.
<struct displaced_step_buffer>: New inner class.
<m_buffers>: New.
* displaced-stepping.c (displaced_step_buffer::prepare): Rename
to...
(displaced_step_buffers::prepare): ... this, adjust for multiple
buffers.
(displaced_step_buffer::finish): Rename to...
(displaced_step_buffers::finish): ... this, adjust for multiple
buffers.
(displaced_step_buffer::copy_insn_closure_by_addr): Rename to...
(displaced_step_buffers::copy_insn_closure_by_addr): ... this,
adjust for multiple buffers.
(displaced_step_buffer::restore_in_ptid): Rename to...
(displaced_step_buffers::restore_in_ptid): ... this, adjust for
multiple buffers.
* linux-tdep.h (linux_init_abi): Change supports_displaced_step
for num_disp_step_buffers.
* linux-tdep.c (struct linux_gdbarch_data)
<num_disp_step_buffers>: New field.
(struct linux_info) <disp_step_buf>: Rename to...
<disp_step_bufs>: ... this, change type to
displaced_step_buffers.
(linux_displaced_step_prepare): Use
linux_gdbarch_data::num_disp_step_buffers to create that number
of buffers.
(linux_displaced_step_finish): Adjust.
(linux_displaced_step_copy_insn_closure_by_addr): Adjust.
(linux_displaced_step_restore_all_in_ptid): Adjust.
(linux_init_abi): Change supports_displaced_step parameter for
num_disp_step_buffers, save it in linux_gdbarch_data.
* aarch64-linux-tdep.c (aarch64_linux_init_abi): Adjust.
* alpha-linux-tdep.c (alpha_linux_init_abi): Adjust.
* amd64-linux-tdep.c (amd64_linux_init_abi_common): Change
supports_displaced_step parameter for num_disp_step_buffers.
(amd64_linux_init_abi): Adjust.
(amd64_x32_linux_init_abi): Adjust.
* arc-linux-tdep.c (arc_linux_init_osabi): Adjust.
* arm-linux-tdep.c (arm_linux_init_abi): Adjust.
* bfin-linux-tdep.c (bfin_linux_init_abi): Adjust.
* cris-linux-tdep.c (cris_linux_init_abi): Adjust.
* csky-linux-tdep.c (csky_linux_init_abi): Adjust.
* frv-linux-tdep.c (frv_linux_init_abi): Adjust.
* hppa-linux-tdep.c (hppa_linux_init_abi): Adjust.
* i386-linux-tdep.c (i386_linux_init_abi): Adjust.
* ia64-linux-tdep.c (ia64_linux_init_abi): Adjust.
* m32r-linux-tdep.c (m32r_linux_init_abi): Adjust.
* m68k-linux-tdep.c (m68k_linux_init_abi): Adjust.
* microblaze-linux-tdep.c (microblaze_linux_init_abi): Adjust.
* mips-linux-tdep.c (mips_linux_init_abi): Adjust.
* mn10300-linux-tdep.c (am33_linux_init_osabi): Adjust.
* nios2-linux-tdep.c (nios2_linux_init_abi): Adjust.
* or1k-linux-tdep.c (or1k_linux_init_abi): Adjust.
* ppc-linux-tdep.c (ppc_linux_init_abi): Adjust.
* riscv-linux-tdep.c (riscv_linux_init_abi): Adjust.
* rs6000-tdep.c (struct ppc_inferior_data) <disp_step_buf>:
Change type to displaced_step_buffers.
* s390-linux-tdep.c (s390_linux_init_abi_any): Adjust.
* sh-linux-tdep.c (sh_linux_init_abi): Adjust.
* sparc-linux-tdep.c (sparc32_linux_init_abi): Adjust.
* sparc64-linux-tdep.c (sparc64_linux_init_abi): Adjust.
* tic6x-linux-tdep.c (tic6x_uclinux_init_abi): Adjust.
* tilegx-linux-tdep.c (tilegx_linux_init_abi): Adjust.
* xtensa-linux-tdep.c (xtensa_linux_init_abi): Adjust.
Change-Id: Ia9c02f207da2c9e1d9188020139619122392bb70
|
displaced steps
Today, GDB only allows a single displaced stepping operation to happen
per inferior at a time. There is a single displaced stepping buffer per
inferior, whose address is fixed (obtained with
gdbarch_displaced_step_location), managed by infrun.c.
In the case of the AMD ROCm target [1] (in the context of which this
work has been done), it is typical to have thousands of threads (or
waves, in SMT terminology) executing the same code, hitting the same
breakpoint (possibly conditional) and needing to to displaced step it at
the same time. The limitation of only one displaced step executing at a
any given time becomes a real bottleneck.
To fix this bottleneck, we want to make it possible for threads of the
same inferior to execute multiple displaced steps in parallel. This
patch builds the foundation for that.
In essence, this patch moves the task of preparing a displaced step and
cleaning up after to gdbarch functions. This allows using different
schemes for allocating and managing displaced stepping buffers for
different platforms. The gdbarch decides how to assign a buffer to a
thread that needs to execute a displaced step.
On the ROCm target, we are able to allocate one displaced stepping
buffer per thread, so a thread will never have to wait to execute a
displaced step.
On Linux, the entry point of the executable is used as the displaced
stepping buffer, since we assume that this code won't get used after
startup. From what I saw (I checked with a binary generated against
glibc and musl), on AMD64 we have enough space there to fit two
displaced stepping buffers. A subsequent patch makes AMD64/Linux use
two buffers.
In addition to having multiple displaced stepping buffers, there is also
the idea of sharing displaced stepping buffers between threads. Two
threads doing displaced steps for the same PC could use the same buffer
at the same time. Two threads stepping over the same instruction (same
opcode) at two different PCs may also be able to share a displaced
stepping buffer. This is an idea for future patches, but the
architecture built by this patch is made to allow this.
Now, the implementation details. The main part of this patch is moving
the responsibility of preparing and finishing a displaced step to the
gdbarch. Before this patch, preparing a displaced step is driven by the
displaced_step_prepare_throw function. It does some calls to the
gdbarch to do some low-level operations, but the high-level logic is
there. The steps are roughly:
- Ask the gdbarch for the displaced step buffer location
- Save the existing bytes in the displaced step buffer
- Ask the gdbarch to copy the instruction into the displaced step buffer
- Set the pc of the thread to the beginning of the displaced step buffer
Similarly, the "fixup" phase, executed after the instruction was
successfully single-stepped, is driven by the infrun code (function
displaced_step_finish). The steps are roughly:
- Restore the original bytes in the displaced stepping buffer
- Ask the gdbarch to fixup the instruction result (adjust the target's
registers or memory to do as if the instruction had been executed in
its original location)
The displaced_step_inferior_state::step_thread field indicates which
thread (if any) is currently using the displaced stepping buffer, so it
is used by displaced_step_prepare_throw to check if the displaced
stepping buffer is free to use or not.
This patch defers the whole task of preparing and cleaning up after a
displaced step to the gdbarch. Two new main gdbarch methods are added,
with the following semantics:
- gdbarch_displaced_step_prepare: Prepare for the given thread to
execute a displaced step of the instruction located at its current PC.
Upon return, everything should be ready for GDB to resume the thread
(with either a single step or continue, as indicated by
gdbarch_displaced_step_hw_singlestep) to make it displaced step the
instruction.
- gdbarch_displaced_step_finish: Called when the thread stopped after
having started a displaced step. Verify whether the instruction was
executed; if so, apply any fixup required to compensate for the fact
that the instruction was executed at a different place than its
original pc. Release any resources that were allocated for this
displaced step. Upon return, everything should be ready for GDB to
resume the thread in its "normal" code path.
The displaced_step_prepare_throw function now pretty much just offloads
to gdbarch_displaced_step_prepare and the displaced_step_finish function
offloads to gdbarch_displaced_step_finish.
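In simplified form, the two hooks have the following shape (a sketch;
the real declarations generated into gdbarch.h carry more detail):

  displaced_step_prepare_status
  gdbarch_displaced_step_prepare (struct gdbarch *gdbarch,
                                  thread_info *thread,
                                  CORE_ADDR &displaced_pc);

  displaced_step_finish_status
  gdbarch_displaced_step_finish (struct gdbarch *gdbarch,
                                 thread_info *thread,
                                 gdb_signal sig);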
The gdbarch_displaced_step_location method is now unnecessary, so it is
removed. Indeed, the core of GDB no longer knows how many displaced step
buffers there are, nor where they are.
To keep the existing behavior for existing architectures, the logic that
was previously implemented in infrun.c for preparing and finishing a
displaced step is moved to displaced-stepping.c, to the
displaced_step_buffer class. Architectures are modified to implement
the new gdbarch methods using this class. The behavior is not expected
to change.
The other important change (which arises from the above) is that the
core of GDB no longer prevents concurrent displaced steps. Before this
patch, start_step_over walks the global step over chain and tries to
initiate a step over (whether it is in-line or displaced). It follows
these rules:
- if an in-line step is in progress (in any inferior), don't start any
other step over
- if a displaced step is in progress for an inferior, don't start
another displaced step for that inferior
After starting a displaced step for a given inferior, it won't start
another displaced step for that inferior.
In the new code, start_step_over simply tries to initiate step overs for
all the threads in the list. But because threads may be added back to
the global list while it is being iterated (as step overs are
initiated), start_step_over now starts by stealing the global queue
into a local queue and iterating on that local queue. In the typical case, each
thread will either:
- have initiated a displaced step and be resumed
- have been added back by the global step over queue by
displaced_step_prepare_throw, because the gdbarch will have returned
that there aren't enough resources (i.e. buffers) to initiate a
displaced step for that thread
Lastly, if start_step_over initiates an in-line step, it stops
iterating, and moves back whatever remaining threads it had in its local
step over queue to the global step over queue.
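In pseudo-code, the new start_step_over flow is roughly (a sketch; the
queue type and helper names are illustrative, not the real ones):

  void
  start_step_over ()
  {
    /* Steal the global queue, so threads re-enqueued while we iterate
       (e.g. for lack of a free buffer) don't make us loop forever.  */
    step_over_queue local_queue = std::move (global_step_over_queue);

    while (!local_queue.empty ())
      {
        thread_info *tp = local_queue.pop_front ();

        /* Try a displaced or in-line step over for TP.  */
        if (start_one_step_over (tp) == STEP_OVER_STARTED_INLINE)
          {
            /* An in-line step is in progress: stop initiating step
               overs and move the leftovers back to the global queue.  */
            global_step_over_queue.splice (std::move (local_queue));
            break;
          }
      }
  }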
Two other gdbarch methods are added, to handle some slightly annoying
corner cases. They feel awkwardly specific to these cases, but I don't
see any way around them:
- gdbarch_displaced_step_copy_insn_closure_by_addr: in
arm_pc_is_thumb, arm-tdep.c wants to get the closure for a given
buffer address.
- gdbarch_displaced_step_restore_all_in_ptid: when a process forks
(at least on Linux), the address space is copied. If some displaced
step buffers were in use at the time of the fork, we need to restore
the original bytes in the child's address space.
These two adjustments are also made in infrun.c:
- prepare_for_detach: there may be multiple threads doing displaced
steps when we detach, so wait until all of them are done
- handle_inferior_event: when we handle a fork event for a given
thread, it's possible that other threads are doing a displaced step at
the same time. Make sure to restore the displaced step buffer
contents in the child for them.
[1] https://github.com/ROCm-Developer-Tools/ROCgdb
gdb/ChangeLog:
* displaced-stepping.h (struct
displaced_step_copy_insn_closure): Adjust comments.
(struct displaced_step_inferior_state) <step_thread,
step_gdbarch, step_closure, step_original, step_copy,
step_saved_copy>: Remove fields.
(struct displaced_step_thread_state): New.
(struct displaced_step_buffer): New.
* displaced-stepping.c (displaced_step_buffer::prepare): New.
(write_memory_ptid): Move from infrun.c.
(displaced_step_instruction_executed_successfully): New,
factored out of displaced_step_finish.
(displaced_step_buffer::finish): New.
(displaced_step_buffer::copy_insn_closure_by_addr): New.
(displaced_step_buffer::restore_in_ptid): New.
* gdbarch.sh (displaced_step_location): Remove.
(displaced_step_prepare, displaced_step_finish,
displaced_step_copy_insn_closure_by_addr,
displaced_step_restore_all_in_ptid): New.
* gdbarch.c: Re-generate.
* gdbarch.h: Re-generate.
* gdbthread.h (class thread_info) <displaced_step_state>: New
field.
(thread_step_over_chain_remove): New declaration.
(thread_step_over_chain_next): New declaration.
(thread_step_over_chain_length): New declaration.
* thread.c (thread_step_over_chain_remove): Make non-static.
(thread_step_over_chain_next): New.
(global_thread_step_over_chain_next): Use
thread_step_over_chain_next.
(thread_step_over_chain_length): New.
(global_thread_step_over_chain_enqueue): Add debug print.
(global_thread_step_over_chain_remove): Add debug print.
* infrun.h (get_displaced_step_copy_insn_closure_by_addr):
Remove.
* infrun.c (get_displaced_stepping_state): New.
(displaced_step_in_progress_any_inferior): Remove.
(displaced_step_in_progress_thread): Adjust.
(displaced_step_in_progress): Adjust.
(displaced_step_in_progress_any_thread): New.
(get_displaced_step_copy_insn_closure_by_addr): Remove.
(gdbarch_supports_displaced_stepping): Use
gdbarch_displaced_step_prepare_p.
(displaced_step_reset): Change parameter from inferior to
thread.
(displaced_step_prepare_throw): Implement using
gdbarch_displaced_step_prepare.
(write_memory_ptid): Move to displaced-step.c.
(displaced_step_restore): Remove.
(displaced_step_finish): Implement using
gdbarch_displaced_step_finish.
(start_step_over): Allow starting more than one displaced step.
(prepare_for_detach): Handle possibly multiple threads doing
displaced steps.
(handle_inferior_event): Handle possibility that fork event
happens while another thread displaced steps.
* linux-tdep.h (linux_displaced_step_prepare): New.
(linux_displaced_step_finish): New.
(linux_displaced_step_copy_insn_closure_by_addr): New.
(linux_displaced_step_restore_all_in_ptid): New.
(linux_init_abi): Add supports_displaced_step parameter.
* linux-tdep.c (struct linux_info) <disp_step_buf>: New field.
(linux_displaced_step_prepare): New.
(linux_displaced_step_finish): New.
(linux_displaced_step_copy_insn_closure_by_addr): New.
(linux_displaced_step_restore_all_in_ptid): New.
(linux_init_abi): Add supports_displaced_step parameter,
register displaced step methods if true.
(_initialize_linux_tdep): Register inferior_execd observer.
* amd64-linux-tdep.c (amd64_linux_init_abi_common): Add
supports_displaced_step parameter, adjust call to
linux_init_abi. Remove call to
set_gdbarch_displaced_step_location.
(amd64_linux_init_abi): Adjust call to
amd64_linux_init_abi_common.
(amd64_x32_linux_init_abi): Likewise.
* aarch64-linux-tdep.c (aarch64_linux_init_abi): Adjust call to
linux_init_abi. Remove call to
set_gdbarch_displaced_step_location.
* arm-linux-tdep.c (arm_linux_init_abi): Likewise.
* i386-linux-tdep.c (i386_linux_init_abi): Likewise.
* alpha-linux-tdep.c (alpha_linux_init_abi): Adjust call to
linux_init_abi.
* arc-linux-tdep.c (arc_linux_init_osabi): Likewise.
* bfin-linux-tdep.c (bfin_linux_init_abi): Likewise.
* cris-linux-tdep.c (cris_linux_init_abi): Likewise.
* csky-linux-tdep.c (csky_linux_init_abi): Likewise.
* frv-linux-tdep.c (frv_linux_init_abi): Likewise.
* hppa-linux-tdep.c (hppa_linux_init_abi): Likewise.
* ia64-linux-tdep.c (ia64_linux_init_abi): Likewise.
* m32r-linux-tdep.c (m32r_linux_init_abi): Likewise.
* m68k-linux-tdep.c (m68k_linux_init_abi): Likewise.
* microblaze-linux-tdep.c (microblaze_linux_init_abi): Likewise.
* mips-linux-tdep.c (mips_linux_init_abi): Likewise.
* mn10300-linux-tdep.c (am33_linux_init_osabi): Likewise.
* nios2-linux-tdep.c (nios2_linux_init_abi): Likewise.
* or1k-linux-tdep.c (or1k_linux_init_abi): Likewise.
* riscv-linux-tdep.c (riscv_linux_init_abi): Likewise.
* s390-linux-tdep.c (s390_linux_init_abi_any): Likewise.
* sh-linux-tdep.c (sh_linux_init_abi): Likewise.
* sparc-linux-tdep.c (sparc32_linux_init_abi): Likewise.
* sparc64-linux-tdep.c (sparc64_linux_init_abi): Likewise.
* tic6x-linux-tdep.c (tic6x_uclinux_init_abi): Likewise.
* tilegx-linux-tdep.c (tilegx_linux_init_abi): Likewise.
* xtensa-linux-tdep.c (xtensa_linux_init_abi): Likewise.
* ppc-linux-tdep.c (ppc_linux_init_abi): Adjust call to
linux_init_abi. Remove call to
set_gdbarch_displaced_step_location.
* arm-tdep.c (arm_pc_is_thumb): Call
gdbarch_displaced_step_copy_insn_closure_by_addr instead of
get_displaced_step_copy_insn_closure_by_addr.
* rs6000-aix-tdep.c (rs6000_aix_init_osabi): Adjust calls to
clear gdbarch methods.
* rs6000-tdep.c (struct ppc_inferior_data): New structure.
(get_ppc_per_inferior): New function.
(ppc_displaced_step_prepare): New function.
(ppc_displaced_step_finish): New function.
(ppc_displaced_step_restore_all_in_ptid): New function.
(rs6000_gdbarch_init): Register new gdbarch methods.
* s390-tdep.c (s390_gdbarch_init): Don't call
set_gdbarch_displaced_step_location, set new gdbarch methods.
gdb/testsuite/ChangeLog:
* gdb.arch/amd64-disp-step-avx.exp: Adjust pattern.
* gdb.threads/forking-threads-plus-breakpoint.exp: Likewise.
* gdb.threads/non-stop-fair-events.exp: Likewise.
Change-Id: I387cd235a442d0620ec43608fd3dc0097fcbf8c8
|
Move displaced-stepping related stuff unchanged to displaced-stepping.h
and displaced-stepping.c. This helps make the following patch a bit
smaller and easier to read.
gdb/ChangeLog:
* Makefile.in (COMMON_SFILES): Add displaced-stepping.c.
* aarch64-tdep.h: Include displaced-stepping.h.
* displaced-stepping.h (struct displaced_step_copy_insn_closure):
Move here.
(displaced_step_copy_insn_closure_up): Move here.
(struct buf_displaced_step_copy_insn_closure): Move here.
(struct displaced_step_inferior_state): Move here.
(debug_displaced): Move here.
(displaced_debug_printf_1): Move here.
(displaced_debug_printf): Move here.
* displaced-stepping.c: New file.
* gdbarch.sh: Include displaced-stepping.h in gdbarch.h.
* gdbarch.h: Re-generate.
* inferior.h: Include displaced-stepping.h.
* infrun.h (debug_displaced): Move to displaced-stepping.h.
(displaced_debug_printf_1): Likewise.
(displaced_debug_printf): Likewise.
(struct displaced_step_copy_insn_closure): Likewise.
(displaced_step_copy_insn_closure_up): Likewise.
(struct buf_displaced_step_copy_insn_closure): Likewise.
(struct displaced_step_inferior_state): Likewise.
* infrun.c (show_debug_displaced): Move to displaced-stepping.c.
(displaced_debug_printf_1): Likewise.
(displaced_step_copy_insn_closure::~displaced_step_copy_insn_closure):
Likewise.
(_initialize_infrun): Don't register "set/show debug displaced".
Change-Id: I29935f5959b80425370630a45148fc06cd4227ca