|
gdb_print_host_address is just a simple wrapper around
fprintf_filtered. However, it is readily replaced in all callers by a
combination of %s and call to host_address_to_string. This also
simplifies the code, so I think it's worthwhile to remove this
function.
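As a rough, self-contained illustration of the replacement idiom (the
_sketch names below are placeholders, not gdb's real API, and printf
stands in for fprintf_filtered):
  #include <cstdio>
  #include <string>

  /* Stand-in for gdb's host_address_to_string: format a host pointer as
     text (the real helper's exact formatting may differ).  */
  static std::string
  host_address_to_string_sketch (const void *addr)
  {
    char buf[32];
    std::snprintf (buf, sizeof (buf), "%p", addr);
    return std::string (buf);
  }

  int
  main ()
  {
    int object = 0;

    /* The replacement pattern: a "%s" conversion plus a call to the
       address-to-string helper, instead of a dedicated print function.  */
    std::printf ("object is at %s\n",
                 host_address_to_string_sketch (&object).c_str ());
    return 0;
  }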
Regression tested on x86-64 Fedora 34.
|
|
gdb_bfd.c contains most of gdb's BFD-related utility functions.
However, gdb_bfd_errmsg is in utils.c. It seemed better to me to move
this out of utils.[ch] and into the BFD-related file instead.
Tested by rebuilding.
|
|
Set of fixes to resolve some duplicate test names in the gdb.mi/
directory. There should be no real test changes after this set of
fixes; they are all either:
- Adding with_test_prefix type constructs to make test names unique,
or
- Changing the test name to be more descriptive, or better reflect
the test being run.
|
|
Bug PR gdb/28405 reports a regression when using attach with an
extended-remote target. In this case the target is not including a
thread-id in the stop packet it sends back after the attach.
The regression was introduced with this commit:
commit 8f66807b98f7634c43149ea62e454ea8f877691d
Date: Wed Jan 13 20:26:58 2021 -0500
gdb: better handling of 'S' packets
The problem is that when GDB processes the stop packet, it sees that
there is no thread-id and so has to "guess" which thread the stop
should apply to.
In this case the target only has one thread, so really there's no
guessing needed, but GDB still runs through the same process; this
shouldn't cause us any problems.
However, after the above commit, GDB now expects itself to be more
internally consistent, specifically, only a thread that GDB thinks is
resumed, can be a candidate for having stopped.
It turns out that, when GDB attaches to a process through an
extended-remote target, the threads of the process being attached to
are not, initially, marked as resumed.
And so, when GDB tries to figure out which thread the stop might apply
to, it finds no threads in the process marked as resumed, and so an
assert triggers.
In extended_remote_target::attach we create a new thread with a call
to add_thread_silent, rather than remote_target::remote_add_thread;
the reason is that calling the latter will result in a call to
'add_thread' rather than 'add_thread_silent'.  However,
remote_target::remote_add_thread includes additional
actions (i.e. calling remote_thread_info::set_resumed and set_running)
which are missing from extended_remote_target::attach. These missing
calls are what would serve to mark the new thread as resumed.
In this commit I propose that we add an extra parameter to
remote_target::remote_add_thread. This new parameter will force the
new thread to be added with a call to add_thread_silent. We can now
call remote_add_thread from the ::attach method; the extra
actions (listed above) will now be performed, and the thread will be
left in the correct state.
Additionally, in PR gdb/28405, a segfault is reported. This segfault
triggers when 'set debug remote 1' is used before trying to reproduce
the original assertion failure. The cause of this is in
remote_target::select_thread_for_ambiguous_stop_reply, where we do
this:
  remote_debug_printf ("first resumed thread is %s",
                       pid_to_str (first_resumed_thread->ptid).c_str ());
  remote_debug_printf ("is this guess ambiguous? = %d", ambiguous);

  gdb_assert (first_resumed_thread != nullptr);
Notice that when debug printing is on we dereference
first_resumed_thread before we assert that the pointer is not
nullptr. This is the cause of the segfault, and is resolved by moving
the assert before the debug printing code.
I've extended an existing test, ext-attach.exp, so that the original
test is run multiple times; we run in the original mode, as normal,
but also, we now run with different packets disabled in gdbserver. In
particular, disabling Tthread would trigger the assertion as it was
reported in the original bug.  I also run the test in all-stop and
non-stop modes now for extra coverage, and we also run the tests with
target-async enabled and disabled.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28405
|
|
Fixes PR gdb/28681. It was observed that after using the `finish`
command an incorrect value was displayed in some cases. Specifically,
this behaviour was observed on an x86-64 target.
Consider this test program:
struct A
{
  int i;
  A ()
  { this->i = 0; }
  A (const A& a)
  { this->i = a.i; }
};

A
func (int i)
{
  A a;
  a.i = i;
  return a;
}

int
main ()
{
  A a = func (3);
  return a.i;
}
And this GDB session:
$ gdb -q ex.x
Reading symbols from ex.x...
(gdb) b func
Breakpoint 1 at 0x401115: file ex.cc, line 14.
(gdb) r
Starting program: /home/andrew/tmp/ex.x
Breakpoint 1, func (i=3) at ex.cc:14
14 A a;
(gdb) finish
Run till exit from #0 func (i=3) at ex.cc:14
main () at ex.cc:23
23 return a.i;
Value returned is $1 = {
  i = -19044
}
(gdb) p a
$2 = {
  i = 3
}
(gdb)
Notice how after the `finish` the contents of $1 are junk, but, when I
immediately ask for the value of `a`, I get back the correct value.
The problem here is that after the finish command GDB calls the
function amd64_return_value to figure out where the return value can
be found (on x86-64 targets anyway).
This function makes the wrong choice for the struct A in our case: as
sizeof(A) <= 8, amd64_return_value decides that A will be
returned in a register.  GDB then reads the return value register and
interprets the contents as an instance of A.
Unfortunately, A is not trivially copyable (due to its copy
constructor), and the System V specification for argument and return
value passing says that any non-trivial C++ object should have space
allocated for it by the caller, and the address of this space is
passed to the callee as a hidden first argument.  The callee should
then return the address of this space as the return value.
And so, the register that GDB is treating as containing an instance of
A actually contains the address of an instance of A (in this case on
the stack); this is why GDB shows the incorrect result.
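To make the hidden-pointer convention concrete, here is a rough,
hand-written equivalent of what the compiler arranges for the example
above; func_abi_sketch is an illustrative name, not anything from GDB
or the ABI documents:
  #include <cstdio>

  struct A
  {
    int i;
    A () { this->i = 0; }
    A (const A &a) { this->i = a.i; }  /* non-trivial copy constructor */
  };

  /* Because A is not trivially copyable, the caller allocates the return
     slot and passes its address as a hidden first argument; the callee
     returns that same address (in %rax).  Written out by hand, func
     behaves roughly like this:  */
  static A *
  func_abi_sketch (A *ret_slot, int i)
  {
    ret_slot->i = i;   /* fill in the caller-provided storage */
    return ret_slot;   /* %rax holds an address, not the object itself */
  }

  int
  main ()
  {
    A a;                            /* caller-provided return slot */
    A *r = func_abi_sketch (&a, 3);
    std::printf ("%d\n", r->i);     /* prints 3 */
    return 0;
  }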
The call stack within GDB for where we actually go wrong is this:
  amd64_return_value
    amd64_classify
      amd64_classify_aggregate
And it is in amd64_classify_aggregate that we should be classifying
the type as AMD64_MEMORY, instead of as AMD64_INTEGER as we currently
do (via a call to amd64_classify_aggregate_field).
At the top of amd64_classify_aggregate we already have this logic:
  if (TYPE_LENGTH (type) > 16 || amd64_has_unaligned_fields (type))
    {
      theclass[0] = theclass[1] = AMD64_MEMORY;
      return;
    }
This handles some easy cases where we know a struct will be placed
into memory, that is (a) the struct is more than 16 bytes in size,
or (b) the struct has any unaligned fields.
All we need, then, is to add a check here to see if the struct is
trivially copyable.  If it is not, then we know the struct will be
passed in memory.
I originally structured the code like this:
  if (TYPE_LENGTH (type) > 16
      || amd64_has_unaligned_fields (type)
      || !language_pass_by_reference (type).trivially_copyable)
    {
      theclass[0] = theclass[1] = AMD64_MEMORY;
      return;
    }
This solved the example from the bug, and my small example above. So
then I started adding some more extensive tests to the GDB testsuite,
and I ran into a problem. I hit this error:
gdbtypes.h:676: internal-error: loc_bitpos: Assertion `m_loc_kind == FIELD_LOC_KIND_BITPOS' failed.
This problem is triggered from:
  amd64_classify_aggregate
    amd64_has_unaligned_fields
      field::loc_bitpos
Inside the unaligned field check we try to get the bit position of
each field. Unfortunately, in some cases the field location is not
FIELD_LOC_KIND_BITPOS, but is FIELD_LOC_KIND_DWARF_BLOCK.
An example that shows this bug is:
struct B
{
  short j;
};

struct A : virtual public B
{
  short i;
  A ()
  { this->i = 0; }
  A (const A& a)
  { this->i = a.i; }
};

A
func (int i)
{
  A a;
  a.i = i;
  return a;
}

int
main ()
{
  A a = func (3);
  return a.i;
}
It is the virtual base class, B, that causes the problem. The base
class is represented, within GDB, as a field within A. However, the
location type for this field is a DWARF_BLOCK.
I spent a little time trying to figure out how to convert the
DWARF_BLOCK to a BITPOS; however, I realised that, in this case at
least, conversion is not needed.
The C++ standard says that a class is not trivially copyable if it has
any virtual base classes. And so, in this case, even if I could
figure out the BITPOS for the virtual base class fields, I know for
sure that I would immediately fail the trivially_copyable check. So,
let's just reorder the checks in amd64_classify_aggregate to:
  if (TYPE_LENGTH (type) > 16
      || !language_pass_by_reference (type).trivially_copyable
      || amd64_has_unaligned_fields (type))
    {
      theclass[0] = theclass[1] = AMD64_MEMORY;
      return;
    }
Now, if we have a class with virtual bases we will fail more quickly,
and avoid the unaligned fields check completely.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28681
|
|
While working on another patch relating to how GDB manages threads
executing and resumed state, I spotted the following code in
record-btrace.c:
  executing = tp->executing ();
  set_executing (proc_target, inferior_ptid, false);

  id = null_frame_id;

  try
    {
      id = get_frame_id (get_current_frame ());
    }
  catch (const gdb_exception &except)
    {
      /* Restore the previous execution state.  */
      set_executing (proc_target, inferior_ptid, executing);

      throw;
    }

  /* Restore the previous execution state.  */
  set_executing (proc_target, inferior_ptid, executing);

  return id;
I notice that we only catch the exception so we can call
set_executing, and this is the same call to set_executing that we need
to perform in the non-exception return path.
This would be much cleaner if we could use SCOPE_EXIT to avoid the
try/catch, so let's do that.
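A minimal sketch of the shape the code takes with SCOPE_EXIT; the
scope_exit class below is only a stand-in for GDB's real SCOPE_EXIT
macro (gdbsupport/scope-exit.h), and the other names are placeholders:
  #include <functional>
  #include <stdexcept>
  #include <utility>

  /* Minimal stand-in for SCOPE_EXIT: run a callable when the enclosing
     scope is left, whether by a normal return or by an exception.  */
  class scope_exit
  {
  public:
    explicit scope_exit (std::function<void ()> fn) : m_fn (std::move (fn)) {}
    ~scope_exit () { m_fn (); }

  private:
    std::function<void ()> m_fn;
  };

  /* Placeholder for the thread's "executing" flag.  */
  static bool executing_flag = true;

  /* Placeholder for get_frame_id (get_current_frame ()); may throw.  */
  static int
  compute_frame_id ()
  {
    throw std::runtime_error ("no frame");
  }

  /* The shape of the code after the change: no try/catch, the scope-exit
     object restores the previous state on every exit path, including
     when compute_frame_id throws.  */
  static int
  get_id_sketch ()
  {
    bool was_executing = executing_flag;
    executing_flag = false;
    scope_exit restore ([was_executing] () { executing_flag = was_executing; });
    return compute_frame_id ();
  }

  int
  main ()
  {
    try
      {
        get_id_sketch ();
      }
    catch (const std::runtime_error &)
      {
        /* executing_flag was restored by the scope-exit object.  */
      }
    return executing_flag ? 0 : 1;
  }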
While cleaning this up, I also applied a similar patch to
record-full.c; there's no try/catch in that case, but using
SCOPE_EXIT makes the code safe if, in the future, we do start throwing
exceptions.
There should be no user visible changes after this commit.
|
|
I noticed that the mi-async setting was not referenced from the index
in any way; this commit tries to rectify that a bit.
The @cindex lines, I think, are not controversial; these same index
entries are used elsewhere in the manual for async related topics (see
@node Background Execution).
The only bit that might be controversial is that I've added a @kindex
entry for 'set mi-async' when the command is documented as '-gdb-set
mi-async' (with a similar difference for the show/-gdb-show).
My reasoning here is that nothing else is indexed under -gdb-set or
-gdb-show, and, as -gdb-set/-gdb-show are just the MI equivalents of
set/show, anything that is documented under set/show can be adjusted
using -gdb-set/-gdb-show, and so I've tried to keep the index
consistent for mi-async.
|
|
Convert the 'set debug lin-lwp' command to a boolean. Adds a new
LINUX_NAT_SCOPED_DEBUG_ENTER_EXIT macro, and makes use of it in one
place (linux_nat_target::stop).
The manual entry for 'set debug lin-lwp' is already vague about
exactly what arguments this command takes, and the description talks
about turning debug on and off, so I don't think there are any updates
required there.
I have updated the doc strings shown when the user enters 'help set
debug lin-lwp' or 'help show debug lin-lwp'.  The old title lines used
to talk about the 'GNU/Linux lwp module', but this debug flag is now
used for any native Linux target debug, so we now talk about
'GNU/Linux native target'. The body string for this setting has been
changed from 'Enables printf debugging output.' to 'When on, print
debug messages relating to the GNU/Linux native target.', the old
value looks like a cut&paste error to me.
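For illustration, a self-contained stand-in for the kind of scoped
enter/exit debug helper described here; none of these names are GDB's
real declarations, and the real macro builds on GDB's debug-printing
infrastructure rather than fprintf:
  #include <cstdio>

  /* Generic sketch of a scoped "enter/exit" debug printer.  */
  class scoped_debug_enter_exit
  {
  public:
    scoped_debug_enter_exit (bool enabled, const char *func)
      : m_enabled (enabled), m_func (func)
    {
      if (m_enabled)
        std::fprintf (stderr, "[linux-nat] enter %s\n", m_func);
    }

    ~scoped_debug_enter_exit ()
    {
      if (m_enabled)
        std::fprintf (stderr, "[linux-nat] exit %s\n", m_func);
    }

  private:
    bool m_enabled;
    const char *m_func;
  };

  /* Stand-in for the boolean debug setting.  */
  static bool debug_linux_nat_sketch = true;

  static void
  stop_sketch ()
  {
    scoped_debug_enter_exit dbg (debug_linux_nat_sketch, __func__);
    /* ... the body of the function being traced ...  */
  }

  int
  main ()
  {
    stop_sketch ();
    return 0;
  }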
|
|
Add new commands:
  set debug threads on|off
  show debug threads
Prints additional debug information relating to thread creation and
deletion.
GDB already announces when threads are created, of course... most of
the time.  But sometimes threads are added silently, in which case this
debug message is the only mechanism to see the thread being added.
Also, though GDB does announce when a thread exits, it doesn't
announce when the thread object is deleted; I've added a debug message
for that.
Additionally, having messages printed through the debug system will
cause the messages to be nested to an appropriate depth when other
debug sub-systems are turned on (especially things like `infrun` and
`lin-lwp`).
|
|
During review, it was suggested to change the "params" parameter from a
tuple to a list, for esthetic reasons.  The empty ones are still tuples,
though; they should probably be changed to empty lists, for
consistency.  It does not change anything in the script result.
Change-Id: If13c6c527aa167a5ee5b45740e5f1bda1e9517e4
|
|
Fix misspelling of PROT_ME to PROT_MTE in the error messages.
|
|
This removes the print_spaces helper function, in favor of using the
"%*s" idiom that's already used in many places in gdb.  One spot (in
symmisc.c) is changed to use print_spaces_filtered, because the rest
of that function is using filtered output. (This highlights one way
that the printf idiom is better -- this error is harder to make when
using that.)
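For reference, a tiny standalone example of the "%*s" idiom: the '*'
takes the field width from an int argument, so printing an empty string
in that field emits exactly that many spaces:
  #include <cstdio>

  int
  main ()
  {
    int indent = 6;

    /* Prints six spaces, then the text.  */
    std::printf ("%*s%s\n", indent, "", "indented by six spaces");
    return 0;
  }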
Regression tested on x86-64 Fedora 34.
|
|
I noticed that puts_debug isn't used in the tree. git log tells me
that the last use was removed in 2015:
commit 40e0b27177e747600d3ec186458fe0e482a1cf77
Author: Pedro Alves <palves@redhat.com>
Date: Mon Aug 24 15:40:26 2015 +0100
Delete the remaining ROM monitor targets
... and this commit mentions that the code being removed here probably
hadn't worked for 6 years prior to that.
Based on this, I'm removing puts_debug. I don't think it's useful.
Tested by rebuilding.
|
|
n_spaces keeps the spaces in a static buffer. If a caller overwrites
these, it may give an incorrect result to a subsequent caller. So,
make the return type const to help avoid this outcome.
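A small self-contained sketch of the hazard (this stand-in only mimics
the static buffer; it is not the real n_spaces from utils.c):
  #include <cstdio>
  #include <cstring>

  /* Stand-in that mimics n_spaces's static buffer.  */
  static const char *
  n_spaces_sketch (int n)
  {
    static char buf[64];
    if (n < 0)
      n = 0;
    if (n > 63)
      n = 63;
    std::memset (buf, ' ', n);
    buf[n] = '\0';
    return buf;
  }

  int
  main ()
  {
    const char *pad = n_spaces_sketch (4);
    /* With a non-const return type a caller could legally write through
       'pad' (e.g. strcpy (pad, "oops")), corrupting what every later
       caller sees.  With "const char *" that becomes a compile error
       unless the caller deliberately casts the const away.  */
    std::printf ("[%s]\n", pad);
    return 0;
  }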
|
|
|
|
This commit reformats a comment in gdb/ada-exp.y to avoid
the leading '*' at the beginning of each line of the comment.
|
|
This commit reformats a comment in gdb/ada-lang.h to avoid
the leading '*' at the beginning of each line of the comment.
|
|
This commit adds the "exit" command as an alias for the "quit"
command, as requested in PR gdb/28406.
The documentation is also updated to mention this new command.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28406
|
|
While working on another patch I ended up in a situation where I had
async mode disabled (with 'maint set target-async off'), but the async
event token got marked anyway.
In this situation GDB was continually calling into
remote_target::wait; however, the async token would never become
unmarked, as the unmarking is guarded by target_is_async_p.
We could just unconditionally unmark the token, but that would feel
like just ignoring a bug, so, instead, let's assert that if
!target_is_async_p, then the async token should not be marked.
This assertion would have caught my earlier mistake.
There should be no user visible changes with this commit.
|
|
While working on another patch relating to remote targets, I wanted to
test with 'maint set target-async off' in place. Unfortunately I ran
into some problems. This commit is an attempt to fix one of the
issues I hit.
In my particular case I was actually running with:
maint set target-async off
maint set target-non-stop off
that is, we're telling GDB to force the targets to operate in
non-async mode, and in all-stop mode. Here's my GDB session showing
the problem:
(gdb) maintenance set target-async off
(gdb) maintenance set target-non-stop off
(gdb) target extended-remote :54321
Remote debugging using :54321
(gdb) attach 2365960
Attaching to process 2365960
No unwaited-for children left.
(gdb)
Notice the 'No unwaited-for children left.' error; this is the
problem.  There's no reason why GDB should not be able to attach to
the process.
The problem is this:
1. The user runs 'attach PID' and this sends GDB into attach_command
in infcmd.c. From here we call the ::attach method on the attach
target, which will be the extended_remote_target.
2. In extended_remote_target::attach, we attach to the remote target
and get the first reply (which is a stop packet). We put off
processing the stop packet until the end of ::attach.  We set up the
inferior and thread to represent the process we attached to, and
download the target description. Finally, we process the initial
stop packet.
If '!target_is_non_stop_p ()' and '!target_can_async_p ()', which is
the case for us given the maintenance commands we used, we cache the
stop packet within the remote_state::buf for later processing.
3. Back in attach_command, if 'target_is_non_stop_p ()' then we
request that the target stops. This will either process any cached
stop replies, or request that the target stops, and process the stop
replies. However, this code is not what we use due to non-stop mode
being disabled. So, we skip to the next step which is to call
validate_exec_file.
4. Calling validate_exec_file can cause packets to be sent to the
remote target, and replies received, the first path I hit is the
call to target_pid_to_exec_file, which calls
remote_target::pid_to_exec_file, which can then try to read the
executable from the remote.  Sending and receiving packets will make
use of the remote_state::buf object.
5. The attempt to attach continues, but the damage is already done...
So, the problem is that, in step #2 we cache a stop reply in the
remote_state::buf, and then in step #4 we reuse the remote_state::buf
object, discarding any cached stop reply. As a result, the initial
stop, which is sent when GDB first attaches to the target, is lost.
This problem can clearly be seen, I feel, by looking at the
remote_state::cached_wait_status flag. This flag tells GDB if there
is a wait status cached in remote_state::buf. However, in
remote_target::putpkt_binary and remote_target::getpkt_or_notif_sane_1
this flag is just set back to 0; doing this immediately discards any
cached data.
I don't know if this scheme ever made sense.  Looking at commit
2d717e4f8a54, where the cached_wait_status flag was added, it appears
that there was nothing between where the stop was cached and where
the stop was consumed, so, I suspect, there never was a situation
where we ended up in putpkt_binary or getpkt_or_notif_sane_1 and
needed to clear the flag; maybe the clearing was added "just in
case".  Whatever the history, I claim that clearing this flag is
no longer a good idea.
So, my first step toward fixing this issue was to replace the two
instances of 'rs->cached_wait_status = 0;' in ::putpkt_binary and
::getpkt_or_notif_sane_1 with 'gdb_assert (rs->cached_wait_status ==
0);'.  This, at least, would show me when GDB was doing something
dangerous, and indeed, this assert is now hit in my test case above.
I did play with using some kind of scoped restore to back up and
restore the remote_state::buf object in all the places within remote.c
that I was hitting where the ::buf was being corrupted. The first
problem with this is that, where the ::cached_wait_status flag is
reset is _not_ where ::buf is corrupted. For the ::putpkt_binary
case, by the time we get to the method the buffer has already been
corrupted in many cases, so we end up needing to add the scoped
save/restore within the callers, which means we need the save/restore
in _lots_ of places.
Plus, using this save/restore model feels like the wrong solution. I
don't think that it's obvious that the buffer might be holding cached
data, and I think it would be too easy for new corruptions of the
buffer to be introduced, which could easily go unnoticed for a long
time.
So, I really wanted a solution that didn't require us to cache data in
the ::buf object.
Luckily, I think we already have such a solution in place: the
remote_state::stop_reply_queue; it seems like this does exactly the
same task, just in a slightly different way.  With the
::stop_reply_queue, the stop packets are processed upon receipt and
the stop_reply object is added to the queue. With the ::buf cache
solution, the unprocessed stop reply is cached in the ::buf, and
processed later.
So, finally, in this commit, I propose to remove the
remote_state::cached_wait_status flag and to stop using the ::buf to
cache stop replies. Instead, stop replies will now always be stored
in the ::stop_reply_queue.
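As a very rough sketch of the queue-based idea (the types below are
illustrative only, not remote.c's real stop_reply or remote_state):
stop packets are parsed as soon as they arrive and queued, so the
shared packet buffer is free to be reused without losing the stop:
  #include <deque>
  #include <memory>
  #include <string>
  #include <utility>

  /* A stop reply parsed out of a 'T'/'S' packet the moment it arrives.  */
  struct stop_reply_sketch
  {
    explicit stop_reply_sketch (std::string r) : raw (std::move (r)) {}
    std::string raw;
  };

  struct remote_state_sketch
  {
    std::deque<std::unique_ptr<stop_reply_sketch>> stop_reply_queue;
    std::string buf;   /* shared packet buffer, free to be reused */
  };

  /* Parse now and queue the result; rs.buf can then be reused for other
     packet traffic without losing the stop.  */
  static void
  push_stop_reply_sketch (remote_state_sketch &rs, std::string packet)
  {
    rs.stop_reply_queue.push_back (
        std::unique_ptr<stop_reply_sketch> (
          new stop_reply_sketch (std::move (packet))));
  }

  /* Consumed later by the wait path, in arrival order.  */
  static std::unique_ptr<stop_reply_sketch>
  pop_stop_reply_sketch (remote_state_sketch &rs)
  {
    if (rs.stop_reply_queue.empty ())
      return nullptr;
    std::unique_ptr<stop_reply_sketch> reply
      = std::move (rs.stop_reply_queue.front ());
    rs.stop_reply_queue.pop_front ();
    return reply;
  }

  int
  main ()
  {
    remote_state_sketch rs;
    push_stop_reply_sketch (rs, "T05thread:p1.1;");  /* initial attach stop */
    rs.buf = "some other packet";                    /* buffer safely reused */
    return pop_stop_reply_sketch (rs) != nullptr ? 0 : 1;
  }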
There are two places where we use the ::buf to hold a cached stop
reply: the first is in the ::attach method, and the second is in
remote_target::start_remote.  However, the second of these cases is far
less problematic, as after caching the stop reply in ::buf we call the
global start_remote function, which does very little work before
calling normal_stop, which processes the cached stop reply.  However,
my plan is to switch both users over to using ::stop_reply_queue so
that the old (unsafe) ::cached_wait_status mechanism can be completely
removed.
The next problem is that the ::stop_reply_queue is currently only used
for async-mode, and so, in remote_target::push_stop_reply, where we
push stop_reply objects into the ::stop_reply_queue, we currently also
mark the async event token. I've modified this so we only mark the
async event token if 'target_is_async_p ()' - note, _is_, not _can_
here. The ::push_stop_reply method is called in places where async
mode has been temporarily disabled, but, when async mode is switched
back on (see remote_target::async) we will mark the event token if
there are events in the queue.
Another change of interest is in remote_target::remote_interrupt_as.
Previously this code checked ::cached_wait_status, but didn't check
for events in the ::stop_reply_queue. Now that ::cached_wait_status
has been removed we now check the queue length instead, which should
have the same result.
Finally, in remote_target::wait_as, I've tried to merge the processing
of the ::stop_reply_queue with how we used to handle the
::cached_wait_status flag.
Currently, when processing the ::stop_reply_queue we call
process_stop_reply and immediately return. However, when handling
::cached_wait_status we run through the whole of ::wait_as, and return
at the end of the function.
If we consider a standard stop packet, the two differences I see are:
1. Resetting of the remote_state::waiting_for_stop_reply flag; this
is not currently done when processing a stop from the
::stop_reply_queue.
2. The final return value has the possibility of being adjusted at
the end of ::wait_as, as well as there being calls to
record_currthread, none of which are done if we process a stop from
the ::stop_reply_queue.
After discussion on the mailing list:
https://sourceware.org/pipermail/gdb-patches/2021-December/184535.html
it was suggested that, when an event is pushed into the
::stop_reply_queue, the ::waiting_for_stop_reply flag is never going
to be set. As a result, we don't need to worry about the first
difference.  I have, however, added a gdb_assert to validate the
assumption that the flag is never going to be set. If in future the
situation ever changes, then we should find out pretty quickly.
As for the second difference, I have resolved this by having all stop
packets taken from the ::stop_reply_queue, pass through the return
value adjustment code at the end of ::wait_as.
An example of a test that reveals the benefits of this commit is:
  make check-gdb \
    RUNTESTFLAGS="--target_board=native-extended-gdbserver \
                  GDBFLAGS='-ex maint\ set\ target-async\ off \
                            -ex maint\ set\ target-non-stop\ off' \
                  gdb.base/attach.exp"
For testing I've been running tests on x86-64 GNU/Linux, with
target boards unix, native-gdbserver, and native-extended-gdbserver.
For each board I've run with the default GDBFLAGS, as well as with:
  GDBFLAGS='-ex maint\ set\ target-async\ off \
            -ex maint\ set\ target-non-stop\ off'
Though running with the above GDBFLAGS is clearly a lot more unstable
both before and after my patch, I'm not seeing any consistent new
failures with my patch, except, with the native-extended-gdbserver
board, where I am seeing new failures, but only because more tests are
now running.  For that configuration alone I see the number of
unresolved tests go down by 49, the number of passes go up by 446, and
the number of failures increase by 144.  All of the failures are new
tests as far as I can tell.
|
|
This adds a comment to document how to update gdbarch.
|
|
This patch runs gdbarch.py and removes gdbarch.sh.
|
|
The new gdbarch generator is a Python program. It reads the
"components.py" that was created in the previous patch, and generates
gdbarch.c and gdbarch-gen.h.
This is a relatively straightforward translation of the existing .sh
code. It doesn't try very hard to be idiomatic Python or to be
especially smart.
It is, however, considerably faster:
$ time ./gdbarch.sh
real 0m8.197s
user 0m5.779s
sys 0m3.384s
$ time ./gdbarch.py
real 0m0.065s
user 0m0.053s
sys 0m0.011s
Co-Authored-By: Tom Tromey <tom@tromey.com>
|
|
The new gdbarch.sh approach will be to edit a Python file, rather than
adding a line to a certain part of gdbarch.sh. We use the existing sh
code, though, to generate the first draft of this .py file.
Documentation on the format will come in a subsequent patch.
Note that some info (like "staticdefault") in the current code is
actually unused, and so is ignored by this new generator.
|
|
This changes gdbarch.sh so that it no longer sorts the fields in
gdbarch_dump. This sorting isn't done anywhere else by gdbarch.sh,
and this simplifies the new generator a little bit.
|
|
Now that gdbarch.h has been split, we no longer need the generator
code in gdbarch.sh, so remove it.
|
|
This patch splits gdbarch.h into two files -- gdbarch.h now is
editable and hand-maintained, and the new gdbarch-gen.h file is the
only thing generated by gdbarch.sh. This lets us avoid maintaining
boilerplate in the gdbarch.sh file.
Note that gdbarch.sh still generates gdbarch.h after this patch. This
makes it easier to re-run when rebasing. This code is removed in a
subsequent patch.
|
|
While I think it makes sense to generate gdbarch.c, at the same time I
think it is better for ordinary code to be editable in a C file -- not
as a hunk of C code embedded in the generator.
This patch moves this sort of code out of gdbarch.sh and gdbarch.c and
into arch-utils.c, then has arch-utils.c include gdbarch.c.
|
|
Move inner dimension's element type determination outside the respective
loops in `fortran_array_walker'. The operation is exactly the same with
each iteration, so there is no point in redoing it for each element,
and while a smart compiler might be able to move it outside the loop,
it is bad coding style regardless.  No functional change.
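A tiny standalone illustration of the kind of hoisting being done
(illustrative types, not the real Fortran array walker code):
  #include <cstddef>
  #include <vector>

  struct dim_info { int elem_size; };

  static int
  sum_sizes (const std::vector<std::vector<dim_info>> &dims)
  {
    int total = 0;
    for (const std::vector<dim_info> &inner : dims)
      {
        /* Hoisted out of the inner loop: the same value for every
           element of this dimension.  */
        const int elem_size = inner.empty () ? 0 : inner.front ().elem_size;

        for (std::size_t i = 0; i < inner.size (); ++i)
          total += elem_size;
      }
    return total;
  }

  int
  main ()
  {
    std::vector<std::vector<dim_info>> dims = { { {4}, {4} }, { {8} } };
    return sum_sizes (dims) == 16 ? 0 : 1;
  }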
|
|
Following our coding convention, initialize the `m_ndimensions' member in
the member initializer list rather than in the body of the constructor
of the `fortran_array_walker' class. No functional change.
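A minimal illustration of the convention, using placeholder classes
rather than the real fortran_array_walker:
  /* Initialize members in the member initializer list, not by assigning
     in the constructor body.  */
  struct type_info_sketch
  {
    int num_dimensions () const { return 3; }
  };

  class array_walker_sketch
  {
  public:
    explicit array_walker_sketch (const type_info_sketch &type)
      : m_ndimensions (type.num_dimensions ())  /* initialized, not assigned */
    {
    }

    int ndimensions () const { return m_ndimensions; }

  private:
    int m_ndimensions;
  };

  int
  main ()
  {
    type_info_sketch t;
    array_walker_sketch w (t);
    return w.ndimensions () == 3 ? 0 : 1;
  }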
|
|
PR26056 reports that when GDB is connected to non-TTY stdin/stdout, it
crashes when it receives a SIGWINCH signal.
This can be reproduced as follows:
$ gdb/gdb -nx -batch -ex 'run' --args sleep 60 </dev/null 2>&1 | cat
# from another terminal:
$ kill -WINCH $(pidof gdb)
When doing so, the process crashes in a call to rl_resize_terminal:
void
rl_resize_terminal (void)
{
  _rl_get_screen_size (fileno (rl_instream), 1);
  ...
}
The problem is that at this point rl_instream has the value NULL.
The rl_instream variable is supposed to be initialized during a call to
readline_initialize_everything, which in a normal startup sequence is
called under this call chain:
  tui_interp::init
    tui_ensure_readline_initialized
      rl_initialize
        readline_initialize_everything
In tui_interp::init, we have the following sequence:
  tui_initialize_io ();
  tui_initialize_win ();                  // <- Installs SIGWINCH
  if (gdb_stdout->isatty ())
    tui_ensure_readline_initialized ();   // <- Initializes rl_instream
This function unconditionally installs the SIGWINCH signal handler (this
is done by tui_initialize_win), and then if gdb_stdout is a TTY it
initializes readline. Therefore, if stdout is not a TTY, SIGWINCH is
installed but readline is not initialized.  In such a situation
rl_instream stays NULL, and when GDB receives a SIGWINCH it calls its
handler, which in the end tries to access rl_instream, leading to the
crash.
This patch proposes to fix this issue by installing the SIGWINCH signal
handler only if GDB is connected to a TTY.  Given that this
initialization is the only task of tui_initialize_win, this patch moves
tui_initialize_win just after the call to
tui_ensure_readline_initialized.
Tested on x86_64-linux.
Co-authored-by: Pedro Alves <pedro@palves.net>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=26056
Change-Id: I6458acef7b0d9beda2a10715d0345f02361076d9
|
|
Run black 21.12b0 on gdb/, there is a single whitespace change. I will
update the wiki [1] in parallel to bump the version of black to 21.12b0.
[1] https://sourceware.org/gdb/wiki/Internals%20GDB-Python-Coding-Standards
Change-Id: Ib3b859e3506c74a4f15d16f1e44ef402de3b98e2
|
|
Run black 21.9b0 on gdb/ (this is the version currently mentioned on the
wiki [1], the subsequent commit will bump that version).
[1] https://sourceware.org/gdb/wiki/Internals%20GDB-Python-Coding-Standards
Change-Id: I5ceaab42c42428e053e2572df172aa42a88f0f86
|
|
GDB/GDBserver
Add the --enable-threading configure option so multithreading can be disabled
at configure time. This is useful for statically-linked builds of
GDB/GDBserver, since the thread library doesn't play well with that setup.
If you try to run a statically-linked GDB built with threading, it will crash
when setting up the number of worker threads.
This new option is also convenient when debugging GDB in a system with lots of
threads, where the thread discovery code in GDB will emit too many messages,
like so:
[New Thread 0xfffff74d3a50 (LWP 2625599)]
If you have X threads, that message will be repeated X times.
The default for --enable-threading is "yes".
|
|
Just give the function build_table a more descriptive name. There
should be no user visible changes after this commit.
|
|
Just give this class a new name, more in line with the names of the
sub-classes.  I've also updated mi_cmd_up to mi_command_up in
mi-cmds.c, in line with this new naming scheme.
There should be no user visible changes after this commit.
|
|
This commit changes the infrastructure in mi-cmds.{c,h} to add new
sub-classes for the different types of MI command. Instances of these
sub-classes are then created and added into mi_cmd_table.
The existing mi_cmd class becomes the abstract base class; this has an
invoke method and takes care of the suppress-notifications handling
before calling a do_invoke virtual method, which is implemented by all
of the sub-classes.
There are currently two different sub-classes, one for pure MI commands,
and a second for MI commands that delegate to CLI commands.
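A much-simplified sketch of the invoke/do_invoke arrangement (names and
members here are illustrative, not mi-cmds.h's exact declarations):
  #include <cstdio>
  #include <string>

  struct mi_command_sketch
  {
    virtual ~mi_command_sketch () = default;

    /* Common entry point: shared handling (a stand-in for the
       suppress-notifications logic), then dispatch to the subclass.  */
    void invoke (const std::string &args)
    {
      /* ... common set-up, e.g. suppress-notifications handling ...  */
      this->do_invoke (args);
    }

  protected:
    virtual void do_invoke (const std::string &args) = 0;
  };

  /* A "pure" MI command.  */
  struct mi_command_mi_sketch : public mi_command_sketch
  {
  protected:
    void do_invoke (const std::string &args) override
    { std::printf ("pure MI command, args: %s\n", args.c_str ()); }
  };

  /* An MI command implemented by delegating to a CLI command.  */
  struct mi_command_cli_sketch : public mi_command_sketch
  {
  protected:
    void do_invoke (const std::string &args) override
    { std::printf ("delegate to CLI, args: %s\n", args.c_str ()); }
  };

  int
  main ()
  {
    mi_command_mi_sketch pure;
    mi_command_cli_sketch cli;
    pure.invoke ("--thread 1");
    cli.invoke ("break main");
    return 0;
  }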
There should be no user visible changes after this commit.
|
|
Change an argument of mi_execute_cli_command from int to bool. Update
the callers to take this into account. Within mi_execute_cli_command,
replace a comparison of a pointer against 0 with a comparison against
nullptr, and add an assert: if we are not using the argument string
then the string should be nullptr.  Also removed a cryptic 'gdb_????'
comment, which isn't really helpful.
There should be no user visible changes after this commit.
|
|
This changes the hashmap used in mi-cmds.c from a custom structure to
std::map. Not only is replacing a custom container with a standard
one an improvement, but using std::map will make it easier to
dynamically add commands; which is something that is planned for a
later series, where we will allow MI commands to be implemented in
Python.
There should be no user visible changes after this commit.
|
|
Let's give this function a more descriptive name.  I've also improved
the comments in the header and source files.
There should be no user visible changes after this commit.
|
|
PowerPC is not reporting the
Catchpoint 1 (returned from syscall execve), ....
as expected. The issue appears to be with the kernel not returning the
expected result. This patch marks the test failure as an xfail.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28623
|
|
While working on a Python script, which was interacting with a remote
target, I noticed some weird slowness in GDB. In my program I had a
structure something like this:
struct foo_t
{
  int array[5];
};

struct foo_t global_foo;
Then in the Python script I was fetching a complete copy of
global_foo, like:
val = gdb.parse_and_eval('global_foo')
val.fetch_lazy()
Then I would work with items in foo_t.array, like:
print(val['array'][1])
I called the fetch_lazy method specifically because I knew I was going
to end up accessing almost all of the contents of val, and so I wanted
GDB to do a single remote protocol call to fetch all the contents in
one go, rather than trying to do lazy fetches for a couple of bytes at
a time.
What I observed was that, after the fetch_lazy call, GDB does,
correctly, fetch the entire contents of global_foo, including all of
the contents of array, however, when I access val.array[1], GDB still
goes and fetches the value of this element from the remote target.
What's going on is that in valarith.c, in value_subscript, for C-like
languages, we always end up treating the array value as a pointer, and
then doing value_ptradd and value_ind; the second of these calls
always returns a lazy value.
My guess is that this approach allows us to handle indexing off the
end of an array, when working with zero-element arrays, or when
indexing a raw pointer as an array.  And I agree that in these
cases, where, even when the original value is non-lazy, we still will
not have the content of the array loaded, we should be using the
value_ind approach.
However, for cases where we do have the array contents loaded, and we
do know the bounds of the array, I think we should be using
value_subscripted_rvalue, which is what we use for non C like
languages.
One problem I did run into, exposed by gdb.base/charset.exp, was that
value_subscripted_rvalue stripped typedefs from the element type of
the array, which means the value returned will not have the same type
as an element of the array, but would be the raw, non-typedefed,
type. In charset.exp we got back an 'int' instead of a
'wchar_t' (which is a typedef of 'int'), and this impacts how we print
the value. Removing typedefs from the resulting value just seems
wrong, so I got rid of that, and I don't see any test regressions.
With this change in place, my original Python script is now doing no
additional memory accesses, and its performance increases by about 10x!
|
|
This commit updates uses of 'loc' and 'loc_kind' to 'm_loc' and
'm_loc_kind' respectively, in gdb-gdb.py.in, which is required after
this commit:
commit cd3f655cc7a55437a05aa8e7b1fcc9051b5fe404
Date: Thu Sep 30 22:38:29 2021 -0400
gdb: add accessors for field (and call site) location
I have also incorporated this change:
https://sourceware.org/pipermail/gdb-patches/2021-September/182171.html
which means we print 'm_name' instead of 'name' when displaying the
'm_name' member variable.
Finally, I have also added support for the new TYPE_SPECIFIC_INT
fields, which were added with this commit:
commit 20a5fcbd5b28cca88511ac5a9ad5e54251e8fa6d
Date: Wed Sep 23 09:39:24 2020 -0600
Handle bit offset and bit size in base types
I updated the gdb.gdb/python-helper.exp test to cover all of these
changes.
|
|
While working on a later patch that required me to understand how GDB
starts up inferiors, I was confused by the
target_ops::post_startup_inferior method.
The post_startup_inferior target function is only called from
inf_ptrace_target::create_inferior.
Part of the target class hierarchy looks like this:
  inf_child_target
    |
    '-- inf_ptrace_target
          |
          |-- linux_nat_target
          |
          |-- fbsd_nat_target
          |
          |-- nbsd_nat_target
          |
          |-- obsd_nat_target
          |
          '-- rs6000_nat_target
Every sub-class of inf_ptrace_target, except rs6000_nat_target,
implements ::post_startup_inferior. The rs6000_nat_target picks up
the implementation of ::post_startup_inferior not from
inf_ptrace_target, but from inf_child_target.
No descendent of inf_child_target, outside the inf_ptrace_target
sub-tree, implements ::post_startup_inferior, which isn't really
surprising, as they would never see the method called (remember, the
method is only called from inf_ptrace_target::create_inferior).
What I find confusing is the role inf_child_target plays in
implementing what is really a helper function for just one of its
descendents.
In this commit I propose that we formally make ::post_startup_inferior
a helper function of inf_ptrace_target. To do this I will remove the
::post_startup_inferior from the target_ops API, and instead make this
a protected, pure virtual function on inf_ptrace_target.
I'll remove the empty implementation of ::post_startup_inferior from
the inf_child_target class, and add a new empty implementation to the
rs6000_nat_target class.
All the other descendents of inf_ptrace_target already provide an
implementation of this method and so don't need to change beyond
making the method protected within their class declarations.
To me, this makes much more sense now. The helper function, which is
only called from within the inf_ptrace_target class, is now a part of
the inf_ptrace_target class.
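A much-simplified sketch of the resulting arrangement, with placeholder
classes standing in for the real target hierarchy:
  #include <cstdio>

  class ptrace_target_sketch
  {
  public:
    virtual ~ptrace_target_sketch () = default;

    void create_inferior ()
    {
      /* ... fork/exec the new inferior ...  */
      this->post_startup_inferior ();   /* the only call site */
    }

  protected:
    /* The helper is now a protected pure virtual on the class that calls
       it, not part of the public API of the whole hierarchy.  */
    virtual void post_startup_inferior () = 0;
  };

  class linux_target_sketch : public ptrace_target_sketch
  {
  protected:
    void post_startup_inferior () override
    { std::printf ("linux sketch: enable ptrace options\n"); }
  };

  class rs6000_target_sketch : public ptrace_target_sketch
  {
  protected:
    /* Gets its own empty implementation instead of inheriting one from
       higher up the tree.  */
    void post_startup_inferior () override {}
  };

  int
  main ()
  {
    linux_target_sketch native_linux;
    rs6000_target_sketch native_aix;
    native_linux.create_inferior ();
    native_aix.create_inferior ();
    return 0;
  }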
The only way in which this change is visible to a user is if the user
turns on 'set debug target 1'. With this debug flag on, prior to this
patch the user would see something like:
-> native->post_startup_inferior (...)
<- native->post_startup_inferior (2588939)
After this patch these lines are no longer present, as the
post_startup_inferior is no longer a top level target method. For me,
this is an acceptable change.
|
|
While working on another patch I had reason to look at
mips-netbsd-nat.c, and noticed that the class mips_nbsd_nat_target
inherits directly from inf_ptrace_target.
This is a little strange as alpha_bsd_nat_target,
arm_netbsd_nat_target, hppa_nbsd_nat_target, m68k_bsd_nat_target,
ppc_nbsd_nat_target, sh_nbsd_nat_target, and vax_bsd_nat_target all
inherit from nbsd_nat_target.
Originally, in this commit:
commit f6ac5f3d63e03a81c4ff3749aba234961cc9090e
Date: Thu May 3 00:37:22 2018 +0100
Convert struct target_ops to C++
When the target tree was converted to C++, all of the above classes
inherited from inf_ptrace_target except for hppa_nbsd_nat_target,
which was the only class that originally inherited from
nbsd_nat_target.
Later on all the remaining targets (except mips) were converted to
inherit from nbsd_nat_target; these are the commits:
commit 4fed520be264b60893aa674071947890f8172915
Date: Sat Mar 14 16:05:24 2020 +0100
Inherit alpha_netbsd_nat_target from nbsd_nat_target
commit 6018d381a00515933016c539d2fdc18ad0d304b8
Date: Sat Mar 14 14:50:51 2020 +0100
Inherit arm_netbsd_nat_target from nbsd_nat_target
commit 01a801176ea15ddfc988cade2e3d84c3b0abfec3
Date: Sat Mar 14 16:54:42 2020 +0100
Inherit m68k_bsd_nat_target from nbsd_nat_target
commit 9faa006d11a5e08264a007463435f84b77864c9c
Date: Thu Mar 19 14:52:57 2020 +0100
Inherit ppc_nbsd_nat_target from nbsd_nat_target
commit 9809762324491b851332ce600ae9bde8dd34f601
Date: Tue Mar 17 15:07:39 2020 +0100
Inherit sh_nbsd_nat_target from nbsd_nat_target
commit d5be5fa4207da00d039a1d5a040ba316e7092cbd
Date: Sat Mar 14 13:21:58 2020 +0100
Inherit vax_bsd_nat_target from nbsd_nat_target
I could only find mailing list threads for ppc and sh in the archive,
and unfortunately none of the commits has any real detail that might
explain why mips was missed out; the only extra context I could find
was this message:
https://sourceware.org/pipermail/gdb-patches/2020-March/166853.html
Which says that "proper" OS support is going to be added to
nbsd_nat_target, hence the need to inherit from that class.
My guess is that leaving mips_nbsd_nat_target unchanged was an
oversight, so, in this commit, I propose changing mips_nbsd_nat_target
to inherit from nbsd_nat_target just like all the other nbsd targets.
My motivation for this patch relates to the post_startup_inferior
target method. In a future commit I want to change how this method is
handled. Currently the mips_nbsd_nat_target will pick up the empty
implementation of inf_child_target::post_startup_inferior rather than
the version in netbsd-nat.c. This feels like a bug to me, as surely,
enabling of proc events is something that would need to be done for
all netbsd targets, regardless of architecture.
In my future patch I have a choice then, either (a) add a new, empty
implementation of post_startup_inferior to mips_nbsd_nat_target,
or (b) this commit, have mips_nbsd_nat_target inherit from
nbsd_nat_target. Option (b) seems like the right way to go, hence,
this commit.
I've done absolutely no testing for this change, not even building it,
as that would require at least an environment in which I can x-build
mips-netbsd applications, which I have no idea how to set up.
|
|
While testing another patch I was trying to build different
configurations of GDB, and during one test build I ran into a
problem.  I configured with `--enable-targets=all
--host=i686-w64-mingw32` and saw this error while linking GDB:
.../i686-w64-mingw32/bin/ld: mips-tdep.o: in function `mips_gdbarch_init':
.../src/gdb/mips-tdep.c:8730: undefined reference to `disassembler_options_mips'
.../i686-w64-mingw32/bin/ld: riscv-tdep.o: in function `riscv_gdbarch_init':
.../src/gdb/riscv-tdep.c:3851: undefined reference to `disassembler_options_riscv'
So the `disassembler_options_mips` and `disassembler_options_riscv`
symbols are missing.
This turns out to be because mips-dis.c and riscv-dis.c, in which
these symbols are defined, are in the TARGET64_LIBOPCODES_CFILES list
in opcodes/Makefile.am; these files are only built when we are
building with a 64-bit bfd.
If we look further, into bfd/Makefile.am, we can see that all the
files matching elf*-riscv.lo are in the BFD64_BACKENDS list, as are
the elf*-mips.lo files, and (I know because I tried) the two
disassemblers do, not surprisingly, depend on features supplied by
libbfd.
So, though we can build most of GDB just fine for riscv and mips with
a 32-bit bfd, if I understand correctly, the final GDB
executable (assuming we could get it to link) would not understand
these architectures at the bfd level, nor would there be any
disassembler available. This sounds like a pretty useless GDB to me.
So, in this commit, I move the riscv and mips targets into GDB's list
of targets that require a 64-bit bfd. After this I can build GDB just
fine with the configure options given above.
This was discussed on the mailing list in a couple of threads:
https://sourceware.org/pipermail/gdb-patches/2021-December/184365.html
https://sourceware.org/pipermail/binutils/2021-November/118498.html
and it is agreed, that it is unfortunate that the 32-bit riscv and
32-bit mips targets require a 64-bit bfd. If in the future this
situation ever changes then it would be expected that some (or all) of
this patch would be reverted. Until then though, this patch allows
GDB to build when configured with --enable-targets=all, and when using
a 32-bit libbfd.
|
|
I found some uses of xfree in the path substitution code in source.c.
C++-ifying struct substitute_path_rule both simplifies the code and
removes manual memory management.
Regression tested on x86-64 Fedora 34.
|
|
The comment at the top of gdb/testsuite/boards/remote-stdio-gdbserver.exp
says that the user can specify the path to gdbserver on the remote
system by setting the GDBSERVER variable.  However, this variable was
ignored and /usr/bin/gdbserver was used unconditionally.
This commit updates the code to respect GDBSERVER if set and fall back to
/usr/bin/gdbserver if not.
|
|
Fix this, seen when building with clang 14:
CXX microblaze-tdep.o
/home/simark/src/binutils-gdb/gdb/microblaze-tdep.c:207:7: error: variable 'flags' set but not used [-Werror,-Wunused-but-set-variable]
int flags = 0;
^
Change-Id: I59f726ed33e924912748bc475b6fd9a9394fc0d0
|
|
Fix these, seen when building with clang 14:
CXX csky-tdep.o
/home/simark/src/binutils-gdb/gdb/csky-tdep.c:332:7: error: variable 'need_dummy_stack' set but not used [-Werror,-Wunused-but-set-variable]
int need_dummy_stack = 0;
^
/home/simark/src/binutils-gdb/gdb/csky-tdep.c:805:12: error: variable 'offset' set but not used [-Werror,-Wunused-but-set-variable]
int offset = 0;
^
Change-Id: I6703bcb50e83c50083f716f4084ef6aa30d659c4
|