|
Add a gdb.Value.bytes attribute. This attribute contains the bytes of
the value (assuming the complete bytes of the value are available).
If the bytes of the gdb.Value are not available then accessing this
attribute raises an exception.
The bytes object returned from gdb.Value.bytes is cached within GDB so
that the same bytes object is returned each time. The bytes object is
created on-demand though to reduce unnecessary work.
For some values we can of course obtain the same information by
reading inferior memory based on gdb.Value.address and
gdb.Value.type.sizeof, however, not every value is in memory, so we
don't always have an address.
The gdb.Value.bytes attribute will convert any value to a bytes
object, so long as the contents are available. The value can be one
created purely in Python code, the value could be in a register,
or (of course) the value could be in memory.
The Value.bytes attribute can also be assigned to. Assigning to this
attribute is similar to calling Value.assign: the underlying value is
updated within the inferior. The value assigned
to Value.bytes must be a buffer which contains exactly the correct
number of bytes (i.e. unlike value creation, we don't allow oversized
buffers).
To support this assignment-like behaviour I've factored out the core
of valpy_assign. I've also updated convert_buffer_and_type_to_value
so that it can (for my use case) check the exact buffer length.
The restrictions for when Value.bytes can or cannot be written to
are exactly the same as for Value.assign.
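As a rough illustration, a minimal Python sketch (the variable name
'global_counter' is hypothetical):
  v = gdb.parse_and_eval("global_counter")
  raw = v.bytes                    # bytes object, cached by GDB
  assert len(raw) == v.type.sizeof
  # Assignment requires a buffer of exactly the right length; this
  # zeroes the underlying value in the inferior.
  v.bytes = bytes(v.type.sizeof)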
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=13267
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed a few more style issues in commit 8b9c08eddac ("[gdb/symtab] Add
name_of_main and language_of_main to the DWARF index"), after checking it
with gcc's check_GNU_style.{sh,py}.
Fix these.
Built on x86_64-linux.
|
|
Andry pointed out that the DAP code did not properly handle
gdb.LazyString results from a pretty-printer, yielding:
TypeError: Object of type LazyString is not JSON serializable
This patch fixes the problem, partly with a small patch in varref.py,
but mainly by implementing tp_str for LazyString.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit adds a new Python function, gdb.notify_mi, that can be used
to emit custom async notifications to the MI channel. This can be used,
among other things, to implement notifications about events MI does not
support, such as a remote connection closing or a register change.
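For instance, a sketch (the notification name and fields here are made
up, not part of this change):
  gdb.notify_mi("connection-removed", {"id": 4, "reason": "closed"})
which would emit something like the following on the MI channel:
  =connection-removed,id="4",reason="closed"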
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This patch adds a new section to the DWARF index containing the name
and the language of the main function symbol, gathered from
`cooked_index::get_main`, if available. Currently, for lack of a better name,
this section is called the "shortcut table". The way this name is both saved and
applied upon an index being loaded in mirrors how it is done in
`cooked_index_functions`, more specifically, the full name of the main function
symbol is saved and `set_objfile_main_name` is used to apply it after it is
loaded.
The main use case for this patch is in improving startup times when dealing with
large binaries. Currently, when an index is used, GDB has to expand symtabs
until it finds out what the language of the main function symbol is. For some
large executables, this may take a considerable amount of time to complete,
slowing down startup. This patch bypasses that operation by having both the name
and language of the main function symbol be provided ahead of time by the index.
In my testing (a binary with about 1.8GB worth of DWARF data) this change brings
startup time down from about 34 seconds to about 1.5 seconds.
When testing the patch with target board cc-with-gdb-index, test-case
gdb.fortran/nested-funcs-2.exp starts failing, but this is due to a
pre-existing issue, filed as PR symtab/30946.
Tested on x86_64-linux, with target board unix and cc-with-gdb-index.
PR symtab/24549
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=24549
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
This commit adds a new section for the next release branch, and renames
the section for the current branch, now that it has been cut.
|
|
I spotted two entries in the NEWS file that I believe are in the wrong
place. These are:
- An entry about MI v1 being deprecated, this feels like it should
be the first entry under the 'MI changes' heading, and
- An entry for the $_shell convenience function which is currently
under the 'New commands' heading (sort of), when I think this
should be listed in the general news section.
|
|
If you want to install GDB in a custom prefix, have it look for debug info
in that prefix but also in the distro's default location (typically,
/usr/lib/debug) and run the GDB testsuite before doing "make install", you
have a bit of a problem:
Configuring GDB with '--prefix=$PREFIX' sets the GDB 'debug-file-directory'
parameter to $PREFIX/lib/debug. Unfortunately this precludes GDB from
looking for distro-installed debug info in /usr/lib/debug. For regular GDB
use you could set debug-file-directory to $PREFIX:/usr/lib/debug in
$PREFIX/etc/gdbinit so that GDB will look in both places, but if you want
to run the testsuite then that doesn't help because in that case GDB runs
with the '-nx' option.
There's the configure option '--with-separate-debug-dir' to set the default
value for 'debug-file-directory', but it accepts only one directory and not
a list. I considered modifying it to accept a list, but it's not obvious
how to do that because its value is also used by BFD, as well as processed
for "relocatability".
I thought it was simpler to add a new option to specify a list of
additional directories that will be appended to the debug-file-directory
setting.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Document changes introduced by gdb's SME2 support.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org>
|
|
Provide documentation for the SME feature and other information that
should be useful for users who need to debug an SME-capable target.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org>
|
|
Initially I just wanted a Python event for when GDB removes a program
space: I'm writing a Python extension that caches information for each
program space, and I need to know when I should discard entries for a
particular program space.
But, it seemed easy enough to also add an event for when GDB adds a
new program space, so I went ahead and added both new events.
Of course, we don't currently have an observable for program space
addition or removal, so I first needed to add these. After that it's
pretty simple to add two new Python events and have these trigger.
The two new event registries are:
events.new_progspace
events.free_progspace
These emit NewProgspaceEvent and FreeProgspaceEvent objects
respectively, each of these new event types has a 'progspace'
attribute that contains the relevant gdb.Progspace object.
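A minimal sketch of how these events might be used (the cache handling
itself is left out):
  def on_new_progspace(event):
      print("new program space:", event.progspace)

  def on_free_progspace(event):
      # Discard any cached entries for event.progspace here.
      print("removing program space:", event.progspace)

  gdb.events.new_progspace.connect(on_new_progspace)
  gdb.events.free_progspace.connect(on_free_progspace)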
There are a couple of things to be mindful of.
First, it is not possible to catch the NewProgspaceEvent for the very
first program space, the one that is created when GDB first starts, as
this program space is created before any Python scripts are sourced.
In order to allow this event to be caught we would need to defer
creating the first program space, and as a consequence the first
inferior, until some later time. But, existing scripts could easily
depend on there being an initial inferior, so I really don't think we
should change that -- and so, we end up with the consequence that we
can't catch the event for the first program space.
The second, I think minor, issue is that GDB doesn't clean up its
program spaces upon exit -- or at least, they are not cleaned up
before Python is shut down. As a result, any program spaces in use at
the time GDB exits don't generate a FreeProgspaceEvent. I'm not
particularly worried about this for my use case; I'm using the event
to ensure that a cache doesn't hold stale entries within a single GDB
session. It's also easy enough to add a Python at-exit callback which
can do any final cleanup if needed.
Finally, when testing, I did hit a slightly weird issue with some of
the remote boards (e.g. remote-stdio-gdbserver). As a consequence of
this issue I see some output like this in the gdb.log:
(gdb) PASS: gdb.python/py-progspace-events.exp: inferior 1
step
FreeProgspaceEvent: <gdb.Progspace object at 0x7fb7e1d19c10>
warning: cannot close "target:/lib64/libm.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
warning: cannot close "target:/lib64/libc.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
warning: cannot close "target:/lib64/ld-linux-x86-64.so.2": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
do_parent_stuff () at py-progspace-events.c:41
41 ++global_var;
(gdb) PASS: gdb.python/py-progspace-events.exp: step
The 'FreeProgspaceEvent ...' line is expected, that's my test Python
extension logging the event. What isn't expected are all the blocks
like:
warning: cannot close "target:/lib64/libm.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
It turns out that this has nothing to do with my changes; it is just
a consequence of reading files over the remote protocol. The test
forks a child process which GDB stays attached to. When the child
exits, GDB cleans up by calling prune_inferiors, which in turn can
result in GDB trying to close some files that are open because of the
inferior being deleted.
If the prune_inferiors call occurs when the remote target is
running (and in non-async mode) then GDB will try to send a fileio
packet while the remote target is waiting for a stop reply, and the
remote target will throw an error, see remote_target::putpkt_binary in
remote.c for details.
I'm going to look at fixing this, but, as I said, it has nothing to
do with this change; I just mention it because I ended up needing to
account for these warning messages in one of my tests, and it all
looks a bit weird.
Approved-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
I ran across this site:
https://no-color.org/
... which lobbies for tools to recognize the NO_COLOR environment
variable and disable any terminal styling when it is seen.
This patch implements this for gdb.
Regression tested on x86-64 Fedora 38.
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
Reviewed-by: Kevin Buettner <kevinb@redhat.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit makes the executable_changed observable available through
the Python API as an event. There's nothing particularly interesting
going on here, it just follows the same pattern as many of the other
Python events we support.
The new event registry is called events.executable_changed, and this
emits an ExecutableChangedEvent object which has two attributes: a
gdb.Progspace called 'progspace', the program space in which the
executable changed, and a Boolean called 'reload', which is True
if the same executable changed on disk and has been reloaded, or is
False when a new executable has been loaded.
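A small sketch of a listener for this event:
  def on_exec_changed(event):
      action = "reloaded" if event.reload else "loaded"
      print("executable %s in %s" % (action, event.progspace))

  gdb.events.executable_changed.connect(on_exec_changed)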
One interesting thing did come up during testing though: you'll notice
the test contains a setup_kfail call. During testing I observed that
the executable_changed event would trigger twice when GDB restarted an
inferior. However, the ExecutableChangedEvent object is identical for
both calls, so the wrong information is never sent out; we just see
one too many events.
I tracked this down to how the reload_symbols function (symfile.c)
takes care to also reload the executable; however, I've split fixing
this into a separate commit, so see the next commit for details.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a new Progspace.executable_filename attribute that contains the
path to the executable for this program space, or None if no
executable is set.
The path within this attribute will be set by the "exec-file" and/or
"file" commands.
Accessing this attribute for an invalid program space will raise an
exception.
This new attribute is similar to, but not the same as, the existing
gdb.Progspace.filename attribute. If I could change the past, I'd
change the 'filename' attribute to 'symbol_filename', which is what it
actually represents. The old attribute will be set by the
'symbol-file' command, while the new attribute is set by the
'exec-file' command. Obviously the 'file' command sets both of these
attributes.
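A trivial usage sketch:
  pspace = gdb.current_progspace()
  if pspace.executable_filename is not None:
      print("executable:", pspace.executable_filename)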
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a new Progspace.symbol_file attribute. This attribute holds the
gdb.Objfile object that corresponds to Progspace.filename, or None if
there is no main symbol file currently set.
Currently, to get this gdb.Objfile, a user would need to use
Progspace.objfiles, and then search for the objfile with a name that
matches Progspace.filename -- which should work just fine, but having
direct access seems a little nicer.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
There was an earlier thread about adding new methods to
pretty-printers:
https://sourceware.org/pipermail/gdb-patches/2023-June/200503.html
We've known about the need for printer extensibility for a while, but
have been hampered by backward-compatibility concerns: gdb never
documented that printers might acquire new methods, and so existing
printers may have attribute name clashes.
To solve this problem, this patch adds a new pretty-printer tag class
that signals to gdb that the printer follows new extensibility rules.
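The tag class isn't named above; assuming it is gdb.ValuePrinter, a
printer might opt in to the new rules roughly like this, keeping its
own state in underscore-prefixed attributes:
  class MyPrinter(gdb.ValuePrinter):
      def __init__(self, val):
          self.__val = val

      def to_string(self):
          return "contents of the value"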
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30816
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Rationale:
I use the mouse with my terminal to select and copy text. In gdb, I use
the mouse to select a function name to set a breakpoint, or a variable
name to print, for example.
When gdb is compiled with ncurses mouse support, gdb's TUI mode
intercepts mouse events. Left-clicking and dragging, which would
normally select text, seems to do nothing. This means I cannot select
text using my mouse anymore. This makes it harder to set breakpoints,
print variables, etc.
Solution:
I tried to fix this issue by editing the 'mousemask' call to only enable
buttons 4 and 5. However, this still caused my terminal (gnome-terminal)
to not allow text to be selected. The only way I could make it work is
by calling 'mousemask (0, NULL);'. But doing so disables the mouse code
entirely, which other people might want.
I therefore decided to make a setting in gdb called 'tui mouse-events'.
If enabled (the default), the behavior is as it is now: terminal mouse
events are given to gdb, disabling the terminal's default behavior.
If disabled (opt-in), the behavior is as it was before the year 2020:
terminal mouse events are not given to gdb, therefore the mouse can be
used to select and copy text.
Notes:
I am not attached to the setting name or its description. Feel free to
suggest better wording.
Testing:
I tested this change in gnome-terminal by performing the following steps
manually:
1. Run: gdb --args ./myprogram
2. Enable TUI: press ctrl-x ctrl-a
3. Click and drag text with the mouse. Observe no selection.
4. Input: set tui mouse-events off
5. Click and drag text with the mouse. Observe that selection works now.
6. Input: set tui mouse-events on.
7. Click and drag text with the mouse. Observe no selection.
|
|
After the series that added this command was pushed, Pedro mentioned
that the news description could easily be misinterpreted, as well as
some code and test improvements that should be made.
While fixing the test, I realized that code repetition wasn't
happening as it should, so I took care of that too.
Approved-By: Andrew Burgess <aburgess@redhat.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
gdb's language code may know how to display values specially. For
example, the Rust code understands that &str is a string-like type, or
Ada knows how to handle unconstrained arrays. This knowledge is
exposed via val-print, and via varobj -- but currently not via DAP.
This patch adds some support code to let DAP also handle these cases,
though in a somewhat more generic way.
Type.is_array_like and Value.to_array are added to make Python aware
of the cases where gdb knows that a structure type is really
"array-like".
Type.is_string_like is added to make Python aware of cases where gdb's
language code knows that a type is string-like.
Unlike Value.string, these cases are handled by the type's language,
rather than the current language.
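A short sketch of how these might be used from Python (the variable
name is hypothetical, and I'm treating is_array_like and is_string_like
as attributes here):
  val = gdb.parse_and_eval("some_value")
  if val.type.is_string_like:
      print(val)                  # formatted by the type's own language
  elif val.type.is_array_like:
      arr = val.to_array()        # a gdb.Value with a true array type
      print(arr.type.code == gdb.TYPE_CODE_ARRAY)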
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit extends the breakpoint mechanism to allow for inferior
specific breakpoints (but not watchpoints in this commit).
As GDB gains better support for multiple connections, and so for
running multiple (possibly unrelated) inferiors, then it is not hard
to imagine that a user might wish to create breakpoints that apply to
any thread in a single inferior. To achieve this currently, the user
would need to create a condition possibly making use of the $_inferior
convenience variable, which, though functional, isn't the most user
friendly.
This commit adds a new 'inferior' keyword that allows for the creation
of inferior specific breakpoints.
Inferior specific breakpoints are automatically deleted when the
associated inferior is removed from GDB, this is similar to how
thread-specific breakpoints are deleted when the associated thread is
deleted.
Watchpoints are already per-program-space, which in most cases means
watchpoints are already inferior specific. There is a small window
where inferior-specific watchpoints might make sense, which is after a
vfork, when two processes are sharing the same address space.
However, I'm leaving that as an exercise for another day. For now,
attempting to use the inferior keyword with a watchpoint will give an
error, like this:
(gdb) watch a8 inferior 1
Cannot use 'inferior' keyword with watchpoints
A final note on the implementation: currently, inferior specific
breakpoints, like thread-specific breakpoints, are inserted into every
inferior, GDB then checks once the inferior stops if we are in the
correct thread or inferior, and resumes automatically if we stopped in
the wrong thread/inferior.
An obvious optimisation here is to only insert breakpoint locations
into the specific program space (which mostly means inferior) that
contains either the inferior or thread we are interested in. This
would reduce the number of times GDB has to stop and then resume again in
a multi-inferior setup.
I have a series on the mailing list[1] that implements this
optimisation for thread-specific breakpoints. Once this series has
landed I'll update that series to also handle inferior specific
breakpoints in the same way. For now, inferior specific breakpoints
are just slightly less optimal, but this is no different to
thread-specific breakpoints in a multi-inferior debug session, so I
don't see this as a huge problem.
[1] https://inbox.sourceware.org/gdb-patches/cover.1685479504.git.aburgess@redhat.com/
|
|
Add new commands:
set debug breakpoint on|off
show debug breakpoint
This patch introduces new debugging information that prints
breakpoint location insertion and removal flow.
The debug output looks like:
~~~
(gdb) set debug breakpoint on
(gdb) disassemble main
Dump of assembler code for function main:
0x0000555555555129 <+0>: endbr64
0x000055555555512d <+4>: push %rbp
0x000055555555512e <+5>: mov %rsp,%rbp
=> 0x0000555555555131 <+8>: mov $0x0,%eax
0x0000555555555136 <+13>: pop %rbp
0x0000555555555137 <+14>: ret
End of assembler dump.
(gdb) break *0x0000555555555137
Breakpoint 2 at 0x555555555137: file main.c, line 4.
[breakpoint] update_global_location_list: insert_mode = UGLL_MAY_INSERT
(gdb) c
Continuing.
[breakpoint] update_global_location_list: insert_mode = UGLL_INSERT
[breakpoint] insert_bp_location: Breakpoint 2 (0x5565daddb1e0) at address 0x555555555137 in main at main.c:4
[breakpoint] insert_bp_location: Breakpoint -2 (0x5565dab51c10) at address 0x7ffff7fd37b5
[breakpoint] insert_bp_location: Breakpoint -5 (0x5565dab68f30) at address 0x7ffff7fe509e
[breakpoint] insert_bp_location: Breakpoint -7 (0x5565dab694f0) at address 0x7ffff7fe63f4
[breakpoint] remove_breakpoint_1: Breakpoint 2 (0x5565daddb1e0) at address 0x555555555137 in main at main.c:4 due to regular remove
[breakpoint] remove_breakpoint_1: Breakpoint -2 (0x5565dab51c10) at address 0x7ffff7fd37b5 due to regular remove
[breakpoint] remove_breakpoint_1: Breakpoint -5 (0x5565dab68f30) at address 0x7ffff7fe509e due to regular remove
[breakpoint] remove_breakpoint_1: Breakpoint -7 (0x5565dab694f0) at address 0x7ffff7fe63f4 due to regular remove
Breakpoint 2, 0x0000555555555137 in main () at main.c:4
4 }
~~~
Co-Authored-By: Christina Schimpe <christina.schimpe@intel.com>
|
|
While working on an experiment, I realized that I needed the DAP
block_signals function. I figured other developers may need it as
well, so this patch moves it from DAP to the gdb module and exports
it.
I also added a new subclass of threading.Thread that ensures that
signals are blocked in the new thread.
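A sketch of how these might be used; the message above calls the
function block_signals, but the exported names are my assumption
(I'm using gdb.blocked_signals and gdb.Thread here):
  def worker():
      pass   # background work that must not receive gdb's signals

  # The Thread subclass blocks signals in the newly started thread.
  t = gdb.Thread(target=worker)
  t.start()

  # The context manager form, for arbitrary code:
  import threading
  with gdb.blocked_signals():
      threading.Thread(target=worker).start()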
Finally, this patch slightly rearranges the documentation so that
gdb-side threading issues and functions are all discussed in a single
node.
|
|
This adds a new objfile_for_address method to gdb.Progspace. This
makes it easy to find the objfile for a given address.
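For example, something along these lines (assuming the method returns
None when no objfile contains the address):
  pc = gdb.selected_frame().pc()
  objfile = gdb.current_progspace().objfile_for_address(pc)
  if objfile is not None:
      print(objfile.filename)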
There's a related PR; and while this change would have been sufficient
for my original need, it's not clear to me whether I should close the
bug. Nevertheless I think it makes sense to at least mention it here.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=19288
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Ada 2022 adds the "target name symbol", which can be used on the right
hand side of an assignment to refer to the left hand side. This
allows for convenient updates. This patch implements this for gdb's
Ada expression parser.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
When using "list" with no arguments, GDB will first print the lines
around where the inferior is stopped, then print the next N lines until
reaching the end of file, at which point it warns the user "Line X out
of range, file Y only has X-1 lines.". This is usually desirable, but
if the user can no longer see the original line, they may have forgotten
the current line or that a list command was used at all, making GDB's
error message look cryptic. It was reported in bugzilla as PR cli/30497.
This commit improves the user experience by changing the behavior of
"list" slightly when a user passes no arguments. It now prints that the
end of the file has been reached and recommends that the user use the
command "list ." instead.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30497
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Currently, after the user has used the list command once, there is no
self-contained way to ask GDB to print the location where the inferior is
stopped. The current best options require either using a separate
command to scope out where the inferior is stopped, or using "list *$pc"
requiring knowledge of GDB standard registers. This commit adds a way
to do that using '.' as a new argument for the 'list' command. If the
inferior isn't running, the command prints around the main function.
Because this necessitated having the inferior running, and because the
test was (seemingly unnecessarily) using printf in a non-essential way
that would make the resulting log harder to read for no benefit, the
printf was replaced by a different statement.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This patch implements the Ada 2022 attributes 'Enum_Val and 'Enum_Rep.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
I noticed that the printf code for strings, printf_c_string and
printf_wide_c_string, doesn't take max-value-size into account, but does
load a complete string from the inferior into a GDB buffer.
As such it would be possible for a badly behaved inferior to cause
GDB to try and allocate an excessively large buffer, potentially
crashing GDB, or at least causing GDB to swap a lot, which isn't
great.
We already have a setting to protect against this sort of thing, the
'max-value-size'. So this commit updates the two functions mentioned
above to check the max-value-size and give an error if the
max-value-size is exceeded.
If the max-value-size is exceeded, I chose to continue reading
inferior memory to figure out how long the string actually is; we just
don't store the results. The benefit of this is that when we give the
user an error we can tell the user how big the string actually is,
which would allow them to correctly adjust max-value-size, if that's
what they choose to do.
The default for max-value-size is 64k so there should be no user
visible changes after this commit, unless the user was previously
printing very large strings. If that is the case then the user will
now need to increase max-value-size.
|
|
v6:
Fix comments.
Fix copyright
Remove unnecessary test suite stuff. save_var had to stay, as it mutates
some test suite state that otherwise fails.
v5:
Did what Tom Tromey requested in v4; which can be found here: https://pi.simark.ca/gdb-patches/87pmjm0xar.fsf@tromey.com/
v4:
Doc formatting fixed.
v3:
Eli:
Updated docs & NEWS to reflect new changes. Added
a reference from the .ptid attribute of the ThreadExitedEvent
to the ptid attribute of InferiorThread. To do this,
I've added an anchor to that attribute.
Tom:
Tom requested that I should probably just emit the thread object;
I ran into two issues for this, which I could not resolve in this patch;
1 - The Thread Object (the python type) checks its own validity
by doing a comparison of its `thread_info* thread` to nullptr. This
means that any access of its attributes may (probably, since we are
in "async" land) throw Python exceptions because the thread has been
removed from the thread object. Therefore I've decided in v3 of this
patch to just emit most of the same fields that gdb.InferiorThread has, namely
global_num, name, num and ptid (the 3-attribute tuple provided by
gdb.InferiorThread.ptid).
2 - A python user can hold a global reference to an exiting thread. Thus
in order to have a ThreadExit event that can provide attribute access
reliably (both as a global reference, but also inside the thread exit
handler, as we can never guarantee that it's executed _before_ the
thread_info pointer is removed from the gdbpy thread object),
the `thread_info *` thread pointer must not be null. However, this
comes at the cost of gdb.InferiorThread believing it is "valid" - which means,
that if a user holds takes a global reference to that
exiting event thread object, they can some time later do `t.switch()` at which
point GDB will 'explode' so to speak.
v2:
Fixed white space issues and NULL/nullptr stuff,
as requested by Tom Tromey.
v1:
Currently no event is emitted for a thread exit.
This adds this functionality by emitting a new gdb.ThreadExitedEvent.
It currently provides four attributes:
- global_num: The GDB assigned global thread number
- num: the per-inferior thread number
- name: name of the thread or none if not set
- ptid: the PTID of the thread, a 3-attribute tuple, identical to
InferiorThread.ptid attribute
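A small sketch of a handler, assuming the registry is exposed as
gdb.events.thread_exited (the registry name isn't given above):
  def on_thread_exited(event):
      print("thread %d (%s) exited" % (event.global_num, event.name))

  gdb.events.thread_exited.connect(on_thread_exited)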
Added info to docs & the NEWS file as well.
Added test to test suite.
Fixed formatting.
Feedback wanted and appreciated.
|
|
This adds an 'assign' method to gdb.Value. This allows for assignment
without requiring the use of parse_and_eval.
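For example (the variable name is hypothetical):
  v = gdb.parse_and_eval("global_counter")
  v.assign(42)    # updates the underlying value in the inferior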
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit adds a new format for the printf and dprintf commands:
'%V'. This new format takes any GDB expression and formats it as a
string, just as GDB would for a 'print' command, e.g.:
(gdb) print a1
$a = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20}
(gdb) printf "%V\n", a1
{2, 4, 6, 8, 10, 12, 14, 16, 18, 20}
(gdb)
It is also possible to pass the same options to %V as you might pass
to the print command, e.g.:
(gdb) print -elements 3 -- a1
$4 = {2, 4, 6...}
(gdb) printf "%V[-elements 3]\n", a1
{2, 4, 6...}
(gdb)
This new feature would effectively replace an existing feature of GDB,
the $_as_string builtin convenience function. However, the
$_as_string function has a few problems which this new feature solves:
1. $_as_string doesn't currently work when the inferior is not
running, e.g:
(gdb) printf "%s", $_as_string(a1)
You can't do that without a process to debug.
(gdb)
The reason for this is that $_as_string returns a value object with
string type. When we try to print this we call value_as_address,
which ends up trying to push the string into the inferior's address
space.
Clearly we could solve this problem, the string data exists in GDB, so
there's no reason why we have to push it into the inferior, but this
is an existing problem that would need solving.
2. $_as_string suffers from the fact that C degrades arrays to
pointers, e.g.:
(gdb) printf "%s\n", $_as_string(a1)
0x404260 <a1>
(gdb)
The implementation of $_as_string is passed a gdb.Value object that is
a pointer, it doesn't understand that it's actually an array. Solving
this would be harder than issue #1 I think. The whole array to
pointer transformation is part of our expression evaluation. And in
most cases this is exactly what we want. It's not clear to me how
we'd (easily) tell GDB that we didn't want this reduction in _some_
cases. But I'm sure this is solvable if we really wanted to.
3. $_as_string is a gdb.Function sub-class, and as such is passed
gdb.Value objects. There's no super convenient way to pass formatting
options to $_as_string. By this I mean that the new %V feature
supports print formatting options. Ideally, we might want to add this
feature to $_as_string, we might imagine it working something like:
(gdb) printf "%s\n", $_as_string(a1,
elements = 3,
array_indexes = True)
where the first item is the value to print, while the remaining
options are the print formatting options. However, this relies on
Python calling syntax, which isn't something that convenience
functions handle. We could possibly rely on strictly positional
arguments, like:
(gdb) printf "%s\n", $_as_string(a1, 3, 1)
But that's clearly terrible as there's far more print formatting
options, and if you needed to set the 9th option you'd need to fill in
all the previous options.
And right now, the only way to pass these options to a gdb.Function is
to have GDB first convert them all into gdb.Value objects, which is
really overkill for what we want.
The new %V format solves all these problems: the string is computed
and printed entirely on the GDB side, we are able to print arrays as
actual arrays rather than pointers, and we can pass named format
arguments.
Finally, the $_as_string is sold in the manual as allowing users to
print the string representation of flag enums, so given:
enum flags
{
  FLAG_A = (1 << 0),
  FLAG_B = (1 << 1),
  FLAG_C = (1 << 2)
};
enum flags ff = FLAG_B;
We can:
(gdb) printf "%s\n", $_as_string(ff)
FLAG_B
This works just fine with %V too:
(gdb) printf "%V\n", ff
FLAG_B
So all functionality of $_as_string is replaced by %V. I'm not
proposing to remove $_as_string, there might be users currently
depending on it, but I am proposing that we don't push $_as_string in
the documentation.
As %V is a feature of printf, GDB's dprintf breakpoints naturally gain
access to this feature too. dprintf breakpoints can be operated in
three different styles 'gdb' (use GDB's printf), 'call' (call a
function in the inferior), or 'agent' (perform the dprintf on the
remote).
The use of '%V' will work just fine when dprintf-style is 'gdb'.
When dprintf-style is 'call' the format string and arguments are
passed to an inferior function (printf by default). In this case GDB
doesn't prevent use of '%V', but the documentation makes it clear that
support for '%V' will depend on the inferior function being called.
I chose this approach because the current implementation doesn't place
any restrictions on the format string when operating in 'call' style.
That is, the user might already be calling a function that supports
custom print format specifiers (maybe including '%V') so, I claim, it
would be wrong to block use of '%V' in this case. The documentation
does make it clear that users shouldn't expect this to "just work"
though.
When dprintf-style is 'agent' then GDB does not support the use of
'%V' (right now). This is handled at the point when GDB tries to
process the format string and send the dprintf command to the remote,
here's an example:
Reading symbols from /tmp/hello.x...
(gdb) dprintf call_me, "%V", a1
Dprintf 1 at 0x401152: file /tmp/hello.c, line 8.
(gdb) set sysroot /
(gdb) target remote | gdbserver --once - /tmp/hello.x
Remote debugging using | gdbserver --once - /tmp/hello.x
stdin/stdout redirected
Process /tmp/hello.x created; pid = 3088822
Remote debugging using stdio
Reading symbols from /lib64/ld-linux-x86-64.so.2...
(No debugging symbols found in /lib64/ld-linux-x86-64.so.2)
0x00007ffff7fd3110 in _start () from /lib64/ld-linux-x86-64.so.2
(gdb) set dprintf-style agent
(gdb) c
Continuing.
Unrecognized format specifier 'V' in printf
Command aborted.
(gdb)
This is exactly how GDB would handle any other invalid format
specifier, for example:
Reading symbols from /tmp/hello.x...
(gdb) dprintf call_me, "%Q", a1
Dprintf 1 at 0x401152: file /tmp/hello.c, line 8.
(gdb) set sysroot /
(gdb) target remote | gdbserver --once - /tmp/hello.x
Remote debugging using | gdbserver --once - /tmp/hello.x
stdin/stdout redirected
Process /tmp/hello.x created; pid = 3089193
Remote debugging using stdio
Reading symbols from /lib64/ld-linux-x86-64.so.2...
(No debugging symbols found in /lib64/ld-linux-x86-64.so.2)
0x00007ffff7fd3110 in _start () from /lib64/ld-linux-x86-64.so.2
(gdb) set dprintf-style agent
(gdb) c
Continuing.
Unrecognized format specifier 'Q' in printf
Command aborted.
(gdb)
The error message isn't the greatest, but improving that can be put
off for another day I hope.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Acked-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This adds two new attributes and three new methods to gdb.Inferior.
The attributes let Python code see the command-line arguments and the
name of "main". Argument setting is also supported.
The methods let Python code manipulate the inferior's environment
variables.
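A sketch of what this might look like from Python; the attribute and
method names used here (arguments, main_name, set_env, unset_env,
clear_env) are my assumption, as they aren't spelled out above:
  inf = gdb.selected_inferior()
  print(inf.main_name)                      # e.g. "main"
  inf.arguments = "--verbose input.txt"     # set command-line arguments
  inf.set_env("LD_LIBRARY_PATH", "/opt/lib")
  inf.unset_env("LANG")
  inf.clear_env()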
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This adds a 'global_context' parameter to gdb.parse_and_eval.
This lets users request a parse that is done at "global scope".
I considered letting callers pass in a block instead, with None
meaning "global" -- but then there didn't seem to be a clean way to
express the default for this parameter.
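For example:
  # Evaluate 'counter' in the global scope rather than in the scope of
  # the selected frame.
  val = gdb.parse_and_eval("counter", global_context=True)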
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This adds a new Python function, gdb.execute_mi, that can be used to
invoke an MI command but get the output as a Python object, rather
than a string. This is done by implementing a new ui_out subclass
that builds a Python object.
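A rough usage sketch (the exact shape of the result simply follows the
MI output of the command):
  result = gdb.execute_mi("-break-insert", "main")
  print(result["bkpt"]["number"])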
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=11688
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit extends the Python Disassembler API to allow for styling
of the instructions.
Before this commit the Python Disassembler API allowed the user to do
two things:
- They could intercept instruction disassembly requests and return a
string of their choosing; this string then became the disassembled
instruction, or
- They could call builtin_disassemble, which would call back into
libopcode to perform the disassembly. As libopcode printed the
instruction GDB would collect these print requests and build a
string. This string was then returned from the builtin_disassemble
call, and the user could modify or extend this string as needed.
Neither of these approaches allowed for, or preserved, disassembler
styling, which is now available within libopcodes for many of the more
popular architectures GDB supports.
This commit aims to fill this gap. After this commit a user will be
able to do the following things:
- Implement a custom instruction disassembler entirely in Python
without calling back into libopcodes, the custom disassembler will
be able to return styling information such that GDB will display
the instruction fully styled. All of GDB's existing style
settings will affect how instructions coming from the Python
disassembler are displayed in the expected manner.
- Call builtin_disassemble and receive a result that represents how
libopcode would like the instruction styled. The user can then
adjust or extend the disassembled instruction before returning the
result to GDB. Again, the instruction will be styled as expected.
To achieve this I will add two new classes to GDB,
DisassemblerTextPart and DisassemblerAddressPart.
Within builtin_disassemble, instead of capturing the print calls from
libopcodes and building a single string, we will now create either a
text part or address part and store these parts in a vector.
The DisassemblerTextPart will capture a small piece of text along with
the associated style that should be used to display the text. This
corresponds to the disassembler calling
disassemble_info::fprintf_styled_func, or for disassemblers that don't
support styling disassemble_info::fprintf_func.
The DisassemblerAddressPart is used when libopcodes requests that an
address be printed, and takes care of printing the address and
associated symbol, this corresponds to the disassembler calling
disassemble_info::print_address_func.
These parts are then placed within the DisassemblerResult when
builtin_disassemble returns.
Alternatively, the user can directly create parts by calling two new
methods on the DisassembleInfo class: DisassembleInfo.text_part and
DisassembleInfo.address_part.
Having created these parts the user can then pass these parts when
initializing a new DisassemblerResult object.
Finally, when we return from Python to gdbpy_print_insn, one way or
another, the result being returned will have a list of parts. Back in
GDB's C++ code we walk the list of parts and call back into GDB's core
to display the disassembled instruction with the correct styling.
The new API lives in parallel with the old API. Any existing code
that creates a DisassemblerResult using a single string immediately
creates a single DisassemblerTextPart containing the entire
instruction and gives this part the default text style. This is also
what happens if the user calls builtin_disassemble for an architecture
that doesn't (yet) support libopcode styling.
This matches up with what happens when the Python API is not involved,
an architecture without disassembler styling support uses the old
libopcodes printing API (the API that doesn't pass style info), and
GDB just prints everything using the default text style.
The reason that parts are created by calling methods on
DisassembleInfo, rather than calling the class constructor directly,
is DisassemblerAddressPart. Ideally this part would only hold the
address which the part represents, but in order to support backwards
compatibility we need to be able to convert the
DisassemblerAddressPart into a string. To do that we need to call
GDB's internal print_address function, and to do that we need an
gdbarch.
What this means is that the DisassemblerAddressPart needs to take a
gdb.Architecture object at creation time. The only valid place a user
can pull this from is from the DisassembleInfo object, so having the
DisassembleInfo act as a factory ensures that the correct gdbarch is
passed over each time. I implemented both solutions (the one
presented here, and an alternative where parts could be constructed
directly), and this felt like the cleanest solution.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
This commit is a refactor ahead of the next change which will make
disassembler styling available through the Python API.
Unfortunately, in order to make the styling support available, I think
the easiest solution is to make a very small change to the existing
API.
The current API relies on returning a DisassemblerResult object to
represent each disassembled instruction. Currently GDB allows the
DisassemblerResult class to be sub-classed, which could mean that a
user tries to override the various attributes that exist on the
DisassemblerResult object.
This commit removes this ability, effectively making the
DisassemblerResult class final.
Though this is a change to the existing API, I'm hoping this isn't
going to cause too many issues:
- The Python disassembler API was only added in the previous release
of GDB, so I don't expect it to be widely used yet, and
- It's not clear to me why a user would need to sub-class the
DisassemblerResult type, I allowed it in the original patch
because at the time I couldn't see any reason to NOT allow it.
Having prevented sub-classing I can now rework the tail end of the
gdbpy_print_insn function; instead of pulling the results out of the
DisassemblerResult object by calling back into Python, I now cast the
Python object back to its C++ type (disasm_result_object), and access
the fields directly from there. In later commits I will be reworking
the disasm_result_object type in order to hold information about the
styled disassembler output.
The tests that dealt with sub-classing DisassemblerResult have been
removed, and a new test that confirms that DisassemblerResult can't be
sub-classed has been added.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
SUMMARY
The '--simple-values' argument to '-stack-list-arguments' and similar
GDB/MI commands does not take reference types into account, so that
references to arbitrarily large structures are considered "simple" and
printed. This means that the '--simple-values' argument cannot be used
by IDEs when tracing the stack due to the time taken to print large
structures passed by reference.
DETAILS
Various GDB/MI commands ('-stack-list-arguments', '-stack-list-locals',
'-stack-list-variables' and so on) take a PRINT-VALUES argument which
may be '--no-values' (0), '--all-values' (1) or '--simple-values' (2).
In the '--simple-values' case, the command is supposed to print the
name, type, and value of variables with simple types, and print only the
name and type of variables with compound types.
The '--simple-values' argument ought to be suitable for IDEs that need
to update their user interface with the program's call stack every time
the program stops. However, it does not take C++ reference types into
account, and this makes the argument unsuitable for this purpose.
For example, consider the following C++ program:
struct s {
  int v[10];
};

int
sum(const struct s &s)
{
  int total = 0;
  for (int i = 0; i < 10; ++i) total += s.v[i];
  return total;
}

int
main(void)
{
  struct s s = { { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 } };
  return sum(s);
}
If we start GDB in MI mode and continue to 'sum', the behaviour of
'-stack-list-arguments' is as follows:
(gdb)
-stack-list-arguments --simple-values
^done,stack-args=[frame={level="0",args=[{name="s",type="const s &",value="@0x7fffffffe310: {v = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}}"}]},frame={level="1",args=[]}]
Note that the value of the argument 's' was printed, even though 's' is
a reference to a structure, which is not a simple value.
See https://github.com/microsoft/MIEngine/pull/673 for a case where this
behaviour caused Microsoft to avoid the use of '--simple-values' in
their MIEngine debug adapter, because it caused Visual Studio Code to
take too long to refresh the call stack in the user interface.
SOLUTIONS
There are two ways we could fix this problem, depending on whether we
consider the current behaviour to be a bug.
1. If the current behaviour is a bug, then we can update the behaviour
of '--simple-values' so that it takes reference types into account:
that is, a value is simple if it is neither an array, struct, or
union, nor a reference to an array, struct or union.
In this case we must add a feature to the '-list-features' command so
that IDEs can detect that it is safe to use the '--simple-values'
argument when refreshing the call stack.
2. If the current behaviour is not a bug, then we can add a new option
for the PRINT-VALUES argument, for example, '--scalar-values' (3),
that would be suitable for use by IDEs.
In this case we must add a feature to the '-list-features' command
so that IDEs can detect that the '--scalar-values' argument is
available for use when refreshing the call stack.
PATCH
This patch implements solution (1) as I think the current behaviour of
not printing structures, but printing references to structures, is
contrary to reasonable expectation.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29554
|
|
I noticed the following behaviour:
$ gdb -q -i=mi /tmp/hello.x
=thread-group-added,id="i1"
=cmd-param-changed,param="print pretty",value="on"
~"Reading symbols from /tmp/hello.x...\n"
(gdb)
-break-insert -p 99 main
^done,bkpt={number="1",type="breakpoint",disp="keep",enabled="y",addr="0x0000000000401198",func="main",file="/tmp/hello.c",fullname="/tmp/hello.c",line="18",thread-groups=["i1"],thread="99",times="0",original-location="main"}
(gdb)
info breakpoints
&"info breakpoints\n"
~"Num Type Disp Enb Address What\n"
~"1 breakpoint keep y 0x0000000000401198 in main at /tmp/hello.c:18\n"
&"../../src/gdb/thread.c:1434: internal-error: print_thread_id: Assertion `thr != nullptr' failed.\nA problem internal to GDB has been detected,\nfurther debugging may prove unreliable."
&"\n"
&"----- Backtrace -----\n"
&"Backtrace unavailable\n"
&"---------------------\n"
&"\nThis is a bug, please report it."
&" For instructions, see:\n"
&"<https://www.gnu.org/software/gdb/bugs/>.\n\n"
Aborted (core dumped)
What we see here is that when using the MI a user can create
thread-specific breakpoints for non-existent threads. Then if we try
to use the CLI 'info breakpoints' command GDB throws an assertion.
The assert is a result of the print_thread_id call when trying to
build the 'stop only in thread xx.yy' line; print_thread_id requires a
valid thread_info pointer, which we can't have for a non-existent
thread.
In contrast, when using the CLI we see this behaviour:
$ gdb -q /tmp/hello.x
Reading symbols from /tmp/hello.x...
(gdb) break main thread 99
Unknown thread 99.
(gdb)
The CLI doesn't allow a breakpoint to be created for a non-existent
thread. So the 'info breakpoints' command is always fine.
Interestingly, the MI -break-info command doesn't crash, this is
because the MI uses global thread-ids, and so never calls
print_thread_id. However, GDB does support using CLI and MI in
parallel, so we need to solve this problem.
One option would be to change the CLI behaviour to allow printing
breakpoints for non-existent threads. This would preserve the current
MI behaviour.
The other option is to pull the MI into line with the CLI and prevent
breakpoints being created for non-existent threads. This is good for
consistency, but is a breaking change for the MI.
In the end I figured that it was probably better to retain the
consistent CLI behaviour, and just made the MI reject requests to
place a breakpoint on a non-existent thread. The only test we had
that depended on the old behaviour was
gdb.mi/mi-thread-specific-bp.exp, which was added by me in commit:
commit 2fd9a436c8d24eb0af85ccb3a2fbdf9a9c679a6c
Date: Fri Feb 17 10:48:06 2023 +0000
gdb: don't duplicate 'thread' field in MI breakpoint output
I certainly didn't intend for this test to rely on this feature of the
MI, so I propose to update this test to only create breakpoints for
threads that exist.
Actually, I've added a new test that checks the MI rejects creating a
breakpoint for a non-existent thread, and I've also extended the test
to run with the separate MI/CLI UIs, and then tested 'info
breakpoints' to ensure this command doesn't crash.
I've extended the documentation of the `-p` flag to explain the
constraints better.
I have also added a NEWS entry just in case someone runs into this
issue, at least then they'll know this change in behaviour was
intentional.
One thing that I did wonder about while writing this patch, is whether
we should treat requests like this, on both the MI and CLI, as another
form of pending breakpoint, something like:
(gdb) break foo thread 9
Thread 9 does not exist.
Make breakpoint pending on future thread creation? (y or [n]) y
Breakpoint 1 (foo thread 9) pending.
(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y <PENDING> foo thread 9
Don't know if folk think that would be a useful idea or not? Either
way, I think that would be a separate patch from this one.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Older gdb's (9, 10, 11 and 12) have a bug that causes them to crash whenever
a target reports the pauth feature string in the target description and also
provides additional registers outside of gdb's known and expected feature
strings.
This was fixed in gdb 13 onwards, but that means we're stuck with gdb's out
there that will crash on connection to the above targets.
QEMU has postponed inclusion of the pauth feature string in version 8, and
instead we agreed to use a new feature name to prevent crashing those older
gdb's.
Initially there was a plan to backport a trivial fix all the way to gdb 9, but
given QEMU's choice, this is no longer needed.
This new feature string is org.gnu.gdb.aarch64.pauth_v2, and should be used
by all targets going forward, except native linux gdb and gdbserver, for
backwards compatibility with older gdb's/gdbserver's.
gdb/gdbserver will still emit the old feature string for Linux since it doesn't
report additional system registers and thus doesn't cause a crash of older
gdb's. We can revisit this in the future once the problematic gdb's are likely
no longer in use.
I've added some documentation to explain the situation.
|
|
Allow consumers of GDB to extract the name of the main method. This is
most useful for Fortran programs which have a variable main method.
Used by both MAP and DDT e.g. it is used to detect the presence of debug
information.
Co-Authored-By: Maciej W. Rozycki <macro@embecosm.com>
|
|
When writing an unwinder it is necessary to create a new class to act
as a frame-id. This new class is almost certainly just going to set a
'sp' and 'pc' attribute within the instance.
This commit adds a little helper class gdb.unwinder.FrameId that does
this job. Users can make use of this to avoid having to write out
standard boilerplate code any time they write an unwinder.
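A minimal sketch of an unwinder using the helper (it never actually
claims a frame):
  from gdb.unwinder import Unwinder, FrameId

  class ExampleUnwinder(Unwinder):
      def __init__(self):
          super().__init__("example-frame-id")

      def __call__(self, pending_frame):
          sp = pending_frame.read_register("sp")
          pc = pending_frame.read_register("pc")
          # The helper replaces a hand-written class with just 'sp' and
          # 'pc' attributes.
          frame_id = FrameId(sp, pc)
          # A real unwinder would call
          # pending_frame.create_unwind_info(frame_id) and fill in the
          # saved registers; this sketch declines to claim the frame.
          return None

  gdb.unwinder.register_unwinder(None, ExampleUnwinder())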
Of course, if the user wants their FrameId class to be more
complicated in some way, then they can still write their own class,
just like they could before.
I've simplified the example code in the documentation to now use the
new helper class, and I've also made use of this helper within the
testsuite.
Any existing user code will continue to work just as it did before
after this change.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
Currently when creating a gdb.UnwindInfo object a user must call
gdb.PendingFrame.create_unwind_info and pass a frame-id object.
The frame-id object should have at least a 'sp' attribute, and
probably a 'pc' attribute too (it can also, in some cases have a
'special' attribute).
Currently all of these frame-id attributes need to be gdb.Value
objects, but the only reason for that requirement is that we have some
code in py-unwind.c that only handles gdb.Value objects.
If instead we switch to using get_addr_from_python in py-utils.c then
we will support both gdb.Value objects and also raw numbers, which
might make things simpler in some cases.
So, I started rewriting pyuw_object_attribute_to_pointer (in
py-unwind.c) to use get_addr_from_python. However, while looking at
the code I noticed a problem.
The pyuw_object_attribute_to_pointer function returns a boolean flag,
if everything goes OK we return true, but we return false in two
cases, (1) when the attribute is not present, which might be
acceptable, or might be an error, and (2) when we get an error trying
to extract the attribute value, in which case a Python error will have
been set.
Now in pending_framepy_create_unwind_info we have this code:
if (!pyuw_object_attribute_to_pointer (pyo_frame_id, "sp", &sp))
{
PyErr_SetString (PyExc_ValueError,
_("frame_id should have 'sp' attribute."));
return NULL;
}
Notice how we always set an error. This will override any error that
is already set.
So, if you create a frame-id object that has an 'sp' attribute, but
the attribute is not a gdb.Value, then currently we fail to extract
the attribute value (it's not a gdb.Value) and set this error in
pyuw_object_attribute_to_pointer:
  rc = pyuw_value_obj_to_pointer (pyo_value.get (), addr);
  if (!rc)
    PyErr_Format (
        PyExc_ValueError,
        _("The value of the '%s' attribute is not a pointer."),
        attr_name);
Then we return to pending_framepy_create_unwind_info and immediately
override this error with the error about 'sp' being missing.
This all feels very confused.
Here's my proposed solution: pyuw_object_attribute_to_pointer will now
return a tri-state enum, with states OK, MISSING, or ERROR. The
meanings of these states are:
  OK      - Attribute exists and was extracted fine,
  MISSING - Attribute doesn't exist, no Python error was set.
  ERROR   - Attribute does exist, but there was an error while
            extracting it, a Python error was set.
We need to update pending_framepy_create_unwind_info, the only user of
pyuw_object_attribute_to_pointer, but now I think things are much
clearer. Errors from lower levels are not blindly overridden with the
generic meaningless error message, but we still get the "missing 'sp'
attribute" error when appropriate.
This change also includes the switch to get_addr_from_python which was
what started this whole journey.
For well behaving user code there should be no visible changes after
this commit.
For user code that hits an error, hopefully the new errors should be
more helpful in figuring out what's gone wrong.
Additionally, users can now use integers for the 'sp' and 'pc'
attributes in their frame-id objects if that is useful.
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
The gdb.Frame class has far more methods than gdb.PendingFrame. Given
that a PendingFrame hasn't yet been claimed by an unwinder, there is a
limit to which methods we can add to it, but many of the methods that
the Frame class has, the PendingFrame class could also support.
In this commit I've added those methods to PendingFrame that I believe
are safe.
In terms of implementation: if I was starting from scratch then I
would implement many of these (or most of these) as attributes rather
than methods. However, given both Frame and PendingFrame are just
different representation of a frame, I think there is value in keeping
the interface for the two classes the same. For this reason
everything here is a method -- that's what the Frame class does.
The new methods I've added are:
- gdb.PendingFrame.is_valid: Return True if the pending frame
object is valid.
- gdb.PendingFrame.name: Return the name for the frame's function,
or None.
- gdb.PendingFrame.pc: Return the $pc register value for this
frame.
- gdb.PendingFrame.language: Return a string containing the
language for this frame, or None.
- gdb.PendingFrame.find_sal: Return a gdb.Symtab_and_line object
for the current location within the pending frame, or None.
- gdb.PendingFrame.block: Return a gdb.Block for the current
pending frame, or None.
- gdb.PendingFrame.function: Return a gdb.Symbol for the current
pending frame, or None.
In every case I've just copied the implementation over from gdb.Frame
and cleaned the code slightly e.g. NULL to nullptr. Additionally each
function required a small update to reflect the PendingFrame type, but
that's pretty minor.
There are tests for all the new methods.
For more extensive testing, I added the following code to the file
gdb/python/lib/command/unwinders.py:
  from gdb.unwinder import Unwinder

  class TestUnwinder(Unwinder):
      def __init__(self):
          super().__init__("XXX_TestUnwinder_XXX")

      def __call__(self, pending_frame):
          lang = pending_frame.language()
          try:
              block = pending_frame.block()
              assert isinstance(block, gdb.Block)
          except RuntimeError as rte:
              assert str(rte) == "Cannot locate block for frame."
          function = pending_frame.function()
          arch = pending_frame.architecture()
          assert arch is None or isinstance(arch, gdb.Architecture)
          name = pending_frame.name()
          assert name is None or isinstance(name, str)
          valid = pending_frame.is_valid()
          pc = pending_frame.pc()
          sal = pending_frame.find_sal()
          assert sal is None or isinstance(sal, gdb.Symtab_and_line)
          return None

  gdb.unwinder.register_unwinder(None, TestUnwinder())
This registers a global unwinder that calls each of the new
PendingFrame methods and checks the result is of an acceptable type.
The unwinder never claims any frames though, so it shouldn't change how
GDB actually behaves.
I then ran the testsuite. There was only a single regression, a test
that uses 'disable unwinder' and expects a single unwinder to be
disabled -- the extra unwinder is now disabled too, which changes the
test output. So I'm reasonably confident that the new methods are not
going to crash GDB.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
This commit makes a few related changes to the gdb.unwinder.Unwinder
class attributes:
1. The 'name' attribute is now a read-only attribute. This prevents
user code from changing the name after registering the unwinder. It
seems very unlikely that any user is actually trying to do this in
the wild, so I'm not very worried that this will upset anyone,
2. We now validate that the name is a string in the
Unwinder.__init__ method, and throw an error if this is not the
case. Hopefully nobody was doing this in the wild. This should
make it easier to ensure the 'info unwinder' command shows sane
output (how to display a non-string name for an unwinder?),
3. The 'enabled' attribute is now implemented with a getter and
setter. In the setter we ensure that the new value is a boolean,
but the really important change is that we call
'gdb.invalidate_cached_frames()'. This means that the backtrace
will be updated if a user manually disables an unwinder (rather than
calling the 'disable unwinder' command). It is not unreasonable to
think that a user might register multiple unwinders (relating to
some project) and have one command that disables/enables all the
related unwinders. This command might operate by poking the enabled
attribute of each unwinder object directly; after this commit, this
now works correctly (see the sketch after this list).
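Here is a rough sketch of that pattern (the command name, unwinder
names, and project list are all invented for the example; this is not
code from the patch):
import gdb
from gdb.unwinder import Unwinder

class ProjectUnwinder(Unwinder):
    def __init__(self, name):
        super().__init__(name)

    def __call__(self, pending_frame):
        # Never claims a frame; this sketch is only about 'enabled'.
        return None

PROJECT_UNWINDERS = [ProjectUnwinder("proj-alpha"),
                     ProjectUnwinder("proj-beta")]
for unwinder in PROJECT_UNWINDERS:
    gdb.unwinder.register_unwinder(None, unwinder)

class ProjectUnwindersEnable(gdb.Command):
    """Usage: project-unwinders-enable on|off"""

    def __init__(self):
        super().__init__("project-unwinders-enable", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        state = arg.strip() == "on"
        for unwinder in PROJECT_UNWINDERS:
            # With this commit the setter validates the value and calls
            # gdb.invalidate_cached_frames() itself, so the backtrace
            # refreshes without any extra work here.
            unwinder.enabled = state

ProjectUnwindersEnable()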
There are tests for all the changes, and lots of documentation updates
that both cover the new changes and further improve (I think)
the general documentation for GDB's Unwinder API.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
This changes gdb to use scalar arithmetic for expression evaluation.
I suspect this patch is not truly complete, as there may be code paths
that still don't correctly handle 128-bit integers. However, many
things do work now.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30190
|
|
AIX now supports debugging of vector register contents for both VMX
and VSX registers.
|
|
Now that index cache files are written in the background, one test in
index-cache.exp is racy -- it assumes that the cache file will have
been written during startup.
This patch fixes the problem by introducing a new maintenance command
to wait for all pending writes to the index cache.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
[ This is a simplified rewrite of an earlier submission "[RFC][gdb/symtab] Add
maint set symbol-read-order", submitted here (
https://sourceware.org/pipermail/gdb-patches/2022-September/192044.html
). ]
With the test-case included in this patch, we run into:
...
(gdb) file dwarf2-and-ctf
(gdb) print var_ctf^M
'var_ctf' has unknown type; cast it to its declared type^M
...
The problem is that the executable contains both ctf and dwarf2, so the ctf
info (which contains the type information about var_ctf) is ignored.
GDB has support for handling multiple debug formats, but the common use case
for ctf is to be used when dwarf2 is not present, and gdb reflects that:
it assumes that reading ctf in addition won't provide any extra information,
so it's not worth the additional cycles and memory.
Add a new command "set/show always-read-ctf on/off" which, when on, forces
unconditional reading of ctf, allowing us to do:
...
(gdb) set always-read-ctf on
(gdb) file dwarf2-and-ctf
(gdb) print var_ctf^M
$2 = 2^M
...
The setting is off by default, preserving current behaviour.
A bit of background on the relevance of reading order: the formats have a
priority relationship between them, where reading earlier means lower
priority. By reading the format with the most detail last, we ensure it has
the highest priority, which makes sure that in case there is overlapping info,
the most detailed info is found. This explains the current reading order of
mdebug, stabs and dwarf2.
Add the unconditional reading of ctf before dwarf2, because it's less detailed
than dwarf2. The conditional reading of ctf is still done after the attempt to
read dwarf2, necessarily so because we only know whether there's dwarf2 after
we've tried to read it.
The new command allows us to replace the uses of -Wl,--strip-debug added in
commit 908a926ec4e ("[gdb/testsuite] Fix ctf test-cases on openSUSE
Tumbleweed") with uses of "set always-read-ctf on", but I've left that for
another commit.
Tested on x86_64-linux.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
When creating a thread-specific breakpoint with a single location, the
'thread' field would be repeated in the MI output. This can be seen
in two existing tests gdb.mi/mi-nsmoribund.exp and
gdb.mi/mi-pending.exp, e.g.:
(gdb)
-break-insert -p 1 bar
^done,bkpt={number="1",type="breakpoint",disp="keep",
enabled="y",
addr="0x000000000040110a",func="bar",
file="/tmp/mi-thread-specific-bp.c",
fullname="/tmp/mi-thread-specific-bp.c",
line="32",thread-groups=["i1"],
thread="1",thread="1", <================ DUPLICATION!
times="0",original-location="bar"}
I know we need to be careful when adjusting MI output, but as the field
is duplicated and its contents are always identical, I'm hopeful that in
this case we can get away with removing one of the duplicates.
The change in GDB is a fairly trivial condition change.
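With that change, the tail of the example output above should read
(reconstructed by hand from the example, not pasted from a test run):
  line="32",thread-groups=["i1"],
  thread="1",
  times="0",original-location="bar"}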
We did have a couple of tests that contained the duplicate fields in
their expected output, but given there was no comment pointing out
this oddity either in the GDB code or in the tests, I suspect this was
more a case of copying whatever output GDB produced and using that as
the expected results. I've updated these tests to remove the
duplication.
I've updated lib/mi-support.exp to provide support for building
breakpoint patterns that contain the thread field, and I've made use
of this in a new test I've added that is just about creating
thread-specific breakpoints and checking the results. The two tests I
mentioned above as being updated could also use the new
lib/mi-support.exp functionality, but I'm going to do that in a later
patch; this way it is clear what changes I'm actually proposing to
make to the expected output.
As I said, I hope that frontends will be able to handle this change,
but I still think it's worth adding a NEWS entry; that way, if someone
runs into problems, there's a chance they can figure out what's going
on.
This should not impact CLI output at all.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Pedro Alves <pedro@palves.net>
|
|
For testing a following patch, I wanted a way to send a SIGINT to GDB
from a breakpoint condition. And I didn't want to do it from a Python
breakpoint or Python function, as I wanted to exercise non-Python code
paths. So I thought I'd add a new $_shell internal function that
runs a command under the shell and returns the exit code. With this,
I could write:
(gdb) b foo if $_shell("kill -SIGINT $gdb_pid") != 0 || <other condition>
I think this is generally useful, hence I'm proposing it here.
Here's the new function in action:
(gdb) p $_shell("true")
$1 = 0
(gdb) p $_shell("false")
$2 = 1
(gdb) p $_shell("echo hello")
hello
$3 = 0
(gdb) p $_shell("foobar")
bash: line 1: foobar: command not found
$4 = 127
(gdb) help function _shell
$_shell - execute a shell command and returns the result.
Usage: $_shell (command)
Returns the command's exit code: zero on success, non-zero otherwise.
(gdb)
NEWS and manual changes included.
Approved-By: Andrew Burgess <aburgess@redhat.com>
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Eli Zaretskii <eliz@gnu.org>
Change-Id: I7e36d451ee6b428cbf41fded415ae2d6b4efaa4e
|