|
This adds a new gdb.Frame.static_link method to the gdb Python layer.
This can be used to find the static link frame for a given frame.
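For illustration (not part of the patch), a minimal sketch of how the
method might be used from the selected frame; it is assumed here that
static_link returns None when the frame has no static link:

import gdb

frame = gdb.selected_frame()
outer = frame.static_link()
if outer is None:
    print("no static link for this frame")
else:
    # The static link leads to the frame of the lexically enclosing
    # function (e.g. for nested functions).
    print("static link frame is at PC %#x" % outer.pc())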
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit builds on the previous commit, and implements the
extension_language_ops::handle_missing_debuginfo function for Python.
This hook will give user supplied Python code a chance to help find
missing debug information.
The implementation of the new hook is pretty minimal within GDB's C++
code; most of the work is out-sourced to a Python implementation which
is modelled heavily on how GDB's Python frame unwinders are
implemented.
The following new commands are added as commands implemented in
Python; this is similar to how the Python unwinder commands are
implemented:
info missing-debug-handlers
enable missing-debug-handler LOCUS HANDLER
disable missing-debug-handler LOCUS HANDLER
To make use of this extension hook a user creates missing debug
information handler objects and registers these handlers with GDB.
When GDB encounters an objfile that is missing debug information, each
handler is called in turn until one is able to help. Here is a
minimal handler that does nothing useful:
import gdb
import gdb.missing_debug

class MyFirstHandler(gdb.missing_debug.MissingDebugHandler):
    def __init__(self):
        super().__init__("my_first_handler")

    def __call__(self, objfile):
        # This handler does nothing useful.
        return None

gdb.missing_debug.register_handler(None, MyFirstHandler())
Returning None from the __call__ method tells GDB that this handler
was unable to find the missing debug information, and GDB should ask
any other registered handlers.
By extending the __call__ method it is possible for the Python
extension to locate the debug information for objfile and return a
value that tells GDB how to use the information that has been located.
Possible return values from a handler:
- None: This means the handler couldn't help. GDB will call other
registered handlers to see if they can help instead.
- False: The handler has done all it can, but the debug information
for the objfile still couldn't be found. GDB will not call
any other handlers, and will continue without the debug
information for objfile.
- True: The handler has installed the debug information into a
location where GDB would normally expect to find it. GDB
should look again for the debug information.
- A string: The handler can return a filename, which is the file
containing the missing debug information. GDB will load
this file.
When a handler returns True, GDB will look again for the debug
information, but only using the standard built-in build-id and
.gnu_debuglink based lookup strategies. It is not possible for an
extension to trigger another debuginfod lookup; the assumption is that
the debuginfod server is remote, and out of the control of extensions
running within GDB.
Handlers can be registered globally, or per program space. GDB checks
the handlers for the current program space first, and then all of the
global handlers. The first handler that returns a value that is not
None has "handled" the objfile, at which point GDB continues.
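For illustration only, here is a hedged sketch of a handler that
returns a filename; the cache directory and naming scheme below are
invented for this example and are not part of the patch:

import os

import gdb
import gdb.missing_debug

class CacheHandler(gdb.missing_debug.MissingDebugHandler):
    def __init__(self, cache_dir):
        super().__init__("cache_handler")
        self._cache_dir = cache_dir

    def __call__(self, objfile):
        # Look for <basename>.debug in the (hypothetical) local cache.
        name = os.path.basename(objfile.filename) + ".debug"
        candidate = os.path.join(self._cache_dir, name)
        if os.path.exists(candidate):
            # A string return value names the file containing the
            # missing debug information; GDB will load this file.
            return candidate
        # None lets any other registered handlers try.
        return None

gdb.missing_debug.register_handler(None, CacheHandler("/var/cache/debug"))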
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
When resizing from a big to a small terminal size, if you have a
TUI python window that would then be outside of the new size,
valgrind shows this error:
==3389== Invalid read of size 1
==3389== at 0xC3DFEE: wnoutrefresh (lib_refresh.c:167)
==3389== by 0xC3E3C9: wrefresh (lib_refresh.c:63)
==3389== by 0xA9766C: tui_unhighlight_win(tui_win_info*) (tui-wingeneral.c:134)
==3389== by 0x98921C: tui_py_window::rerender() (py-tui.c:183)
==3389== by 0xA8C23C: tui_layout_split::apply(int, int, int, int, bool) (tui-layout.c:1030)
==3389== by 0xA8C2A2: tui_layout_split::apply(int, int, int, int, bool) (tui-layout.c:1033)
==3389== by 0xA8C23C: tui_layout_split::apply(int, int, int, int, bool) (tui-layout.c:1030)
==3389== by 0xA8B1F8: tui_apply_current_layout(bool) (tui-layout.c:81)
==3389== by 0xA95CDB: tui_resize_all() (tui-win.c:525)
==3389== by 0xA95D1E: tui_async_resize_screen(void*) (tui-win.c:562)
==3389== by 0x6B855D: invoke_async_signal_handlers() (async-event.c:234)
==3389== by 0xC0CEF8: gdb_do_one_event(int) (event-loop.cc:199)
==3389== Address 0x115cc214 is 1,332 bytes inside a block of size 2,240 free'd
==3389== at 0x4A0A430: free (vg_replace_malloc.c:446)
==3389== by 0xC3CF7D: _nc_freewin (lib_newwin.c:121)
==3389== by 0xA8B1C6: tui_apply_current_layout(bool) (tui-layout.c:78)
==3389== by 0xA95CDB: tui_resize_all() (tui-win.c:525)
==3389== by 0xA95D1E: tui_async_resize_screen(void*) (tui-win.c:562)
==3389== by 0x6B855D: invoke_async_signal_handlers() (async-event.c:234)
==3389== by 0xC0CEF8: gdb_do_one_event(int) (event-loop.cc:199)
==3389== by 0x8E40E9: captured_command_loop() (main.c:407)
==3389== by 0x8E5E54: gdb_main(captured_main_args*) (main.c:1324)
==3389== by 0x62AC04: main (gdb.c:39)
This happens because tui_py_window::m_inner_window still has the old,
out-of-bounds coordinates, and wnoutrefresh then does an out-of-bounds
access.
Fix this by resetting m_inner_window on every resize; it will be
recreated in the next rerender call anyway.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
I found a declaration in py-stopevent.h for which there is no
definition. This patch removes it.
|
|
This patch implements the DAP setVariable request.
setVariable is a bit odd in that it specifies the variable to modify
by passing in the variable's container and the name of the variable.
This approach can't handle variable shadowing (there are a couple of
open DAP bugs on this topic), so this patch renames duplicates to
avoid the problem.
|
|
Add a gdb.Value.bytes attribute. This attribute contains the bytes of
the value (assuming the complete bytes of the value are available).
If the bytes of the gdb.Value are not available then accessing this
attribute raises an exception.
The bytes object returned from gdb.Value.bytes is cached within GDB so
that the same bytes object is returned each time. The bytes object is
created on-demand though to reduce unnecessary work.
For some values we can of course obtain the same information by
reading inferior memory based on gdb.Value.address and
gdb.Value.type.sizeof; however, not every value is in memory, so we
don't always have an address.
The gdb.Value.bytes attribute will convert any value to a bytes
object, so long as the contents are available. The value can be one
created purely in Python code, the value could be in a register,
or (of course) the value could be in memory.
The Value.bytes attribute can also be assigned to. Assigning to this
attribute is similar to calling Value.assign; the underlying value is
updated within the inferior. The value assigned to Value.bytes must be
a buffer which contains exactly the correct number of bytes
(i.e. unlike value creation, we don't allow oversized buffers).
To support this assignment-like behaviour I've factored out the core
of valpy_assign. I've also updated convert_buffer_and_type_to_value
so that it can (for my use case) check the exact buffer length.
The restrictions on when Value.bytes can and cannot be written to are
exactly the same as for Value.assign.
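As an illustrative sketch (the variable name is made up), reading and
then overwriting a value's contents might look like this:

import gdb

v = gdb.parse_and_eval("some_global_int")   # hypothetical variable
raw = v.bytes                               # bytes object, cached by GDB
print(raw.hex())

# Writing requires a buffer of exactly the right size.
v.bytes = bytes(len(raw))                   # zero the value in the inferior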
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=13267
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that gdb/python/python.c unconditionally includes
gdbsupport/selftest.h.
Make this conditional on GDB_SELF_TEST.
Tested on x86_64-linux.
|
|
A pretty-printer's 'children' method may return values other than a
gdb.Value -- it may return any value that can be converted to a
gdb.Value.
I noticed that this case did not work for DAP. This patch fixes the
problem.
|
|
Andry pointed out that the DAP code did not properly handle
gdb.LazyString results from a pretty-printer, yielding:
TypeError: Object of type LazyString is not JSON serializable
This patch fixes the problem, partly with a small patch in varref.py,
but mainly by implementing tp_str for LazyString.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Andry noticed that given a DAP setExpression request, where the
expression to set is a register, DAP will return the wrong value -- it
will return the old value, not the updated one.
This happens because gdb.Value.assign (which was recently added for
DAP) does not update the value.
In this patch, I chose to have the assign method update the Value
in-place. It's also possible to have it return a new value, but this
didn't seem very useful to me.
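A small sketch of the in-place behaviour (the variable name is
illustrative):

import gdb

val = gdb.parse_and_eval("my_counter")   # hypothetical int variable
val.assign(int(val) + 1)
# With this change the same Value object reflects the new contents,
# so re-reading it shows the updated number.
print(int(val))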
|
|
Andry Ogorodnik, a co-worker, noticed that multiple "scopes" requests
with the same frame would yield different variableReference values in
the response.
This patch adds a regression test for this, and adds a scope cache in
scopes.py, ensuring that multiple identical requests will get the same
response.
Tested-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
|
|
This commit replaces the architecture_changed observer with a
new_architecture observer.
Currently the only user of the architecture_changed observer is the
Python code, which uses this observer to register the Python unwinder
with the architecture.
The problem is that the architecture_changed observer is triggered
from inferior::set_arch(), which only sees the inferior-wide gdbarch
value. For targets that use thread-specific architectures, these
never trigger the architecture_changed observer, and so never have the
Python unwinder registered with them.
When it comes to unwinding, GDB makes use of the frame's gdbarch, which
is based on the thread's regcache gdbarch, which is set in
get_thread_regcache to the value returned from
target_thread_architecture. This is not always the inferior's gdbarch
value; it might be a thread-specific gdbarch which has not passed
through inferior::set_arch().
The new_architecture observer will be triggered from
gdbarch_find_by_info, whenever a new gdbarch is created and
initialised. As GDB caches and reuses gdbarch values, we should
expect to see each new architecture trigger the new_architecture
observer just once.
After this commit, targets that make use of thread-specific
architectures should be able to make use of Python unwinders.
As I don't have access to a machine that makes use of thread-specific
architectures right now, I asked Luis to confirm that an AArch64
target that uses SVE/SME can't use the Python unwinders in threads
that are using a thread-specific architecture, and he confirmed that
this is indeed the case; see this discussion:
https://inbox.sourceware.org/gdb/87wmvsat8i.fsf@redhat.com
Tested-By: Lancelot Six <lancelot.six@amd.com>
Tested-By: Luis Machado <luis.machado@arm.com>
Reviewed-By: Luis Machado <luis.machado@arm.com>
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This function is just a wrapper around the current inferior's gdbarch.
I find that having that wrapper just obscures where the arch is coming
from, and that it's often used as "I don't know which arch to use so
I'll use this magical target_gdbarch function that gets me an arch" when
the arch should in fact come from something in the context (a thread,
objfile, symbol, etc). I think that removing it and inlining
`current_inferior ()->arch ()` everywhere will make it a bit clearer
where that arch comes from and will prompt people to reflect on whether
this is the right place to get the arch or not.
Change-Id: I79f14b4e4934c88f91ca3a3155f5fc3ea2fadf6b
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This is to make it explicit which inferior's architecture just changed,
and that the callbacks should not assume it is the current inferior.
Update the only caller, pyuw_on_new_gdbarch, to add the parameter,
although it doesn't use it currently.
Change-Id: Ieb7f21377e4252cc6e7b1ce2cc812cd1a1840e0e
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
Make the inferior's gdbarch field private, and add getters and setters.
This helped me by allowing putting breakpoints on set_arch to know when
the inferior's arch was set. A subsequent patch in this series also
adds more things in set_arch.
Change-Id: I0005bd1ef4cd6b612af501201cec44e457998eec
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit adds a new Python function, gdb.notify_mi, that can be used
to emit custom async notification to MI channel. This can be used, among
other things, to implement notifications about events MI does not support,
such as remote connection closed or register change.
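For illustration, a hedged sketch of how an extension might use this;
the notification name and payload are invented for this example:

import gdb

# Emits an async record along the lines of
#   =connection-removed,reason="remote closed"
# on the MI channel.
gdb.notify_mi("connection-removed", {"reason": "remote closed"})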
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit generalizes serialize_mi_result() to make it usable in
contexts other than printing the result of a custom MI command.
To do so, the check of whether the passed Python object is a dictionary
has been moved to the caller - at the very least, different uses require
different error messages. It has also been renamed to
serialize_mi_results() to better match the GDB/MI output syntax (see the
corresponding section in the documentation, in particular the rules
'result-record' and 'async-output').
Since it is now a more generic function, it has been moved to py-mi.c.
This is a preparation for implementing Python support for sending custom
MI async events.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
The new_objfile observer is currently used to indicate both when a new
objfile is added to program space (when passed non-nullptr) and when all
objfiles of a program space were just removed (when passed nullptr).
I think this is confusing (and Andrew apparently thinks so too [1]).
Add a new "all_objfiles_removed" observer to remove the second role from
"new_objfile".
Some existing users of new_objfile do nothing if the passed objfile is
nullptr. For them, we can simply drop the nullptr check. For others,
add a new all_objfiles_removed callback, and refactor things a bit to
keep the existing behavior as much as possible.
Some callbacks relied on current_program_space, and following
the refactoring now use either objfile->pspace or the pspace passed to
all_objfiles_removed. I think this should be relatively safe, and in
general a step in the right direction.
On the notify side, I found only one call site to change from
new_objfile to all_objfiles_removed, in clear_symtab_users. It is not
entirely clear to me that this is correct. clear_symtab_users
appears to be called in spots that don't remove all objfiles
(functions finish_new_objfile, remove_symbol_file_command, reread_symbols,
do_module_cleanups). But I think that this patch at least makes the
current code clearer.
[1] https://gitlab.com/gnutools/binutils-gdb/-/commit/a0a031bce0527b1521788b5dad640e7883b3a252
Change-Id: Icb648f72862e056267f30f44dd439bd4ec766f13
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Make the current_program_space references bubble up a bit.
Change-Id: Id047a48cc8d8a45504cdbb5960bafe3e7735d652
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add program_space parameters to emit_clear_objfiles_event and
create_clear_objfiles_event_object, making the reference to
current_program_space bubble up a bit.
Change-Id: I5fde2071712781e5d45971fa0ab34d85d3a49a71
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Initially I just wanted a Python event for when GDB removes a program
space, I'm writing a Python extension that caches information for each
program space, and need to know when I should discard entries for a
particular program space.
But, it seemed easy enough to also add an event for when GDB adds a
new program space, so I went ahead and added both new events.
Of course, we don't currently have an observable for program space
addition or removal, so I first needed to add these. After that it's
pretty simple to add two new Python events and have these trigger.
The two new event registries are:
events.new_progspace
events.free_progspace
These emit NewProgspaceEvent and FreeProgspaceEvent objects
respectively, each of these new event types has a 'progspace'
attribute that contains the relevant gdb.Progspace object.
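A sketch of how an extension might hook these (the callback bodies are
just illustrative logging):

import gdb

def on_new_progspace(event):
    print("new program space:", event.progspace)

def on_free_progspace(event):
    # A good place to drop cached data for event.progspace.
    print("program space going away:", event.progspace)

gdb.events.new_progspace.connect(on_new_progspace)
gdb.events.free_progspace.connect(on_free_progspace)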
There are a couple of things to be mindful of.
First, it is not possible to catch the NewProgspaceEvent for the very
first program space, the one that is created when GDB first starts, as
this program space is created before any Python scripts are sourced.
In order to allow this event to be caught we would need to defer
creating the first program space, and as a consequence the first
inferior, until some later time. But, existing scripts could easily
depend on there being an initial inferior, so I really don't think we
should change that -- and so, we end up with the consequence that we
can't catch the event for the first program space.
The second, I think minor, issue is that GDB doesn't clean up its
program spaces upon exit -- or at least, they are not cleaned up
before Python is shut down. As a result, any program spaces in use at
the time GDB exits don't generate a FreeProgspaceEvent. I'm not
particularly worried about this for my use case, I'm using the event
to ensure that a cache doesn't hold stale entries within a single GDB
session. It's also easy enough to add a Python at-exit callback which
can do any final cleanup if needed.
Finally, when testing, I did hit a slightly weird issue with some of
the remote boards (e.g. remote-stdio-gdbserver). As a consequence of
this issue I see some output like this in the gdb.log:
(gdb) PASS: gdb.python/py-progspace-events.exp: inferior 1
step
FreeProgspaceEvent: <gdb.Progspace object at 0x7fb7e1d19c10>
warning: cannot close "target:/lib64/libm.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
warning: cannot close "target:/lib64/libc.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
warning: cannot close "target:/lib64/ld-linux-x86-64.so.2": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
do_parent_stuff () at py-progspace-events.c:41
41 ++global_var;
(gdb) PASS: gdb.python/py-progspace-events.exp: step
The 'FreeProgspaceEvent ...' line is expected; that's my test Python
extension logging the event. What isn't expected are all the blocks
like:
warning: cannot close "target:/lib64/libm.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
It turns out that this has nothing to do with my changes, this is just
a consequence of reading files over the remote protocol. The test
forks a child process which GDB stays attached too. When the child
exits, GDB cleans up by calling prune_inferiors, which in turn can
result in GDB trying to close some files that are open because of the
inferior being deleted.
If the prune_inferiors call occurs when the remote target is
running (and in non-async mode) then GDB will try to send a fileio
packet while the remote target is waiting for a stop reply, and the
remote target will throw an error, see remote_target::putpkt_binary in
remote.c for details.
I'm going to look at fixing this, but, as I said, this is nothing to
do with this change, I just mention it because I ended up needing to
account for these warning messages in one of my tests, and it all
looks a bit weird.
Approved-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit makes the executable_changed observable available through
the Python API as an event. There's nothing particularly interesting
going on here, it just follows the same pattern as many of the other
Python events we support.
The new event registry is called events.executable_changed, and this
emits an ExecutableChangedEvent object which has two attributes, a
gdb.Progspace called 'progspace', this is the program space in which
the executable changed, and a Boolean called 'reload', which is True
if the same executable changed on disk and has been reloaded, or is
False when a new executable has been loaded.
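A sketch of how a script might listen for this (the callback body is
illustrative only):

import gdb

def on_exec_changed(event):
    # event.progspace is the gdb.Progspace whose executable changed;
    # event.reload is True when the same file was reloaded from disk.
    print("executable changed:", event.progspace, "reload:", event.reload)

gdb.events.executable_changed.connect(on_exec_changed)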
One interesting thing did come up during testing though: you'll notice
the test contains a setup_kfail call. During testing I observed that
the executable_changed event would trigger twice when GDB restarted an
inferior. However, the ExecutableChangedEvent object is identical for
both calls, so the wrong information is never sent out, we just see
one too many events.
I tracked this down to how the reload_symbols function (symfile.c)
takes care to also reload the executable; however, I've split fixing
this into a separate commit, so see the next commit for details.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a new Progspace.executable_filename attribute that contains the
path to the executable for this program space, or None if no
executable is set.
The path within this attribute will be set by the "exec-file" and/or
"file" commands.
Accessing this attribute for an invalid program space will raise an
exception.
This new attribute is similar to, but not the same as, the existing
gdb.Progspace.filename attribute. If I could change the past, I'd
change the 'filename' attribute to 'symbol_filename', which is what it
actually represents. The old attribute will be set by the
'symbol-file' command, while the new attribute is set by the
'exec-file' command. Obviously the 'file' command sets both of these
attributes.
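A minimal usage sketch based on the description above:

import gdb

pspace = gdb.current_progspace()
exe = pspace.executable_filename
if exe is None:
    print("no executable set")
else:
    print("executable:", exe)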
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a new Progspace.symbol_file attribute. This attribute holds the
gdb.Objfile object that corresponds to Progspace.filename, or None if
there is no main symbol file currently set.
Currently, to get this gdb.Objfile, a user would need to use
Progspace.objfiles, and then search for the objfile with a name that
matches Progspace.filename -- which should work just fine, but having
direct access seems a little nicer.
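A minimal usage sketch based on the description above:

import gdb

objf = gdb.current_progspace().symbol_file
if objf is not None:
    # objf is the gdb.Objfile corresponding to Progspace.filename.
    print("main symbol file:", objf.filename)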
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Extend the description for Progspace.filename in the documentation to
mention what the returned string is actually the filename
for (e.g. that it is the filename passed to the 'symbol-file' or
'file' command).
Also document that this attribute will be None if no symbol file is
currently loaded.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
printing.py references "gdb.printing" in a few spots, but there's no
need for this. I think this is leftover from when this code was
(briefly) in some other module. This patch removes the unnecessary
qualifications. Tested on x86-64 Fedora 36.
|
|
This adds two new pretty-printer methods, to support random access to
children. The methods are implemented for the no-op array printer,
and DAP is updated to use this.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
There was an earlier thread about adding new methods to
pretty-printers:
https://sourceware.org/pipermail/gdb-patches/2023-June/200503.html
We've known about the need for printer extensibility for a while, but
have been hampered by backward-compatibility concerns: gdb never
documented that printers might acquire new methods, and so existing
printers may have attribute name clashes.
To solve this problem, this patch adds a new pretty-printer tag class
that signals to gdb that the printer follows new extensibility rules.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30816
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
With any gdb.dap test and python 3.6 I run into:
...
Error occurred in Python: 'code' object has no attribute 'co_posonlyargcount'
ERROR: eof reading json header
...
The attribute is not supported before python 3.8, which introduced the
"Positional−only Parameters" concept.
Fix this by using try/except AttributeError.
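The pattern in isolation looks roughly like this (the sample function
is made up; this is not the actual DAP code):

def greet(name, greeting="hello"):
    return "%s, %s" % (greeting, name)

try:
    nposonly = greet.__code__.co_posonlyargcount
except AttributeError:
    # Python < 3.8: positional-only parameters do not exist.
    nposonly = 0
print(nposonly)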
Tested on x86_64-linux:
- openSUSE Leap 15.4 with python 3.6, and
- openSUSE Tumbleweed with python 3.11.5.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The buildbot pointed out that the last DAP series I checked in had an
issue. Looking into it, it seems there is a stray trailing "," in
breakpoint.py. This patch removes it.
This seems to point out a test suite deficiency. I will look into
fixing that.
|
|
I noticed a comment by an include and remembered that I think these
don't really provide much value -- sometimes they are just editorial,
and sometimes they are obsolete. I think it's better to just remove
them. Tested by rebuilding.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
According to the DAP specification, if the "sourceReference" field is
included in a Source object, then the DAP client _must_ make a "source"
request to the debugger to retrieve file contents, even if the Source
object also includes path information.
If the Source's path field is a valid path that the DAP client is able
to read from the filesystem, having to make another request to the
debugger to get the file contents is wasteful and leads to incorrect
results (DAP clients will try to get the contents from the server and
display those contents as a file with the name in "source.path", but
this will conflict with the _actual_ existing file at "source.path").
Instead, only set "sourceReference" if the source file path does not
exist.
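A hedged sketch of that rule, not GDB's actual implementation:

import os

def make_source_sketch(filename):
    src = {"name": os.path.basename(filename), "path": filename}
    if not os.path.exists(filename):
        # Only when the client cannot read the file itself do we force
        # it to ask the debugger; a real implementation would allocate
        # and remember a proper reference number here.
        src["sourceReference"] = 1
    return src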
Approved-By: Tom Tromey <tom@tromey.com>
|
|
If the breakpoint has a fullname, use that as the source path when
resolving the breakpoint source information. This is consistent with
other callers of make_source which also use "fullname" if it exists (see
e.g. DAPFrameDecorator which returns the symtab's fullname).
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Some DAP clients may send additional parameters in the stepOut command
(e.g. "granularity") which are not used by GDB, but should nonetheless
be accepted without error.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Not all breakpoints have a source location. For example, a breakpoint
set on a raw address will have only the "address" field populated, but
"source" will be None, which leads to a RuntimeError when attempting to
unpack the filename and line number.
Before attempting to unpack the filename and line number from the
breakpoint, ensure that the source information is not None. Also
populate the source and line information separately from the
"instructionReference" field, so that breakpoints that include only an
address are still included.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The buildbot pointed out that I neglected to re-run 'black' after
making some changes. This patch fixes the oversight.
|
|
A user pointed out that the current DAP variable code does not let the
client dereference a pointer. Oops!
Fixing this oversight is simple enough -- adding a new no-op
pretty-printer for pointers and references is quite simple.
However, doing this naively caused a regression in scopes.exp, which
expected there to be no children of a 'const char *' variable. This
problem was fixed by the preceding patches in the series, which ensure
that a C type of this kind is recognized as a string.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30821
|
|
This changes main_type to hold a language, and updates the debug
readers to set this field. This is done by adding the language to the
type-allocator object.
Note that the non-DWARF readers are changed on a "best effort" basis.
This patch also reimplements type::is_array_like to use the type's
language, and it adds a new type::is_string_like as well. This in
turn lets us change the Python implementation of these methods to
simply defer to the type.
|
|
This replaces some casts to 'watchpoint *' with checked_static_cast.
In one spot, an unnecessary block is also removed.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
A user pointed out that if a DAP setBreakpoints request has a 'source'
field in a SourceBreakpoint object, then the gdb DAP implementation
will throw an exception.
While SourceBreakpoint does not allow 'source' in the spec, it seems
better to me to accept it. I don't think we should fully go down the
"Postel's Law" path -- after all, we have the type-checker -- but at
the same time, if we do send errors, they should be intentional and
not artifacts of the implementation.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30820
|
|
When running test-case gdb.python/py-symbol.exp with target board
cc-with-dwz-m, we run into:
...
(gdb) python print (len (gdb.lookup_static_symbols ('rr')))^M
4^M
(gdb) FAIL: gdb.python/py-symbol.exp: \
print (len (gdb.lookup_static_symbols ('rr')))
...
while with target board unix we have instead:
...
(gdb) python print (len (gdb.lookup_static_symbols ('rr')))^M
2^M
(gdb) PASS: gdb.python/py-symbol.exp: \
print (len (gdb.lookup_static_symbols ('rr')))
...
The problem is that the loop in gdbpy_lookup_static_symbols loops over compunits
representing both CUs and PUs:
...
for (compunit_symtab *cust : objfile->compunits ())
...
When doing a lookup on a PU, the user link is followed until we end up at a CU,
and the lookup is done in that CU.
In other words, when doing a lookup in the loop for a PU we duplicate the
lookup for a CU that is already handled by the loop.
Fix this by skipping PUs in the loop in gdb.lookup_static_symbols.
Tested on x86_64-linux.
PR symtab/25261
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=25261
|
|
This changes the no-op pretty printers -- used by DAP -- to handle
array- and string-like objects known by the gdb core. Two new tests
are added, one for Ada and one for Rust.
|
|
gdb's language code may know how to display values specially. For
example, the Rust code understands that &str is a string-like type, or
Ada knows how to handle unconstrained arrays. This knowledge is
exposed via val-print, and via varobj -- but currently not via DAP.
This patch adds some support code to let DAP also handle these cases,
though in a somewhat more generic way.
Type.is_array_like and Value.to_array are added to make Python aware
of the cases where gdb knows that a structure type is really
"array-like".
Type.is_string_like is added to make Python aware of cases where gdb's
language code knows that a type is string-like.
Unlike Value.string, these cases are handled by the type's language,
rather than the current language.
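A sketch of how Python code might use these (spellings as described
above; "my_value" is a placeholder expression):

import gdb

val = gdb.parse_and_eval("my_value")
t = val.type
if t.is_string_like:
    # The type's own language renders this as a string.
    print(val.format_string())
elif t.is_array_like:
    arr = val.to_array()   # view the structure as an array value
    print(arr[0])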
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Right now, if a program uses multiple languages, DAP value formatting
will always use the language of the innermost frame. However, it is
better to use the variable's defining frame instead. This patch does
this by selecting the frame first.
This also fixes a possibly latent bug in the "stepOut" command --
"finish" is sensitive to the selected frame, but the DAP code may
already select other frames when convenient. The DAP stepOut request
only works on the newest frame, so be sure to select it before
invoking "finish".
|
|
Ada has a few complexities when it comes to array handling. Currently
these are all handled in Ada-specific code -- but unfortunately that
means they aren't really accessible to Python.
This patch changes the Python code to defer to Ada when given an Ada
array. In order to make this work, one spot in ada-lang.c had to be
updated to set the "GNAT-specific" flag on an array type.
The test case for this will come in a later patch.
|
|
Replace with type::field + field::bitsize.
Change-Id: I2a24755a33683e4a2775a6d2a7b7a9ae7362e43a
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Replace with type::field + field::is_artificial.
Change-Id: Ie3bacae49d9bd02e83e504c1ce01470aba56a081
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In remote_target::thread_info_to_thread_handle we return a copy:
...
gdb::byte_vector
remote_target::thread_info_to_thread_handle (struct thread_info *tp)
{
  remote_thread_info *priv = get_remote_thread_info (tp);
  return priv->thread_handle;
}
...
Fix this by returning a gdb::array_view instead:
...
gdb::array_view<const gdb_byte>
remote_target::thread_info_to_thread_handle (struct thread_info *tp)
...
Tested on x86_64-linux.
This fixes the build when building with -std=c++20.
Approved-By: Pedro Alves <pedro@palves.net>
|
|
Currently, each target backend is responsible for printing "[Thread
...exited]" before deleting a thread. This leads to unnecessary
differences between targets; e.g., with the remote target, we never
print such messages, even though we do print "[New Thread ...]".
E.g., debugging the gdb.threads/attach-many-short-lived-threads.exp
with gdbserver, letting it run for a bit, and then pressing Ctrl-C, we
currently see:
(gdb) c
Continuing.
^C[New Thread 3850398.3887449]
[New Thread 3850398.3887500]
[New Thread 3850398.3887551]
[New Thread 3850398.3887602]
[New Thread 3850398.3887653]
...
Thread 1 "attach-many-sho" received signal SIGINT, Interrupt.
0x00007ffff7e6a23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0, req=req@entry=0x7fffffffda80, rem=rem@entry=0x7fffffffda80)
at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
78 in ../sysdeps/unix/sysv/linux/clock_nanosleep.c
(gdb)
Above, we only see "New Thread" notifications, even though threads
were deleted.
After this patch, we'll see:
(gdb) c
Continuing.
^C[Thread 3558643.3577053 exited]
[Thread 3558643.3577104 exited]
[Thread 3558643.3577155 exited]
[Thread 3558643.3579603 exited]
...
[New Thread 3558643.3597415]
[New Thread 3558643.3600015]
[New Thread 3558643.3599965]
...
Thread 1 "attach-many-sho" received signal SIGINT, Interrupt.
0x00007ffff7e6a23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0, req=req@entry=0x7fffffffda80, rem=rem@entry=0x7fffffffda80)
at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
78 in ../sysdeps/unix/sysv/linux/clock_nanosleep.c
(gdb) q
This commit fixes this by moving the thread exit printing to common
code instead, triggered from within delete_thread (or rather,
set_thread_exited).
There's one wrinkle, though. While most targets want to print:
[Thread ... exited]
the Windows target wants to print:
[Thread ... exited with code <exit_code>]
... and sometimes wants to suppress the notification for the main
thread. To address that, this commit adds a delete_thread_with_code
function, only used by that target (so far).
This fix was originally posted as part of a larger series:
https://inbox.sourceware.org/gdb-patches/20221212203101.1034916-1-pedro@palves.net/
But it didn't really need to be part of that series. In order to get
this fix merged sooner, I (Andrew Burgess) have rebased this commit
outside of the original series. Any bugs introduced while splitting
this patch out and rebasing, are entirely my own.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30129
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
|
|
Remove the static mi_parse::make functions, and instead use the
mi_parse constructor.
This is a partial revert of the commit:
commit fde3f93adb50c9937cd2e1c93561aea2fd167156
Date: Mon Mar 20 10:56:55 2023 -0600
Introduce "static constructor" for mi_parse
which introduced the mi_parse::make functions, though after discussion
on the list the reasons for seem to have been lost[1]. Given there
are no test regressions when moving back to using the constructors, I
propose we should do that for now.
There should be no user visible changes after this commit.
[1] https://inbox.sourceware.org/gdb-patches/20230404-dap-loaded-sources-v2-5-93f229095e03@adacore.com/
Approved-By: Tom Tromey <tom@tromey.com>
|