|
This commit makes the gdb.Command.complete methods more verbose when
it comes to error handling.
Prior to this commit, if a command implemented in Python provided a
complete method, and any errors were encountered when calling that
method, then GDB would silently hide the error and continue as if
there were no completions.
This makes it difficult to debug any errors encountered when writing
completion methods, and encourages the idea that Python extensions can
be broken, and GDB will just silently work around them.
I don't think this is a good idea. GDB should encourage extensions to
be written correctly, and robustly, and one way in which GDB can (I
think) support this, is by pointing out when an extension goes wrong.
In this commit I've gone through the Python command completion code,
and added calls to gdbpy_print_stack() or gdbpy_print_stack_or_quit()
in places where we were either clearing the Python error, or, in some
cases, just not handling the error at all.
One thing I have not changed is in cmdpy_completer (py-cmd.c) where we
process the list of completions returned from the Command.complete
method; this routine includes a call to gdbpy_is_string to check that
a possible completion is a string; if not, the completion is ignored.
I was tempted to remove this check, attempt to convert each result to
a string, and display an error if the conversion fails. After all,
returning anything but a string is surely a mistake by the extension
author.
However, the docs clearly say that only strings within the returned
list will be considered as completions. Anything else is ignored. As
such, and to avoid (what I think is pretty unlikely) breakage of
existing code, I've retained the gdbpy_is_string check.
After the gdbpy_is_string check we call python_string_to_host_string;
if this call fails then I do now print the error, where before we
ignored it.  I think this is OK; if GDB thinks something is a
string, but still can't convert it to a string, then I think it's OK
to display the error in that case.
Another case which I was a little unsure about was in
cmdpy_completer_helper, and the call to PyObject_CallMethodObjArgs,
which is when we actually call Command.complete. Previously, if this
call resulted in an exception then we would ignore this and just
pretend there were no completions.
Of all the changes, this is possibly the one with the biggest
potential for breaking existing scripts, but it is also, I think, the
most useful change.  If the user code is wrong in some way, such that
an exception is raised, then previously the user would have no obvious
feedback about this breakage. Now GDB will print the exception for
them, making it, I think, much easier to debug their extension. But,
if there is user code in the wild that relies on raising an exception
as a means to indicate there are no completions .... well, that code
is going to break after this commit. I think we can live with this
though; the 'exception means no completions' behaviour was never
documented.
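To illustrate, a completer like this sketch (the command name and the
missing helper are made up) used to silently offer no completions; now
GDB prints the NameError's stack trace:

    import gdb

    class BuggyCommand(gdb.Command):
        """Toy command whose completer raises an exception."""

        def __init__(self):
            super().__init__("buggy-command", gdb.COMMAND_USER)

        def invoke(self, argument, from_tty):
            pass

        def complete(self, text, word):
            # NameError: 'missing_helper' is not defined.
            return missing_helper(text)

    BuggyCommand()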
I also added a new error() call if the PyObject_CallMethodObjArgs call
raises an exception. This causes the completion mechanism within GDB
to stop. Within GDB the completion code is called twice, the first
time to compute the word break characters, and then a second time to
compute the actual completions.
If PyObject_CallMethodObjArgs raises an exception when computing the
word break characters, and we print it by calling
gdbpy_print_stack_or_quit() but then carry on as if
PyObject_CallMethodObjArgs had returned no completions, GDB will
call the Python completion code again, which results in another call
to PyObject_CallMethodObjArgs, which might raise the same exception
again. This results in the Python exception being printed twice.
By throwing a C++ exception after the failed
PyObject_CallMethodObjArgs call, the completion mechanism is aborted,
and no completions are offered. But importantly, the Python exception
is only printed once. I think this gives a much better user
experience.
I've added some tests to cover this case, as I think this is the most
likely case that a user will run into.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
DAP specifies a "process" event that is sent when a process is started
or attached to. gdb was not emitting this (several DAP clients appear
to ignore it entirely), but it looked easy and harmless to implement.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30473
|
|
While working on cancellation, I noticed that a DAP 'pause' request
would set the "do not emit the continue" flag. This meant that a
subsequent request that should provoke a 'continue' event would
instead suppress the event.
I then tried writing a more obvious test case for this, involving an
inferior call -- and discovered that gdb.events.cont does not fire for
an inferior call.
This patch installs a new event listener for gdb.events.inferior_call
and arranges for this to emit continue and stop events when
appropriate. It also fixes the original bug, by adding a check to
exec_and_expect_stop.
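For reference, a Python listener on the inferior_call registry looks
like this minimal sketch:

    import gdb

    def on_inferior_call(event):
        # gdb.InferiorCallPreEvent fires before the call and
        # gdb.InferiorCallPostEvent after it; both carry the thread's
        # ptid and the function address.
        print("inferior call event:", event.ptid, event.address)

    gdb.events.inferior_call.connect(on_inferior_call)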
|
|
GDB's Python API documentation for gdb.Command.complete() says:
The 'complete' method can return several values:
* If the return value is a sequence, the contents of the
sequence are used as the completions. It is up to 'complete'
to ensure that the contents actually do complete the word. A
zero-length sequence is allowed, it means that there were no
completions available. Only string elements of the sequence
are used; other elements in the sequence are ignored.
* If the return value is one of the 'COMPLETE_' constants
defined below, then the corresponding GDB-internal completion
function is invoked, and its result is used.
* All other results are treated as though there were no
available completions.
So, returning a non-sequence, non-integer value from a complete method
should be fine; it should just be treated as though there are no
completions.
However, if I write a complete method that returns None, I see this
behaviour:
(gdb) complete completefilenone x
Python Exception <class 'TypeError'>: 'NoneType' object is not iterable
warning: internal error: Unhandled Python exception
(gdb)
This happens because we currently assume that anything that is not an
integer must be iterable, and we call PyObject_GetIter on it.  When
this call fails a Python exception is set, but instead of
clearing (and therefore ignoring) this exception as we do everywhere
else in the Python completion code, we just return with the
exception set.
In this commit I add a PySequence_Check call. If this call returns
false (and we've already checked the integer case) then we can assume
there are no completion results.
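A reproducer mirroring the command shown above is tiny; with the fix
this command simply offers no completions instead of triggering the
TypeError:

    import gdb

    class CompleteFileNone(gdb.Command):
        """Command whose complete method returns a non-sequence."""

        def __init__(self):
            super().__init__("completefilenone", gdb.COMMAND_USER)

        def invoke(self, argument, from_tty):
            pass

        def complete(self, text, word):
            # None is neither a sequence nor a COMPLETE_ constant;
            # it now means "no completions".
            return None

    CompleteFileNone()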
I've added a test which checks returning a non-sequence.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Reformat gdb/python/lib/gdb/missing_debug.py with black after commit
e8c3dafa5f5 ("[gdb/python] Don't import curses.ascii module unless necessary").
|
|
I ran into a failure in test-case gdb.python/py-missing-debug.exp with python
3.6, which was fixed by commit 7db795bc67a ("gdb/python: remove use of
str.isascii()").
However, I subsequently ran into a failure with python 3.11:
...
(gdb) PASS: $exp: initial checks: debug info no longer found
source py-missing-debug.py^M
Traceback (most recent call last):^M
File "py-missing-debug.py", line 17, in <module>^M
from gdb.missing_debug import MissingDebugHandler^M
File "missing_debug.py", line 21, in <module>^M
from curses.ascii import isascii, isalnum^M
File "/usr/lib64/python3.11/_import_failed/curses.py", line 16, in <module>^M
raise ImportError(f"""Module '{failed_name}' is not installed.^M
ImportError: Module 'curses' is not installed.^M
Use:^M
sudo zypper install python311-curses^M
to install it.^M
(gdb) FAIL: $exp: source python script
...
Apparently I have the curses module installed for 3.6, but not 3.11.
I could just install it, but the test-case worked fine with 3.11 before commit
7db795bc67a.
Fix this by only using the curses module when necessary, for python <= 3.7.
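The shape of the fix is roughly this sketch (not the exact code in
missing_debug.py; it assumes the checks are applied to single
characters, where the str and curses.ascii variants agree for ASCII
input):

    import sys

    if sys.version_info >= (3, 8):
        # Modern Python: no curses needed.
        def isascii(ch):
            return ch.isascii()

        def isalnum(ch):
            return ch.isalnum()
    else:
        # Python <= 3.7: fall back to the curses module.
        from curses.ascii import isascii, isalnum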
Tested on x86_64-linux, with both python 3.6 and 3.11.
|
|
A couple of spots in the DAP code use the same workaround for the
absence of queue.SimpleQueue before Python 3.7.  This patch
consolidates these into a single spot.
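The consolidated workaround presumably looks something like this
sketch:

    try:
        # queue.SimpleQueue was added in Python 3.7.
        from queue import SimpleQueue
    except ImportError:
        # Older Pythons: the locking Queue works as a drop-in here.
        from queue import Queue as SimpleQueue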
|
|
Since GDB now requires C++17, we don't need the internally maintained
gdb::optional implementation.  This patch does the following replacements:
- gdb::optional -> std::optional
- gdb::in_place -> std::in_place
- #include "gdbsupport/gdb_optional.h" -> #include <optional>
This change has mostly been done automatically. One exception is
gdbsupport/thread-pool.* which did not use the gdb:: prefix as it
already lives in the gdb namespace.
Change-Id: I19a92fa03e89637bab136c72e34fd351524f65e9
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Pedro Alves <pedro@palves.net>
|
|
gdb::make_unique is a wrapper around std::make_unique when compiled with
C++17. Now that C++17 is required, use std::make_unique directly in the
codebase, and remove gdb::make_unique.
Change-Id: I80b615e46e4b7c097f09d78e579a9bdce00254ab
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Pedro Alves <pedro@palves.net>
|
|
Hannes' patch to show local variables in the TUI pointed out that
NoOpStructPrinter should ignore static members. This patch implements
this.
|
|
DAP specifies that a request can fail with the "notStopped" message if
the inferior is running but the request requires that it first be
stopped.
This patch implements this for gdb. Most requests are assumed to
require a stopped inferior, and the exceptions are noted by a new
'request' parameter.
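As a toy sketch of the gate (all names here are illustrative, not the
actual DAP internals):

    import gdb

    _requests = {}

    def request(name, *, expect_stopped=True):
        """Register a handler; optionally require a stopped inferior."""
        def wrap(func):
            def on_request(**args):
                if expect_stopped and any(
                    t.is_running() for t in gdb.selected_inferior().threads()
                ):
                    # DAP says to fail with the "notStopped" message.
                    raise Exception("notStopped")
                return func(**args)
            _requests[name] = on_request
            return on_request
        return wrap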
You may notice that the implementation is a bit racy. I think this is
inherent -- unless the client waits for a stop event before sending a
request, the request may be processed at any time relative to a stop.
https://sourceware.org/bugzilla/show_bug.cgi?id=31037
Reviewed-by: Kévin Le Gouguec <legouguec@adacore.com>
|
|
ExecutionInvoker is no longer really needed, due to the previous DAP
refactoring. This patch removes it in favor of an ordinary function.
One spot (the 'continue' request) could still have used it, but is
more succinctly expressed as a lambda.
Reviewed-by: Kévin Le Gouguec <legouguec@adacore.com>
|
|
Nearly every DAP request implementation forwards its work to the gdb
thread, using send_gdb_with_response. This patch refactors the
'request' decorator to make this automatic, and to provide some
parameters so that the unusual requests can express their needs as
well.
In a few spots this simplifies the code by removing an unnecessary
helper function. This could be done in more places as well if we
wanted.
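The refactored decorator is, in rough outline, something like this
sketch (send_gdb_with_response is the existing helper; the rest is
illustrative):

    _request_map = {}

    def request(name, *, defer=True):
        """Register a DAP request, forwarding its work to the gdb thread."""
        def wrap(func):
            def on_request(**args):
                if defer:
                    # The common case: run FUNC on the gdb thread and
                    # wait for its result.
                    return send_gdb_with_response(lambda: func(**args))
                # Unusual requests handle threading themselves.
                return func(**args)
            _request_map[name] = on_request
            return on_request
        return wrap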
The main motivation for this patch is that I thought it would be
helpful for cancellation. I am still working on that, but meanwhile
the parameterization of 'request' makes it easy to handle the
'notStopped' response as well.
Reviewed-by: Kévin Le Gouguec <legouguec@adacore.com>
|
|
DAP specifies a StackFrameFormat object that can be used to change how
the "name" part of a stack frame is constructed. While this output
can already be done in a nicer way (one also letting the client choose
the formatting), it is nevertheless in the spec, so I figured I'd
implement it.
While implementing this, I discovered that the current code does not
correctly preserve frame IDs across requests. I rewrote frame
iteration to preserve this, and it turned out to be simpler to combine
these patches.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30475
|
|
This commit:
commit 8f6c452b5a4e50fbb55ff1d13328b392ad1fd416
Date: Sun Oct 15 22:48:42 2023 +0100
gdb: implement missing debug handler hook for Python
introduced a use of str.isascii(), which was only added in Python 3.7.
This commit switches to use curses.ascii.isascii(), as this was
available in 3.6.
The same is true for str.isalnum(), which is replaced with
curses.ascii.isalnum().
There should be no user visible changes after this commit.
|
|
If you run gdb in the build tree without --data-directory, on a
program that does not have debug info, it will crash, because
gdbpy_handle_missing_debuginfo unconditionally uses gdb_python_module.
Other code in gdb using gdb_python_module checks it first and it
seems harmless to do the same thing here.  (gdb_python_initialized
does not cover this case so that python can be partially initialized
and still somewhat work.)
|
|
A co-worker requested that the DAP scope for a nested function's frame
also show the variables from outer frames. DAP doesn't directly
support this notion, so this patch arranges to put these variables
into the inner frame's "Locals" scope.
I chose to do this only for DAP. For CLI and MI, gdb currently does
not do this, so this preserves the behavior.
Note that an earlier patch (see commit 4a1311ba) removed some code
that seemed to do something similar. However, that code did not
actually work.
|
|
While working on static links, I noticed that the DAP scopes code does
not handle the scenario where a frame decorator returns None. This
situation should be handled identically to a frame decorator returning
an empty iterator.
|
|
This adds a new gdb.Frame.static_link method to the gdb Python layer.
This can be used to find the static link frame for a given frame.
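For example (assuming the selected frame is inside a nested function):

    import gdb

    frame = gdb.selected_frame()
    # The frame of the lexically-enclosing function, or None if this
    # frame has no static link.
    outer = frame.static_link()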
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit builds on the previous commit, and implements the
extension_language_ops::handle_missing_debuginfo function for Python.
This hook will give user supplied Python code a chance to help find
missing debug information.
The implementation of the new hook is pretty minimal within GDB's C++
code; most of the work is out-sourced to a Python implementation which
is modelled heavily on how GDB's Python frame unwinders are
implemented.
The following new commands are added as commands implemented in
Python; this is similar to how the Python unwinder commands are
implemented:
info missing-debug-handlers
enable missing-debug-handler LOCUS HANDLER
disable missing-debug-handler LOCUS HANDLER
To make use of this extension hook a user will create missing debug
information handler objects, and register these handlers with GDB.
When GDB encounters an objfile that is missing debug information, each
handler is called in turn until one is able to help. Here is a
minimal handler that does nothing useful:
    import gdb
    import gdb.missing_debug

    class MyFirstHandler(gdb.missing_debug.MissingDebugHandler):
        def __init__(self):
            super().__init__("my_first_handler")

        def __call__(self, objfile):
            # This handler does nothing useful.
            return None

    gdb.missing_debug.register_handler(None, MyFirstHandler())
Returning None from the __call__ method tells GDB that this handler
was unable to find the missing debug information, and GDB should ask
any other registered handlers.
By extending the __call__ method it is possible for the Python
extension to locate the debug information for objfile and return a
value that tells GDB how to use the information that has been located.
Possible return values from a handler:
- None: This means the handler couldn't help. GDB will call other
registered handlers to see if they can help instead.
- False: The handler has done all it can, but the debug information
for the objfile still couldn't be found. GDB will not call
any other handlers, and will continue without the debug
information for objfile.
- True: The handler has installed the debug information into a
location where GDB would normally expect to find it. GDB
should look again for the debug information.
- A string: The handler can return a filename, which is the file
containing the missing debug information. GDB will load
this file.
When a handler returns True, GDB will look again for the debug
information, but only using the standard built-in build-id and
.gnu_debuglink based lookup strategies. It is not possible for an
extension to trigger another debuginfod lookup; the assumption is that
the debuginfod server is remote, and out of the control of extensions
running within GDB.
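For instance, a slightly less trivial handler might consult a local
cache and return a filename (the cache path here is made up):

    import os

    import gdb
    import gdb.missing_debug

    class LocalCacheHandler(gdb.missing_debug.MissingDebugHandler):
        """Look for missing debug info in a hypothetical local cache."""

        def __init__(self):
            super().__init__("local_cache")

        def __call__(self, objfile):
            candidate = os.path.join("/var/cache/debug",
                                     os.path.basename(objfile.filename)
                                     + ".debug")
            if os.path.exists(candidate):
                # A string tells GDB which file to load.
                return candidate
            # None lets other handlers try.
            return None

    gdb.missing_debug.register_handler(None, LocalCacheHandler())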
Handlers can be registered globally, or per program space. GDB checks
the handlers for the current program space first, and then all of the
global handlers.  The first handler that returns a value that is not
None has "handled" the objfile, at which point GDB continues.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
When resizing from a big to a small terminal size, if you have a
TUI python window that would then be outside of the new size,
valgrind shows this error:
==3389== Invalid read of size 1
==3389== at 0xC3DFEE: wnoutrefresh (lib_refresh.c:167)
==3389== by 0xC3E3C9: wrefresh (lib_refresh.c:63)
==3389== by 0xA9766C: tui_unhighlight_win(tui_win_info*) (tui-wingeneral.c:134)
==3389== by 0x98921C: tui_py_window::rerender() (py-tui.c:183)
==3389== by 0xA8C23C: tui_layout_split::apply(int, int, int, int, bool) (tui-layout.c:1030)
==3389== by 0xA8C2A2: tui_layout_split::apply(int, int, int, int, bool) (tui-layout.c:1033)
==3389== by 0xA8C23C: tui_layout_split::apply(int, int, int, int, bool) (tui-layout.c:1030)
==3389== by 0xA8B1F8: tui_apply_current_layout(bool) (tui-layout.c:81)
==3389== by 0xA95CDB: tui_resize_all() (tui-win.c:525)
==3389== by 0xA95D1E: tui_async_resize_screen(void*) (tui-win.c:562)
==3389== by 0x6B855D: invoke_async_signal_handlers() (async-event.c:234)
==3389== by 0xC0CEF8: gdb_do_one_event(int) (event-loop.cc:199)
==3389== Address 0x115cc214 is 1,332 bytes inside a block of size 2,240 free'd
==3389== at 0x4A0A430: free (vg_replace_malloc.c:446)
==3389== by 0xC3CF7D: _nc_freewin (lib_newwin.c:121)
==3389== by 0xA8B1C6: tui_apply_current_layout(bool) (tui-layout.c:78)
==3389== by 0xA95CDB: tui_resize_all() (tui-win.c:525)
==3389== by 0xA95D1E: tui_async_resize_screen(void*) (tui-win.c:562)
==3389== by 0x6B855D: invoke_async_signal_handlers() (async-event.c:234)
==3389== by 0xC0CEF8: gdb_do_one_event(int) (event-loop.cc:199)
==3389== by 0x8E40E9: captured_command_loop() (main.c:407)
==3389== by 0x8E5E54: gdb_main(captured_main_args*) (main.c:1324)
==3389== by 0x62AC04: main (gdb.c:39)
It's because tui_py_window::m_inner_window still has the outside
coordinates, and wnoutrefresh then does an out-of-bounds access.
Fix this by resetting m_inner_window on every resize; it will be
recreated anyway in the next rerender call.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
I found a declaration in py-stopevent.h for which there is no
definition. This patch removes it.
|
|
This patch implements the DAP setVariable request.
setVariable is a bit odd in that it specifies the variable to modify
by passing in the variable's container and the name of the variable.
This approach can't handle variable shadowing (there are a couple of
open DAP bugs on this topic), so this patch renames duplicates to
avoid the problem.
|
|
Add a gdb.Value.bytes attribute. This attribute contains the bytes of
the value (assuming the complete bytes of the value are available).
If the bytes of the gdb.Value are not available then accessing this
attribute raises an exception.
The bytes object returned from gdb.Value.bytes is cached within GDB so
that the same bytes object is returned each time. The bytes object is
created on-demand though to reduce unnecessary work.
For some values we can of course obtain the same information by
reading inferior memory based on gdb.Value.address and
gdb.Value.type.sizeof; however, not every value is in memory, so we
don't always have an address.
The gdb.Value.bytes attribute will convert any value to a bytes
object, so long as the contents are available. The value can be one
created purely in Python code, the value could be in a register,
or (of course) the value could be in memory.
The Value.bytes attribute can also be assigned to.  Assigning to this
attribute is similar to calling Value.assign, the value of the
underlying value is updated within the inferior. The value assigned
to Value.bytes must be a buffer which contains exactly the correct
number of bytes (i.e. unlike value creation, we don't allow oversized
buffers).
To support this assignment-like behaviour I've factored out the core
of valpy_assign. I've also updated convert_buffer_and_type_to_value
so that it can (for my use case) check the exact buffer length.
The restrictions for when Value.bytes can or cannot be written to
are exactly the same as for Value.assign.
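Usage is then straightforward; a sketch, assuming a 4-byte int
variable named global_counter in a little-endian inferior:

    import gdb

    v = gdb.parse_and_eval("global_counter")
    raw = v.bytes                   # cached bytes object, len == v.type.sizeof
    v.bytes = b"\x2a\x00\x00\x00"   # must be exactly v.type.sizeof bytes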
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=13267
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that gdb/python/python.c unconditionally includes
gdbsupport/selftest.h.
Make this conditional on GDB_SELF_TEST.
Tested on x86_64-linux.
|
|
A pretty-printer's 'children' method may return values other than a
gdb.Value -- it may return any value that can be converted to a
gdb.Value.
I noticed that this case did not work for DAP. This patch fixes the
problem.
|
|
Andry pointed out that the DAP code did not properly handle
gdb.LazyString results from a pretty-printer, yielding:
TypeError: Object of type LazyString is not JSON serializable
This patch fixes the problem, partly with a small patch in varref.py,
but mainly by implementing tp_str for LazyString.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Andry noticed that given a DAP setExpression request, where the
expression to set is a register, DAP will return the wrong value -- it
will return the old value, not the updated one.
This happens because gdb.Value.assign (which was recently added for
DAP) does not update the value.
In this patch, I chose to have the assign method update the Value
in-place. It's also possible to have it return a new value, but this
didn't seem very useful to me.
|
|
Andry Ogorodnik, a co-worker, noticed that multiple "scopes" requests
with the same frame would yield different variableReference values in
the response.
This patch adds a regression test for this, and adds a scope cache in
scopes.py, ensuring that multiple identical requests will get the same
response.
Tested-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
|
|
This commit replaces the architecture_changed observer with a
new_architecture observer.
Currently the only user of the architecture_changed observer is the
Python code, which uses this observer to register the Python unwinder
with the architecture.
The problem is that the architecture_changed observer is triggered
from inferior::set_arch(), which only sees the inferior-wide gdbarch
value.  For targets that use thread-specific architectures, those
architectures never trigger the architecture_changed observer, and so
never have the Python unwinder registered with them.
When it comes to unwinding, GDB makes use of the frame's gdbarch,
which is based on the thread's regcache gdbarch, which is set in
get_thread_regcache to the value returned from
target_thread_architecture.  That value is not always the inferior's
gdbarch; it might be a thread-specific gdbarch which has not passed
through inferior::set_arch().
The new_architecture observer will be triggered from
gdbarch_find_by_info, whenever a new gdbarch is created and
initialised. As GDB caches and reuses gdbarch values, we should
expect to see each new architecture trigger the new_architecture
observer just once.
After this commit, targets that make use of thread-specific
architectures should be able to make use of Python unwinders.
As I don't have access to a machine that makes use of thread-specific
architectures right now, I asked Luis to confirm that an AArch64
target that uses SVE/SME can't use the Python unwinders in threads
that are using a thread-specific architectures, and he confirmed that
this is indeed the case, see this discussion:
https://inbox.sourceware.org/gdb/87wmvsat8i.fsf@redhat.com
Tested-By: Lancelot Six <lancelot.six@amd.com>
Tested-By: Luis Machado <luis.machado@arm.com>
Reviewed-By: Luis Machado <luis.machado@arm.com>
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This function is just a wrapper around the current inferior's gdbarch.
I find that having that wrapper just obscures where the arch is coming
from, and that it's often used as "I don't know which arch to use so
I'll use this magical target_gdbarch function that gets me an arch" when
the arch should in fact come from something in the context (a thread,
objfile, symbol, etc). I think that removing it and inlining
`current_inferior ()->arch ()` everywhere will make it a bit clearer
where that arch comes from and will prompt people to reflect on
whether this is the right place to get the arch or not.
Change-Id: I79f14b4e4934c88f91ca3a3155f5fc3ea2fadf6b
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This is to make it explicit which inferior's architecture just changed,
and that the callbacks should not assume it is the current inferior.
Update the only caller, pyuw_on_new_gdbarch, to add the parameter,
although it doesn't use it currently.
Change-Id: Ieb7f21377e4252cc6e7b1ce2cc812cd1a1840e0e
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
Make the inferior's gdbarch field private, and add getters and setters.
This helped me by making it possible to put breakpoints on set_arch to know when
the inferior's arch was set. A subsequent patch in this series also
adds more things in set_arch.
Change-Id: I0005bd1ef4cd6b612af501201cec44e457998eec
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit adds a new Python function, gdb.notify_mi, that can be used
to emit custom async notification to MI channel. This can be used, among
other things, to implement notifications about events MI does not support,
such as remote connection closed or register change.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit generalizes serialize_mi_result() to make it usable in
contexts other than printing the result of a custom MI command.
To do so, the check of whether the passed Python object is a dictionary
has been moved to the caller - at the very least, different uses
require different error messages.  The function has also been renamed
to serialize_mi_results() to better match the GDB/MI output syntax (see
the corresponding section in the documentation, in particular the
rules 'result-record' and 'async-output').
Since it is now a more generic function, it has been moved to py-mi.c.
This is a preparation for implementing Python support for sending custom
MI async events.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
The new_objfile observer is currently used to indicate both when a new
objfile is added to program space (when passed non-nullptr) and when all
objfiles of a program space were just removed (when passed nullptr).
I think this is confusing (and Andrew apparently thinks so too [1]).
Add a new "all_objfiles_removed" observer to remove the second role from
"new_objfile".
Some existing users of new_objfile do nothing if the passed objfile is
nullptr. For them, we can simply drop the nullptr check. For others,
add a new all_objfiles_removed callback, and refactor things a bit to
keep the existing behavior as much as possible.
Some callbacks relied on current_program_space, and following
the refactoring now use either objfile->pspace or the pspace passed to
all_objfiles_removed. I think this should be relatively safe, and in
general a step in the right direction.
On the notify side, I found only one call site to change from
new_objfile to all_objfiles_removed, in clear_symtab_users. It is not
entirely clear to me that this is entirely correct. clear_symtab_users
appears to be called in spots that don't remove all objfiles
(functions finish_new_objfile, remove_symbol_file_command, reread_symbols,
do_module_cleanups). But I think that this patch at least makes the
current code clearer.
[1] https://gitlab.com/gnutools/binutils-gdb/-/commit/a0a031bce0527b1521788b5dad640e7883b3a252
Change-Id: Icb648f72862e056267f30f44dd439bd4ec766f13
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Make the current_program_space references bubble up a bit.
Change-Id: Id047a48cc8d8a45504cdbb5960bafe3e7735d652
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add program_space parameters to emit_clear_objfiles_event and
create_clear_objfiles_event_object, making the reference to
current_program_space bubble up a bit.
Change-Id: I5fde2071712781e5d45971fa0ab34d85d3a49a71
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Initially I just wanted a Python event for when GDB removes a program
space; I'm writing a Python extension that caches information for each
program space, and need to know when I should discard entries for a
particular program space.
But, it seemed easy enough to also add an event for when GDB adds a
new program space, so I went ahead and added both new events.
Of course, we don't currently have an observable for program space
addition or removal, so I first needed to add these. After that it's
pretty simple to add two new Python events and have these trigger.
The two new event registries are:
events.new_progspace
events.free_progspace
These emit NewProgspaceEvent and FreeProgspaceEvent objects
respectively; each of these new event types has a 'progspace'
attribute that contains the relevant gdb.Progspace object.
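Wiring these up looks like this minimal sketch:

    import gdb

    def on_new_progspace(event):
        print("new progspace:", event.progspace)

    def on_free_progspace(event):
        # Discard any cached entries for event.progspace here.
        print("free progspace:", event.progspace)

    gdb.events.new_progspace.connect(on_new_progspace)
    gdb.events.free_progspace.connect(on_free_progspace)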
There's a couple of things to be mindful of.
First, it is not possible to catch the NewProgspaceEvent for the very
first program space, the one that is created when GDB first starts, as
this program space is created before any Python scripts are sourced.
In order to allow this event to be caught we would need to defer
creating the first program space, and as a consequence the first
inferior, until some later time. But, existing scripts could easily
depend on there being an initial inferior, so I really don't think we
should change that -- and so, we end up with the consequence that we
can't catch the event for the first program space.
The second, I think minor, issue is that GDB doesn't clean up its
program spaces upon exit -- or at least, they are not cleaned up
before Python is shut down. As a result, any program spaces in use at
the time GDB exits don't generate a FreeProgspaceEvent. I'm not
particularly worried about this for my use case, I'm using the event
to ensure that a cache doesn't hold stale entries within a single GDB
session. It's also easy enough to add a Python at-exit callback which
can do any final cleanup if needed.
Finally, when testing, I did hit a slightly weird issue with some of
the remote boards (e.g. remote-stdio-gdbserver). As a consequence of
this issue I see some output like this in the gdb.log:
(gdb) PASS: gdb.python/py-progspace-events.exp: inferior 1
step
FreeProgspaceEvent: <gdb.Progspace object at 0x7fb7e1d19c10>
warning: cannot close "target:/lib64/libm.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
warning: cannot close "target:/lib64/libc.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
warning: cannot close "target:/lib64/ld-linux-x86-64.so.2": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
do_parent_stuff () at py-progspace-events.c:41
41 ++global_var;
(gdb) PASS: gdb.python/py-progspace-events.exp: step
The 'FreeProgspaceEvent ...' line is expected; that's my test Python
extension logging the event. What isn't expected are all the blocks
like:
warning: cannot close "target:/lib64/libm.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
It turns out that this has nothing to do with my changes, this is just
a consequence of reading files over the remote protocol. The test
forks a child process which GDB stays attached to.  When the child
exits, GDB cleans up by calling prune_inferiors, which in turn can
result in GDB trying to close some files that are open because of the
inferior being deleted.
If the prune_inferiors call occurs when the remote target is
running (and in non-async mode) then GDB will try to send a fileio
packet while the remote target is waiting for a stop reply, and the
remote target will throw an error, see remote_target::putpkt_binary in
remote.c for details.
I'm going to look at fixing this, but, as I said, this is nothing to
do with this change, I just mention it because I ended up needing to
account for these warning messages in one of my tests, and it all
looks a bit weird.
Approved-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit makes the executable_changed observable available through
the Python API as an event. There's nothing particularly interesting
going on here, it just follows the same pattern as many of the other
Python events we support.
The new event registry is called events.executable_changed, and this
emits an ExecutableChangedEvent object which has two attributes, a
gdb.Progspace called 'progspace', this is the program space in which
the executable changed, and a Boolean called 'reload', which is True
if the same executable changed on disk and has been reloaded, or is
False when a new executable has been loaded.
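A listener is the usual few lines (a minimal sketch):

    import gdb

    def on_executable_changed(event):
        # event.reload is True when the same file changed on disk and
        # was reloaded, False when a new executable was loaded.
        print(event.progspace, event.reload)

    gdb.events.executable_changed.connect(on_executable_changed)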
One interesting thing did come up during testing though: you'll notice
the test contains a setup_kfail call. During testing I observed that
the executable_changed event would trigger twice when GDB restarted an
inferior. However, the ExecutableChangedEvent object is identical for
both calls, so the wrong information is never sent out; we just see
one too many events.
I tracked this down to how the reload_symbols function (symfile.c)
takes care to also reload the executable; however, I've split fixing
this into a separate commit, so see the next commit for details.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a new Progspace.executable_filename attribute that contains the
path to the executable for this program space, or None if no
executable is set.
The path within this attribute will be set by the "exec-file" and/or
"file" commands.
Accessing this attribute for an invalid program space will raise an
exception.
This new attribute is similar to, but not the same as, the existing
gdb.Progspace.filename attribute. If I could change the past, I'd
change the 'filename' attribute to 'symbol_filename', which is what it
actually represents. The old attribute will be set by the
'symbol-file' command, while the new attribute is set by the
'exec-file' command. Obviously the 'file' command sets both of these
attributes.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a new Progspace.symbol_file attribute. This attribute holds the
gdb.Objfile object that corresponds to Progspace.filename, or None if
there is no main symbol file currently set.
Currently, to get this gdb.Objfile, a user would need to use
Progspace.objfiles, and then search for the objfile with a name that
matches Progspace.filename -- which should work just fine, but having
direct access seems a little nicer.
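In other words, this sketch of the old search becomes a single
attribute access:

    import gdb

    pspace = gdb.current_progspace()
    # Before: hunt through the objfiles for the main symbol file.
    main = next((o for o in pspace.objfiles()
                 if o.filename == pspace.filename), None)
    # Now: direct access.
    main = pspace.symbol_file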
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Extend the description for Progspace.filename in the documentation to
mention what the returned string is actually the filename
for (e.g. that it is the filename passed to the 'symbol-file' or
'file' command).
Also document that this attribute will be None if no symbol file is
currently loaded.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
printing.py references "gdb.printing" in a few spots, but there's no
need for this. I think this is leftover from when this code was
(briefly) in some other module. This patch removes the unnecessary
qualifications. Tested on x86-64 Fedora 36.
|
|
This adds two new pretty-printer methods, to support random access to
children. The methods are implemented for the no-op array printer,
and DAP is updated to use this.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
There was an earlier thread about adding new methods to
pretty-printers:
https://sourceware.org/pipermail/gdb-patches/2023-June/200503.html
We've known about the need for printer extensibility for a while, but
have been hampered by backward-compatibility concerns: gdb never
documented that printers might acquire new methods, and so existing
printers may have attribute name clashes.
To solve this problem, this patch adds a new pretty-printer tag class
that signals to gdb that the printer follows new extensibility rules.
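A sketch of opting in, assuming the tag class is the gdb.ValuePrinter
described in GDB's documentation:

    import gdb

    class MyPrinter(gdb.ValuePrinter):
        """Deriving from the tag class opts in to the new rules."""

        def __init__(self, val):
            # Keep internal state in double-underscore attributes so
            # that methods gdb adds in the future cannot clash with it.
            self.__val = val

        def to_string(self):
            return str(self.__val)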
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30816
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
With any gdb.dap test and python 3.6 I run into:
...
Error occurred in Python: 'code' object has no attribute 'co_posonlyargcount'
ERROR: eof reading json header
...
The attribute is not supported before python 3.8, which introduced the
"Positional−only Parameters" concept.
Fix this by using try/except AttributeError.
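The fix follows the usual pattern (a sketch):

    def posonly_count(func):
        try:
            return func.__code__.co_posonlyargcount
        except AttributeError:
            # Python < 3.8: positional-only parameters don't exist.
            return 0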
Tested on x86_64-linux:
- openSUSE Leap 15.4 with python 3.6, and
- openSUSE Tumbleweed with python 3.11.5.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The buildbot pointed out that the last DAP series I checked in had an
issue. Looking into it, it seems there is a stray trailing "," in
breakpoint.py. This patch removes it.
This seems to point out a test suite deficiency. I will look into
fixing that.
|
|
I noticed a comment by an include and remembered that I think these
don't really provide much value -- sometimes they are just editorial,
and sometimes they are obsolete. I think it's better to just remove
them. Tested by rebuilding.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
According to the DAP specification, if the "sourceReference" field is
included in a Source object, then the DAP client _must_ make a "source"
request to the debugger to retrieve file contents, even if the Source
object also includes path information.
If the Source's path field is a valid path that the DAP client is able
to read from the filesystem, having to make another request to the
debugger to get the file contents is wasteful and leads to incorrect
results (DAP clients will try to get the contents from the server and
display those contents as a file with the name in "source.path", but
this will conflict with the _actual_ existing file at "source.path").
Instead, only set "sourceReference" if the source file path does not
exist.
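A sketch of the resulting logic (make_source and the toy reference
allocator are illustrative, not the actual DAP code):

    import os

    _refs = {}

    def allocate_source_reference(filename):
        # Toy allocator: a stable positive id per filename.
        return _refs.setdefault(filename, len(_refs) + 1)

    def make_source(filename):
        src = {"name": os.path.basename(filename), "path": filename}
        if not os.path.exists(filename):
            # Only force a "source" request on the client when it
            # cannot read the file itself.
            src["sourceReference"] = allocate_source_reference(filename)
        return src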
Approved-By: Tom Tromey <tom@tromey.com>
|