|
Result of:
...
$ search="GDB_PY_HANDLE_EXCEPTION ("
$ replace="return gdbpy_handle_gdb_exception (nullptr, "
$ sed -i \
"s/$search/$replace/" \
gdb/python/*.c
...
Also remove the now unused GDB_PY_HANDLE_EXCEPTION.
No functional changes.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I've recently committed two patches:
- commit 2f8cd40c37a ("[gdb/python] Use GDB_PY_HANDLE_EXCEPTION more often")
- commit fbf8e4c35c2 ("[gdb/python] Use GDB_PY_SET_HANDLE_EXCEPTION more often")
which use the macros GDB_PY_HANDLE_EXCEPTION and GDB_PY_SET_HANDLE_EXCEPTION
more often, with the goal of making things more consistent.
Having done that, I wondered if a better approach could be possible.
Consider GDB_PY_HANDLE_EXCEPTION:
...
/* Use this in a 'catch' block to convert the exception to a Python
exception and return nullptr. */
#define GDB_PY_HANDLE_EXCEPTION(Exception) \
do { \
gdbpy_convert_exception (Exception); \
return nullptr; \
} while (0)
...
The macro nicely codifies how python handles exceptions:
- setting an error condition using some PyErr_Set* variant, and
- returning a value implying that something went wrong
presumably with the goal that using the macro will mean not accidentally:
- forgetting to return on error, or
- returning the wrong value on error.
The problems are that:
- the macro hides control flow, specifically the return statement, and
- the macro hides the return value.
For example, when reading somewhere:
...
catch (const gdb_exception &except)
{
GDB_PY_HANDLE_EXCEPTION (except);
}
...
in order to understand what this does, you have to know that the macro
returns, and that it returns nullptr.
Add a template gdbpy_handle_gdb_exception:
...
template<typename T>
[[nodiscard]] T
gdbpy_handle_gdb_exception (T val, const gdb_exception &e)
{
gdbpy_convert_exception (e);
return val;
}
...
which can be used instead:
...
catch (const gdb_exception &except)
{
return gdbpy_handle_gdb_exception (nullptr, except);
}
...
[ Initially I tried this:
...
template<auto val>
[[nodiscard]] auto
gdbpy_handle_gdb_exception (const gdb_exception &e)
{
gdbpy_convert_exception (e);
return val;
}
...
with which the usage is slightly better looking:
...
catch (const gdb_exception &except)
{
return gdbpy_handle_gdb_exception<nullptr> (except);
}
...
but I ran into trouble with older gcc compilers. ]
While this is still a single statement, it is now clear:
- that the statement returns,
- what value the statement returns.
[ FWIW, this could also be handled by, say:
...
- GDB_PY_HANDLE_EXCEPTION (except);
+ GDB_PY_HANDLE_EXCEPTION_AND_RETURN_VAL (except, nullptr);
...
but I still didn't find the fact that it returns easy to spot.
Alternatively, this is the simplest form we could use:
...
return gdbpy_convert_exception (e), nullptr;
...
but the pairing would not necessarily survive a copy/paste/edit cycle. ]
Also note how making the value explicit makes it easier to check for
consistency:
...
catch (const gdb_exception &except)
{
return gdbpy_handle_gdb_exception (-1, except);
}
if (PyErr_Occurred ())
return -1;
...
given that we do use the explicit constants almost everywhere else.
Compared to using GDB_PY_HANDLE_EXCEPTION, there is now the burden of
specifying the return value, but I assume that this will generally be
copy-pasted and
therefore present no problem.
Also, there's no longer a guarantee that there's an immediate return, but I
assume that nodiscard making sure that the return value is not silently
ignored is sufficient mitigation.
For now, re-implement GDB_PY_HANDLE_EXCEPTION and GDB_PY_SET_HANDLE_EXCEPTION
in terms of gdbpy_handle_gdb_exception.
Follow-up patches will eliminate the macros.
No functional changes.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
PR python/32163 points out that various types provided by gdb are not
added to the gdb module, so they aren't available for interactive
inspection. I think this is just an oversight.
This patch fixes the problem by introducing a new helper function that
both readies the type and then adds it to the appropriate module. The
patch also poisons PyType_Ready, the idea being to avoid this bug in
the future.
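For illustration, the user-visible effect is roughly this (an assumed
session; the type name is purely illustrative). Before the patch:
...
(gdb) python print(gdb.Architecture)
Python Exception <class 'AttributeError'>: module 'gdb' has no attribute 'Architecture'
...
and after:
...
(gdb) python print(gdb.Architecture)
<class 'gdb.Architecture'>
...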
v2:
* Fixed a bug in original patch in gdb.Architecture registration
* Added regression test for the types mentioned in the bug
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32163
Reviewed-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
|
|
By default, GDB prints the hex payload of a ptwrite packet as auxiliary
information. To customize this, the user can register a ptwrite filter
function in Python that takes the payload and the PC as arguments and
returns a string which will be printed instead. Registering the filter
function is done using a factory pattern to make per-thread filtering
easier.
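A sketch of what registering such a filter could look like (assuming
the gdb.ptwrite helper module added by this series; names as described
in the accompanying documentation):
...
import gdb.ptwrite

def ptwrite_filter(payload, ip):
    # Return the string to print as auxiliary information instead of
    # the default hex payload.
    return "payload {:#x} at pc {:#x}".format(payload, ip)

def filter_factory(thread):
    # Called once per thread; returning None leaves the default
    # behavior in place for that thread.
    return ptwrite_filter

gdb.ptwrite.register_filter_factory(filter_factory)
...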
Approved-By: Markus Metzger <markus.t.metzger@intel.com>
|
|
C++11 has a built-in attribute for this, so there is no need to use a compat macro.
Change-Id: I90e4220d26e8f3949d91761f8a13cd9c37da3875
Reviewed-by: Lancelot Six <lancelot.six@amd.com>
|
|
In commit 764af878259 ("[gdb/python] Add typesafe wrapper around
PyObject_CallMethod") I added poisoning of PyObject_CallMethod:
...
/* Poison PyObject_CallMethod. The typesafe wrapper gdbpy_call_method should be
used instead. */
template<typename... Args>
PyObject *
PyObject_CallMethod (Args...);
...
The idea was that subsequent code would be forced to use gdbpy_call_method
instead of PyObject_CallMethod.
However, that caused build issues with gcc 14 and python 3.13:
...
/usr/bin/ld: python/py-disasm.o: in function `gdb::ref_ptr<_object, gdbpy_ref_policy<_object> > gdbpy_call_method<unsigned int, long long>(_object*, char const*, unsigned int, long long)':
/data/vries/gdb/src/gdb/python/python-internal.h:207:(.text+0x384f): undefined reference to `_object* PyObject_CallMethod<_object*, char*, char*, unsigned int, long long>(_object*, char*, char*, unsigned int, long long)'
/usr/bin/ld: python/py-tui.o: in function `gdb::ref_ptr<_object, gdbpy_ref_policy<_object> > gdbpy_call_method<int>(_object*, char const*, int)':
/data/vries/gdb/src/gdb/python/python-internal.h:207:(.text+0x1235): undefined reference to `_object* PyObject_CallMethod<_object*, char*, char*, int>(_object*, char*, char*, int)'
/usr/bin/ld: python/py-tui.o: in function `gdb::ref_ptr<_object, gdbpy_ref_policy<_object> > gdbpy_call_method<int, int, int>(_object*, char const*, int, int, int)':
/data/vries/gdb/src/gdb/python/python-internal.h:207:(.text+0x12b0): undefined reference to `_object* PyObject_CallMethod<_object*, char*, char*, int, int, int>(_object*, char*, char*, int, int, int)'
collect2: error: ld returned 1 exit status
...
Fix this by poisoning without using templates.
Tested on x86_64-linux.
|
|
The following recent change introduced a regression when building using
clang++:
commit 764af878259768bb70c65bdf3f3285c2d6409bbd
Date: Wed Jun 12 18:58:49 2024 +0200
[gdb/python] Add typesafe wrapper around PyObject_CallMethod
The error message is:
../../gdb/python/python-internal.h:151:16: error: default initialization of an object of const type 'const char'
constexpr char gdbpy_method_format;
^
= '\0'
CXX python/py-block.o
1 error generated.
make[2]: *** [Makefile:1959: python/py-arch.o] Error 1
make[2]: *** Waiting for unfinished jobs....
In file included from ../../gdb/python/py-auto-load.c:25:
../../gdb/python/python-internal.h:151:16: error: default initialization of an object of const type 'const char'
constexpr char gdbpy_method_format;
^
= '\0'
1 error generated.
make[2]: *** [Makefile:1959: python/py-auto-load.o] Error 1
In file included from ../../gdb/python/py-block.c:23:
../../gdb/python/python-internal.h:151:16: error: default initialization of an object of const type 'const char'
constexpr char gdbpy_method_format;
^
= '\0'
1 error generated.
This patch fixes this by changing gdbpy_method_format to be a templated
struct, and only having its specializations define the static constexpr
member "format". This way, we avoid having an uninitialized constexpr
expression, regardless of whether it is instantiated or not.
Reviewed-By: Tom de Vries <tdevries@suse.de>
Change-Id: I5bec241144f13500ef78daea30f00d01e373692d
|
|
This adds an overload of gdbpy_call_method that accepts a gdbpy_ref<>.
This is just a small convenience.
Reviewed-By: Tom de Vries <tdevries@suse.de>
|
|
This changes gdbpy_call_method to return a gdbpy_ref<>. This is
slightly safer because it makes it simpler to correctly handle
reference counts.
Reviewed-By: Tom de Vries <tdevries@suse.de>
|
|
In gdb/python/py-tui.c we have code like this:
...
gdbpy_ref<> result (PyObject_CallMethod (m_window.get(), "hscroll",
"i", num_to_scroll, nullptr));
...
The nullptr is superfluous; the format string already indicates that
there's only one method argument.
OTOH, passing no method args does use a nullptr:
...
gdbpy_ref<> result (PyObject_CallMethod (m_window.get (), "render",
nullptr));
...
Furthermore, choosing the right format string chars can be tricky.
Add a typesafe wrapper around PyObject_CallMethod that hides these
details, such that we can use the more intuitive:
...
gdbpy_ref<> result (gdbpy_call_method (m_window.get(), "hscroll",
num_to_scroll));
...
and:
...
gdbpy_ref<> result (gdbpy_call_method (m_window.get (), "render"));
...
Tested on x86_64-linux.
Co-Authored-By: Tom de Vries <tdevries@suse.de>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
If, in gdb/python/python-internal.h, we pretend to have a platform that doesn't
support long long:
...
-#ifdef HAVE_LONG_LONG
+#if 0
...
I get on arm-linux:
...
(gdb) placement_candidate()
disassemble test^M
Dump of assembler code for function test:^M
0x004004d8 <+0>: push {r11} @ (str r11, [sp, #-4]!)^M
0x004004dc <+4>: Python Exception <class 'ValueError'>: \
Buffer returned from read_memory is sized 0 instead of the expected 4^M
^M
unknown disassembler error (error = -1)^M
(gdb) FAIL: $exp: memory source api: second disassembler pass
...
The problem is that gdb_py_longest is typedef-ed to long, but the
corresponding format character GDB_PY_LL_ARG is defined to "L", meaning
"long long" [1].
Fix this by using "l", meaning long instead. Likewise for GDB_PY_LLU_ARG.
Tested on arm-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR python/31845
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31845
[1] https://docs.python.org/3/c-api/arg.html
|
|
Starting with Python 3.6, support for platforms without long long has
been removed [1].
HAVE_LONG_LONG and PY_LONG_LONG are still defined, but only for compatibility,
so stop relying on them.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
[1] https://github.com/python/cpython/issues/72148
|
|
This removes the embedded 'if' from GDB_PY_HANDLE_EXCEPTION and
GDB_PY_SET_HANDLE_EXCEPTION. I believe this 'if' was necessary with
the old gdb try/catch macros, but it no longer is: these should only
ever be called from a 'catch' block, where it's already known that an
exception was thrown.
Simon pointed out, though, that in a few spots, these were in fact
called outside of 'catch' blocks. This patch cleans up these spots.
I also found one spot where a redundant 'return nullptr' could be
removed.
|
|
Starting with Python 3.12, PyErr_Fetch and PyErr_Restore are deprecated.
Use PyErr_GetRaisedException and PyErr_SetRaisedException instead, for
Python >= 3.12.
Tested on aarch64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
With python 3.12, I run into:
...
(gdb) PASS: gdb.python/py-block.exp: check variable access
python print (block['nonexistent'])^M
Python Exception <class 'KeyError'>: 'nonexistent'^M
Error occurred in Python: 'nonexistent'^M
(gdb) FAIL: gdb.python/py-block.exp: check nonexistent variable
...
The problem is that PyErr_Fetch returns a normalized exception, while the
test-case matches the output for an unnormalized exception.
With python 3.6, PyErr_Fetch returns an unnormalized exception, and the
test passes.
Fix this by:
- updating the test-case to match the output for a normalized exception, and
- lazily forcing normalized exceptions using PyErr_NormalizeException.
Tested on aarch64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Similar to gdbpy_err_fetch::value, add a getter gdbpy_err_fetch::type, and use
both consistently to get gdbpy_err_fetch members m_error_value and
m_error_type.
Tested on aarch64-linux.
|
|
We currently pass frames to function by value, as `frame_info_ptr`.
This is somewhat expensive:
- the size of `frame_info_ptr` is 64 bytes, which is a bit big to pass
by value
- the constructors and destructor link/unlink the object in the global
`frame_info_ptr::frame_list` list. This is an `intrusive_list`, so
it's not so bad: it's just assigning a few pointers, there's no memory
allocation as there would be with `std::list`, but it's still useless
to do that over and over.
As suggested by Tom Tromey, change many function signatures to accept
`const frame_info_ptr &` instead of `frame_info_ptr`.
Some functions reassign their `frame_info_ptr` parameter, like:
void
the_func (frame_info_ptr frame)
{
  for (; frame != nullptr; frame = get_prev_frame (frame))
    {
      ...
    }
}
I wondered what to do about them: do I leave them as-is, or change them
(and need to introduce a separate local variable that can be
re-assigned)? I opted for the latter, for consistency. It might not be
clear why some functions take `const frame_info_ptr &` while others take
`frame_info_ptr`. Also, if a function took a `frame_info_ptr` because
it did re-assign its parameter, I doubt that we would think to change it
to `const frame_info_ptr &` should the implementation change such that
it doesn't need to take `frame_info_ptr` anymore. It seems better to
have a simple rule and apply it everywhere.
Change-Id: I59d10addef687d157f82ccf4d54f5dde9a963fd0
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit is the result of the following actions:
- Running gdb/copyright.py to update all of the copyright headers to
include 2024,
- Manually updating a few files the copyright.py script told me to
update, these files had copyright headers embedded within the
file,
- Regenerating gdbsupport/Makefile.in to refresh its copyright
date,
- Using grep to find other files that still mentioned 2023. If
these files were updated last year from 2022 to 2023 then I've
updated them this year to 2024.
I'm sure I've probably missed some dates. Feel free to fix them up as
you spot them.
|
|
The gdb.Objfile, gdb.Progspace, gdb.Type, and gdb.Inferior Python
types already have a __dict__ attribute, which allows users to create
user defined attributes within the objects. This is useful if the
user wants to cache information within an object.
This commit adds the same functionality to the gdb.InferiorThread
type.
After this commit there is a new gdb.InferiorThread.__dict__
attribute, which is a dictionary. A user can, for example, do this:
(gdb) pi
>>> t = gdb.selected_thread()
>>> t._user_attribute = 123
>>> t._user_attribute
123
>>>
There's a new test included.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Many object types now have a __repr__() function implementation. A
common pattern is that, if an object is invalid, we print its
representation as: <TYPENAME (invalid)>.
I thought it might be a good idea to move the formatting of this
specific representation into a utility function, and then update all
of our existing code to call the new function.
The only place where I haven't made use of the new function is in
unwind_infopy_repr, where we currently return a different string.
This case is a little different as the UnwindInfo is invalid because
it references a frame, and it is the frame itself which is invalid.
That said, I think it would be fine to switch to using the standard
format; if the UnwindInfo references an invalid frame, then the
UnwindInfo is itself invalid. But changing this would be an actual
change in behaviour, while all the other changes in this commit are
just refactoring.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Move the gdbpy_gil class into python-internal.h; the next
commit wants to make use of this class from a file other
than python.c.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Since GDB now requires C++17, we don't need the internally maintained
gdb::optional implementation. This patch does the following replacements:
- gdb::optional -> std::optional
- gdb::in_place -> std::in_place
- #include "gdbsupport/gdb_optional.h" -> #include <optional>
This change has mostly been done automatically. One exception is
gdbsupport/thread-pool.* which did not use the gdb:: prefix as it
already lives in the gdb namespace.
Change-Id: I19a92fa03e89637bab136c72e34fd351524f65e9
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Pedro Alves <pedro@palves.net>
|
|
This commit adds a new Python function, gdb.notify_mi, that can be used
to emit custom async notifications to the MI channel. This can be used,
among other things, to implement notifications about events MI does not
support, such as a remote connection closing or a register change.
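For example (notification name and fields invented for illustration):
...
(gdb) python gdb.notify_mi("connection-removed", {"id": 1})
=connection-removed,id="1"
...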
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit generalizes serialize_mi_result() to make it usable in
contexts other than printing the result of a custom MI command.
To do so, the check whether the passed Python object is a dictionary has
been moved to the caller - at the very least, different uses require
different error messages. Also, it has been renamed to
serialize_mi_results() to better match the GDB/MI output syntax (see the
corresponding section in the documentation, in particular the rules
'result-record' and 'async-output').
Since it is now a more generic function, it has been moved to py-mi.c.
This is a preparation for implementing Python support for sending custom
MI async events.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This adds a new Python function, gdb.execute_mi, that can be used to
invoke an MI command but get the output as a Python object, rather
than a string. This is done by implementing a new ui_out subclass
that builds a Python object.
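For example, the MI ^done record becomes a Python dictionary:
...
(gdb) python print(gdb.execute_mi("-data-evaluate-expression", "2+2"))
{'value': '4'}
...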
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=11688
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
If an MI command written in Python includes a number in its output,
currently that is simply emitted as a string. However, it's
convenient for a later patch if these are emitted using field_signed.
This does not make a difference to ordinary MI clients.
|
|
Currently, when we add a new python sub-system to GDB,
e.g. py-inferior.c, we end up having to create a new function like
gdbpy_initialize_inferior, which then has to be called from the
function do_start_initialization in python.c.
In some cases (py-micmd.c and py-tui.c), we have two functions
gdbpy_initialize_*, and gdbpy_finalize_*, with the second being called
from finalize_python which is also in python.c.
This commit proposes a mechanism to manage these initialization and
finalization calls, this means that adding a new Python subsystem will
no longer require changes to python.c or python-internal.h, instead,
the initialization and finalization functions will be registered
directly from the sub-system file, e.g. py-inferior.c, or py-micmd.c.
The initialization and finalization functions are managed through a
new class gdbpy_initialize_file in python-internal.h. This class
contains a single global vector of all the initialization and
finalization functions.
In each Python sub-system we create a new gdbpy_initialize_file
object, the object constructor takes care of registering the two
callback functions.
Now from python.c we can call static functions on the
gdbpy_initialize_file class which take care of walking the callback
list and invoking each callback in turn.
To slightly simplify the Python sub-system files I added a new macro
GDBPY_INITIALIZE_FILE, which hides the need to create an object. We
can now just do this:
GDBPY_INITIALIZE_FILE (gdbpy_initialize_registers);
One possible problem with this change is that there is now no
guaranteed ordering of how the various sub-systems are initialized (or
finalized). To try and avoid dependencies creeping in I have added a
use of the environment variable GDB_REVERSE_INIT_FUNCTIONS, this is
the same environment variable used in the generated init.c file.
Just like with init.c, when this environment variable is set we
reverse the list of Python initialization (and finalization)
functions. As there is already a test that starts GDB with the
environment variable set then this should offer some level of
protection against dependencies creeping in - though for full
protection I guess we'd need to run all gdb.python/*.exp tests with
the variable set.
I have tested this patch with the environment variable set, and saw no
regressions, so I think we are fine right now.
One other change of note was for gdbpy_initialize_gdb_readline; this
function previously returned void. In order to make this function
have the correct signature I've updated its return type to int, and we
now return 0 to indicate success.
All of the other initialize (and finalize) functions have been made
static within their respective sub-system files.
There should be no user visible changes after this commit.
|
|
Background
----------
When a thread-specific breakpoint is deleted as a result of the
specific thread exiting the function remove_threaded_breakpoints is
called which sets the disposition of the breakpoint to
disp_del_at_next_stop and sets the breakpoint number to 0. Setting
the breakpoint number to zero has the effect of hiding the breakpoint
from the user. We also print a message indicating that the breakpoint
has been deleted.
It was brought to my attention during a review of another patch[1]
that setting a breakpoint's number to zero will suppress the MI
breakpoint-deleted notification for that breakpoint, and indeed, this
can be seen to be true, in delete_breakpoint, if the breakpoint number
is zero, then GDB will not notify the breakpoint_deleted observer.
It seems wrong that a user-created, thread-specific breakpoint will
have a =breakpoint-created notification, but will not have a
=breakpoint-deleted notification. I suspect that this is a bug.
[1] https://sourceware.org/pipermail/gdb-patches/2023-February/196560.html
The First Problem
-----------------
During my initial testing I wanted to see how GDB handled the
breakpoint after its number was set to zero. To do this I created
the testcase gdb.threads/thread-bp-deleted.exp. This test creates a
worker thread, which immediately exits. After the worker thread has
exited the main thread spins in a loop.
In GDB I break once the worker thread has been created and place a
thread-specific breakpoint, then use 'continue&' to resume the
inferior in non-stop mode. The worker thread then exits, but the main
thread never stops - instead it sits in the spin. I then tried to use
'maint info breakpoints' to see what GDB thought of the
thread-specific breakpoint.
Unfortunately, GDB crashed like this:
(gdb) continue&
Continuing.
(gdb) [Thread 0x7ffff7c5d700 (LWP 1202458) exited]
Thread-specific breakpoint 3 deleted - thread 2 no longer in the thread list.
maint info breakpoints
... snip some output ...
Fatal signal: Segmentation fault
----- Backtrace -----
0x5ffb62 gdb_internal_backtrace_1
../../src/gdb/bt-utils.c:122
0x5ffc05 _Z22gdb_internal_backtracev
../../src/gdb/bt-utils.c:168
0x89965e handle_fatal_signal
../../src/gdb/event-top.c:964
0x8997ca handle_sigsegv
../../src/gdb/event-top.c:1037
0x7f96f5971b1f ???
/usr/src/debug/glibc-2.30-2-gd74461fa34/nptl/../sysdeps/unix/sysv/linux/x86_64/sigaction.c:0
0xe602b0 _Z15print_thread_idP11thread_info
../../src/gdb/thread.c:1439
0x5b3d05 print_one_breakpoint_location
../../src/gdb/breakpoint.c:6542
0x5b462e print_one_breakpoint
../../src/gdb/breakpoint.c:6702
0x5b5354 breakpoint_1
../../src/gdb/breakpoint.c:6924
0x5b58b8 maintenance_info_breakpoints
../../src/gdb/breakpoint.c:7009
... etc ...
As the thread-specific breakpoint is set to disp_del_at_next_stop, and
GDB hasn't stopped yet, then the breakpoint still exists in the global
breakpoint list.
The breakpoint will not show in 'info breakpoints' as its number is
zero, but it will show in 'maint info breakpoints'.
As GDB prints the breakpoint, the thread-id for the breakpoint is
printed as part of the 'stop only in thread ...' line. Printing the
thread-id involves calling find_thread_global_id to convert the global
thread-id into a thread_info*. Then calling print_thread_id to
convert the thread_info* into a string.
The problem is that find_thread_global_id returns nullptr as the
thread for the thread-specific breakpoint has exited. The
print_thread_id assumes it will be passed a non-nullptr. As a result
GDB crashes.
In this commit I've added an assert to print_thread_id (gdb/thread.c)
to check that the pointer passed in is not nullptr. This assert would
have triggered in the above case before GDB crashed.
MI Notifications: The Dangers Of Changing A Breakpoint's Number
---------------------------------------------------------------
Currently the delete_breakpoint function doesn't trigger the
breakpoint_deleted observer for any breakpoint with the number zero.
There is a comment explaining why this is the case in the code; it's
something about watchpoints. But I did consider just removing the 'is
the number zero' guard and always triggering the breakpoint_deleted
observer, figuring that I'd then fix the watchpoint issue some other
way.
But I realised this wasn't going to be good enough. When the MI
notification was delivered the number would be zero, so any frontend
parsing the notifications would not be able to match
=breakpoint-deleted notification to the earlier =breakpoint-created
notification.
What this means is that, at the point the breakpoint_deleted observer
is called, the breakpoint's number must be correct.
MI Notifications: The Dangers Of Delaying Deletion
--------------------------------------------------
The test I used to expose the above crash also brought another problem
to my attention. In the above test we used 'continue&' to resume,
after which a thread exited, but the inferior didn't stop. Recreating
the same test in the MI looks like this:
-break-insert -p 2 main
^done,bkpt={number="2",type="breakpoint",disp="keep",...<snip>...}
(gdb)
-exec-continue
^running
*running,thread-id="all"
(gdb)
~"[Thread 0x7ffff7c5d700 (LWP 987038) exited]\n"
=thread-exited,id="2",group-id="i1"
~"Thread-specific breakpoint 2 deleted - thread 2 no longer in the thread list.\n"
At this point we have a single thread left, which is still
running:
-thread-info
^done,threads=[{id="1",target-id="Thread 0x7ffff7c5eb80 (LWP 987035)",name="thread-bp-delet",state="running",core="4"}],current-thread-id="1"
(gdb)
Notice that we got the =thread-exited notification from GDB as soon as
the thread exited. We also saw the CLI line from GDB, the line
explaining that breakpoint 2 was deleted. But, as expected, we didn't
see the =breakpoint-deleted notification.
I say "as expected" because the number was set to zero. But, even if
the number was not set to zero we still wouldn't see the
notification. The MI notification is driven by the breakpoint_deleted
observer, which is only called when we actually delete the breakpoint,
which is only done the next time GDB stops.
Now, maybe this is fine. The notification is delivered a little
late. But remember, by setting the number to zero the breakpoint will
be hidden from the user, for example, the breakpoint is removed from
the MI's -break-info command output.
This means that GDB is in a position where the breakpoint doesn't show
up in the breakpoint table, but a =breakpoint-deleted notification has
not yet been sent out. This doesn't seem right to me.
What this means is that, when the thread exits, we should immediately
be sending out the =breakpoint-deleted notification. We should not
wait for GDB to next stop before sending the notification.
The Solution
------------
My proposed solution is this; in remove_threaded_breakpoints, instead
of setting the disposition to disp_del_at_next_stop and setting the
number to zero, we now just call delete_breakpoint directly.
The notification will now be sent out immediately; as soon as the
thread exits.
As the number has not changed when delete_breakpoint is called, the
notification will have the correct number.
And as the breakpoint is immediately removed from the breakpoint list,
we no longer need to worry about 'maint info breakpoints' trying to
print the thread-id for an exited thread.
My only concern is that calling delete_breakpoint directly seems so
obvious that I wonder why the original patch (that added
remove_threaded_breakpoints) didn't take this approach. This code was
added in commit 49fa26b0411d, but the commit message offers no clues
to why this approach was taken, and the original email thread offers
no insights either[2]. There are no test regressions after making
this change, so I'm hopeful that this is going to be fine.
[2] https://sourceware.org/pipermail/gdb-patches/2013-September/106493.html
The Complication
----------------
Of course, it couldn't be that simple.
The script gdb.python/py-finish-breakpoint.exp had some regressions
during testing.
The problem was with the FinishBreakpoint.out_of_scope callback
implementation. This callback is supposed to trigger whenever the
FinishBreakpoint goes out of scope; and this includes when the thread
for the breakpoint exits.
The problem I ran into is the Python FinishBreakpoint implementation.
Specifically, after this change I was losing some of the out_of_scope
calls.
The problem is that the out_of_scope call (the one I'm interested in) is
triggered from the inferior_exit observer. Before my change the
observers were called in this order:
thread_exit
inferior_exit
breakpoint_deleted
The inferior_exit would trigger the out_of_scope call.
After my change the breakpoint_deleted notification (for
thread-specific breakpoints) occurs earlier, as soon as the
thread exits, so now the order is:
thread_exit
breakpoint_deleted
inferior_exit
Currently, after the breakpoint_deleted call the Python object
associated with the breakpoint is released, so, when we get to the
inferior_exit observer, there's no longer a Python object to call the
out_of_scope method on.
My solution is to follow the model for how bpfinishpy_pre_stop_hook
and bpfinishpy_post_stop_hook are called, this is done from
gdbpy_breakpoint_cond_says_stop in py-breakpoint.c.
I've now added a new bpfinishpy_pre_delete_hook, called from
gdbpy_breakpoint_deleted in py-breakpoint.c, and from this new hook
function I check and, where needed, call the out_of_scope method.
With this fix in place I now see the
gdb.python/py-finish-breakpoint.exp test fully passing again.
Testing
-------
Tested on x86-64/Linux with unix, native-gdbserver, and
native-extended-gdbserver boards.
New tests added to cover all the cases I've discussed above.
Approved-By: Pedro Alves <pedro@palves.net>
|
|
Hannes filed a bug showing a crash, where a pretty-printer written in
Python could cause a use-after-free. He sent a patch, but I thought a
different approach was needed.
In a much earlier patch (see bug #12533), we changed the Python code
to release new values from the value chain when constructing a
gdb.Value. The rationale for this is that if you write a command that
does a lot of computations in a loop, all the values will be kept live
by the value chain, resulting in gdb using a large amount of memory.
However, suppose a value is passed to Python from some code in gdb
that needs to use the value after the call into Python. In this
scenario, value_to_value_object will still release the value -- and
because gdb code doesn't generally keep strong references to values (a
consequence of the ancient decision to use the value chain to avoid
memory management), this will result in a use-after-free.
This scenario can happen, as it turns out, when a value is passed to
Python for pretty-printing. Now, normally this route boxes the value
via value_to_value_object_no_release, avoiding the problematic release
from the value chain. However, if you then call Value.cast, the
underlying value API might return the same value, which is then
released from the chain.
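A minimal sketch of a printer that could hit this (illustrative, not
the reproducer from the bug report):
...
class IntPrinter:
    def __init__(self, val):
        self.val = val

    def to_string(self):
        # Casting a value to its own type may hand back the very same
        # underlying value; before this patch, boxing that result
        # released it from the value chain, leaving gdb's own
        # reference dangling.
        return str(self.val.cast(self.val.type))
...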
This patch fixes the problem by changing how value boxing is done.
value_to_value_object no longer removes a value from the chain.
Instead, every spot in gdb that might construct new values uses a
scoped_value_mark to ensure that the requirements of bug #12533 are
met. And, because incoming values aren't ever released from the chain
(the Value.cast one comes earlier on the chain than the
scoped_value_mark), the bug can no longer occur. (Note that many
spots in the Python layer already take this approach, so not many
places needed to be touched.)
In the future I think we should replace the use of raw "value *" with
value_ref_ptr pretty much everywhere. This will ensure lifetime
safety throughout gdb.
The test case in this patch comes from Hannes' original patch. I only
made a trivial ("require") change to it. However, while this fails
for him, I can't make it fail on this machine; nevertheless, he tried
my patch and reported the bug as being fixed.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30044
|
|
The previous commit relied on spotting when a Python defined TUI
window factory was deleted. I spotted that the window factories are
not deleted when GDB shuts down its Python environment, they are only
deleted when one window factory replaces another. Consider this
example Python script:
class TestWindowFactory:
    def __init__(self, msg):
        self.msg = msg
        print("Entering TestWindowFactory.__init__: %s" % self.msg)

    def __call__(self, tui_win):
        print("Entering TestWindowFactory.__call__: %s" % self.msg)
        return TestWindow(tui_win, self.msg)

    def __del__(self):
        print("Entering TestWindowFactory.__del__: %s" % self.msg)

gdb.register_window_type("test_window", TestWindowFactory("A"))
gdb.register_window_type("test_window", TestWindowFactory("B"))
And this GDB session:
(gdb) source tui.py
Entering TestWindowFactory.__init__: A
Entering TestWindowFactory.__init__: B
Entering TestWindowFactory.__del__: B
(gdb) quit
Notice that when the 'B' window replaces the 'A' window we see the 'A'
object being deleted. But, when Python is shut down (after the
'quit') the 'B' object is never deleted.
Instead, GDB retains a reference to the window factory object, which
forces the Python object to remain live even after the Python
interpreter itself has been shut down.
The references themselves are held in a dynamically allocated
std::unordered_map (in tui/tui-layout.c) which is never deallocated,
thus the underlying Python references are never decremented to zero,
and so GDB never tries to delete these Python objects.
This commit is the first half of the work to clean up this edge case.
All gdbpy_tui_window_maker objects (the objects that implement the
TUI window factory callback for Python defined TUI windows), are now
linked together into a global list using the intrusive list mechanism.
When GDB shuts down the Python interpreter we can now walk this global
list and release the reference that is held to the underlying Python
object. By releasing this reference the Python object will now be
deleted.
I've added a new assert in gdbpy_tui_window_maker::operator(), this
will catch the case where we somehow end up in here after having
reset the reference to the underlying Python object. I don't think
this should ever happen though as we only clear the references when
shutting down the Python interpreter, and the ::operator() function is
only called when trying to apply a new TUI layout - something that
shouldn't happen while GDB itself is shutting down.
This commit does not update the std::unordered_map in tui-layout.c,
that will be done in the next commit.
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
This commit is the result of running the gdb/copyright.py script,
which automated the update of the copyright year range for all
source files managed by the GDB project to be updated to include
year 2023.
|
|
In a later commit in this series I will propose removing all of the
explicit gdbpy_initialize_* calls from python.c and replace these
calls with a more generic mechanism.
One of the side effects of this generic mechanism is that the order in
which the various Python sub-systems within GDB are initialized is no
longer guaranteed.
On the whole I don't think this matters; most of the sub-systems are
independent of each other, though testing did reveal a few places
where we did have dependencies, which I don't think were explicitly
documented in a comment anywhere.
This commit is similar to the previous one, and fixes the second
dependency issue that I found.
In this case the finish_breakpoint_object_type uses the
breakpoint_object_type as its tp_base, this means that
breakpoint_object_type must have been initialized with a call to
PyType_Ready before finish_breakpoint_object_type can be initialized.
Previously we depended on the ordering of calls to
gdbpy_initialize_breakpoints and gdbpy_initialize_finishbreakpoints in
python.c.
After this commit a new function gdbpy_breakpoint_init_breakpoint_type
exists, this function ensures that breakpoint_object_type has been
initialized, and can be called from any gdbpy_initialize_* function.
I feel that this change makes the dependency explicit, which I think
is a good thing.
There should be no user visible changes after this commit.
|
|
This changes GDB to use frame_info_ptr instead of frame_info *
The substitution was done with multiple sequential `sed` commands:
sed 's/^struct frame_info;/class frame_info_ptr;/'
sed 's/struct frame_info \*/frame_info_ptr /g' - which left some
issues in a few files, that were manually fixed.
sed 's/\<frame_info \*/frame_info_ptr /g'
sed 's/frame_info_ptr $/frame_info_ptr/g' - used to remove whitespace
problems.
The changed files were then manually checked and some 'sed' changes
undone, some constructors and some gets were added, according to what
made sense and what Tromey originally did.
Co-Authored-By: Bruno Larsen <blarsen@redhat.com>
Approved-by: Tom Tromey <tom@tromey.com>
|
|
I noticed that gdbpy_parse_register_id would assert if passed a Python
object of a type it was not expecting. The included test case shows
this crash. This patch fixes the problem and also changes
gdbpy_parse_register_id to be more "Python-like" -- it always ensures
the Python error is set when it fails, and the callers now simply
propagate the existing exception.
|
|
PR python/18385
v7:
This version addresses the issues pointed out by Tom.
Added nullchecks for Python object creations.
Changed from using PyLong_FromLong to the gdb_py-versions.
Re-factored some code to make it look more cohesive.
Also added the more safe Python reference count decrement PY_XDECREF,
even though the BreakpointLocation type is never instantiated by the
user (explicitly documented in the docs) decrementing < 0 is made
impossible with the safe call.
Tom pointed out that using the policy class explicitly to decrement a
reference counted object was not the way to go, so this has instead been
wrapped in a ref_ptr that handles that for us in blocpy_dealloc.
Moved macro from py-internal to py-breakpoint.c.
Renamed section at the bottom of commit message "Patch Description".
v6:
This version addresses the points Pedro gave in review to this patch.
Added the attributes `function`, `fullname` and `thread_groups`
as per request by Pedro with the argument that it more resembles the
output of the MI-command "-break-list". Added documentation for these attributes.
Cleaned up leftovers from copy+paste in the test suite, removed hard
coding of line numbers where possible.
Refactored some code to use more C++-style range-for loops over
breakpoint locations.
Changed terminology; the naming was very inconsistent, using a variety
of "parent" and "owner". Now "owner" is the only term used, and the
field in the gdb_breakpoint_location_object is now also called "owner".
v5:
Changes in response to review by Tom Tromey:
- Replaced manual INCREF/DECREF calls with
gdbpy_ref ptrs in places where possible.
- Fixed non-gdb style conforming formatting
- Get parent of bploc increases ref count of parent.
- moved bploc Python definition to py-breakpoint.c
The INCREF of self in bppy_get_locations is due
to the individual locations holding a reference to
their owner. This is decremented at de-alloc time.
The reason why this needs to be here is that, if the user writes,
for instance:
py loc = gdb.breakpoints()[X].locations[Y]
the breakpoint owner object immediately goes
out of scope (GC'd/dealloced), and the location
object requires the owner to stay alive for as long as the
location is alive.
Thanks for your review, Tom!
v4:
Fixed remaining doc issues as per request
by Eli.
v3:
Rewritten commit message, shortened + reworded,
added tests.
Patch Description
Currently, the Python API lacks the ability to
query breakpoints for their installed locations,
and consequently can't query any information about them, or
enable/disable individual locations.
This patch solves this by adding Python type gdb.BreakpointLocation.
The type is never instantiated by the user of the Python API directly,
but is produced by the gdb.Breakpoint.locations attribute returning
a list of gdb.BreakpointLocation.
gdb.Breakpoint.locations:
The attribute for retrieving the currently installed breakpoint
locations for gdb.Breakpoint. Matches behavior of
the "info breakpoints" command in that it only
returns the last known or currently inserted breakpoint locations.
BreakpointLocation contains 7 attributes
6 read-only attributes:
owner: location owner's Python companion object
source: file path and line number tuple: (string, long) / None
address: installed address of the location
function: function name where location was set
fullname: fullname where location was set
thread_groups: thread groups (inferiors) where location was set.
1 writeable attribute:
enabled: get/set enable/disable this location (bool)
Access/calls to these can all throw Python exceptions (documented in
the online documentation), and that's due to the nature
of how breakpoint locations can be invalidated
"behind the scenes", either by them being removed
from the original breakpoint or changed,
like for instance when a new symbol file is loaded, at
which point all breakpoint locations are re-created by GDB.
Therefore this patch has chosen to be non-intrusive:
it's up to the Python user to re-request the locations if
they become invalid.
Also, there are event handlers that handle new object files etc.; if a
Python user is storing breakpoint locations in some larger state they've
built up, refreshing the locations is easy and it only comes
with runtime overhead when the Python user wants to use them.
gdb.BreakpointLocation Python type
struct "gdbpy_breakpoint_location_object" is found in python-internal.h
Its definition, layout, methods and functions
are found in the same file as gdb.Breakpoint (py-breakpoint.c)
1 change was also made to breakpoint.h/c to make it possible
to enable and disable a bp_location* specifically,
without having its LOC_NUM, as this number
also can change arbitrarily behind the scenes.
Updated docs & news file as per request.
Testsuite: tests the .source attribute and the disabling of
individual locations.
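An illustrative session using the new attribute:
...
(gdb) python bp = gdb.breakpoints()[0]
(gdb) python print([hex(loc.address) for loc in bp.locations])
(gdb) python bp.locations[0].enabled = False
...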
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=18385
Change-Id: I302c1c50a557ad59d5d18c88ca19014731d736b0
|
|
Python 3.11 deprecates PySys_SetPath and Py_SetProgramName. The
PyConfig API replaces these and other functions. This commit uses the
PyConfig API to provide equivalent functionality while also preserving
support for older versions of Python, i.e. those before Python 3.8.
A beta version of Python 3.11 is available in Fedora Rawhide. Both
Fedora 35 and Fedora 36 use Python 3.10, while Fedora 34 still used
Python 3.9. I've tested these changes on Fedora 34, Fedora 36, and
rawhide, though complete testing was not possible on rawhide due to
a kernel bug. That being the case, I decided to enable the newer
PyConfig API by testing PY_VERSION_HEX against 0x030a0000. This
corresponds to Python 3.10.
We could try to use the PyConfig API for Python versions as early as 3.8,
but I'm reluctant to do this as there may have been PyConfig related
bugs in earlier versions which have since been fixed. Recent linux
distributions should have support for Python 3.10. This should be
more than adequate for testing the new Python initialization code in
GDB.
Information about the PyConfig API as well as the motivation behind
deprecating the old interface can be found at these links:
https://github.com/python/cpython/issues/88279
https://peps.python.org/pep-0587/
https://docs.python.org/3.11/c-api/init_config.html
The v2 commit also addresses several problems that Simon found in
the v1 version.
In v1, I had used Py_DontWriteBytecodeFlag in the new initialization
code, but Simon pointed out that this global configuration variable
will be deprecated in Python 3.12. This version of the patch no longer
uses Py_DontWriteBytecodeFlag in the new initialization code.
Additionally, both Py_DontWriteBytecodeFlag and Py_IgnoreEnvironmentFlag
will no longer be used when building GDB against Python 3.10 or higher.
While it's true that both of these global configuration variables are
deprecated in Python 3.12, it makes sense to disable their use for
gdb builds against 3.10 and higher since those are the versions for
which the PyConfig API is now being used by GDB. (The PyConfig API
includes different mechanisms for making the same settings afforded
by use of the soon-to-be deprecated global configuration variables.)
Simon also noted that PyConfig_Clear() would not have been called for
one of the failure paths. I've fixed that problem and also made the
rest of the "bail out" code more direct. In particular,
PyConfig_Clear() will always be called, both for success and failure.
The v3 patch addresses some rebase conflicts related to module
initialization. Commit 3acd9a692dd ("Make 'import gdb.events' work")
uses PyImport_ExtendInittab instead of PyImport_AppendInittab. That
commit also initializes a struct for each module to import. Both the
initialization and the call to PyImport_ExtendInittab were moved ahead
of the ifdefs to avoid
having to replicate (at least some of) the code three times in various
portions of the ifdefs.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28668
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29287
|
|
PR python/17291 asks for access to the current print options. While I
think this need is largely satisfied by the existence of
Value.format_string, it seemed to me that a bit more could be done.
First, while Value.format_string uses the user's settings, it does not
react to temporary settings such as "print/x". This patch changes
this.
Second, there is no good way to examine the current settings (in
particular the temporary ones in effect for just a single "print").
This patch adds this as well.
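A sketch of a pretty-printer reacting to a temporary format (this
assumes the dictionary returned by the new gdb.print_options call
carries a 'format' key when e.g. "print/x" is in effect):
...
class HexAwarePrinter:
    def __init__(self, val):
        self.val = val

    def to_string(self):
        opts = gdb.print_options()
        # Honor a one-off "print/x" the same way gdb's own printing
        # would.
        if opts.get("format") == "x":
            return hex(int(self.val))
        return str(int(self.val))
...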
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=17291
|
|
Pierre-Marie noticed that, while gdb.events is a Python module, it
can't be imported. This patch changes how this module is created, so
that it can be imported, while also ensuring that the module is always
visible, just as it was in the past.
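So, for example, this now works from a user script:
...
import gdb.events

def on_stop(event):
    print("inferior stopped")

gdb.events.stop.connect(on_stop)
...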
This new approach required one non-obvious change -- when running
gdb.base/warning.exp, where --data-directory is intentionally not
found, the event registries can now be nullptr. Consequently, this
patch probably also requires
https://sourceware.org/pipermail/gdb-patches/2022-June/189796.html
Note that this patch obsoletes
https://sourceware.org/pipermail/gdb-patches/2022-June/189797.html
|
|
This commit extends the Python API to include disassembler support.
The motivation for this commit was to provide an API by which the user
could write Python scripts that would augment the output of the
disassembler.
To achieve this I have followed the model of the existing libopcodes
disassembler, that is, instructions are disassembled one by one. This
does restrict the type of things that it is possible to do from a
Python script, i.e. all additional output has to fit on a single line,
but this was all I needed, and creating something more complex would,
I think, require greater changes to how GDB's internal disassembler
operates.
The disassembler API is contained in the new gdb.disassembler module,
which defines the following classes:
DisassembleInfo
Similar to libopcodes disassemble_info structure, has read-only
properties: address, architecture, and progspace. And has methods:
__init__, read_memory, and is_valid.
Each time GDB wants an instruction disassembled, an instance of
this class is passed to a user written disassembler function, by
reading the properties, and calling the methods (and other support
methods in the gdb.disassembler module) the user can perform and
return the disassembly.
Disassembler
This is a base-class which user written disassemblers should
inherit from. This base class provides base implementations of
__init__ and __call__ which the user written disassembler should
override.
DisassemblerResult
This class can be used to hold the result of a call to the
disassembler, it's really just a wrapper around a string (the text
of the disassembled instruction) and a length (in bytes). The user
can return an instance of this class from Disassembler.__call__ to
represent the newly disassembled instruction.
The gdb.disassembler module also provides the following functions:
register_disassembler
This function registers an instance of a Disassembler sub-class
as a disassembler, either for one specific architecture, or, as a
global disassembler for all architectures.
builtin_disassemble
This provides access to GDB's builtin disassembler. A common
use case that I see is augmenting the existing disassembler output.
The user code can call this function to have GDB disassemble the
instruction in the normal way. The user gets back a
DisassemblerResult object, which they can then read in order to
augment the disassembler output in any way they wish.
This function also provides a mechanism to intercept the
disassembler's reads of memory, thus the user can adjust what GDB
sees when it is disassembling.
The included documentation provides a more detailed description of the
API.
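As a short sketch of the augmentation use case (an assumed example
built on the API described above):
...
import gdb
import gdb.disassembler
from gdb.disassembler import Disassembler, DisassemblerResult

class CommentNops(Disassembler):
    def __init__(self):
        super().__init__("CommentNops")

    def __call__(self, info):
        # Let GDB's builtin disassembler do the real work.
        result = gdb.disassembler.builtin_disassemble(info)
        text = result.string
        if text.startswith("nop"):
            text += "\t# padding"
        return DisassemblerResult(result.length, text)

# Passing None for the architecture registers this as a global
# disassembler for all architectures.
gdb.disassembler.register_disassembler(CommentNops(), None)
...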
There is also a new CLI command added:
maint info python-disassemblers
This command is defined in the Python gdb.disassemblers module, and
can be used to list the currently registered Python disassemblers.
|
|
Convert the gdbpy_err_fetch class to make use of gdbpy_ref; this
removes the need for manual reference count management, and allows the
destructor to be removed.
There should be no functional change after this commit.
I think this cleanup is worth doing on its own, however, in a later
commit I will want to copy instances of gdbpy_err_fetch, and switching
to using gdbpy_ref means that I can rely on the default copy
constructor, without having to add one that handles the reference
counts, so this is good preparation for that upcoming change.
|
|
Consider this command defined in Python (in the file test-cmd.py):
class test_cmd (gdb.Command):
  """
  This is the first line.
    Indented second line.
  This is the third line.
  """

  def __init__ (self):
    super ().__init__ ("test-cmd", gdb.COMMAND_OBSCURE)

  def invoke (self, arg, from_tty):
    print ("In test-cmd")

test_cmd()
Now, within a GDB session:
(gdb) source test-cmd.py
(gdb) help test-cmd

  This is the first line.
    Indented second line.
  This is the third line.

(gdb)
I think there are three things wrong here:
1. The leading blank line,
2. The trailing blank line, and
3. Every line is indented from the left edge slightly.
The problem of course, is that GDB is using the Python doc string
verbatim as its help text. While the user has formatted the help text
so that it appears clear within the .py file, this means that the text
appear less well formatted when displayed in the "help" output.
The same problem can be observed for gdb.Parameter objects in their
set/show output.
In this commit I aim to improve the "help" output for commands and
parameters.
To do this I have added gdbpy_fix_doc_string_indentation, a new
function that rewrites the doc string text according to the following
rules:
1. Leading blank lines are removed,
2. Trailing blank lines are removed, and
3. Leading whitespace is removed in a "smart" way such that the
relative indentation of lines is retained.
With this commit in place the above example now looks like this:
(gdb) source ~/tmp/test-cmd.py
(gdb) help test-cmd
This is the first line.
  Indented second line.
This is the third line.
(gdb)
Which I think is much neater. Notice that the indentation of the
second line is retained. Any blank lines within the help text (not
leading or trailing) will be retained.
I've added a NEWS entry to note that there has been a change in
behaviour, but I didn't update the manual. The existing manual is
suitably vague about how the doc string is used, so I think the new
behaviour is covered just as well by the existing text.
|
|
New in this version:
- Rebase on master, fix a few more issues that appeared.
python-internal.h contains a number of macros that helped make the code
work with both Python 2 and 3. Remove them and adjust the code to use
the Python 3 functions.
Change-Id: I99a3d80067fb2d65de4f69f6473ba6ffd16efb2d
|
|
New in this version:
- Add a PY_MAJOR_VERSION check in configure.ac / AC_TRY_LIBPYTHON. If
the user passes --with-python=python2, this will cause a configure
failure saying that GDB only supports Python 3.
Support for Python 2 is a maintenance burden for any patches touching
Python support. Among others, the differences between Python 2 and 3
string and integer types are subtle. It requires a lot of effort and
thinking to get something that behaves correctly on both. And that's if
the author and reviewer of the patch even remember to test with Python
2.
See this thread for an example:
https://sourceware.org/pipermail/gdb-patches/2021-December/184260.html
So, remove Python 2 support. Update the documentation to state that GDB
can be built against Python 3 (as opposed to Python 2 or 3).
Update all the spots that use:
- sys.version_info
- IS_PY3K
- PY_MAJOR_VERSION
- gdb_py_is_py3k
... to only keep the Python 3 portions and drop the use of some
now-removed compatibility macros.
I did not update the configure script more than just removing the
explicit references to Python 2. We could maybe do more there, like
check the Python version and reject it if that version is not
supported. Otherwise (with this patch), things will only fail at
compile time, so it won't really be clear to the user that they are
trying to use an unsupported Python version. But I'm a bit lost in the
configure code that checks for Python, so I kept that for later.
Change-Id: I75b0f79c148afbe3c07ac664cfa9cade052c0c62
|
|
Add a new function, gdb.format_address, which is a wrapper around
GDB's print_address function.
This method takes an address, and returns a string with the format:
ADDRESS <SYMBOL+OFFSET>
Where, ADDRESS is the original address, formatted as hexadecimal,
SYMBOL is a symbol with an address lower than ADDRESS, and OFFSET is
the offset from SYMBOL to ADDRESS in decimal.
If there's no SYMBOL suitably close to ADDRESS then the
<SYMBOL+OFFSET> part is not included.
This is useful if a user wants to write a Python script that
pretty-prints addresses, the user no longer needs to do manual symbol
lookup, or worry about correctly formatting addresses.
Additionally, there are some settings that affect how GDB picks
SYMBOL, and whether the file name and line number should be included
with the SYMBOL name; the gdb.format_address function ensures that the
user's Python script also benefits from these settings.
By default, gdb.format_address selects SYMBOL from the current
inferior's program space, and the address is formatted using the
architecture for the current inferior. However, a user can also
explicitly pass a program space and architecture like this:
gdb.format_address(ADDRESS, PROGRAM_SPACE, ARCHITECTURE)
In order to format an address for a different inferior.
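For example (addresses and symbol purely illustrative):
...
(gdb) python print(gdb.format_address(0x401136))
0x401136 <main+4>
(gdb) python pspace = gdb.selected_inferior().progspace
(gdb) python arch = gdb.selected_inferior().architecture()
(gdb) python print(gdb.format_address(0x401136, pspace, arch))
0x401136 <main+4>
...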
Notes on the implementation:
In py-arch.c I extended arch_object_to_gdbarch to add an assertion for
the type of the PyObject being worked on. Prior to this commit all
uses of arch_object_to_gdbarch were guaranteed to pass this function a
gdb.Architecture object, but, with this commit, this might not be the
case.
So, with this commit I've made it a requirement that the PyObject be a
gdb.Architecture, and this is checked with the assert. And in order
that callers from other files can check if they have a
gdb.Architecture object, I've added the new function
gdbpy_is_architecture.
In py-progspace.c I've added two new functions. The first,
progspace_object_to_program_space, converts a PyObject of type
gdb.Progspace to the associated program_space pointer, and
gdbpy_is_progspace checks if a PyObject is a gdb.Progspace or not.
|
|
The motivation for this patch is the fact that py-micmd.c doesn't build
with Python 2, due to PyDict_GetItemWithError being a Python 3-only
function:
CXX python/py-micmd.o
/home/smarchi/src/binutils-gdb/gdb/python/py-micmd.c: In function ‘int micmdpy_uninstall_command(micmdpy_object*)’:
/home/smarchi/src/binutils-gdb/gdb/python/py-micmd.c:430:20: error: ‘PyDict_GetItemWithError’ was not declared in this scope; did you mean ‘PyDict_GetItemString’?
430 | PyObject *curr = PyDict_GetItemWithError (mi_cmd_dict.get (),
| ^~~~~~~~~~~~~~~~~~~~~~~
| PyDict_GetItemString
A first solution to fix this would be to replace
PyDict_GetItemWithError with equivalent Python 2 code. But I looked at why
we are doing this in the first place: it is to maintain the
`gdb._mi_commands` Python dictionary that we use as a `name ->
gdb.MICommand object` map. Since the `gdb._mi_commands` dictionary is
never actually used in Python, it seems like a lot of trouble to use a
Python object for this.
My first idea was to replace it with a C++ map
(std::unordered_map<std::string, gdbpy_ref<micmdpy_object>>). While
implementing this, I realized we don't really need this map at all. The
mi_command_py objects registered in the main MI command table can own
their backing micmdpy_object (that's a gdb.MICommand, but seen from the
C++ code). To know whether an mi_command is an mi_command_py, we can
use a dynamic cast. Since there's one less data structure to maintain,
there are fewer chances of messing things up.
- Change mi_command_py::m_pyobj to a gdbpy_ref, the mi_command_py is
now what keeps the MICommand alive.
- Set micmdpy_object::mi_command in the constructor of mi_command_py.
If mi_command_py manages setting/clearing that field in
swap_python_object, I think it makes sense that it also takes care of
setting it initially.
- Move a bunch of checks from micmdpy_install_command to
swap_python_object, and make them gdb_asserts.
- In micmdpy_install_command, start by doing an mi_cmd_lookup. This is
needed to know whether there's a Python MI command already registered
with that name. But we can already tell if there's a non-Python
command registered with that name. Return an error if that happens,
rather than waiting for insert_mi_cmd_entry to fail. Change the
error message to "name is already in use" rather than "may already be
in use", since it's more precise.
I asked Andrew about the original intent of using a Python dictionary
object to hold the command objects. The reason was to make sure the
objects get destroyed when the Python runtime gets finalized, not later.
Holding the objects in global C++ data structures and not doing anything
more means that the held Python objects will be decref'd after the
Python interpreter has been finalized. That's not desirable. I tried
it and it indeed segfaults.
Handle this by adding a gdbpy_finalize_micommands function called in
finalize_python. This is the mirror of gdbpy_initialize_micommands
called in do_start_initialization. In there, delete all Python MI
commands. I think it makes sense to do it this way: if it was somehow
possible to unload Python support from GDB in the middle of a session
we'd want to unregister any Python MI command. Otherwise, these MI
commands would be backed with a stale PyObject or simply nothing.
Delete tests that were related to `gdb._mi_commands`.
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: I060d5ebc7a096c67487998a8a4ca1e8e56f12cd3
|
|
This commit allows a user to create custom MI commands using Python
similarly to what is possible for Python CLI commands.
A new subclass of mi_command is defined for Python MI commands,
mi_command_py. A new file, gdb/python/py-micmd.c contains the logic
for Python MI commands.
This commit is based on work linked to from this mailing list thread:
https://sourceware.org/pipermail/gdb/2021-November/049774.html
Which has also been previously posted to the mailing list here:
https://sourceware.org/pipermail/gdb-patches/2019-May/158010.html
And was recently reposted here:
https://sourceware.org/pipermail/gdb-patches/2022-January/185190.html
The version in this patch takes some core code from the previously
posted patches, but also has some significant differences, especially
after the feedback given here:
https://sourceware.org/pipermail/gdb-patches/2022-February/185767.html
A new MI command can be implemented in Python like this:
class echo_args(gdb.MICommand):
    def invoke(self, args):
        return { 'args': args }

echo_args("-echo-args")
The 'args' parameter (to the invoke method) is a list
containing (almost) all command line arguments passed to the MI
command (--thread and --frame are handled before the Python code is
called, and removed from the args list). This list can be empty if
the MI command was passed no arguments.
When used within GDB, the above command produces output like this:
(gdb)
-echo-args a b c
^done,args=["a","b","c"]
(gdb)
The 'invoke' method of the new command must return a dictionary. The
keys of this dictionary are then used as the field names in the mi
command output (e.g. 'args' in the above).
The values of the result returned by invoke can be dictionaries,
lists, iterators, or any object that can be converted to a string.
These are processed recursively to create the mi output. And so, this
is valid:
class new_command(gdb.MICommand):
    def invoke(self, args):
        return { 'result_one': { 'abc': 123, 'def': 'Hello' },
                 'result_two': [ { 'a': 1, 'b': 2 },
                                 { 'c': 3, 'd': 4 } ] }
Which produces output like:
(gdb)
-new-command
^done,result_one={abc="123",def="Hello"},result_two=[{a="1",b="2"},{c="3",d="4"}]
(gdb)
I have required that the field names used in mi result output must
match the regexp: "^[a-zA-Z][-_a-zA-Z0-9]*$" (without the quotes).
This restriction was never written down anywhere before, but it seems
sensible to me, and we can always loosen this rule later if it proves
to be a problem. It would be much harder to add a restriction later,
once people are already using the API.
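For reference, a quick sketch (plain Python, using the regexp quoted
above) of what that rule accepts and rejects:
...
import re
FIELD_NAME_RE = re.compile("^[a-zA-Z][-_a-zA-Z0-9]*$")
assert FIELD_NAME_RE.match("result_one")    # OK: starts with a letter.
assert FIELD_NAME_RE.match("result-two")    # OK: '-' allowed after the first character.
assert not FIELD_NAME_RE.match("2fast")     # Rejected: must start with a letter.
assert not FIELD_NAME_RE.match("bad.name")  # Rejected: '.' is not in the allowed set.
...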
What follows are some details about how this implementation differs
from the original patch that was posted to the mailing list.
In this patch, I have changed how the lifetime of the Python
gdb.MICommand objects is managed. In the original patch, these objects
were kept alive by an owned reference within the mi_command_py object.
As such, the Python object would not be deleted until the
mi_command_py object itself was deleted.
This caused a problem: the mi_command_py objects were held in the
global MI command table (in mi/mi-cmds.c), which, as a global, was
not cleared until program shutdown. By this point the Python
interpreter has already been shut down. Attempting to delete the
mi_command_py object
at this point was causing GDB to try and invoke Python code after
finalising the Python interpreter, and we would crash.
To work around this problem, the original patch added code in
python/python.c that would search the mi command table, and delete the
mi_command_py objects before the Python environment was finalised.
In contrast, in this patch, I have added a new global dictionary to
the gdb module, gdb._mi_commands. We already have several such global
data stores related to pretty printers and frame unwinders.
The MICommand objects are placed into the new gdb._mi_commands
dictionary, and it is this reference that keeps the objects alive.
When GDB's Python interpreter is shut down gdb._mi_commands is deleted,
and any MICommand objects within it are deleted at this point.
This change avoids having to make the mi_cmd_table global, and walk
over it from within GDB's python related code.
This patch handles command redefinition entirely within GDB's python
code. This does impose one small restriction which is not present in
the original code (detailed below), but I don't think this is a big
issue. However, the original patch relied on being able to finish
executing the mi_command::do_invoke member function after the
mi_command object had been deleted. Though continuing to execute a
member function after an object is deleted is well defined, it is
also (IMHO) risky; it's too easy for someone to later add a use of
the object without realising that the object might sometimes have
been deleted. The new patch avoids this issue.
The one restriction that is added to avoid this is that an MICommand
object can't be reinitialised with a different command name, so:
(gdb) python cmd = MyMICommand("-abc")
(gdb) python cmd.__init__("-def")
can't reinitialize object with a different command name
This feels like a pretty weird edge case, and I'm happy to live with
this restriction.
I have also changed how the memory is managed for the command name.
In the most recently posted patch series, the command name is moved
into a subclass of mi_command, the Python-specific mi_command_py,
which inherits from mi_command and is then free to use a smart
pointer to manage the memory for the name.
In this patch, I leave the mi_command class unchanged, and instead
hold the memory for the name within the Python object, as the lifetime
of the Python object always exceeds that of the C++ object stored in
the mi_cmd_table. This adds a little more complexity in py-micmd.c,
but leaves the mi_command class nice and simple.
Next, this patch adds some extra functionality: there's a
MICommand.name read-only attribute containing the name of the command,
and a read-write MICommand.installed attribute that can be used to
install (make the command available for use) and uninstall (remove the
command from the mi_cmd_table so it can't be used) the command. This
attribute will be automatically updated if a second command replaces
an earlier command.
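For example, reusing the echo_args class from earlier, a sketch of
how these attributes behave:
...
cmd = echo_args("-echo-args")
print(cmd.name)        # Prints: -echo-args
cmd.installed = False  # Uninstall: -echo-args can no longer be used.
cmd.installed = True   # Reinstall: -echo-args is available again.
...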
This patch adds additional error handling, and makes more use of the
gdbpy_handle_exception function.
Co-Authored-By: Jan Vrany <jan.vrany@labware.com>
|
|
Add a new function gdb.history_count to the Python API; this function
returns an integer, the number of items in GDB's value history.
This is useful if you want to pull items from the history by their
absolute number, for example, if you want to show a complete history
list. Previously we could figure out how many items are in the
history list by trying to fetch the items, and then catching the
exception when an item is not available, but having this function
seems nicer.
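For example, a sketch of walking the complete value history using the
new function together with the existing gdb.history:
...
count = gdb.history_count()
for i in range(1, count + 1):
    # gdb.history takes the absolute history number, so this visits
    # $1 through $<count>.
    print("$%d = %s" % (i, gdb.history(i)))
...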
|
|
Currently, gdb's Python layer captures the current architecture and
language when "entering" Python code. This has some undesirable
effects, and so this series changes how this is handled.
First, there is code like this:
gdbpy_enter enter_py (python_gdbarch, python_language);
This is incorrect, because both of these are NULL when not otherwise
assigned. This can cause crashes in some cases -- I've added one to
the test suite. (Note that this crasher is just an example, other
ones along the same lines are possible.)
Second, when the language is captured in this way, it means that
Python code cannot affect the current language for its own purposes.
It's reasonable to want to write code like this:
gdb.execute('set language mumble')
... stuff using the current language
gdb.execute('set language previous-value')
However, this won't actually work, because the language is captured on
entry. I've added a test to show this as well.
This patch changes gdb to try to avoid capturing the current values.
The Python concept of the current gdbarch is only set in those few
cases where a non-default value is computed or needed; and the
language is not captured at all -- instead, in the cases where it's
required, the current language is temporarily changed.
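With this change, a pattern like the following behaves as expected (a
minimal sketch; 'expr' is a stand-in for an expression needing C++
syntax, and restoring to 'auto' stands in for whatever save/restore
scheme a script prefers):
...
gdb.execute('set language c++')
val = gdb.parse_and_eval('expr')  # Evaluated using C++ syntax.
gdb.execute('set language auto')
...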
|
|
This commit brings all the changes made by running gdb/copyright.py
as per GDB's Start of New Year Procedure.
For the avoidance of doubt, all changes in this commit were
performed by the script.
|
|
This commit adds a new object type gdb.TargetConnection. This new
type represents a connection within GDB (a connection as displayed by
'info connections').
There are three ways to find a gdb.TargetConnection: there's a new
'gdb.connections()' function, which returns a list of all currently
active connections.
Or you can read the new 'connection' property on the gdb.Inferior
object type; this contains the connection for that inferior (or None
if the inferior has no connection, for example, if it has exited).
Finally, there's a new gdb.events.connection_removed event registry,
this emits a new gdb.ConnectionEvent whenever a connection is removed
from GDB (this can happen when all inferiors using a connection exit,
though this is not always the case, depending on the connection type).
The gdb.ConnectionEvent has a 'connection' property, which is the
gdb.TargetConnection being removed from GDB.
The gdb.TargetConnection has an 'is_valid()' method. A connection
object becomes invalid when the underlying connection is removed from
GDB (as discussed above, this might be when all inferiors using a
connection exit, or it might be when the user explicitly replaces a
connection in GDB by issuing another 'target' command).
The gdb.TargetConnection has the following read-only properties:
'num': The number for this connection.
'type': The type of the connection, e.g. 'native', 'remote', 'sim', etc.
'description': The longer description as seen in the 'info
connections' command output.
'details': A string or None. Extra details for the connection, for
example, a remote connection's details might be
'hostname:port'.
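Putting the above together, a short sketch of the new API (the
handler body is illustrative only):
...
def connection_removed_handler(event):
    conn = event.connection
    print("connection %d (%s) removed" % (conn.num, conn.type))

gdb.events.connection_removed.connect(connection_removed_handler)

for conn in gdb.connections():
    print(conn.num, conn.type, conn.description, conn.details)

print(gdb.selected_inferior().connection)
...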
|