|
Like the previous commit, I discovered that in micmdpy_set_installed
we were calling PyObject_IsTrue, but not checking for a possible error
value being returned.
The micmdpy_set_installed function implements the
gdb.MICommand.installed attribute, and the documentation indicates that
this attribute should only be assigned a bool:
  This attribute is read-write, setting this attribute to 'False'
  will uninstall the command, removing it from the set of available
  commands. Setting this attribute to 'True' will install the
  command for use.
So I propose that instead of using PyObject_IsTrue we use
PyBool_Check, and if the new value fails this check we raise an
error. We can then compare the new value to Py_True directly instead
of calling PyObject_IsTrue.
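As a rough user-level sketch of the new behaviour (the exact exception
type raised for a non-bool value is an assumption, not taken from this
patch), the change looks something like this from Python:

  class echo_demo(gdb.MICommand):
      # Hypothetical command, used only to demonstrate 'installed'.
      def invoke(self, argv):
          return { 'args': argv }

  cmd = echo_demo("-echo-demo")

  cmd.installed = False    # still valid: uninstalls the command
  cmd.installed = True     # still valid: reinstalls the command

  try:
      cmd.installed = 1    # previously treated as "true"; now rejected
  except Exception as exc: # exact error type and message assumed
      print(type(exc).__name__, exc)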
This is a potentially breaking change to the Python API, but hopefully
this will not impact too many people, and the fix is pretty
easy (switch to using a bool). I've added a NEWS entry to draw
attention to this change.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
PR python/32163 points out that various types provided by gdb are not
added to the gdb module, so they aren't available for interactive
inspection. I think this is just an oversight.
This patch fixes the problem by introducing a new helper function that
both readies the type and then adds it to the appropriate module. The
patch also poisons PyType_Ready, the idea being to avoid this bug in
the future.
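As a small sketch of the interactive inspection the bug report asks
for (gdb.Architecture is used only because it is mentioned below; the
full list of affected types is in the bug):

  # Inside GDB's Python interpreter.
  import gdb

  # Before the fix the type existed and could be handed out by the API,
  # but the type object itself was not an attribute of the gdb module.
  print(hasattr(gdb, "Architecture"))   # expected: True after this patch
  print(gdb.Architecture)               # e.g. <class 'gdb.Architecture'>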
v2:
* Fixed a bug in original patch in gdb.Architecture registration
* Added regression test for the types mentioned in the bug
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32163
Reviewed-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
|
|
Now that defs.h, server.h and common-defs.h are included via the
`-include` option, it is no longer necessary for source files to include
them. Remove all the inclusions of these files I could find. Update
the generation scripts where relevant.
Change-Id: Ia026cff269c1b7ae7386dd3619bc9bb6a5332837
Approved-By: Pedro Alves <pedro@palves.net>
|
|
This commit is the result of the following actions:
- Running gdb/copyright.py to update all of the copyright headers to
include 2024,
- Manually updating a few files the copyright.py script told me to
update, these files had copyright headers embedded within the
file,
- Regenerating gdbsupport/Makefile.in to refresh its copyright
date,
- Using grep to find other files that still mentioned 2023. If
these files were updated last year from 2022 to 2023 then I've
updated them this year to 2024.
I'm sure I've probably missed some dates. Feel free to fix them up as
you spot them.
|
|
This commit generalizes serialize_mi_result() to make it usable in
contexts other than printing the result of a custom MI command.
To do so, the check of whether the passed Python object is a dictionary
has been moved to the caller - at the very least, different uses require
different error messages. The function has also been renamed to
serialize_mi_results() to better match the GDB/MI output syntax (see the
corresponding section in the documentation, in particular the rules
'result-record' and 'async-output').
Since it is now a more generic function, it has been moved to py-mi.c.
This is a preparation for implementing Python support for sending custom
MI async events.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This changes mi_parse::command to be a unique_xmalloc_ptr and fixes up
all the uses. This avoids some manual memory management. std::string
is not used here due to how the Python API works -- this approach
avoids an extra copy there.
Reviewed-by: Keith Seitz <keiths@redhat.com>
|
|
This changes mi_parse_argv to be a method of mi_parse. This is just a
minor cleanup.
|
|
This changes mi_parse::args to be a private member, retrieved via
accessor. It also changes this member to be a std::string. This
makes it simpler for a subsequent patch to implement different
behavior for argument parsing.
|
|
If an MI command written in Python includes a number in its output,
currently that is simply emitted as a string. However, it's
convenient for a later patch if these are emitted using field_signed.
This does not make a difference to ordinary MI clients.
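A hedged sketch of what this means at the Python level (the command
name and field here are made up for illustration):

  class count_threads(gdb.MICommand):
      def invoke(self, argv):
          # An integer value in the result dictionary ...
          return { 'count': len(gdb.selected_inferior().threads()) }

  count_threads("-count-threads-demo")

  # ... is still rendered the same way for MI clients, e.g.:
  #   -count-threads-demo
  #   ^done,count="1"
  # but internally the value now goes through field_signed rather than
  # being converted to a string first.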
|
|
Currently, when we add a new python sub-system to GDB,
e.g. py-inferior.c, we end up having to create a new function like
gdbpy_initialize_inferior, which then has to be called from the
function do_start_initialization in python.c.
In some cases (py-micmd.c and py-tui.c), we have two functions
gdbpy_initialize_*, and gdbpy_finalize_*, with the second being called
from finalize_python which is also in python.c.
This commit proposes a mechanism to manage these initialization and
finalization calls. This means that adding a new Python sub-system will
no longer require changes to python.c or python-internal.h; instead,
the initialization and finalization functions will be registered
directly from the sub-system file, e.g. py-inferior.c or py-micmd.c.
The initialization and finalization functions are managed through a
new class gdbpy_initialize_file in python-internal.h. This class
contains a single global vector of all the initialization and
finalization functions.
In each Python sub-system we create a new gdbpy_initialize_file
object; the object's constructor takes care of registering the two
callback functions.
Now from python.c we can call static functions on the
gdbpy_initialize_file class which take care of walking the callback
list and invoking each callback in turn.
To slightly simplify the Python sub-system files I added a new macro
GDBPY_INITIALIZE_FILE, which hides the need to create an object. We
can now just do this:
GDBPY_INITIALIZE_FILE (gdbpy_initialize_registers);
One possible problem with this change is that there is now no
guaranteed ordering of how the various sub-systems are initialized (or
finalized). To try to avoid dependencies creeping in, I have added a
use of the environment variable GDB_REVERSE_INIT_FUNCTIONS; this is
the same environment variable used in the generated init.c file.
Just like with init.c, when this environment variable is set we
reverse the list of Python initialization (and finalization)
functions. As there is already a test that starts GDB with the
environment variable set, this should offer some level of
protection against dependencies creeping in - though for full
protection I guess we'd need to run all gdb.python/*.exp tests with
the variable set.
I have tested this patch with the environment variable set, and saw no
regressions, so I think we are fine right now.
One other change of note was for gdbpy_initialize_gdb_readline, this
function previously returned void. In order to make this function
have the correct signature I've updated its return type to int, and we
now return 0 to indicate success.
All of the other initialize (and finalize) functions have been made
static within their respective sub-system files.
There should be no user visible changes after this commit.
|
|
Replace spaces with tabs in a bunch of places.
Change-Id: If0f87180f1d13028dc178e5a8af7882a067868b0
|
|
This commit is the result of running the gdb/copyright.py script,
which automated the update of the copyright year range for all
source files managed by the GDB project to include year 2023.
|
|
Now that filtered and unfiltered output can be treated identically, we
can unify the printf family of functions. This is done under the name
"gdb_printf". Most of this patch was written by script.
|
|
New in this version:
- Rebase on master, fix a few more issues that appeared.
python-internal.h contains a number of macros that helped make the code
work with both Python 2 and 3. Remove them and adjust the code to use
the Python 3 functions.
Change-Id: I99a3d80067fb2d65de4f69f6473ba6ffd16efb2d
|
|
The motivation for this patch is the fact that py-micmd.c doesn't build
with Python 2, due to PyDict_GetItemWithError being a Python 3-only
function:
CXX python/py-micmd.o
/home/smarchi/src/binutils-gdb/gdb/python/py-micmd.c: In function ‘int micmdpy_uninstall_command(micmdpy_object*)’:
/home/smarchi/src/binutils-gdb/gdb/python/py-micmd.c:430:20: error: ‘PyDict_GetItemWithError’ was not declared in this scope; did you mean ‘PyDict_GetItemString’?
430 | PyObject *curr = PyDict_GetItemWithError (mi_cmd_dict.get (),
| ^~~~~~~~~~~~~~~~~~~~~~~
| PyDict_GetItemString
A first solution to fix this would be to try to replace
PyDict_GetItemWithError with equivalent Python 2 code. But I looked at why
we are doing this in the first place: it is to maintain the
`gdb._mi_commands` Python dictionary that we use as a `name ->
gdb.MICommand object` map. Since the `gdb._mi_commands` dictionary is
never actually used in Python, it seems like a lot of trouble to use a
Python object for this.
My first idea was to replace it with a C++ map
(std::unordered_map<std::string, gdbpy_ref<micmdpy_object>>). While
implementing this, I realized we don't really need this map at all. The
mi_command_py objects registered in the main MI command table can own
their backing micmdpy_object (that's a gdb.MICommand, but seen from the
C++ code). To know whether an mi_command is an mi_command_py, we can
use a dynamic cast. Since there's one less data structure to maintain,
there are fewer chances of messing things up.
- Change mi_command_py::m_pyobj to a gdbpy_ref, the mi_command_py is
now what keeps the MICommand alive.
- Set micmdpy_object::mi_command in the constructor of mi_command_py.
If mi_command_py manages setting/clearing that field in
swap_python_object, I think it makes sense that it also takes care of
setting it initially.
- Move a bunch of checks from micmdpy_install_command to
swap_python_object, and make them gdb_asserts.
- In micmdpy_install_command, start by doing an mi_cmd_lookup. This is
needed to know whether there's a Python MI command already registered
with that name. But we can already tell if there's a non-Python
command registered with that name. Return an error if that happens,
rather than waiting for insert_mi_cmd_entry to fail. Change the
error message to "name is already in use" rather than "may already be
in use", since it's more precise.
I asked Andrew about the original intent of using a Python dictionary
object to hold the command objects. The reason was to make sure the
objects get destroyed when the Python runtime gets finalized, not later.
Holding the objects in global C++ data structures and not doing anything
more means that the held Python objects will be decref'd after the
Python interpreter has been finalized. That's not desirable. I tried
it and it indeed segfaults.
Handle this by adding a gdbpy_finalize_micommands function called in
finalize_python. This is the mirror of gdbpy_initialize_micommands
called in do_start_initialization. In there, delete all Python MI
commands. I think it makes sense to do it this way: if it was somehow
possible to unload Python support from GDB in the middle of a session
we'd want to unregister any Python MI command. Otherwise, these MI
commands would be backed with a stale PyObject or simply nothing.
Delete tests that were related to `gdb._mi_commands`.
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: I060d5ebc7a096c67487998a8a4ca1e8e56f12cd3
|
|
GDB notifies users about user selected thread changes somewhat
inconsistently as mentioned on gdb-patches mailing list here:
https://sourceware.org/pipermail/gdb-patches/2022-February/185989.html
Consider GDB debugging a multi-threaded inferior with both CLI and GDB/MI
interfaces connected to separate terminals.
Assuming the inferior is stopped and thread 1 is selected, when thread
2 is selected using the '-thread-select 2' command on the GDB/MI terminal:
-thread-select 2
^done,new-thread-id="2",frame={level="0",addr="0x00005555555551cd",func="child_sub_function",args=[],file="/home/jv/Projects/gdb/users_jv_patches/gdb/testsuite/gdb.mi/user-selected-context-sync.c",fullname="/home/uuu/gdb/gdb/testsuite/gdb.mi/user-selected-context-sync.c",line="30",arch="i386:x86-64"}
(gdb)
and on CLI terminal we get the notification (as expected):
[Switching to thread 2 (Thread 0x7ffff7daa640 (LWP 389659))]
#0 child_sub_function () at /home/uuu/gdb/gdb/testsuite/gdb.mi/user-selected-context-sync.c:30
30 volatile int dummy = 0;
However, now that thread 2 is selected, if thread 1 is then selected
using the '-thread-select --thread 1 1' command on the GDB/MI terminal:
-thread-select --thread 1 1
^done,new-thread-id="1",frame={level="0",addr="0x0000555555555294",func="main",args=[],file="/home/jv/Projects/gdb/users_jv_patches/gdb/testsuite/gdb.mi/user-selected-context-sync.c",fullname="/home/jv/Projects/gdb/users_jv_patches/gdb/testsuite/gdb.mi/user-selected-context-sync.c",line="66",arch="i386:x86-64"}
(gdb)
but no notification is printed on CLI terminal, despite the fact
that user selected thread has changed.
The problem is that when `-thread-select --thread 1 1` is executed,
the thread is switched to thread 1 before mi_cmd_thread_select () is
called; therefore the condition "inferior_ptid != previous_ptid"
there does not hold.
To address this problem, we have to move the notification logic up to
mi_cmd_execute (), where the --thread option is processed, and notify
user selected context observers there if the context changes.
However, this in itself breaks GDB/MI because it would cause the context
notification to be sent on the MI channel. This is because by the time
we notify, MI notification suppression has already been restored (done in
mi_command::invoke()). Therefore we had to lift the notification
suppression logic up to mi_cmd_execute () as well. This change made the
distinction between mi_command::invoke() and mi_command::do_invoke()
unnecessary, as all mi_command::invoke() did (after the change) was call
do_invoke(). So this patch removes do_invoke() and moves the command
execution logic directly to invoke().
With this change, all gdb.mi tests pass, tested on x86_64-linux.
Co-authored-by: Andrew Burgess <aburgess@redhat.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=20631
|
|
This commit allows a user to create custom MI commands using Python
similarly to what is possible for Python CLI commands.
A new subclass of mi_command is defined for Python MI commands,
mi_command_py. A new file, gdb/python/py-micmd.c contains the logic
for Python MI commands.
This commit is based on work linked to from this mailing list thread:
https://sourceware.org/pipermail/gdb/2021-November/049774.html
Which has also been previously posted to the mailing list here:
https://sourceware.org/pipermail/gdb-patches/2019-May/158010.html
And was recently reposted here:
https://sourceware.org/pipermail/gdb-patches/2022-January/185190.html
The version in this patch takes some core code from the previously
posted patches, but also has some significant differences, especially
after the feedback given here:
https://sourceware.org/pipermail/gdb-patches/2022-February/185767.html
A new MI command can be implemented in Python like this:
  class echo_args(gdb.MICommand):
      def invoke(self, args):
          return { 'args': args }

  echo_args("-echo-args")
The 'args' parameter (to the invoke method) is a list
containing (almost) all command line arguments passed to the MI
command (--thread and --frame are handled before the Python code is
called, and removed from the args list). This list can be empty if
the MI command was passed no arguments.
When used within gdb the above command produced output like this:
(gdb)
-echo-args a b c
^done,args=["a","b","c"]
(gdb)
The 'invoke' method of the new command must return a dictionary. The
keys of this dictionary are then used as the field names in the mi
command output (e.g. 'args' in the above).
The values of the result returned by invoke can be dictionaries,
lists, iterators, or an object that can be converted to a string.
These are processed recursively to create the mi output. And so, this
is valid:
  class new_command(gdb.MICommand):
      def invoke(self, args):
          return { 'result_one': { 'abc': 123, 'def': 'Hello' },
                   'result_two': [ { 'a': 1, 'b': 2 },
                                   { 'c': 3, 'd': 4 } ] }
Which produces output like:
(gdb)
-new-command
^done,result_one={abc="123",def="Hello"},result_two=[{a="1",b="2"},{c="3",d="4"}]
(gdb)
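The iterator case is not shown above; a minimal sketch of it (the
exact output is inferred from the list example, so treat it as
approximate):

  class gen_command(gdb.MICommand):
      def invoke(self, args):
          # A generator is an iterator, so it should also be accepted as
          # a value and rendered as an MI list.
          return { 'squares': (n * n for n in range(3)) }

  gen_command("-gen-command")

  # Expected output, approximately:
  #   -gen-command
  #   ^done,squares=["0","1","4"]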
I have required that the field names used in mi result output must
match the regexp: "^[a-zA-Z][-_a-zA-Z0-9]*$" (without the quotes).
This restriction was never written down anywhere before, but seems
sensible to me, and we can always loosen this rule later if it proves
to be a problem. Much harder to try and add a restriction later, once
people are already using the API.
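Since the rule is just a regexp, it can be checked with plain Python;
a small self-contained sketch of which field names the stated rule
accepts:

  import re

  FIELD_NAME_RE = re.compile(r"^[a-zA-Z][-_a-zA-Z0-9]*$")

  for name in ("args", "result_one", "has-children", "2fast", "bad name"):
      print(name, "->", bool(FIELD_NAME_RE.match(name)))

  # args -> True
  # result_one -> True
  # has-children -> True
  # 2fast -> False
  # bad name -> False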
What follows are some details about how this implementation differs
from the original patch that was posted to the mailing list.
In this patch, I have changed how the lifetime of the Python
gdb.MICommand objects is managed. In the original patch, these objects
were kept alive by an owned reference within the mi_command_py object.
As such, the Python object would not be deleted until the
mi_command_py object itself was deleted.
This caused a problem: the mi_command_py objects were held in the global
mi command table (in mi/mi-cmds.c), which, as a global, was not cleared
until program shutdown. By this point the Python interpreter has
already been shut down. Attempting to delete the mi_command_py object
at this point was causing GDB to try and invoke Python code after
finalising the Python interpreter, and we would crash.
To work around this problem, the original patch added code in
python/python.c that would search the mi command table, and delete the
mi_command_py objects before the Python environment was finalised.
In contrast, in this patch, I have added a new global dictionary to
the gdb module, gdb._mi_commands. We already have several such global
data stores related to pretty printers, and frame unwinders.
The MICommand objects are placed into the new gdb._mi_commands
dictionary, and it is this reference that keeps the objects alive.
When GDB's Python interpreter is shut down gdb._mi_commands is deleted,
and any MICommand objects within it are deleted at this point.
This change avoids having to make the mi_cmd_table global, and walk
over it from within GDB's python related code.
This patch handles command redefinition entirely within GDB's python
code, though this does impose one small restriction which is not
present in the original code (detailed below); I don't think this is a
big issue. However, the original patch relied on being able to
finish executing the mi_command::do_invoke member function after the
mi_command object had been deleted. Though continuing to execute a
member function after an object is deleted is well defined, it is
also (IMHO) risky; it's too easy for someone to later add a use of the
object without realising that the object might sometimes have been
deleted. The new patch avoids this issue.
The one restriction that is added to avoid this is that an MICommand
object can't be reinitialised with a different command name, so:
(gdb) python cmd = MyMICommand("-abc")
(gdb) python cmd.__init__("-def")
can't reinitialize object with a different command name
This feels like a pretty weird edge case, and I'm happy to live with
this restriction.
I have also changed how the memory is managed for the command name.
In the most recently posted patch series, the command name is moved
into a subclass of mi_command, the python mi_command_py; this subclass,
which inherits from mi_command, is then free to use a smart pointer to
manage the memory for the name.
In this patch, I leave the mi_command class unchanged, and instead
hold the memory for the name within the Python object, as the lifetime
of the Python object always exceeds that of the C++ object stored in the
mi_cmd_table. This adds a little more complexity in py-micmd.c, but
leaves the mi_command class nice and simple.
Next, this patch adds some extra functionality: there's a
MICommand.name read-only attribute containing the name of the command,
and a read-write MICommand.installed attribute that can be used to
install (make the command available for use) and uninstall (remove the
command from the mi_cmd_table so it can't be used) the command. This
attribute will be automatically updated if a second command replaces
an earlier command.
This patch adds additional error handling, and makes more use of the
gdbpy_handle_exception function.
Co-Authored-By: Jan Vrany <jan.vrany@labware.com>
|