This patch introduces a new macro, INIT_GDB_FILE. This is used to
replace the current "_initialize_" idiom when introducing a per-file
initialization function. That is, rather than write:
void _initialize_something ();
void
_initialize_something ()
{
...
}
... now you would write:
INIT_GDB_FILE (something)
{
...
}
The macro handles both the declaration and definition of the function.
The point of this approach is that it makes it harder to accidentally
cause an initializer to be omitted; see commit 2711e475 ("Ensure
cooked_index_entry self-tests are run"). Specifically, the regexp now
used by make-init-c seems harder to trick.
New in v2: undid some erroneous changes made by the script.
The bulk of this patch was written by script.
Regression tested on x86-64 Fedora 41.
|
|
A user pointed out that DAP allows the "threads" request to work when
the inferior is running. This is documented in the overview, not the
specification.
While looking into this, I found a few other issues:
* The _thread_name function was not marked @in_gdb_thread.
This isn't very important but is still an oversight.
* DAP requires all threads to have a name -- the field is not optional
in the "Thread" type.
* There was no test examining events resulting from the inferior
printing to stdout.
This patch fixes all these problems.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33080
|
|
This commit adds a new gdb.warning() function. This function takes a
string and then calls GDB's internal warning() function. This will
display the string as a warning.
Using gdb.warning() means that the message will get the new emoji
prefix if the user has that feature turned on. Also, the message will
be sent to gdb.STDERR without the user having to remember to print to
the correct stream.
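For example, a minimal sketch of how an extension might use it (the
message text here is purely illustrative):

import gdb

gdb.warning("symbol cache looks stale; results may be incomplete")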
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Makes it possible to set and remove other types of breakpoints while the
process is running. Makes debugging more convenient.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed a minor grammar issue in a comment in DAP.
|
|
In gdbpy_parse_command_name (python/py-cmd.c) there is a call to
xmalloc that can easily be replaced with a call to
make_unique_xstrndup, which makes the code easier to read (I think).
In gdbscm_parse_command_name (guile/scm-cmd.c) the same fix can be
applied to remove an identical xmalloc call. And there is an
additional xmalloc call, which can also be replaced with
make_unique_xstrndup in the same way.
The second xmalloc call in gdbscm_parse_command_name was also present
in gdbpy_parse_command_name at one point, but was replaced with a use
of std::string by this commit:
commit 075c55e0cc0a68eeab777027213c2f545618e844
Date: Wed Dec 26 11:05:57 2018 -0700
Remove more calls to xfree from Python
I haven't changed the gdbscm_parse_command_name to use std::string
though, as that doesn't work well with the guile exception model.
Guile exceptions work by performing a longjmp from the function that
raises the exception, back to the guile run-time. The consequence of
this is that destructors are not run. For example, if
gdbscm_parse_command_name calls gdbscm_out_of_range_error, then any
function local objects in gdbscm_parse_command_name will not have
their destructors called.
What this means is that, for the existing `result` and `prefix_text`
locals, any allocated memory managed by these objects will be leaked
if an exception is thrown. However, fixing this is pretty easy; one
way is to just assign nullptr to these locals before raising the
exception, which would cause the allocated memory to be released.
But for std::string it is harder to ensure that the managed memory has
actually been released. We can call std::string::clear() and then
maybe std::string::shrink_to_fit(), but this is still not guaranteed
to release any managed memory. In fact, I believe the only way to
ensure all managed memory is released, is to call the std::string
destructor.
And so, for functions that can throw a guile exception, it is easier
to just avoid std::string.
As for the memory leak that I identified above: I'll fix that in a
follow-on commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
A commit I recently pushed:
commit 0b5023cc71d3af8b18e10e6599a3f9381bc15265
Date: Sat Apr 12 09:15:53 2025 +0100
gdb/python/guile: user created prefix commands get help list
can trigger a segfault if a user tries to create nested prefix
commands. For example, this will trigger a crash:
(gdb) python gdb.ParameterPrefix("prefix-1", gdb.COMMAND_NONE)
(gdb) python gdb.ParameterPrefix("prefix-1 prefix-2", gdb.COMMAND_NONE)
Fatal signal: Segmentation fault
... etc ...
If the user adds an actual parameter under 'prefix-1' before creating
'prefix-2', then everything is fine:
(gdb) python gdb.ParameterPrefix("prefix-1", gdb.COMMAND_NONE)
(gdb) python gdb.Parameter('prefix-1 param-1', gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
(gdb) python gdb.ParameterPrefix("prefix-1 prefix-2", gdb.COMMAND_NONE)
The mistake in the above patch is in how gdbpy_parse_command_name is
used. The BASE_LIST output argument from this function points to the
list of commands for the prefix, not to the prefix command itself.
So when gdbpy_parse_command_name is called for 'prefix-1 prefix-2',
BASE_LIST points to the list of commands associated with 'prefix-1',
not to the actual 'prefix-1' cmd_list_element.
Back in cmdpy_init, from where gdbpy_parse_command_name was called, I
was walking back from the first entry in BASE_LIST to figure out if
this was a "show" prefix command or not. However, if BASE_LIST is
empty then there is no first item, and this would trigger the
segfault.
The solution is to extend gdbpy_parse_command_name to also return the
prefix cmd_list_element in addition to the existing values. With this
done, and cmdpy_init updated, the segfault is now avoided.
There's a new test that would trigger the crash without the patch.
And, of course, the above commit also broke guile in the exact same
way. And the fix is exactly the same. And there's a guile test too.
NOTE: We should investigate possibly sharing some of this boilerplate
helper code between Python and Guile. But not in this commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Consider GDB's builtin set/show prefix commands; if they are invoked
with no sub-command name then they work like this:
(gdb) show print
print address: Printing of addresses is on.
print array: Pretty formatting of arrays is off.
print array-indexes: Printing of array indexes is off.
print asm-demangle: Demangling of C++/ObjC names in disassembly listings is off.
... cut lots of lines ...
(gdb) set print
List of set print subcommands:
set print address -- Set printing of addresses.
set print array -- Set pretty formatting of arrays.
set print array-indexes -- Set printing of array indexes.
set print asm-demangle -- Set demangling of C++/ObjC names in disassembly listings.
... cut lots of lines ...
Type "help set print" followed by set print subcommand name for full documentation.
Type "apropos word" to search for commands related to "word".
Type "apropos -v word" for full documentation of commands related to "word".
Command name abbreviations are allowed if unambiguous.
(gdb)
That is, 'show print' lists the values of all settings under the
'print' prefix, and 'set print' lists the help text for all settings
under the 'set print' prefix.
Now, if we try to create something similar using the Python API:
(gdb) python gdb.ParameterPrefix("my-prefix", gdb.COMMAND_NONE)
(gdb) python gdb.Parameter("my-prefix foo", gdb.COMMAND_OBSCURE, gdb.PARAM_BOOLEAN)
(gdb) show my-prefix
(gdb) set my-prefix
Neither 'show my-prefix' nor 'set my-prefix' gives us the same details
relating to the sub-commands that we get with the builtin prefix
commands.
This commit aims to address this.
Currently, in cmdpy_init, when a new command is created, we always set
the command's callback function to cmdpy_function. It is within
cmdpy_function that we spot that the command is a prefix command, and
that there is no gdb.Command.invoke method, and so return early.
This commit changes things so that the rules are now:
1. For NON prefix commands, we continue to use cmdpy_function.
2. For prefix commands that do have a gdb.Command.invoke
method (i.e. can handle unknown sub-commands), continue to use
cmdpy_function.
3. For all other prefix commands, don't use cmdpy_function, instead
use GDB's normal callback function for set/show prefixes.
This requires splitting the current call to add_prefix_cmd into either
a call to add_prefix_cmd, add_show_prefix_cmd, or
add_basic_prefix_cmd, as appropriate.
After these changes, we now see this:
(gdb) python gdb.ParameterPrefix("my-prefix", gdb.COMMAND_NONE)
(gdb) python gdb.Parameter("my-prefix foo", gdb.COMMAND_OBSCURE, gdb.PARAM_BOOLEAN)
(gdb) show my-prefix
my-prefix foo: The current value of 'my-prefix foo' is "off".
(gdb) set my-prefix
List of "set my-prefix" subcommands:
set my-prefix foo -- Set the current value of 'my-prefix foo'.
Type "help set my-prefix" followed by subcommand name for full documentation.
Type "apropos word" to search for commands related to "word".
Type "apropos -v word" for full documentation of commands related to "word".
Command name abbreviations are allowed if unambiguous.
(gdb)
Which matches how a prefix defined within GDB would act.
I have made the same changes to the Guile API.
|
|
In Ada, a field can have a dynamic bit offset in its enclosing record.
In DWARF 3, this was handled using a dynamic
DW_AT_data_member_location, combined with a DW_AT_bit_offset -- this
combination worked out ok because in practice GNAT only needs a
dynamic byte offset with a fixed offset within the byte.
However, this approach was deprecated in DWARF 4 and then removed in
DWARF 5. No replacement approach was given, meaning that in strict
mode there is no way to express this.
This is a DWARF bug, see
https://dwarfstd.org/issues/250501.1.html
In a discussion on the DWARF mailing list, a couple of people mentioned
that compilers could use the obvious extension of a dynamic
DW_AT_data_bit_offset. I've implemented this for LLVM:
https://github.com/llvm/llvm-project/pull/141106
In preparation for that landing, this patch implements support for
this construct in gdb.
New in v2: renamed some constants and added a helper method, per
Simon's review.
New in v3: more renamings.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
DAP requests have a "defer_stop_events" option that is intended to
defer the emission of any "stopped" event until after the current
request completes. This was needed to handle async continues like
"finish &".
However, I noticed that sometimes DAP tests can fail, because a stop
event does arrive before the response to the "stepOut" request. I've
only noticed this when the machine is fairly loaded -- for instance
when I'm regression-testing a series, it may occur in some of the
tests mid-series.
I believe the problem is that the implementation in the "request"
function is incorrect -- the flag is set when "request" is invoked,
but instead it must be deferred until the request itself is run. That
is, the setting must be captured in one of the wrapper functions.
Following up on this, Simon pointed out that introducing a delay
before sending a request's response will cause test case failures.
That is, there's a race here that is normally hidden.
Investigation showed that deferred requests can't force event
deferral. This patch implements this; but more testing showed many
more race failures. Some of these are due to how the test suite is
written.
Anyway, in the end I took the radical approach of deferring all events
by default. Most DAP requests are asynchronous by nature, so this
seemed ok. The only case I found that really requires an event to
arrive before the response is pause.exp, where the test (rightly)
expects to see a 'continued' event while performing an inferior
function call.
I went through all events and all requests and tried to convince
myself that this patch will cause acceptable behavior in every case.
However, it's hard to be completely sure about this approach. Maybe
there are cases that do still need an event before the response, but
we just don't have tests for them.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32685
Acked-By: Simon Marchi <simon.marchi@efficios.com>
|
|
At commit 34b0776fd73^, flake8 reports the following F405 warnings:
...
$ pre-commit run flake8 --file gdb/python/lib/gdb/__init__.py
flake8...................................................................Failed
- hook id: flake8
- exit code: 1
F405 'flush' may be undefined, or defined from star imports: _gdb
F405 'write' may be undefined, or defined from star imports: _gdb
F405 'STDOUT' may be undefined, or defined from star imports: _gdb
F405 'STDERR' may be undefined, or defined from star imports: _gdb
...
F405 'selected_inferior' may be undefined, or defined from star imports: _gdb
F405 'execute' may be undefined, or defined from star imports: _gdb
F405 'parameter' may be undefined, or defined from star imports: _gdb
...
The F405s are addressed by commit 34b0776fd73 ('Suppress some "undefined"
warnings from flake8').
The problem indicated by the first F405 is that the use of flush here:
...
class _GdbFile(object):
    ...
    def flush(self):
        flush(stream=self.stream)
...
cannot be verified by flake8. It concludes that either flush is undefined,
or it is defined by this "star import":
...
from _gdb import * # noqa: F401,F403
...
In this particular case, indeed flush is defined by the star import.
This can be addressed by simply adding:
...
flush(stream=self.stream) # noqa: F405
...
but that only has an effect for flake8, so other analyzers may report the same
problem.
The commit 34b0776fd73 addresses it instead by adding an "import _gdb" and
adding a "_gdb." prefix:
...
_gdb.flush(stream=self.stream)
...
This introduces a second way to specify _gdb names, but the first one still
remains, and occasionally someone will use the first one, which then requires
fixing once flake8 is run [1].
While this works to silence the warnings, there is a problem: if a developer
makes a typo:
...
_gdb.flash(stream=self.stream)
...
this is not detected by flake8.
This matters because although the python import already complains:
...
$ gdb -q -batch -ex "python import gdb"
Exception ignored in: <gdb._GdbFile object at 0x7f6186d4d7f0>
Traceback (most recent call last):
  File "__init__.py", line 63, in flush
    _gdb.flash(stream=self.stream)
AttributeError: module '_gdb' has no attribute 'flash'
...
that doesn't trigger if the code is hidden behind some control flow:
...
if _var_mostly_false:
    flash(stream=self.stream)
...
Instead, fix the F405s by reverting commit 34b0776fd73 and adding a second
import of _gdb alongside the star import which lists the names used locally:
...
from _gdb import * # noqa: F401,F403
+from _gdb import (
+    STDERR,
+    STDOUT,
+    Command,
+    execute,
+    flush,
+    parameter,
+    selected_inferior,
+    write,
+)
...
This gives the following warnings for the flash typo:
...
31:1: F401 '_gdb.flush' imported but unused
70:5: F811 redefinition of unused 'flush' from line 31
71:9: F405 'flash' may be undefined, or defined from star imports: _gdb
...
The benefits of this approach compared to the previous one are that:
- the typo is noticed, and
- when using a new name, the F405 fix needs to be done once (by adding it to
the explicit import list), while previously the fix had to be applied to
each use (by adding the "_gdb." prefix).
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
[1] Commit 475799b692e ("Fix some pre-commit nits in gdb/__init__.py")
|
|
I believe we previously agreed that the minimum supported Python
version should be 3.4. This patch makes this change, harmonizing the
documentation (which was inconsistent about the minimum version) and
the code.
New in v2: rebased, and removed a pre-3.4 workaround from __init__.py.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-by: Kevin Buettner <kevinb@redhat.com>
Acked-By: Tom de Vries <tdevries@suse.de>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31870
|
|
When a DAP completion request receives an empty string to complete,
the script crashes due to trying to access element -1 of the list
returned by `text.splitlines()` (which for `text == ""` evaluates
to an empty list).
This patch adds a simple check for whether `text` is populated; when
it is not, the transformations are skipped and the correct result is
assigned directly.
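A minimal sketch of the guard described above (hypothetical variable
names; the real DAP completions code differs in its details):

if text:
    # Only the last line of a possibly multi-line expression matters
    # for completion.
    last_line = text.splitlines()[-1]
else:
    # splitlines() on "" returns [], so indexing [-1] would raise;
    # just complete the empty string instead.
    last_line = ""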
Approved-By: Tom Tromey <tom@tromey.com>
|
|
It conflicts with the ldirname function that will be added in the next
libiberty sync.
|
|
I noticed that pre-commit has some complaints (flake8 and codespell)
about gdb/__init__.py. This patch fixes these.
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
This commit adds a new gdb.ParameterPrefix class to GDB's Python API.
When creating multiple gdb.Parameters, it is often desirable to group
these together under a sub-command, for example, 'set print' has lots
of parameters nested under it, like 'set print address', and 'set
print symbol'. In the Python API the 'print' parts of these commands
are called prefix commands, and are created using gdb.Command objects.
However, as parameters are set via the 'set ....' command list, and
shown through the 'show ....' command list, creating a prefix for a
parameter usually requires two prefix commands to be created, one for
the 'set' command, and one for the 'show' command.
This often leads to some duplication, or at the very least, each user
will end up creating their own helper class to simplify creation of
the two prefix commands.
This commit adds a new gdb.ParameterPrefix class. Creating a single
instance of this class will create both the 'set' and 'show' prefix
commands, which can then be used while creating the gdb.Parameter.
Here is an example of it in use:
gdb.ParameterPrefix('my-prefix', gdb.COMMAND_NONE)
This adds 'set my-prefix' and 'show my-prefix', both of which are
prefix commands. The user can then add gdb.Parameter objects under
these prefixes.
The gdb.ParameterPrefix initialise method also supports documentation
strings, so we can write:
gdb.ParameterPrefix('my-prefix', gdb.COMMAND_NONE,
"Configuration setting relating to my special extension.")
which will set the documentation string for the prefix command.
Also, it is possible to support prefix commands that use the `invoke`
functionality to handle unknown sub-commands. This is done by
sub-classing gdb.ParameterPrefix and overriding either 'invoke_set' or
'invoke_show' to handle the 'set' or 'show' prefix command
respectively.
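For example (a sketch; it assumes invoke_set and invoke_show receive
the usual args and from_tty arguments):

class MyPrefix(gdb.ParameterPrefix):
    def invoke_set(self, args, from_tty):
        # Called for "set my-prefix BLAH" when BLAH is not a known
        # sub-command.
        print("unknown 'set my-prefix' argument: %s" % args)

    def invoke_show(self, args, from_tty):
        # Called for "show my-prefix BLAH" when BLAH is not a known
        # sub-command.
        print("unknown 'show my-prefix' argument: %s" % args)

MyPrefix("my-prefix", gdb.COMMAND_NONE)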
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
I was recently attempting to create some parameters via the Python
API. I wanted these parameters to appear similar to how GDB handles
the existing 'style' parameters.
Specifically, I was interested in this behaviour:
(gdb) help show style filename foreground
Show the foreground color for this property.
(gdb) help set style filename foreground
Set the foreground color for this property.
(gdb)
Notice how each 'help' command only gets a single line of output.
I tried to reproduce this behaviour via the Python API but was unable to.
The problem is that, in order to get just a single line of output like
this, the style parameters are registered with a call to
add_setshow_color_cmd with the 'help_doc' being passed as nullptr.
On the Python side, when parameters are created, the 'help_doc' is
obtained with a call to get_doc_string (python/py-param.c). This
function either returns the __doc__ string, or a default string: "This
command is not documented.".
To avoid returning the default we could try setting __doc__ to an
empty string, but setting this field to any string means that GDB
prints a line for that string, like this:
class test_param(gdb.Parameter):
    __doc__ = ""
    def __init__(self, name):
        super ().__init__(name, gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
        self.value = True
test_param('print test')
Then in GDB:
(gdb) help set print test
Set the current value of 'print test'.

(gdb)
The blank line is the problem I'd like to solve.
This commit makes a couple of changes to how parameter doc strings are
handled.
If the doc string is set to an empty string, then GDB now converts
this to nullptr, which removes the blank line problem. The new
behaviour in GDB (for the above `test_param`) is:
(gdb) help set print test
Set the current value of 'print test'.
(gdb)
Next, I noticed that if the set/show docs are set to empty strings,
then the results are less than ideal:
class test_param(gdb.Parameter):
    set_doc = ""
    def __init__(self, name):
        super ().__init__(name, gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
        self.value = True
test_param('print test')
And in GDB:
(gdb) help set print test
This command is not documented.
(gdb)
So, if the set/show docs are the empty string, GDB now forces these to
be the default string instead. The new behaviour in GDB is:
(gdb) help set print test
Set the current value of 'print test'.
This command is not documented.
(gdb)
I've added some additional asserts; the set/show docs should always be
non-empty strings, which I believe is the case after this commit. And
the 'doc' string returned from get_doc_string should never be nullptr,
but could be empty.
There are new tests to cover all these changes.
|
|
The manual for gdb.Parameter says:
If NAME consists of multiple words, and no prefix parameter group
can be found, an exception is raised.
This makes sense; we cannot create a parameter within a prefix group,
if the prefix doesn't exist. And this almost works, so:
(gdb) python gdb.Parameter("xxx foo", gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
Python Exception <class 'RuntimeError'>: Could not find command prefix xxx.
Error occurred in Python: Could not find command prefix xxx.
The prefix 'xxx' doesn't exist, and we get an error. But, if we try
multiple levels of prefix:
(gdb) python gdb.Parameter("print xxx foo", gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
This completes without error; however, we didn't get what we were
maybe expecting:
(gdb) show print xxx foo
Undefined show print command: "xxx foo". Try "help show print".
But we did get:
(gdb) show print foo
The current value of 'print foo' is "off".
GDB stopped scanning the prefix string at the unknown 'xxx', and just
created the parameter there. I don't think this makes sense, nor is
it in line with the manual.
An identical problem exists with gdb.Command creation; GDB stops
parsing the prefix at the first unknown prefix, and just creates the
command there. The manual for gdb.Command says:
NAME is the name of the command. If NAME consists of multiple
words, then the initial words are looked for as prefix commands.
In this case, if one of the prefix commands does not exist, an
exception is raised.
So again, the correct action is, I believe, to raise an exception.
The problem is in gdbpy_parse_command_name (python/py-cmd.c), GDB
calls lookup_cmd_1 to look through the prefix string and return the
last prefix group. If the very first prefix word is invalid then
lookup_cmd_1 returns NULL, and this case is handled. However, if
there is a valid prefix, followed by an invalid prefix, then
lookup_cmd_1 will return a pointer to the last valid prefix list, and
will update the input argument to point to the start of the invalid
prefix word. This final case, where the input is left pointing to an
unknown prefix, was previously not handled.
I've fixed gdbpy_parse_command_name, and added tests for command and
parameter creation to cover this case.
The exact same error is present in the guile API too. The guile
documentation for make-parameter and make-command says the same things
about unknown prefixes resulting in an exception, but the same error
is present in gdbscm_parse_command_name (guile/scm-cmd.c), so I've
fixed that too, and added some tests.
|
|
The documentation for the Source interface says
* The path of the source to be shown in the UI.
* It is only used to locate and load the content of the source if no
* `sourceReference` is specified (or its value is 0).
but the code used `path` first. I fixed it to use `sourceReference` first.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that the gdb.Color.escape_sequence() method would produce an
escape sequence even when styling is disabled.
I think this is the wrong choice. Ideally, when styling is
disabled (e.g. with 'set style enabled off'), GDB should not be
producing styled output.
If a GDB extension is using gdb.Color to apply styling to the output,
then currently, the extension should be checking 'show style enabled'
any time Color.escape_sequence() is used. This means lots of code
duplication, and the possibility that some locations will be missed,
which means disabling styling no longer does what it says.
I propose that Color.escape_sequence() should return the empty string
if styling is disabled. A Python extension can then do:
python
c_none = gdb.Color('none')
c_red = gdb.Color('red')
print(c_red.escape_sequence(True)
      + "Text in red."
      + c_none.escape_sequence(True))
end
If styling is enabled this will print some red text. And if styling is
disabled, then it will print text in the terminal's default color.
I have applied the same fix to the guile API.
I have extended the tests to cover this case.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Using the trigger patch described in the previous commit, I get:
...
$ gdb
(gdb) <q>error detected on stdin
Fatal signal: Segmentation fault
----- Backtrace -----
0x64c7b3 gdb_internal_backtrace_1
/data/vries/gdb/src/gdb/bt-utils.c:127
0x64c937 _Z22gdb_internal_backtracev
/data/vries/gdb/src/gdb/bt-utils.c:196
0x94db83 handle_fatal_signal
/data/vries/gdb/src/gdb/event-top.c:1021
0x94dd48 handle_sigsegv
/data/vries/gdb/src/gdb/event-top.c:1098
0x7f372be578ff ???
0x10b7c0a _Z9gdb_flushP7ui_file
/data/vries/gdb/src/gdb/utils.c:1527
0xd4b938 gdbpy_flush
/data/vries/gdb/src/gdb/python/python.c:1624
0x7f372d73b276 _PyCFunction_FastCallDict
Objects/methodobject.c:231
0x7f372d73b276 _PyCFunction_FastCallKeywords
Objects/methodobject.c:294
0x7f372d794a09 call_function
Python/ceval.c:4851
0x7f372d78e838 _PyEval_EvalFrameDefault
Python/ceval.c:3351
0x7f372d796e6e PyEval_EvalFrameEx
Python/ceval.c:754
0x7f372d796e6e _PyFunction_FastCall
Python/ceval.c:4933
0x7f372d796e6e _PyFunction_FastCallDict
Python/ceval.c:5035
0x7f372d6fefc8 _PyObject_FastCallDict
Objects/abstract.c:2310
0x7f372d6fefc8 _PyObject_Call_Prepend
Objects/abstract.c:2373
0x7f372d6fe162 _PyObject_FastCallDict
Objects/abstract.c:2331
0x7f372d700705 callmethod
Objects/abstract.c:2583
0x7f372d700705 _PyObject_CallMethodId
Objects/abstract.c:2640
0x7f372d812a41 flush_std_files
Python/pylifecycle.c:699
0x7f372d81281d Py_FinalizeEx
Python/pylifecycle.c:768
0xd4d49b finalize_python
/data/vries/gdb/src/gdb/python/python.c:2308
0x9587eb _Z17ext_lang_shutdownv
/data/vries/gdb/src/gdb/extension.c:330
0xfd98df _Z10quit_forcePii
/data/vries/gdb/src/gdb/top.c:1817
0x6b3080 _Z12quit_commandPKci
/data/vries/gdb/src/gdb/cli/cli-cmds.c:483
0x1056577 stdin_event_handler
/data/vries/gdb/src/gdb/ui.c:131
0x1986970 handle_file_event
/data/vries/gdb/src/gdbsupport/event-loop.cc:551
0x1986f4b gdb_wait_for_event
/data/vries/gdb/src/gdbsupport/event-loop.cc:672
0x1985e0c _Z16gdb_do_one_eventi
/data/vries/gdb/src/gdbsupport/event-loop.cc:263
0xb66f2e start_event_loop
/data/vries/gdb/src/gdb/main.c:402
0xb670ba captured_command_loop
/data/vries/gdb/src/gdb/main.c:466
0xb68b9b captured_main
/data/vries/gdb/src/gdb/main.c:1344
0xb68c44 _Z8gdb_mainP18captured_main_args
/data/vries/gdb/src/gdb/main.c:1363
0x41a3b1 main
/data/vries/gdb/src/gdb/gdb.c:38
---------------------
A fatal error internal to GDB has been detected, further
debugging is not possible. GDB will now terminate.
This is a bug, please report it. For instructions, see:
<https://www.gnu.org/software/gdb/bugs/>.
Segmentation fault (core dumped)
$ q
...
Fix this in gdbpy_flush by checking for nullptr gdb_stdout/gdb_stderr (and
likewise in ioscm_flush) such that we get instead:
...
$ gdb
(gdb) <q>error detected on stdin
$ q
...
Tested on x86_64-linux.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
flake8 7.2.0 appears to have this new warning:
F824: global name / nonlocal name is unused: name is never assigned in scope
It points out a few places in our code base where "global" is not
necessary, fix them.
Change-Id: Ia6fb08686977559726fefe2a5bb95d8dcb298bb0
Approved-By: Tom Tromey <tom@tromey.com>
|
|
GDB's Python documentation does make it clear that keyword arguments
are supported for functions that take 2 or more arguments. The
documentation makes no promise for keyword argument support on
functions that only take a single argument.
That said, I'm a fan of keyword arguments, I think they help document
the code, and make intentions clearer, even for single argument
functions.
As I'm changing gdb.Color anyway (see previous commit), I'd like to
add keyword argument support to gdb.Color.escape_sequence, even though
this is a single argument method. This should be harmless for anyone
who doesn't want to use keywords, but adds the option for those of us
that do.
I've also removed a redundant check that the 'self' argument was a
gdb.Color object; Python already ensures this is the case.
And I have folded the check that the single argument is a bool into
the gdb_PyArg_ParseTupleAndKeywords call; this means that the error
message will now include the incorrect type name, which should make
debugging issues easier.
Tests have been extended to cover both cases -- it appears the
incorrect argument type error was not previously tested, so it is
now.
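For example (the keyword name used here, is_foreground, is an
assumption, not taken from the text above):

c = gdb.Color('red')
# The positional and keyword forms should be equivalent.
assert c.escape_sequence(True) == c.escape_sequence(is_foreground=True)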
Approved-By: Tom Tromey <tom@tromey.com>
|
|
GDB's Python API documentation is clear:
Functions and methods which have two or more optional arguments allow
them to be specified using keyword syntax.
The gdb.Color.__init__ method matches this description, but doesn't
support keyword arguments.
This commit fixes this by adding keyword argument support.
There's a new test to cover this functionality.
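A usage sketch (the keyword names value and color_space, and the
COLORSPACE_ANSI_8COLOR constant, are assumptions here):

# Positional and keyword forms should construct equivalent colors.
c1 = gdb.Color("red", gdb.COLORSPACE_ANSI_8COLOR)
c2 = gdb.Color(value="red", color_space=gdb.COLORSPACE_ANSI_8COLOR)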
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I've been reviewing all uses of PyObject_IsInstance, and I believe
that the use of PyObject_IsInstance in py-unwind.c is not entirely
correct. The use of PyObject_IsInstance is in this code in
frame_unwind_python::sniff:
if (PyObject_IsInstance (pyo_unwind_info,
                         (PyObject *) &unwind_info_object_type) <= 0)
  error (_("A Unwinder should return gdb.UnwindInfo instance."));
The problem is that PyObject_IsInstance can return -1 to indicate an
error, in which case a Python error will have been set. Now, the
above code appears to handle this case (it checks for '<= 0'); however,
frame_unwind_python::sniff has this near the start:
gdbpy_enter enter_py (gdbarch);
And looking in python.c at 'gdbpy_enter::~gdbpy_enter ()', you'll
notice that if an error is set then the error is printed, but also, we
get a warning about an unhandled Python exception. Clearly, all
exceptions should have been handled by the time the gdbpy_enter
destructor is called.
I've added a test as part of this commit that exposes this problem;
the current output is:
(gdb) backtrace
Python Exception <class 'RuntimeError'>: error in Blah.__class__
warning: internal error: Unhandled Python exception
Python Exception <class 'gdb.error'>: A Unwinder should return gdb.UnwindInfo instance.
#0 corrupt_frame_inner () at /home/andrew/projects/binutils-gdb/build.dev-g/gdb/testsuite/../../../src.dev-g/gdb/test>
(gdb)
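One plausible way to provoke this error path from Python, inferred
from the exception text above (the actual test may well differ), is to
return an object whose __class__ lookup raises, so that the
PyObject_IsInstance check itself fails:

import gdb.unwinder

class Blah:
    # Looking up __class__ raises, so an isinstance-style check on a
    # Blah instance errors out instead of returning False.
    @property
    def __class__(self):
        raise RuntimeError("error in Blah.__class__")

class BadUnwinder(gdb.unwinder.Unwinder):
    def __init__(self):
        super().__init__("bad-unwinder")

    def __call__(self, pending_frame):
        return Blah()

gdb.unwinder.register_unwinder(None, BadUnwinder(), replace=True)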
An additional observation is that we use PyObject_IsInstance to check
that the return value is a gdb.UnwindInfo, or a sub-class. However,
gdb.UnwindInfo lacks the Py_TPFLAGS_BASETYPE flag, and so cannot be
sub-classed. As such, PyObject_IsInstance is not really needed, we
could use PyObject_TypeCheck instead. The PyObject_TypeCheck function
only returns 0 or 1, there is no -1 error case. Switching to
PyObject_TypeCheck then, fixes the above problem.
There's a new test that exposes the problems that originally existed.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In python/py-registers.c we make use of PyObject_IsInstance. The
PyObject_IsInstance can return -1 for an error, 0 for false, or 1 for
true.
In py-registers.c we treat the return value from PyObject_IsInstance
as a boolean, which means both -1 and 1 will be treated as true.
If PyObject_IsInstance returns -1 for an error, this will be treated
as true, we will then invoke undefined behaviour as the pyo_reg_id
object will be treated as a gdb.RegisterDescriptor, even though it
might not be.
I noticed that the gdb.RegisterDescriptor class does not have the
Py_TPFLAGS_BASETYPE flag, and therefore cannot be inherited from. As
such, using PyObject_IsInstance is not necessary, we can use
PyObject_TypeCheck instead. The PyObject_TypeCheck function only
returns 0 or 1, so we don't need to worry about the error case.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The gdbpy_is_color function uses PyObject_IsInstance, and converts the
return from PyObject_IsInstance to a bool.
Unfortunately, PyObject_IsInstance can return -1, 0, or 1, for error,
failure, or success respectively. When converting to a bool both -1
and 1 will convert to true.
Additionally, when PyObject_IsInstance returns -1 an error will be
set.
What this means is that, if gdbpy_is_color is called with a non-
gdb.Color object, and the PyObject_IsInstance check raises an error,
then (a) GDB will continue as if the object is a gdb.Color object,
which is likely going to invoke undefined behaviour, see
gdbpy_get_color for example, and (b) when GDB eventually returns to
the Python interpreter, due to an error being set, we'll see:
Python Exception <class 'SystemError'>: PyEval_EvalFrameEx returned a result with an error set
Error occurred in Python: PyEval_EvalFrameEx returned a result with an error set
However, after the previous commit, gdb.Color can no longer be
sub-classed, this means that fixing the above problems is easy, we can
replace the PyObject_IsInstance check with a PyObject_TypeCheck, the
PyObject_TypeCheck function only returns 0 or 1, there's no -1 error
case.
It's also worth noting that PyObject_TypeCheck is the function that is
more commonly used within GDB's Python API implementation; including the
py-color.c use, there were only 4 PyObject_IsInstance uses. Of the
remaining 3, 2 are fine, and the other (in py-disasm.c) is also
wrong. I'll address that in a separate patch.
There's also a new test included which exposes the above issue.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Remove the Py_TPFLAGS_BASETYPE flag from the gdb.Color type. This
effectively makes gdb.Color final; users can no longer create classes
that inherit from gdb.Color.
Right now I cannot think of any cases where inheritance would be
needed over composition for a simple type like gdb.Color. If I'm
wrong, then it's easy to add Py_TPFLAGS_BASETYPE back in later, this
would be an extension of the API. But it's much harder to remove the
flag later as that might break existing user code (note: there has
been no release of GDB yet that includes the gdb.Color type).
Introducing this restriction makes the next commit easier.
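A quick illustration of the new restriction (a sketch; the exact
exception text may vary):

try:
    class MyColor(gdb.Color):
        pass
except TypeError as err:
    # Without Py_TPFLAGS_BASETYPE, using gdb.Color as a base class is
    # rejected when the new class is created.
    print("cannot subclass gdb.Color: %s" % err)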
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The PyObject_IsInstance function can return -1 for errors, 0 to
indicate false, and 1 to indicate true.
I noticed in python/py-disasm.c that we treat the result of
PyObject_IsInstance as a bool. This means that if PyObject_IsInstance
returns -1, then this will be treated as true. The consequence of
this is that we will invoke undefined behaviour by treating the result
from the _print_insn call as if it was a DisassemblerResult object,
even though PyObject_IsInstance raised an error, and the result might
not be of the required type.
I could fix this by taking the -1 result into account; however,
gdb.DisassemblerResult cannot be sub-classed, the type doesn't have
the Py_TPFLAGS_BASETYPE flag. As such, we can switch to using
PyObject_TypeCheck instead, which only returns 0 or 1, with no error
case.
I have also taken the opportunity to improve the error message emitted
if the result has the wrong type. Better error messages make debugging
issues easier.
I've added a test which exposes the problem when using
PyObject_IsInstance, and I've updated the existing test for the
improved error message.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
A few minor GNU/GDB coding style issues in py-color.c:
- Space after '&' reference operator in one place.
- Some excessive indentation on a couple of lines.
- Spaces after '!' logical negation operator.
- Using a pointer as a bool in a couple of places.
There should be no functional changes after this commit.
|
|
Spotted stray whitespace at the end of an error message. Removed it,
and updated the py-breakpoint.exp test to check this case.
|
|
I noticed that this commit:
commit 6447969d0ac774b6dec0f95a0d3d27c27d158690
Date: Sat Oct 5 22:27:44 2024 +0300
Add an option with a color type.
has an unnecessary `Py_INCREF (self);` in gdb.Color.__init__. This
means that the reference count on all gdb.Color objects (that pass
through __init__) will be +1 from where they should normally be, and
this will stop the gdb.Color objects from being deallocated.
Fix by removing the Py_INCREF call.
Add a test which exposes the memory leak.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
value_struct_elt_bitpos is weird: it takes an in/out value parameter,
and it takes an error string parameter. However, it only has a single
caller, which never uses the "out" value.
I think it was done this way to mimic value_struct_elt. However,
value_struct_elt is pretty ugly and I don't think it's worth
imitating.
This patch cleans up value_struct_elt_bitpos a bit.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This updates the copyright headers to include 2025. I did this by
running gdb/copyright.py and then manually modifying a few files as
noted by the script.
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
Consider the following scenario:
...
$ cat hello
int
main (void)
{
  printf ("hello\n");
  return 0;
}
$ gcc -x c hello -g
$ gdb -q -iex "maint set gnu-source-highlight enabled off" a.out
Reading symbols from a.out...
(gdb) start
Temporary breakpoint 1 at 0x4005db: file hello, line 6.
Starting program: /data/vries/gdb/a.out
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Temporary breakpoint 1, main () at hello:6
6 printf ("hello\n");
...
This doesn't produce highlighting for line 6, because:
- pygments is used for highlighting instead of source-highlight, and
- pygments guesses the language for highlighting only based on the filename,
which in this case doesn't give a clue.
Fix this by:
- adding a language parameter to the extension_language_ops.colorize interface,
- passing the language as found in the debug info, and
- using it in gdb.styling.colorize to pick the pygments lexer.
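A rough sketch of the kind of lexer selection this enables (assumed
shape only; the real gdb.styling code differs, e.g. in error handling
and in how bytes/str are dealt with):

from pygments import formatters, highlight, lexers

def colorize(filename, contents, lang):
    try:
        if lang:
            # Prefer the language recorded in the debug info.
            lexer = lexers.get_lexer_by_name(lang.lower())
        else:
            # Fall back to guessing from the filename alone.
            lexer = lexers.get_lexer_for_filename(filename, stripnl=False)
        formatter = formatters.TerminalFormatter()
        return highlight(contents, lexer, formatter)
    except Exception:
        return None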
The new test-case gdb.python/py-source-styling-2.exp exercises a slightly
different scenario: it compiles a c++ file with a .c extension, and checks
that c++ highlighting is done instead of c highlighting.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR cli/30966
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30966
|
|
This cleans up the last codespell report in the Python directory and
adds gdb/python to pre-commit.
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
This is implemented using the GDB/MI "-complete" command.
Co-authored-by: Simon Farre <simon.farre.cx@gmail.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31140
Approved-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Since commit a691853148f ("gdb/python: introduce gdbpy_registry"), when
building gdb with gcc 9, I run into:
...
In file included from gdb/varobj.c:38:0:
gdb/python/python-internal.h:1211:47: error: expected ‘;’ before ‘<’ token
using StorageKey = typename registry<O>::key<Storage>;
^
...
due to this code:
...
template <typename Storage>
class gdbpy_registry
{
  ...
  template<typename O>
  using StorageKey = typename registry<O>::key<Storage>;
  template<typename O>
  Storage *get_storage (O *owner, const StorageKey<O> &key) const
  { ... }
  ...
}
...
As an experiment, I tried out eliminating the type alias:
...
template<typename O>
Storage *get_storage (O *owner,
                      const typename registry<O>::key<Storage> &key) const
{ ... }
...
and got instead:
...
In file included from gdb/varobj.c:38:0:
gdb/python/python-internal.h:1211:63: error: non-template ‘key’ used as template
Storage *get_storage (O *owner,
const typename registry<O>::key<Storage> &key) const
^~~
gdb/python/python-internal.h:1211:63: note: use ‘registry<O>::template key’ \
to indicate that it is a template
...
Following that suggestion, I tried:
...
template<typename O>
Storage *
get_storage (O *owner,
             const typename registry<O>::template key<Storage> &key) const
{ ... }
...
which fixed the problem.
Likewise, adding the template keyword in the type alias fixes the original
problem, so fix it like that.
Tested on x86_64-linux.
|
|
This commit converts gdb.Symtab_and_line to use gdbpy_registry for
lifecycle management.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit converts gdb.Symtab to use gdbpy_registry for lifecycle
management. Since gdb.Symtab only holds on to the struct symtab * (and
prev/next links), the default invalidator can be used.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit converts gdb.Type to use gdbpy_registry for lifecycle
management.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit converts gdb.Symbol to use gdbpy_registry for lifecycle
management. Since gdb.Symbol only holds on to the struct symbol * (and
prev/next links), the default invalidator can be used.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit introduces a new template class, gdbpy_registry, to simplify
Python object lifecycle management. As of now, each of the Python
object implementations contains its own (copy of) lifecycle management
code that is largely very similar. The aim of gdbpy_registry is to
factor out this code into a common (template) class in order to simplify
the code.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The previous commit changed type_to_type_object() so that each time it
is called with a particular struct type * it returns the same object.
Therefore there's no longer any need to hold on to type objects
(gdb.Type) from struct value_object in order to preserve the identity
of the gdb.Type objects held in the value_object::type and
value_object::dynamic_type members. This in turn allowed for some
simplification in various functions.
While at it I changed a couple of NULLs to nullptrs.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit changes type_to_type_object() so that each time it is called
with a particular struct type * it returns the very same gdb.Type
object.
This is done in the same way as for gdb.Symtab objects in an earlier commit
("gdb/python: preserve identity for gdb.Symtab objects") except that
types may be either objfile-owned or arch-owned.
Prior to this commit, arch-owned objects were not put into any list
(unlike objfile-owned ones), so they could not easily be looked up.
This commit changes the code so arch-owned objects are put into a
per-architecture list, which is then used (solely) for looking up
arch-owned gdb.Type objects.
Another complication comes from the fact that when an objfile is about
to be freed, associated gdb.Type instances are not merely invalidated
(as is done with gdb.Symtab or gdb.Symbol objects); instead the type is
copied and the copy is arch-owned. So we need two different "deleters":
one for objfile-owned types that copies the type (as before) and then
inserts the object into the per-architecture list, and another one for
arch-owned types.
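A small illustration of the intended identity guarantee (assuming both
lookups resolve to the same underlying struct type *):

t1 = gdb.lookup_type("int")
t2 = gdb.lookup_type("int")
# The two lookups now yield the very same Python object, not just two
# equal wrapper objects.
assert t1 is t2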
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The previous commit changed symtab_to_symtab_object() so that each time
it is called with a particular struct symtab * it returns the same
object. Therefore there's no longer any need to hold on to the symtab
object (gdb.Symtab) from struct sal_object in order to preserve the
identity of the Symtab object held in the gdb.Symtab_and_line.symtab
property. This in turn allowed for some simplification in various
functions.
While at it I changed a couple of NULLs to nullptrs.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit changes symbol_to_symbol_object() so that each time it is called
with a particular struct symbol * it returns the very same gdb.Symbol
object.
This is done in the same way as for gdb.Symtab objects in an earlier commit
("gdb/python: preserve identity for gdb.Symtab objects") except that
symbols may be either objfile-owned or arch-owned.
Prior to this commit, arch-owned objects were not put into any list
(unlike objfile-owned ones), so they could not easily be looked up.
This commit changes the code so arch-owned objects are put into a
per-architecture list, which is then used (solely) for looking up
arch-owned gdb.Symbol objects.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit changes symtab_to_symtab_object() so that each time it is called
with a particular struct symtab * it returns the very same gdb.Symtab
object.
This is done by searching a per-objfile linked list of instances and -
if found - returning it rather than creating a new gdb.Symtab.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Currently, gdb.execute emits styled output when the command is sending
its output to GDB's stdout, and produces unstyled output when the
output is going to a string.
But it is not unreasonable that a user might wish to capture styled
output from a gdb.execute call; for example, the user might want to
display the styled output as part of some larger UI output block.
At the same time, I don't think it makes sense to always produce
styled output when capturing the output in a string; if what the user
wants is to parse the output, then the style escape sequences make
this far harder.
I propose that gdb.execute gain a new argument 'styling'. When False
we would always produce unstyled output, and when True we would
produce styled output if styling is not disabled by some other means.
For example, if GDB's 'set style enabled' is off, then I think
gdb.execute() should respect that. My assumption here is that
gdb.execute() might be executed by some extension. If the extension
thinks "styled output world work here", but the user hates styled
output, and has turned it off, then the extension should not be
forcing styled output on the user.
I chose 'styling' instead of 'styled' as the Python argument name
because we already use 'styling' in gdb.Value.format_string, and we
don't use 'styled' anywhere else. This is only a little bit of
consistency, but I still think it's a good thing.
The default for 'styling' will change depending on where the output is
going. When gdb.execute is sending the output to GDB's stdout then
the default for 'styling' is True. When the output is going to a
string, then the default for 'styling' will be False. Not only does
this match the existing behaviour, but I think this makes sense. By
default we assume that output captured in a string is going to be
parsed, and therefore styling markup is unhelpful, while output going
to stdout should receive styling.
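For example, using the argument name proposed above:

# Capture styled output in a string; without styling=True, output
# captured with to_string defaults to being unstyled.
styled = gdb.execute("list main", to_string=True, styling=True)
# Unstyled text, suitable for parsing; this matches the old behaviour.
plain = gdb.execute("list main", to_string=True)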
This fixes part of the problem described in PR gdb/32676. That bug
tries to capture styled source listing in a string, which wasn't
previously possible.
There are some additional issues with capturing source code; GDB
caches the source code in the source code cache. However, GDB doesn't
check if the cached content is styled or not. As a consequence, if
the first time the source of a file is shown it is unstyled, then the
cache will hold the unstyled source code, and future requests will
return that unstyled source. I'll address this issue in a separate
patch.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32676
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The function construct_inferior_arguments (gdbsupport/common-inferior.cc)
currently escapes all special shell characters. After this commit
there will be two "levels" of quoting:
1. The current "full" quoting, where all posix shell special
characters are quoted, and
2. a new "reduced" quoting, where only the characters that GDB sees
as special (quotes and whitespace) are quoted.
After this, almost all construct_inferior_arguments calls will use the
"full" quoting, which is the current quoting. The "reduced" quoting
will be used in this commit to restore the behaviour that was lost in
the previous commit (more details below).
In the future, the reduced quoting will be useful for some additional
inferior argument work that I have planned. I already posted my full
inferior argument work here:
https://inbox.sourceware.org/gdb-patches/cover.1730731085.git.aburgess@redhat.com
But that series is pretty long, and wasn't getting reviewed, so I'm
posting the series in parts now.
Before the previous commit, GDB behaved like this:
$ gdb -eiex 'set startup-with-shell off' --args /tmp/exec '$FOO'
(gdb) show args
Argument list to give program being debugged when it is started is "$FOO".
Notice that with 'startup-with-shell' off, the argument was left as
just '$FOO'. But after the previous commit, this changed to:
$ gdb -eiex 'set startup-with-shell off' --args /tmp/exec '$FOO'
(gdb) show args
Argument list to give program being debugged when it is started is "\$FOO".
Now the '$' is escaped with a backslash. This commit restores the
original behaviour, as this is (currently) the only way to get unquoted
shell special characters into the arguments from the GDB command line.
The series that I listed above includes a new command line option for
GDB which provides a better approach for controlling the quoting of
special shell characters, but that work requires these patches to be
merged first.
I've split out the core of construct_inferior_arguments into the new
function escape_characters, which takes a set of characters to escape.
Then the two functions escape_shell_characters and
escape_gdb_characters call escape_characters with the appropriate
character sets.
Finally, construct_inferior_arguments now takes a boolean which
indicates if we should perform full shell escaping, or just perform
the reduced escaping.
I've updated all uses of construct_inferior_arguments to pass a
suitable value to indicate what escaping to perform (mostly just
'true', but one case in main.c is different), also I've updated
inferior::set_args to take the same boolean flag, and pass it through
to construct_inferior_arguments.
Tested-By: Guinevere Larsen <guinevere@redhat.com>
|