|
After this series, expand_symtabs_matching is now misnamed. This
patch renames it, renames some associated types, and also fixes up
some comments that I previously missed.
Acked-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This changes gdbpy_lookup_static_symbols to the callback approach,
merging the search loop and the call to expand_symtabs_matching.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=16994
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=16998
Acked-By: Simon Marchi <simon.marchi@efficios.com>
|
|
gdb currently crashes if you try to get the dynamic_type from a
gdb.Value of a POD struct:
(gdb) py print(gdb.parse_and_eval('pod').dynamic_type)
Fatal signal: Segmentation fault
It happens because value_rtti_type() returns NULL for them, and this is
not handled correctly.
Fixed by using val->type() as a fallback in this case.
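For example, in a hypothetical session ('pod' names a POD struct
variable in the inferior), the attribute now degrades gracefully:
  val = gdb.parse_and_eval('pod')
  # value_rtti_type() returns NULL for a POD type, so dynamic_type
  # now falls back to the static type instead of crashing:
  assert val.dynamic_type == val.type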
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
PR ada/33217 points out that gdb incorrectly calls the <ctype.h>
functions. In particular, gdb feels free to pass a 'char' like:
char *str = ...;
... isdigit (*str)
This is incorrect as isdigit only accepts EOF and values that can be
represented as 'unsigned char' -- that is, a cast is needed here to
avoid undefined behavior when 'char' is signed and a character in the
string might be sign-extended. (As an aside, I think this API seems
obviously bad, but unfortunately this is what the standard says, and
some systems check this.)
Rather than adding casts everywhere, this changes all the code in gdb
that uses any <ctype.h> API to instead call the corresponding c-ctype
function.
Now, c-ctype has some limitations compared to <ctype.h>. It works as
if the C locale is in effect, so in theory some non-ASCII characters
may be misclassified. This would only affect a subset of character
sets, though, and in most places I think ASCII is sufficient -- for
example the many places in gdb that check for whitespace.
Furthermore, in practice most users are using UTF-8-based locales,
where these functions aren't really informative for non-ASCII
characters anyway; see the existing workarounds in gdb/c-support.h.
Note that safe-ctype.h cannot be used because it causes conflicts with
readline.h. And, we cannot poison the <ctype.h> identifiers as this
provokes errors from some libstdc++ headers.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33217
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
Currently, trying to convert a 128-bit integer from a gdb.Value to a
Python integer will fail. This is surprising because Python uses
bigints internally.
The bug here is that valpy_long uses value_as_long, which fails for
anything wider than LONGEST. This patch fixes the problem by using
the recommended Python API.
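For example (a sketch; 'v128' is a hypothetical __int128 variable in
the inferior), this conversion now succeeds:
  v = gdb.parse_and_eval('v128')
  # int() previously failed for values wider than LONGEST:
  print(int(v))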
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33366
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
In VariableReference.to_object, we try to convert a gdb.Value to an
int without checking if the value is actually available. This came to
light in PR gdb/33345, after the x86 CET shadow stack patches were
merged.
If the x86 CET shadow stack register is available on the machine,
but the shadow stack feature is not enabled at run time, then the
register will show as "<unavailable>".
As the register is of type 'void *', in the DAP code we try to
add a 'memoryReference' attribute with the value of the register
formatted as hex. This will fail if the register is unavailable.
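A minimal sketch of the shape of the fix (simplified; the real code
is in gdb's DAP variable handling, and relies on the
Value.is_unavailable attribute added in the commit below):
  if val.type.code == gdb.TYPE_CODE_PTR and not val.is_unavailable:
      result["memoryReference"] = hex(int(val))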
To test this change you'll need:
(a) a machine which supports the shadow stack feature, and
(b) to revert the changes from commit 63b862be762e1e6e7 in the file
gdb.dap/scopes.exp.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33345
Reviewed-By: Christina Schimpe <christina.schimpe@intel.com>
|
|
Add a new gdb.Value.is_unavailable attribute. This is similar to the
existing Value.is_optimized_out attribute, but returns True if any
part of the value is <unavailable>.
The existing Value.is_optimized_out attribute returns true if any part
of the value is optimized out, so I thought that Value.is_unavailable
should work the same way.
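Usage mirrors Value.is_optimized_out (a sketch; the register name is
illustrative):
  val = gdb.parse_and_eval('$ssp')
  if val.is_unavailable:
      print('shadow stack register is <unavailable>')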
There's also a test.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33345
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Christina Schimpe <christina.schimpe@intel.com>
|
|
For background, see this thread:
https://inbox.sourceware.org/gdb-patches/20250612144607.27507-1-tdevries@suse.de
Tom describes the issue clearly in the above thread, here's what he
said:
Once in a while, when running test-case gdb.base/bp-cmds-continue-ctrl-c.exp,
I run into:
...
Breakpoint 2, foo () at bp-cmds-continue-ctrl-c.c:23^M
23 usleep (100);^M
^CFAIL: $exp: run: stop with control-c (unexpected) (timeout)
FAIL: $exp: run: stop with control-c
...
This is PR python/32167, observed both on x86_64-linux and powerpc64le-linux.
This is not a timeout due to accidental slowness; gdb actually hangs.
The backtrace at the hang is (on cfarm120 running AlmaLinux 9.6):
...
(gdb) bt
#0 0x00007fffbca9dd94 in __lll_lock_wait () from
/lib64/glibc-hwcaps/power10/libc.so.6
#1 0x00007fffbcaa6ddc in pthread_mutex_lock@@GLIBC_2.17 () from
/lib64/glibc-hwcaps/power10/libc.so.6
#2 0x000000001067aee8 in __gthread_mutex_lock ()
at /usr/include/c++/11/ppc64le-redhat-linux/bits/gthr-default.h:749
#3 0x000000001067afc8 in __gthread_recursive_mutex_lock ()
at /usr/include/c++/11/ppc64le-redhat-linux/bits/gthr-default.h:811
#4 0x000000001067b0d4 in std::recursive_mutex::lock ()
at /usr/include/c++/11/mutex:108
#5 0x000000001067b380 in std::lock_guard<std::recursive_mutex>::lock_guard ()
at /usr/include/c++/11/bits/std_mutex.h:229
#6 0x0000000010679d3c in set_quit_flag () at gdb/extension.c:865
#7 0x000000001066b6dc in handle_sigint () at gdb/event-top.c:1264
#8 0x00000000109e3b3c in handler_wrapper () at gdb/posix-hdep.c:70
#9 <signal handler called>
#10 0x00007fffbcaa6d14 in pthread_mutex_lock@@GLIBC_2.17 () from
/lib64/glibc-hwcaps/power10/libc.so.6
#11 0x000000001067aee8 in __gthread_mutex_lock ()
at /usr/include/c++/11/ppc64le-redhat-linux/bits/gthr-default.h:749
#12 0x000000001067afc8 in __gthread_recursive_mutex_lock ()
at /usr/include/c++/11/ppc64le-redhat-linux/bits/gthr-default.h:811
#13 0x000000001067b0d4 in std::recursive_mutex::lock ()
at /usr/include/c++/11/mutex:108
#14 0x000000001067b380 in std::lock_guard<std::recursive_mutex>::lock_guard ()
at /usr/include/c++/11/bits/std_mutex.h:229
#15 0x00000000106799cc in set_active_ext_lang ()
at gdb/extension.c:775
#16 0x0000000010b287ac in gdbpy_enter::gdbpy_enter ()
at gdb/python/python.c:232
#17 0x0000000010a8e3f8 in bpfinishpy_handle_stop ()
at gdb/python/py-finishbreakpoint.c:414
...
What happens here is the following:
- the gdbpy_enter constructor attempts to set the current extension language
to python using set_active_ext_lang
- set_active_ext_lang attempts to lock ext_lang_mutex
- while doing so, it is interrupted by sigint_wrapper (the SIGINT handler),
handling a SIGINT
- sigint_wrapper calls handle_sigint, which calls set_quit_flag, which also
tries to lock ext_lang_mutex
- since std::recursive_mutex::lock is not async-signal-safe, things go wrong,
resulting in a hang.
The hang bisects to commit 8bb8f834672 ("Fix gdb.interrupt race"), which
introduced the lock, making PR python/32167 a regression since gdb 15.1.
Commit 8bb8f834672 fixes PR dap/31263, a race reported by ThreadSanitizer:
...
WARNING: ThreadSanitizer: data race (pid=615372)
Read of size 1 at 0x00000328064c by thread T19:
#0 set_active_ext_lang(extension_language_defn const*) gdb/extension.c:755
#1 scoped_disable_cooperative_sigint_handling::scoped_disable_cooperative_sigint_handling()
gdb/extension.c:697
#2 gdbpy_interrupt gdb/python/python.c:1106
#3 cfunction_vectorcall_NOARGS <null>
Previous write of size 1 at 0x00000328064c by main thread:
#0 scoped_disable_cooperative_sigint_handling::scoped_disable_cooperative_sigint_handling()
gdb/extension.c:704
#1 fetch_inferior_event() gdb/infrun.c:4591
...
Location is global 'cooperative_sigint_handling_disabled' of size 1 at 0x00000328064c
...
SUMMARY: ThreadSanitizer: data race gdb/extension.c:755 in \
set_active_ext_lang(extension_language_defn const*)
...
The problem here is that gdb.interrupt is called from a worker thread, and its
implementation, gdbpy_interrupt, races with the main thread on some variable.
The fix presented here is based on the fix that Tom proposed, but
fills in the missing Mingw support.
The problem basically splits into two cases: hosts that support
Unix-like signals, and Mingw, which doesn't support signals.
For signal supporting hosts, I've adopted the approach that Tom
suggests, gdbpy_interrupt uses kill() to send SIGINT to the GDB
process. This is then handled in the main thread as if the user had
pressed Ctrl+C. For these hosts no locking is required, so the
existing lock is removed. However, everywhere the lock currently
exists I've added an assert:
gdb_assert (is_main_thread ());
If this assert ever triggers then we're setting or reading the quit
flag on a worker thread, which would be a problem without the mutex.
For Mingw, the current mutex is retained. This is fine as there are
no signals, so no chance of the mutex acquisition being interrupted by
a signal, and so, deadlock shouldn't be an issue.
To manage the complexity of when we need an assert, and when we need
the mutex, I've created 'struct ext_lang_guard', which can be used as
a RAII object. This object either performs the assertion check, or
acquires the mutex, depending on the host.
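For reference, the user-visible API at the centre of both PRs is
gdb.interrupt, typically called from a Python worker thread (sketch):
  import threading

  def worker():
      # On signal-supporting hosts this now sends SIGINT to the gdb
      # process; on Mingw it sets the quit flag under the retained
      # mutex.
      gdb.interrupt()

  threading.Thread(target=worker).start()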
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32167
Co-Authored-By: Tom de Vries <tdevries@suse.de>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In this commit:
commit dbd05b9edcf760a7001985f89bc760358a3c19d7
Date: Wed Aug 20 10:45:09 2025 +0100
gdb/python: check return value of PyObject_New in all cases
I missed a call to PyObject_New in python/py-disasm.c, which this
commit addresses.
Unlike the previous commit, the call to PyObject_New in py-disasm.c is
contained within the scoped_disasm_info_object class, which makes it
harder to check for NULL and return.
So in this commit I've rewritten the scoped_disasm_info_object class,
moving the call to PyObject_New out into gdbpy_print_insn, which is
the only place that scoped_disasm_info_object was being used.
As scoped_disasm_info_object is no longer responsible for creating the
underlying Python object, I figured that I might as well move the
initialisation of that object out of scoped_disasm_info_object too.
With that done, the scoped_disasm_info_object now has just one task,
invalidating the existing disasm_info_object at the end of the scope.
So I renamed scoped_disasm_info_object to
scoped_invalidate_disasm_info, which reflects its only task.
I made a couple of other small adjustments that were requested during
review, these are both in the same code area: updating
disasm_info_fill to take an object reference rather than a pointer,
and removing the local variable insn_disas_obj from gdbpy_print_insn,
and inline its value at the one place it was used.
There should be no user visible changes after this commit, except
that the PyObject_New call now has proper error checking; in the
working case, nothing should have changed.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
Update gdbpy_create_ptid_object (python/py-infthread.c) to return a
gdbpy_ref<> rather than a 'PyObject *'. This reduces the chances that
a caller will leak an object, though no such memory leaks are fixed in
this commit; this is just a code improvement patch.
There should be no user visible changes after this commit.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
I noticed a possible memory leak in gdbpy_create_ptid_object, in
py-infthread.c. We create a Tuple, and hold the reference in a
'PyObject*' local.
If we then fail to create any of the tuple contents we perform an
early exit, returning nullptr, which leaks the Tuple object.
Currently, we create the Tuple as the first action in the function,
but we don't really need the tuple until the end of the function.
In this commit I have:
1. Moved creation of the Tuple until the end of the function, just
before we need it.
2. Stored the Tuple reference in a gdbpy_ref<>. This is not
strictly needed any more, but is (I think) good practice as
future changes to the function will not need to worry about
releasing the Tuple object.
3. Taken the opportunity to replace a NULL with nullptr in this
function.
4. Inlined the local variable declarations to the point of first
use.
There should be no user visible changes after this commit.
No tests as I have no idea how to make gdb_py_object_from_longest (and
friends) fail, and so trigger the memory leak. I suspect we'd never
actually see this leak in the real world, but it doesn't hurt to clean
these things up.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This commit allows gdb.Color objects to be used to style output from
GDB commands written in Python, and the styled output should work
correctly with pagination.
There are two parts to fixing this:
First, GDB needs to be able to track the currently applied style
within the pager_file class. This means that style changes need to be
achieved with calls to pager_file::emit_style_escape.
Now usually, GDB does this by calling something like fprintf_styled,
which takes care to apply the style for us. However, that's not
really an option here as a gdb.Color isn't a full style, and as the
gdb.Color object is designed to be converted directly into escape
sequences that can then be printed, we really need a solution that
works with this approach.
However pager_file::puts already has code in place to handle escape
sequences. Right now all this code does is spot the escape sequence
and append it to the m_wrap_buffer. But in this commit I propose that
we go one step further, parse the escape sequence back into a
ui_file_style object in pager_file::puts, and then we can call
pager_file::emit_style_escape.
If the parsing doesn't work then we can just add the escape sequence
to m_wrap_buffer as we did before.
But wait, how can this work if a gdb.Color isn't a full style? Turns
out that's not a problem. We only ever emit the escape sequence for
those parts of a style that need changing, so a full style that sets
the foreground color will emit the same escape sequence as a gdb.Color
for the foreground. When we convert the escape sequence back into a
ui_file_style, then we get a style with everything set to default,
except the foreground color.
I had hoped that this would be all that was needed. But unfortunately
this doesn't work because of the second problem...
... the implementation of the Python function gdb.write() calls
gdb_printf(), which calls gdb_vprintf(), which calls ui_file::vprintf,
which calls ui_out::vmessage, which calls ui_out::call_do_message, and
finally we reach cli_ui_out::do_message. This final do_message
function does this:
ui_file *stream = m_streams.back ();
stream->emit_style_escape (style);
stream->puts (str.c_str ());
stream->emit_style_escape (ui_file_style ());
If we imagine the case where we are emitting a style, triggered from
Python like this:
gdb.write(gdb.Color('red').escape_sequence(True))
the STYLE in this case will be the default ui_file_style(), and STR
will hold the escape sequence we are writing.
After the first change, where pager_file::puts now calls
pager_file::emit_style_escape, the current style of STREAM will have
been updated. But this means that the final emit_style_escape will
now restore the default style.
The fix for this is to avoid using the high level gdb_printf from
gdb.write(), and use gdb_puts instead. The gdb_puts function
doesn't restore the default style, which means our style modification
survives.
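Putting the two fixes together, a pattern like this (sketch) now
styles its output correctly, even through the pager:
  red = gdb.Color('red')
  none = gdb.Color('none')
  gdb.write(red.escape_sequence(True)
            + 'important text'
            + none.escape_sequence(True))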
There's a new test included. This test includes what appears like a
pointless extra loop (looping over a single value), but this makes
sense given the origin of this patch. I've pulled this commit from a
longer series:
https://inbox.sourceware.org/gdb-patches/cover.1755080429.git.aburgess@redhat.com
I want to get this bug fix merged before GDB 17 branches, but the
longer series is not getting reviews, so for now I'm just merging this
one fix. Once the rest of the series gets merged, I'll be extending
the test, and the loop (mentioned above) will now loop over more
values.
|
|
solib_ops
Change the "iterate over objfiles in search order" operation from a
gdbarch method to methods on both program_space and solib_ops.
The first motivation for this is that I want to encapsulate solib-svr4's
data into svr4_solib_ops (in a subsequent series), instead of it being
in a separate structure (svr4_info). It is awkward to do so as long as
there are entry points that aren't the public solib_ops interface.
The second motivation is my project of making it able to have multiple
solib_ops per program space (which should be the subject of said
subsequent series), to better support heterogeneous systems (like ROCm,
with CPU and GPU in the same inferior). When we have this, when stopped
in GPU code, it won't make sense to ask the host's architecture to do
the iteration, as the logic could be different for the GPU architecture.
Instead, program_space::iterate_over_objfiles_in_search_order will be
responsible for delegating to the various solib_ops using logic that is
yet to be determined.
I included this patch in this series (rather than the following one)
so that svr4_solib_ops::iterate_over_objfiles_in_search_order can access
svr4_solib_ops::default_debug_base, introduced in a later patch in this
series.
default_iterate_over_objfiles_in_search_order becomes the default
implementation of solib_ops::iterate_over_objfiles_in_search_order.
As far as I know, all architectures using
svr4_iterate_over_objfiles_in_search_order also use solib_ops_svr4, so I
don't expect this patch to cause behavior changes.
Change-Id: I71f8a800b8ce782ab973af2f2eb5fcfe4e06ec76
Reviewed-By: Guinevere Larsen <guinevere@redhat.com>
|
|
I spotted a few cases where the return value of PyObject_New was not
being checked against nullptr, but we were dereferencing the result.
All fixed here. The fixed functions can now return NULL, so I checked
all the callers, and I believe they will handle a return of NULL
correctly.
Assuming calls to PyObject_New never fail, there should be no user
visible changes after this commit.
No tests here as I don't know how we'd go about causing a Python
object allocation to fail.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
The enum address_class and related fields and methods seem misnamed to
me. Generalize it to "location_class". The enumerators in
address_class are already prefixed with LOC, so the new name seems
logical to me. Rename related fields and methods as well.
Plus, address_class could easily be mistaken for other unrelated things
named "address class" in GDB or DWARF.
Tested by rebuilding.
Change-Id: I0dca3738df412b350715286c608041b08e9b4d82
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
The convenience function get_frame_pc_if_available would take a pointer
to a variable that should be set if available, and would return a
boolean indicating whether that action was successful or not.
Now that GDB supports C++17 features, this indirection of a pointer and
returning boolean is unnecessary, since the function can return an
optional, and code that calls it can check if the optional contains a
value.
This commit makes that modernization. It should have no visible
effects.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This patch changes type::fields to return an array_view of the fields,
then fixes up the fallout.
More cleanups would be possible here (in particular in the field
initialization code) but I haven't done so.
The main motivation for this patch was to make it simpler to iterate
over the fields of a type.
Regression tested on x86-64 Fedora 41.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
PR dap/33228 points out a failure that occurs when the DAP client
requests more children of a variable than actually exist. Currently,
gdb throws a somewhat confusing exception. This patch changes this
code to throw a DAPException instead, resulting in a more ordinary and
readable failure.
The spec seems to be silent on what to do in this case. I chose an
exception on the theory that it's easier to be strict now and lift the
restriction later (if needed) than vice versa.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33228
|
|
While investigating a different bug, I noticed that the DAP code would
report a "void *"-typed register as having children -- however,
requesting the children of this register would fail.
The issue here is that a plain "void *" can't be dereferenced. This
patch changes the default visualizer to treat a "void *" as a scalar.
This adds a new test; but also arranges to examine all the returned
registers -- this was the first thing I attempted and it seemed
reasonable to have a test that double-checks that all the registers
really can be dereferenced as appropriate.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33228
|
|
This patch introduces a new macro, INIT_GDB_FILE. This is used to
replace the current "_initialize_" idiom when introducing a per-file
initialization function. That is, rather than write:
void _initialize_something ();
void
_initialize_something ()
{
...
}
... now you would write:
INIT_GDB_FILE (something)
{
...
}
The macro handles both the declaration and definition of the function.
The point of this approach is that it makes it harder to accidentally
cause an initializer to be omitted; see commit 2711e475 ("Ensure
cooked_index_entry self-tests are run"). Specifically, the regexp now
used by make-init-c seems harder to trick.
New in v2: undid some erroneous changes made by the script.
The bulk of this patch was written by script.
Regression tested on x86-64 Fedora 41.
|
|
A user pointed out that DAP allows the "threads" request to work when
the inferior is running. This is documented in the overview, not the
specification.
While looking into this, I found a few other issues:
* The _thread_name function was not marked @in_gdb_thread.
This isn't very important but is still an oversight.
* DAP requires all threads to have a name -- the field is not optional
in the "Thread" type.
* There was no test examining events resulting from the inferior
printing to stdout.
This patch fixes all these problems.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33080
|
|
This commit adds a new gdb.warning() function. This function takes a
string and then calls GDB's internal warning() function. This will
display the string as a warning.
Using gdb.warning() means that the message will get the new emoji
prefix if the user has that feature turned on. Also, the message will
be sent to gdb.STDERR without the user having to remember to print to
the correct stream.
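For example:
  gdb.warning('configuration file not found; using defaults')
  # Equivalent to calling GDB's internal warning(); the message goes
  # to gdb.STDERR and honours the emoji prefix setting.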
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Makes it possible to set and remove other types of breakpoints while the
process is running. Makes debugging more convenient.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed a minor grammar issue in a comment in DAP.
|
|
In gdbpy_parse_command_name (python/py-cmd.c) there is a call to
xmalloc that can easily be replaced with a call to
make_unique_xstrndup, which makes the code easier to read (I think).
In gdbscm_parse_command_name (guile/scm-cmd.c) the same fix can be
applied to remove an identical xmalloc call. And there is an
additional xmalloc call, which can also be replaced with
make_unique_xstrndup in the same way.
The second xmalloc call in gdbscm_parse_command_name was also present
in gdbpy_parse_command_name at one point, but was replaced with a use
of std::string by this commit:
commit 075c55e0cc0a68eeab777027213c2f545618e844
Date: Wed Dec 26 11:05:57 2018 -0700
Remove more calls to xfree from Python
I haven't changed the gdbscm_parse_command_name to use std::string
though, as that doesn't work well with the guile exception model.
Guile exceptions work by performing a longjmp from the function that
raises the exception, back to the guile run-time. The consequence of
this is that destructors are not run. For example, if
gdbscm_parse_command_name calls gdbscm_out_of_range_error, then any
function local objects in gdbscm_parse_command_name will not have
their destructors called.
What this means is that, for the existing `result` and `prefix_text`
locals, any allocated memory managed by these objects will be leaked
if an exception is raised. However, fixing this is pretty easy: one
way is to just assign nullptr to these locals before raising the
exception, which would cause the allocated memory to be released.
But for std::string it is harder to ensure that the managed memory has
actually been released. We can call std::string::clear() and then
maybe std::string::shrink_to_fit(), but this is still not guaranteed
to release any managed memory. In fact, I believe the only way to
ensure all managed memory is released, is to call the std::string
destructor.
And so, for functions that can throw a guile exception, it is easier
to just avoid std::string.
As for the memory leak that I identify above; I'll fix that in a
follow on commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
A commit I recently pushed:
commit 0b5023cc71d3af8b18e10e6599a3f9381bc15265
Date: Sat Apr 12 09:15:53 2025 +0100
gdb/python/guile: user created prefix commands get help list
can trigger a segfault if a user tries to create nested prefix
commands. For example, this will trigger a crash:
(gdb) python gdb.ParameterPrefix("prefix-1", gdb.COMMAND_NONE)
(gdb) python gdb.ParameterPrefix("prefix-1 prefix-2", gdb.COMMAND_NONE)
Fatal signal: Segmentation fault
... etc ...
If the user adds an actual parameter under 'prefix-1' before creating
'prefix-2', then everything is fine:
(gdb) python gdb.ParameterPrefix("prefix-1", gdb.COMMAND_NONE)
(gdb) python gdb.Parameter('prefix-1 param-1', gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
(gdb) python gdb.ParameterPrefix("prefix-1 prefix-2", gdb.COMMAND_NONE)
The mistake in the above patch is in how gdbpy_parse_command_name is
used. The BASE_LIST output argument from this function points to the
list of commands for the prefix, not to the prefix command itself.
So when gdbpy_parse_command_name is called for 'prefix-1 prefix-2',
BASE_LIST points to the list of commands associated with 'prefix-1',
not to the actual 'prefix-1' cmd_list_element.
Back in cmdpy_init, from where gdbpy_parse_command_name was called, I
was walking back from the first entry in BASE_LIST to figure out if
this was a "show" prefix command or not. However, if BASE_LIST is
empty then there is no first item, and this would trigger the
segfault.
The solution is to extend gdbpy_parse_command_name to also return the
prefix cmd_list_element in addition to the existing values. With this
done, and cmdpy_init updated, the segfault is now avoided.
There's a new test that would trigger the crash without the patch.
And, of course, the above commit also broke guile in the exact same
way. And the fix is exactly the same. And there's a guile test too.
NOTE: We should investigate possibly sharing some of this boilerplate
helper code between Python and Guile. But not in this commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Consider GDB's builtin prefix set/show prefix sub-commands, if they
are invoked with no sub-command name then they work like this:
(gdb) show print
print address: Printing of addresses is on.
print array: Pretty formatting of arrays is off.
print array-indexes: Printing of array indexes is off.
print asm-demangle: Demangling of C++/ObjC names in disassembly listings is off.
... cut lots of lines ...
(gdb) set print
List of set print subcommands:
set print address -- Set printing of addresses.
set print array -- Set pretty formatting of arrays.
set print array-indexes -- Set printing of array indexes.
set print asm-demangle -- Set demangling of C++/ObjC names in disassembly listings.
... cut lots of lines ...
Type "help set print" followed by set print subcommand name for full documentation.
Type "apropos word" to search for commands related to "word".
Type "apropos -v word" for full documentation of commands related to "word".
Command name abbreviations are allowed if unambiguous.
(gdb)
That is 'show print' lists the values of all settings under the
'print' prefix, and 'set print' lists the help text for all settings
under the 'set print' prefix.
Now, if we try to create something similar using the Python API:
(gdb) python gdb.ParameterPrefix("my-prefix", gdb.COMMAND_NONE)
(gdb) python gdb.Parameter("my-prefix foo", gdb.COMMAND_OBSCURE, gdb.PARAM_BOOLEAN)
(gdb) show my-prefix
(gdb) set my-prefix
Neither 'show my-prefix' nor 'set my-prefix' gives us the same details
relating to the sub-commands that we get with the builtin prefix
commands.
This commit aims to address this.
Currently, in cmdpy_init, when a new command is created, we always set
the commands callback function to cmdpy_function. It is within
cmdpy_function that we spot that the command is a prefix command, and
that there is no gdb.Command.invoke method, and so return early.
This commit changes things so that the rules are now:
1. For NON prefix commands, we continue to use cmdpy_function.
2. For prefix commands that do have a gdb.Command.invoke
method (i.e. can handle unknown sub-commands), continue to use
cmdpy_function.
3. For all other prefix commands, don't use cmdpy_function, instead
use GDB's normal callback function for set/show prefixes.
This requires splitting the current call to add_prefix_cmd into either
a call to add_prefix_cmd, add_show_prefix_cmd, or
add_basic_prefix_cmd, as appropriate.
After these changes, we now see this:
(gdb) python gdb.ParameterPrefix("my-prefix", gdb.COMMAND_NONE)
(gdb) python gdb.Parameter("my-prefix foo", gdb.COMMAND_OBSCURE, gdb.PARAM_BOOLEAN)
(gdb) show my-prefix
my-prefix foo: The current value of 'my-prefix foo' is "off".
(gdb) set my-prefix
List of "set my-prefix" subcommands:
set my-prefix foo -- Set the current value of 'my-prefix foo'.
Type "help set my-prefix" followed by subcommand name for full documentation.
Type "apropos word" to search for commands related to "word".
Type "apropos -v word" for full documentation of commands related to "word".
Command name abbreviations are allowed if unambiguous.
(gdb)
Which matches how a prefix defined within GDB would act.
I have made the same changes to the Guile API.
|
|
In Ada, a field can have a dynamic bit offset in its enclosing record.
In DWARF 3, this was handled using a dynamic
DW_AT_data_member_location, combined with a DW_AT_bit_offset -- this
combination worked out ok because in practice GNAT only needs a
dynamic byte offset with a fixed offset within the byte.
However, this approach was deprecated in DWARF 4 and then removed in
DWARF 5. No replacement approach was given, meaning that in strict
mode there is no way to express this.
This is a DWARF bug, see
https://dwarfstd.org/issues/250501.1.html
In a discussion on the DWARF mailing list, a couple people mentioned
that compilers could use the obvious extension of a dynamic
DW_AT_data_bit_offset. I've implemented this for LLVM:
https://github.com/llvm/llvm-project/pull/141106
In preparation for that landing, this patch implements support for
this construct in gdb.
New in v2: renamed some constants and added a helper method, per
Simon's review.
New in v3: more renamings.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
DAP requests have a "defer_stop_events" option that is intended to
defer the emission of any "stopped" event until after the current
request completes. This was needed to handle async continues like
"finish &".
However, I noticed that sometimes DAP tests can fail, because a stop
event does arrive before the response to the "stepOut" request. I've
only noticed this when the machine is fairly loaded -- for instance
when I'm regression-testing a series, it may occur in some of the
tests mid-series.
I believe the problem is that the implementation in the "request"
function is incorrect -- the flag is set when "request" is invoked,
but instead it must be deferred until the request itself is run. That
is, the setting must be captured in one of the wrapper functions.
Following up on this, Simon pointed out that introducing a delay
before sending a request's response will cause test case failures.
That is, there's a race here that is normally hidden.
Investigation showed that that deferred requests can't force event
deferral. This patch implements this; but more testing showed many
more race failures. Some of these are due to how the test suite is
written.
Anyway, in the end I took the radical approach of deferring all events
by default. Most DAP requests are asynchronous by nature, so this
seemed ok. The only case I found that really required this is
pause.exp, where the test (rightly) expects to see a 'continued' event
while performing an inferior function call.
I went through all events and all requests and tried to convince
myself that this patch will cause acceptable behavior in every case.
However, it's hard to be completely sure about this approach. Maybe
there are cases that do still need an event before the response, but
we just don't have tests for them.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32685
Acked-By: Simon Marchi <simon.marchi@efficios.com>
|
|
At commit 34b0776fd73^, flake8 reports the following F405 warnings:
...
$ pre-commit run flake8 --file gdb/python/lib/gdb/__init__.py
flake8...................................................................Failed
- hook id: flake8
- exit code: 1
F405 'flush' may be undefined, or defined from star imports: _gdb
F405 'write' may be undefined, or defined from star imports: _gdb
F405 'STDOUT' may be undefined, or defined from star imports: _gdb
F405 'STDERR' may be undefined, or defined from star imports: _gdb
...
F405 'selected_inferior' may be undefined, or defined from star imports: _gdb
F405 'execute' may be undefined, or defined from star imports: _gdb
F405 'parameter' may be undefined, or defined from star imports: _gdb
...
The F405s are addressed by commit 34b0776fd73 ('Suppress some "undefined"
warnings from flake8').
The problem indicated by the first F405 is that the use of flush here:
...
class _GdbFile(object):
...
def flush(self):
flush(stream=self.stream)
...
cannot be verified by flake8. It concludes that either flush is undefined,
or it is defined by this "star import":
...
from _gdb import * # noqa: F401,F403
...
In this particular case, indeed flush is defined by the star import.
This can be addressed by simply adding:
...
flush(stream=self.stream) # noqa: F405
...
but that only has an effect for flake8, so other analyzers may report the same
problem.
The commit 34b0776fd73 addresses it instead by adding an "import _gdb" and
adding a "_gdb." prefix:
...
_gdb.flush(stream=self.stream)
...
This introduces a second way to specify _gdb names, but the first one still
remains, and occasionally someone will use the first one, which then requires
fixing once flake8 is run [1].
While this works to silence the warnings, there is a problem: if a developer
makes a typo:
...
_gdb.flash(stream=self.stream)
...
this is not detected by flake8.
This matters because although the python import already complains:
...
$ gdb -q -batch -ex "python import gdb"
Exception ignored in: <gdb._GdbFile object at 0x7f6186d4d7f0>
Traceback (most recent call last):
File "__init__.py", line 63, in flush
_gdb.flash(stream=self.stream)
AttributeError: module '_gdb' has no attribute 'flash'
...
that doesn't trigger if the code is hidden behind some control flow:
...
if _var_mostly_false:
flash(stream=self.stream)
...
Instead, fix the F405s by reverting commit 34b0776fd73 and adding a second
import of _gdb alongside the star import which lists the names used locally:
...
from _gdb import * # noqa: F401,F403
+from _gdb import (
+ STDERR,
+ STDOUT,
+ Command,
+ execute,
+ flush,
+ parameter,
+ selected_inferior,
+ write,
+)
...
This gives the following warnings for the flash typo:
...
31:1: F401 '_gdb.flush' imported but unused
70:5: F811 redefinition of unused 'flush' from line 31
71:9: F405 'flash' may be undefined, or defined from star imports: _gdb
...
The benefits of this approach compared to the previous one are that:
- the typo is noticed, and
- when using a new name, the F405 fix needs to be done once (by adding it to
the explicit import list), while previously the fix had to be applied to
each use (by adding the "_gdb." prefix).
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
[1] Commit 475799b692e ("Fix some pre-commit nits in gdb/__init__.py")
|
|
I believe we previously agreed that the minimum supported Python
version should be 3.4. This patch makes this change, harmonizing the
documentation (which was inconsistent about the minimum version) and
the code.
New in v2: rebased, and removed a pre-3.4 workaround from __init__.py.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-by: Kevin Buettner <kevinb@redhat.com>
Acked-By: Tom de Vries <tdevries@suse.de>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31870
|
|
When a DAP completion request receives an empty string to complete,
the script crashes while trying to access element -1 of the list
returned by `text.splitlines()` (which evaluates to an empty list
for `text == ""`).
This patch adds a simple check: if `text` is empty, the
transformations are skipped and the correct result is assigned
directly.
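The underlying Python behaviour, for reference:
  >>> "".splitlines()
  []
  >>> "".splitlines()[-1]
  IndexError: list index out of range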
Approved-By: Tom Tromey <tom@tromey.com>
|
|
It conflicts with the ldirname function that will be added in the next
libiberty sync.
|
|
I noticed that pre-commit has some complaints (flake8 and codespell)
about gdb/__init__.py. This patch fixes these.
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
This commit adds a new gdb.ParameterPrefix class to GDB's Python API.
When creating multiple gdb.Parameters, it is often desirable to group
these together under a sub-command, for example, 'set print' has lots
of parameters nested under it, like 'set print address', and 'set
print symbol'. In the Python API the 'print' part of these commands
are called prefix commands, and are created using gdb.Command objects.
However, as parameters are set via the 'set ....' command list, and
shown through the 'show ....' command list, creating a prefix for a
parameter usually requires two prefix commands to be created, one for
the 'set' command, and one for the 'show' command.
This often leads to some duplication, or at the very least, each user
will end up creating their own helper class to simplify creation of
the two prefix commands.
This commit adds a new gdb.ParameterPrefix class. Creating a single
instance of this class will create both the 'set' and 'show' prefix
commands, which can then be used while creating the gdb.Parameter.
Here is an example of it in use:
gdb.ParameterPrefix('my-prefix', gdb.COMMAND_NONE)
This adds 'set my-prefix' and 'show my-prefix', both of which are
prefix commands. The user can then add gdb.Parameter objects under
these prefixes.
The gdb.ParameterPrefix initialise method also supports documentation
strings, so we can write:
gdb.ParameterPrefix('my-prefix', gdb.COMMAND_NONE,
"Configuration setting relating to my special extension.")
which will set the documentation string for the prefix command.
Also, it is possible to support prefix commands that use the `invoke`
functionality to handle unknown sub-commands. This is done by
sub-classing gdb.ParameterPrefix and overriding either 'invoke_set' or
'invoke_show' to handle the 'set' or 'show' prefix command
respectively.
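For example (a sketch using the documented invoke_set hook):
  class MyPrefix(gdb.ParameterPrefix):
      def invoke_set(self, args, from_tty):
          # Called for 'set my-prefix <unknown sub-command>'.
          print('unknown setting: %s' % args)

  MyPrefix('my-prefix', gdb.COMMAND_NONE)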
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
I was recently attempting to create some parameters via the Python
API. I wanted these parameters to appear similar to how GDB handles
the existing 'style' parameters.
Specifically, I was interested in this behaviour:
(gdb) help show style filename foreground
Show the foreground color for this property.
(gdb) help set style filename foreground
Set the foreground color for this property.
(gdb)
Notice how each 'help' command only gets a single line of output.
I tried to reproduce this behaviour via the Python API and was unable.
The problem is that, in order to get just a single line of output like
this, the style parameters are registered with a call to
add_setshow_color_cmd with the 'help_doc' being passed as nullptr.
On the Python side, when parameters are created, the 'help_doc' is
obtained with a call to get_doc_string (python/py-param.c). This
function either returns the __doc__ string, or a default string: "This
command is not documented.".
To avoid returning the default we could try setting __doc__ to an
empty string, but setting this field to any string means that GDB
prints a line for that string, like this:
class test_param(gdb.Parameter):
__doc__ = ""
def __init__(self, name):
super ().__init__(name, gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
self.value = True
test_param('print test')
Then in GDB:
(gdb) help set print test
Set the current value of 'print test'.
(gdb)
The blank line is the problem I'd like to solve.
This commit makes a couple of changes to how parameter doc strings are
handled.
If the doc string is set to an empty string, then GDB now converts
this to nullptr, which removes the blank line problem, the new
behaviour in GDB (for the above `test_param`) is:
(gdb) help set print test
Set the current value of 'print test'.
(gdb)
Next, I noticed that if the set/show docs are set to empty strings,
then the results are less than ideal:
class test_param(gdb.Parameter):
set_doc = ""
def __init__(self, name):
super ().__init__(name, gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
self.value = True
test_param('print test')
And in GDB:
(gdb) help set print test
This command is not documented.
(gdb)
So, if the set/show docs are the empty string, GDB now forces these to
be the default string instead, the new behaviour in GDB is:
(gdb) help set print test
Set the current value of 'print test'.
This command is not documented.
(gdb)
I've added some additional asserts; the set/show docs should always be
non-empty strings, which I believe is the case after this commit. And
the 'doc' string returned from get_doc_string should never be nullptr,
but could be empty.
There are new tests to cover all these changes.
|
|
The manual for gdb.Parameter says:
If NAME consists of multiple words, and no prefix parameter group
can be found, an exception is raised.
This makes sense; we cannot create a parameter within a prefix group,
if the prefix doesn't exist. And this almost works, so:
(gdb) python gdb.Parameter("xxx foo", gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
Python Exception <class 'RuntimeError'>: Could not find command prefix xxx.
Error occurred in Python: Could not find command prefix xxx.
The prefix 'xxx' doesn't exist, and we get an error. But, if we try
multiple levels of prefix:
(gdb) python gdb.Parameter("print xxx foo", gdb.COMMAND_NONE, gdb.PARAM_BOOLEAN)
This completes without error, however, we didn't get what we were
maybe expecting:
(gdb) show print xxx foo
Undefined show print command: "xxx foo". Try "help show print".
But we did get:
(gdb) show print foo
The current value of 'print foo' is "off".
GDB stopped scanning the prefix string at the unknown 'xxx', and just
created the parameter there. I don't think this makes sense, nor is
it in line with the manual.
An identical problem exists with gdb.Command creation; GDB stops
parsing the prefix at the first unknown prefix, and just creates the
command there. The manual for gdb.Command says:
NAME is the name of the command. If NAME consists of multiple
words, then the initial words are looked for as prefix commands.
In this case, if one of the prefix commands does not exist, an
exception is raised.
So again, the correct action is, I believe, to raise an exception.
The problem is in gdbpy_parse_command_name (python/py-cmd.c), GDB
calls lookup_cmd_1 to look through the prefix string and return the
last prefix group. If the very first prefix word is invalid then
lookup_cmd_1 returns NULL, and this case is handled. However, if
there is a valid prefix, followed by an invalid prefix, then
lookup_cmd_1 will return a pointer to the last valid prefix list, and
will update the input argument to point to the start of the invalid
prefix word. This final case, where the input is left pointing to an
unknown prefix, was previously not handled.
I've fixed gdbpy_parse_command_name, and added tests for command and
parameter creation to cover this case.
The exact same error is present in the guile API too. The guile
documentation for make-parameter and make-command says the same things
about unknown prefixes resulting in an exception, but the same error
is present in gdbscm_parse_command_name (guile/scm-cmd.c), so I've
fixed that too, and added some tests.
|
|
The documentation for the Source interface says
* The path of the source to be shown in the UI.
* It is only used to locate and load the content of the source if no
* `sourceReference` is specified (or its value is 0).
but the code used `path` first. I fixed it to use `sourceReference` first.
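A sketch of the corrected precedence (simplified; field names follow
the DAP Source type, and lookup_source_reference is a hypothetical
helper):
  ref = source.get('sourceReference', 0)
  if ref != 0:
      contents = lookup_source_reference(ref)  # hypothetical helper
  else:
      with open(source['path']) as f:
          contents = f.read()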
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that the gdb.Color.escape_sequence() method would produce an
escape sequence even when styling is disabled.
I think this is the wrong choice. Ideally, when styling is
disabled (e.g. with 'set style enabled off'), GDB should not be
producing styled output.
If a GDB extension is using gdb.Color to apply styling to the output,
then currently, the extension should be checking 'show style enabled'
any time Color.escape_sequence() is used. This means lots of code
duplication, and the possibility that some locations will be missed,
which means disabling styling no longer does what it says.
I propose that Color.escape_sequence() should return the empty string
if styling is disabled. A Python extension can then do:
python
c_none = gdb.Color('none')
c_red = gdb.Color('red')
print(c_red.escape_sequence(True)
+ "Text in red."
+ c_none.escape_sequence(True))
end
If styling is enabled this will print some red text. And if styling is
disabled, then it will print text in the terminal's default color.
I have applied the same fix to the guile API.
I have extended the tests to cover this case.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Using the trigger patch described in the previous commit, I get:
...
$ gdb
(gdb) <q>error detected on stdin
Fatal signal: Segmentation fault
----- Backtrace -----
0x64c7b3 gdb_internal_backtrace_1
/data/vries/gdb/src/gdb/bt-utils.c:127
0x64c937 _Z22gdb_internal_backtracev
/data/vries/gdb/src/gdb/bt-utils.c:196
0x94db83 handle_fatal_signal
/data/vries/gdb/src/gdb/event-top.c:1021
0x94dd48 handle_sigsegv
/data/vries/gdb/src/gdb/event-top.c:1098
0x7f372be578ff ???
0x10b7c0a _Z9gdb_flushP7ui_file
/data/vries/gdb/src/gdb/utils.c:1527
0xd4b938 gdbpy_flush
/data/vries/gdb/src/gdb/python/python.c:1624
0x7f372d73b276 _PyCFunction_FastCallDict
Objects/methodobject.c:231
0x7f372d73b276 _PyCFunction_FastCallKeywords
Objects/methodobject.c:294
0x7f372d794a09 call_function
Python/ceval.c:4851
0x7f372d78e838 _PyEval_EvalFrameDefault
Python/ceval.c:3351
0x7f372d796e6e PyEval_EvalFrameEx
Python/ceval.c:754
0x7f372d796e6e _PyFunction_FastCall
Python/ceval.c:4933
0x7f372d796e6e _PyFunction_FastCallDict
Python/ceval.c:5035
0x7f372d6fefc8 _PyObject_FastCallDict
Objects/abstract.c:2310
0x7f372d6fefc8 _PyObject_Call_Prepend
Objects/abstract.c:2373
0x7f372d6fe162 _PyObject_FastCallDict
Objects/abstract.c:2331
0x7f372d700705 callmethod
Objects/abstract.c:2583
0x7f372d700705 _PyObject_CallMethodId
Objects/abstract.c:2640
0x7f372d812a41 flush_std_files
Python/pylifecycle.c:699
0x7f372d81281d Py_FinalizeEx
Python/pylifecycle.c:768
0xd4d49b finalize_python
/data/vries/gdb/src/gdb/python/python.c:2308
0x9587eb _Z17ext_lang_shutdownv
/data/vries/gdb/src/gdb/extension.c:330
0xfd98df _Z10quit_forcePii
/data/vries/gdb/src/gdb/top.c:1817
0x6b3080 _Z12quit_commandPKci
/data/vries/gdb/src/gdb/cli/cli-cmds.c:483
0x1056577 stdin_event_handler
/data/vries/gdb/src/gdb/ui.c:131
0x1986970 handle_file_event
/data/vries/gdb/src/gdbsupport/event-loop.cc:551
0x1986f4b gdb_wait_for_event
/data/vries/gdb/src/gdbsupport/event-loop.cc:672
0x1985e0c _Z16gdb_do_one_eventi
/data/vries/gdb/src/gdbsupport/event-loop.cc:263
0xb66f2e start_event_loop
/data/vries/gdb/src/gdb/main.c:402
0xb670ba captured_command_loop
/data/vries/gdb/src/gdb/main.c:466
0xb68b9b captured_main
/data/vries/gdb/src/gdb/main.c:1344
0xb68c44 _Z8gdb_mainP18captured_main_args
/data/vries/gdb/src/gdb/main.c:1363
0x41a3b1 main
/data/vries/gdb/src/gdb/gdb.c:38
---------------------
A fatal error internal to GDB has been detected, further
debugging is not possible. GDB will now terminate.
This is a bug, please report it. For instructions, see:
<https://www.gnu.org/software/gdb/bugs/>.
Segmentation fault (core dumped)
$ q
...
Fix this in gdbpy_flush by checking for nullptr gdb_stdout/gdb_stderr (and
likewise in ioscm_flush) such that we get instead:
...
$ gdb
(gdb) <q>error detected on stdin
$ q
...
Tested on x86_64-linux.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
flake8 7.2.0 appears to have this new warning:
F824: global name / nonlocal name is unused: name is never assigned in scope
It points out a few places in our code base where "global" is not
necessary, fix them.
Change-Id: Ia6fb08686977559726fefe2a5bb95d8dcb298bb0
Approved-By: Tom Tromey <tom@tromey.com>
|
|
GDB's Python documentation does make it clear that keywords arguments
are supported for functions that take 2 or more arguments. The
documentation makes no promise for keyword argument support on
functions that only take a single argument.
That said, I'm a fan of keyword arguments, I think they help document
the code, and make intentions clearer, even for single argument
functions.
As I'm changing gdb.Color anyway (see previous commit), I'd like to
add keyword argument support to gdb.Color.escape_sequence, even though
this is a single argument method. This should be harmless for anyone
who doesn't want to use keywords, but adds the option for those of us
that do.
I've also removed a redundant check that the 'self' argument was a
gdb.Color object; Python already ensures this is the case.
And I have folded the check that the single argument is a bool into
the gdb_PyArg_ParseTupleAndKeywords call; this means that the error
message will include the incorrect type name now, which should make
debugging issues easier.
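For example (assuming the documented parameter name is_foreground):
  red = gdb.Color('red')
  red.escape_sequence(is_foreground=True)  # keyword form, new
  red.escape_sequence(True)                # positional form still works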
Tests have been extended to cover both cases -- it appears the
incorrect argument type error was not previously tested, so it is
now.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
GDB's Python API documentation is clear:
Functions and methods which have two or more optional arguments allow
them to be specified using keyword syntax.
The gdb.Color.__init__ method matches this description, but doesn't
support keyword arguments.
This commit fixes this by adding keyword argument support.
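For example (assuming the documented parameter names value and
color_space):
  c = gdb.Color(value='red', color_space=gdb.COLORSPACE_ANSI_8COLOR)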
There's a new test to cover this functionality.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I've been reviewing all uses of PyObject_IsInstance, and I believe
that the use of PyObject_IsInstance in py-unwind.c is not entirely
correct. The use of PyObject_IsInstance is in this code in
frame_unwind_python::sniff:
if (PyObject_IsInstance (pyo_unwind_info,
(PyObject *) &unwind_info_object_type) <= 0)
error (_("A Unwinder should return gdb.UnwindInfo instance."));
The problem is that PyObject_IsInstance can return -1 to indicate an
error, in which case a Python error will have been set. Now, the
above code appears to handle this case, it checks for '<= 0', however,
frame_unwind_python::sniff has this near the start:
gdbpy_enter enter_py (gdbarch);
And looking in python.c at 'gdbpy_enter::~gdbpy_enter ()', you'll
notice that if an error is set then the error is printed, but also, we
get a warning about an unhandled Python exception. Clearly, all
exceptions should have been handled by the time the gdbpy_enter
destructor is called.
I've added a test as part of this commit that exposes this problem,
the current output is:
(gdb) backtrace
Python Exception <class 'RuntimeError'>: error in Blah.__class__
warning: internal error: Unhandled Python exception
Python Exception <class 'gdb.error'>: A Unwinder should return gdb.UnwindInfo instance.
#0 corrupt_frame_inner () at /home/andrew/projects/binutils-gdb/build.dev-g/gdb/testsuite/../../../src.dev-g/gdb/test>
(gdb)
An additional observation is that we use PyObject_IsInstance to check
that the return value is a gdb.UnwindInfo, or a sub-class. However,
gdb.UnwindInfo lacks the Py_TPFLAGS_BASETYPE flag, and so cannot be
sub-classed. As such, PyObject_IsInstance is not really needed; we
could use PyObject_TypeCheck instead. The PyObject_TypeCheck function
only returns 0 or 1; there is no -1 error case. Switching to
PyObject_TypeCheck then fixes the above problem.
There's a new test that exposes the problems that originally existed.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In python/py-registers.c we make use of PyObject_IsInstance. The
PyObject_IsInstance can return -1 for an error, 0 for false, or 1 for
true.
In py-registers.c we treat the return value from PyObject_IsInstance
as a boolean, which means both -1 and 1 will be treated as true.
If PyObject_IsInstance returns -1 for an error, this will be treated
as true, we will then invoke undefined behaviour as the pyo_reg_id
object will be treated as a gdb.RegisterDescriptor, even though it
might not be.
I noticed that the gdb.RegisterDescriptor class does not have the
Py_TPFLAGS_BASETYPE flag, and therefore cannot be inherited from. As
such, using PyObject_IsInstance is not necessary, we can use
PyObject_TypeCheck instead. The PyObject_TypeCheck function only
returns 0 or 1, so we don't need to worry about the error case.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The gdbpy_is_color function uses PyObject_IsInstance, and converts the
return from PyObject_IsInstance to a bool.
Unfortunately, PyObject_IsInstance can return -1, 0, or 1, for error,
failure, or success respectively. When converting to a bool both -1
and 1 will convert to true.
Additionally, when PyObject_IsInstance returns -1 an error will be
set.
What this means is that, if gdbpy_is_color is called with a non
gdb.Color object, and the PyObject_IsInstance check raises an error,
then (a) GDB will continue as if the object is a gdb.Color object,
which is likely going to invoke undefined behaviour, see
gdbpy_get_color for example, and (b) when GDB eventually returns to
the Python interpreter, due to an error being set, we'll see:
Python Exception <class 'SystemError'>: PyEval_EvalFrameEx returned a result with an error set
Error occurred in Python: PyEval_EvalFrameEx returned a result with an error set
However, after the previous commit, gdb.Color can no longer be
sub-classed; this means that fixing the above problems is easy: we can
replace the PyObject_IsInstance check with a PyObject_TypeCheck; the
PyObject_TypeCheck function only returns 0 or 1, so there's no -1 error
case.
It's also worth noting that PyObject_TypeCheck is the function that is
more commonly used within GDB's Python API implementation; including
the py-color.c use, there were only 4 PyObject_IsInstance uses. Of the
remaining 3, 2 are fine, and one other (in py-disasm.c) is also
wrong. I'll address that in a separate patch.
There's also a new test included which exposes the above issue.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Remove the Py_TPFLAGS_BASETYPE flag from the gdb.Color type. This
effectively makes gdb.Color final; users can no longer create classes
that inherit from gdb.Color.
Right now I cannot think of any cases where inheritance would be
needed over composition for a simple type like gdb.Color. If I'm
wrong, then it's easy to add Py_TPFLAGS_BASETYPE back in later, this
would be an extension of the API. But it's much harder to remove the
flag later as that might break existing user code (note: there has
been no release of GDB yet that includes the gdb.Color type).
Introducing this restriction makes the next commit easier.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The PyObject_IsInstance function can return -1 for errors, 0 to
indicate false, and 1 to indicate true.
I noticed in python/py-disasm.c that we treat the result of
PyObject_IsInstance as a bool. This means that if PyObject_IsInstance
returns -1, then this will be treated as true. The consequence of
this is that we will invoke undefined behaviour by treating the result
from the _print_insn call as if it was a DisassemblerResult object,
even though PyObject_IsInstance raised an error, and the result might
not be of the required type.
I could fix this by taking the -1 result into account; however,
gdb.DisassemblerResult cannot be sub-classed, as the type doesn't have
the Py_TPFLAGS_BASETYPE flag. As such, we can switch to using
PyObject_TypeCheck instead, which only returns 0 or 1, with no error
case.
I have also taken the opportunity to improve the error message emitted
if the result has the wrong type. Better error messages make debugging
issues easier.
I've added a test which exposes the problem when using
PyObject_IsInstance, and I've updated the existing test for the
improved error message.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
A few minor GNU/GDB coding style issues in py-color.c:
- Space after '&' reference operator in one place.
- Some excessive indentation on a couple of lines.
- Spaces after '!' logical negation operator.
- Using a pointer as a bool in a couple of places.
There should be no functional changes after this commit.
|
|
Spotted a stray white space at the end of an error message. Removed,
and updated the py-breakpoint.exp test to check this case.
|