|
After this mailing list posting:
https://sourceware.org/pipermail/gdb-patches/2023-February/196607.html
it seems to me that in practice an Ada task maps 1:1 with a GDB
thread, and so it doesn't really make sense to allow users to give both
a thread and a task within a single breakpoint or watchpoint
condition.
This commit updates GDB so that the user will get an error if both
are specified.
I've added new tests to cover the CLI as well as the Python and Guile
APIs. For the Python and Guile testing, as far as I can tell, this
was the first testing for this corner of the APIs, so I ended up
adding more than just a single test.
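As an illustration, the Python-level behaviour can be sketched like
this (gdb.Breakpoint's 'thread' and 'task' attributes are existing
API; the exact error message is an assumption):
  import gdb

  bp = gdb.Breakpoint("foo.adb:10")   # 'foo.adb' is a hypothetical file
  bp.thread = 1                       # restrict to GDB thread 1
  try:
      bp.task = 1                     # now an error: a thread was already given
  except gdb.error as e:
      print(e)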
For documentation I've added a NEWS entry, but I've not added anything
to the docs themselves. Currently we document the commands with a
thread-id or task-id as distinct commands, e.g.:
'break LOCSPEC task TASKNO'
'break LOCSPEC task TASKNO if ...'
'break LOCSPEC thread THREAD-ID'
'break LOCSPEC thread THREAD-ID if ...'
As such, I don't believe there is any indication that combining 'task'
and 'thread' would be expected to work; it seems clear to me in the
above that those four options are all distinct commands.
I think the NEWS entry is enough that if someone is combining these
keywords (it's not clear what the expected behaviour would be in this
case) then they can figure out that this was a deliberate change in
GDB, but for a new user, the manual doesn't suggest combining them is
OK, and any future attempt to combine them will give an error.
Approved-By: Pedro Alves <pedro@palves.net>
|
|
Python functions implementing DAP requests should not use positional
parameters -- it only makes sense to call them with keyword arguments.
This patch changes the few remaining cases to start with the special
"*" parameter, following this rule.
|
|
This patch implements a simplication that I suggested here:
https://sourceware.org/pipermail/gdb-patches/2022-March/186320.html
Currently, the interp::exec virtual method interface is such that
subclass implementations must catch exceptions and then return them
via normal function return.
However, higher up in the chain, for the CLI we get to
interpreter_exec_cmd, which does:
  for (i = 1; i < nrules; i++)
    {
      struct gdb_exception e = interp_exec (interp_to_use, prules[i]);
      if (e.reason < 0)
        {
          interp_set (old_interp, 0);
          error (_("error in command: \"%s\"."), prules[i]);
        }
    }
and for MI we get to mi_cmd_interpreter_exec, which has:
  void
  mi_cmd_interpreter_exec (const char *command, char **argv, int argc)
  {
    ...
    for (i = 1; i < argc; i++)
      {
        struct gdb_exception e = interp_exec (interp_to_use, argv[i]);
        if (e.reason < 0)
          error ("%s", e.what ());
      }
  }
Note that if those errors are reached, we lose the original
exception's error code. I can't see why we'd want that.
And, I can't see why we need to have interp_exec catch the exception
and return it via the normal return path. That's normally needed when
we need to handle propagating exceptions across C code, like across
readline or ncurses, but that's not the case here.
It seems to me that we can simplify things by removing some
try/catch-ing and just letting exceptions propagate normally.
Note, the "error in command" error shown above, which only exists in
the CLI interpreter-exec command, is only ever printed AFAICS if you
run "interpreter-exec console" when the top level interpreter is
already the console/tui. Like:
(gdb) interpreter-exec console "foobar"
Undefined command: "foobar". Try "help".
error in command: "foobar".
You won't see it with MI's "-interpreter-exec console" from a top
level MI interpreter:
(gdb)
-interpreter-exec console "foobar"
&"Undefined command: \"foobar\". Try \"help\".\n"
^error,msg="Undefined command: \"foobar\". Try \"help\"."
(gdb)
nor with MI's "-interpreter-exec mi" from a top level MI interpreter:
(gdb)
-interpreter-exec mi "-foobar"
^error,msg="Undefined MI command: foobar",code="undefined-command"
^done
(gdb)
in both these cases because MI's -interpreter-exec just does:
error ("%s", e.what ());
You won't see it either when running an MI command with the CLI's
"interpreter-exec mi":
(gdb) interpreter-exec mi "-foobar"
^error,msg="Undefined MI command: foobar",code="undefined-command"
(gdb)
This last case is because MI's interp::exec implementation never
returns an error:
  gdb_exception
  mi_interp::exec (const char *command)
  {
    mi_execute_command_wrapper (command);
    return gdb_exception ();
  }
Thus I think that "error in command" error is pretty pointless, and
since it simplifies things to not have it, the patch just removes it.
The patch also ends up addressing an old FIXME.
Change-Id: I5a6432a80496934ac7127594c53bf5221622e393
Approved-By: Tom Tromey <tromey@adacore.com>
Approved-By: Kevin Buettner <kevinb@redhat.com>
|
|
This helps resolve some cyclic include problem later in the series.
The only language-related thing frame.h needs is enum language, and that
is in defs.h.
Doing so reveals that a bunch of files were relying on frame.h to
include language.h, so fix the fallouts here and there.
Change-Id: I178a7efec1953c2d088adb58483bade1f349b705
Reviewed-By: Bruno Larsen <blarsen@redhat.com>
|
|
This commit splits the `set/show print elements' option into two. We
retain `set/show print elements' for controlling how many elements of an
array we print, but a new `set/show print characters' setting is added
which is used for controlling how many characters of a string are
printed.
The motivation behind this change is to allow users a finer level of
control over how data is printed, reflecting that, although strings can
be thought of as arrays of characters, users often want to treat these
two things differently.
For compatibility reasons by default the `set/show print characters'
option is set to `elements', which makes the limit for character strings
follow the setting of the `set/show print elements' option, as it used
to. Using `set print characters' with any other value makes the limit
independent from the `set/show print elements' setting, however it can
be restored to the default with the `set print characters elements'
command at any time.
A corresponding `-characters' option for the `print' command is added,
with the same semantics, i.e. one can use `-characters elements' to make
a given `print' invocation follow the element limit for characters too,
whether that limit comes from a `-elements' option given with the same
invocation or from the `set/show print elements' setting, regardless of
the current setting of the `set/show print characters' option.
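For example, from Python one might drive the new option like this (a
sketch; 'msg' is a hypothetical string variable):
  gdb.execute("print -characters 10 -- msg")        # cap string chars at 10
  gdb.execute("print -characters elements -- msg")  # follow the elements limit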
The GDB changes are all pretty straightforward, just changing references
to the old 'print_max' to use a new `get_print_max_chars' helper which
figures out which of the two values, `print_max' or `print_max_chars',
to use.
Likewise, the documentation is just updated to reference the new setting
where appropriate.
To make people's lives easier, the message shown by `show print elements'
now indicates if the setting also applies to character strings:
(gdb) set print characters elements
(gdb) show print elements
Limit on string chars or array elements to print is 200.
(gdb) set print characters unlimited
(gdb) show print elements
Limit on array elements to print is 200.
(gdb)
and the help text shows the dependency as well:
(gdb) help set print elements
Set limit on array elements to print.
"unlimited" causes there to be no limit.
This setting also applies to string chars when "print characters"
is set to "elements".
(gdb)
In the testsuite there are two minor updates, one to add `-characters'
to the list of completions now shown for the `print' command, and a bare
minimum pair of checks for the right handling of `set print characters'
and `show print characters', copied from the corresponding checks for
`set print elements' and `show print elements' respectively.
Co-Authored-By: Maciej W. Rozycki <macro@embecosm.com>
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
Rather than just `unlimited', allow the integer set commands (or command
options) to define arbitrary keywords for the user to use, removing
hardcoded arrangements for the `unlimited' keyword.
Remove the confusingly named `var_zinteger', `var_zuinteger' and
`var_zuinteger_unlimited' `set'/`show' command variable types redefining
them in terms of `var_uinteger', `var_integer' and `var_pinteger', which
have the ranges [0;UINT_MAX], [INT_MIN;INT_MAX], and [0;INT_MAX]
respectively.
Following existing practice `var_pinteger' allows extra negative values
to be used, however unlike `var_zuinteger_unlimited' any number of such
values can be defined rather than just `-1'.
The "p" in `var_pinteger' stands for "positive", for the lack of a more
appropriate unambiguous letter, even though 0 obviously is not positive;
"n" would be confusing as to whether it stands for "non-negative" or
"negative".
Add a new structure, `literal_def', the entries of which define extra
keywords allowed for a command and numerical values they correspond to.
Those values are not verified against the basic range supported by the
underlying variable type, allowing extra values outside
that range, which may or may not be individually made visible to the
user. An optional value translation is possible with the structure to
follow the existing practice for some commands where user-entered 0 is
internally translated to UINT_MAX or INT_MAX. Such translation can now
be arbitrary. Literals defined by this structure are automatically used
for completion as necessary.
So for example:
  const literal_def integer_unlimited_literals[] =
    {
      { "unlimited", INT_MAX, 0 },
      { nullptr }
    };
defines an extra `unlimited' keyword and a user-visible 0 value, both of
which get translated to INT_MAX when the setting is used.
Similarly:
  const literal_def zuinteger_unlimited_literals[] =
    {
      { "unlimited", -1, -1 },
      { nullptr }
    };
defines the same keyword and a corresponding user-visible -1 value that
is used for the requested setting. If the last member were omitted (or
set to `{}') here, then only the keyword would be accepted from the
user; -1 would still be used internally, but trying to enter it as part
of a command would result in an "integer -1 out of range" error.
Use said error message in all cases (citing the invalid value
requested), replacing the "only -1 is allowed to set as unlimited"
message previously used for `var_zuinteger_unlimited' settings only,
rather than propagating that message to the `var_pinteger' type. The
old message could only cover the specific case where a single extra
`unlimited' keyword stood for -1, and the use of numeric equivalents is
discouraged anyway: it is only for historical reasons that they expose
GDB internals, which are confusingly different across variable types.
Similarly update the "must be >= -1" Guile error message.
Redefine Guile and Python parameter types in terms of the new variable
types and interpret extra keywords as Scheme keywords and Python strings
used to communicate corresponding parameter values. Do not add a new
PARAM_INTEGER Guile parameter type, however do handle the `var_integer'
variable type now, permitting existing parameters defined by GDB proper,
such as `listsize', to be accessed from Scheme code.
With these changes in place it should be trivial for a Scheme or Python
programmer to expand the syntax of the `make-parameter' command and the
`gdb.Parameter' class initializer to have arbitrary extra literals along
with their internal representation supplied.
Update the testsuite accordingly.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
On openSUSE Leap 15.4 with python 3.6, the gdb.dap/basic-dap.exp test-case
fails as follows:
...
ERROR: eof reading json header
while executing
"error "eof reading json header""
invoked from within
"expect {
-i exp19 -timeout 10
-re "^Content-Length: (\[0-9\]+)\r\n" {
set length $expect_out(1,string)
exp_continue
}
-re "^(\[^\r\n\]+)..."
("uplevel" body line 1)
invoked from within
"uplevel $body" NONE eof reading json header
UNRESOLVED: gdb.dap/basic-dap.exp: startup - initialize
...
Investigation using a "catch throw" shows that:
...
(gdb)
at gdb/python/py-utils.c:396
396 error (_("Error occurred in Python: %s"), msg.get ());
(gdb) p msg.get ()
$1 = 0x2b91d10 "module 'queue' has no attribute 'SimpleQueue'"
...
The python class queue.SimpleQueue was introduced in python 3.7.
Fix this by falling back to queue.Queue for python <= 3.6.
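A sketch of such a fallback (the actual patch may structure this
differently):
  import sys
  import queue

  if sys.version_info >= (3, 7):
      _events = queue.SimpleQueue()
  else:
      # queue.SimpleQueue is new in Python 3.7; queue.Queue behaves the
      # same for this purpose, just with extra locking.
      _events = queue.Queue()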
Tested on x86_64-linux, by successfully running the test-case:
...
# of expected passes 47
...
|
|
PyObject_CallNoArgs was introduced in Python 3.9, so avoid it in favor
of PyObject_CallObject.
|
|
The Debug Adapter Protocol is a JSON-RPC protocol that IDEs can use
to communicate with debuggers. You can find more information here:
https://microsoft.github.io/debug-adapter-protocol/
Frequently this is implemented as a shim, but it seemed to me that GDB
could implement it directly, via the Python API. This patch is the
initial implementation.
DAP is implemented as a new "interp". This is slightly weird, because
it doesn't act like an ordinary interpreter -- for example it doesn't
implement a command syntax, and doesn't use GDB's ordinary event loop.
However, this seemed like the best approach overall.
To run GDB in this mode, use:
gdb -i=dap
The DAP code will accept JSON-RPC messages on stdin and print
responses to stdout. GDB redirects the inferior's stdout to a new
pipe so that output can be encapsulated by the protocol.
The Python code uses multiple threads to do its work. Separate
threads are used for reading JSON from the client and for writing JSON
to the client. All GDB work is done in the main thread. (The first
implementation used asyncio, but this had some limitations, and so I
rewrote it to use threads instead.)
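The reader side of that structure can be sketched as follows (names
are invented; only gdb.post_event and the Content-Length framing are
as described here, and the payload is treated as text for simplicity):
  import json
  import sys
  import threading

  import gdb

  def read_message(stream):
      # DAP framing: "Content-Length: N" header(s), a blank line, then
      # N bytes of JSON.
      length = 0
      while True:
          line = stream.readline().strip()
          if not line:
              break
          key, _, value = line.partition(":")
          if key.lower() == "content-length":
              length = int(value)
      return json.loads(stream.read(length))

  def reader_loop(handle_request):
      while True:
          request = read_message(sys.stdin)
          # All real GDB work must happen on the main thread.
          gdb.post_event(lambda req=request: handle_request(req))

  threading.Thread(target=reader_loop, args=(print,), daemon=True).start()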
This is not a complete implementation of the protocol, but it does
implement enough to demonstrate that the overall approach works.
There is a rudimentary test suite. It uses a JSON parser written in
pure Tcl. This parser is under the same license as Tcl itself, so I
felt it was acceptable to simply import it into the tree.
There is also a bit of documentation -- just documenting the new
interpreter name.
|
|
This commit is the result of running the gdb/copyright.py script,
which updates the copyright year range in all source files managed by
the GDB project to include the year 2023.
|
|
[ Partial resubmission of an earlier submission by Andrew (
https://sourceware.org/pipermail/gdb-patches/2012-September/096347.html ), so
listing him as co-author. ]
With x86_64-linux and target board unix/-m32, we have:
...
(gdb) continue^M
Continuing.^M
Exception #10^M
^M
Breakpoint 3, throw_exception_1 (e=10) at py-finish-breakpoint2.cc:23^M
23 throw new int (e);^M
(gdb) FAIL: gdb.python/py-finish-breakpoint2.exp: \
check FinishBreakpoint in catch()
...
The following scenario happens:
- set breakpoint in throw_exception_1, a function that throws an exception
- continue
- hit breakpoint, with call stack main.c:38 -> throw_exception_1
- set a finish breakpoint
- continue
- hit the breakpoint again, with call stack main.c:48 -> throw_exception
-> throw_exception_1
Due to the exception, the function call did not properly terminate, and the
finish breakpoint didn't trigger. This is expected behaviour.
However, the intention is that gdb detects this situation at the next stop
and calls the out_of_scope callback, which would result here in this test-case
in a rather confusing "exception did not finish" message. So the problem is
that this message doesn't show up, in other words, the out_of_scope callback
is not called.
[ Note that the fact that the situation is detected only at the next stop
(wherever that happens to be) could be improved upon, and the earlier
submission did that by setting a longjmp breakpoint. But I'm considering this
problem out-of-scope for this patch. ]
Note that the message does show up later, at thread exit:
...
[Inferior 1 (process 20046) exited with code 0236]^M
exception did not finish ...^M
...
The decision on whether to call the out_of_scope call back is taken in
bpfinishpy_detect_out_scope_cb, and the interesting bit is here:
...
if (b->pspace == current_inferior ()->pspace
&& (!target_has_registers ()
|| frame_find_by_id (b->frame_id) == NULL))
bpfinishpy_out_of_scope (finish_bp);
...
In the case of the thread exit, the callback triggers because
target_has_registers () == 0.
So why doesn't the callback trigger in the case of the breakpoint?
Well, the b->frame_id is the frame_id of the frame of main (the frame
in which the finish breakpoint is supposed to trigger), so AFAIU
frame_find_by_id (b->frame_id) == NULL will only be true once we've
left main, at which point I guess we don't stop till thread exit.
Fix this by saving the frame in which the finish breakpoint was created, and
using frame_find_by_id () == NULL on that frame instead, such that we have:
...
(gdb) continue^M
Continuing.^M
Exception #10^M
^M
Breakpoint 3, throw_exception_1 (e=10) at py-finish-breakpoint2.cc:23^M
23 throw new int (e);^M
exception did not finish ...^M
(gdb) FAIL: gdb.python/py-finish-breakpoint2.exp: \
check FinishBreakpoint in catch()
...
Still, the test-case is failing because it's setup to match the behaviour that
we get on x86_64-linux with target board unix/-m64:
...
(gdb) continue^M
Continuing.^M
Exception #10^M
stopped at ExceptionFinishBreakpoint^M
(gdb) PASS: gdb.python/py-finish-breakpoint2.exp: \
check FinishBreakpoint in catch()
...
So what happens here? Again, due to the exception, the function call did not
properly terminate, but the finish breakpoint still triggers. This is somewhat
unexpected. It happens because the frame return address at which the
breakpoint is set also happens to be the first instruction after the
exception has been handled. This is a known problem, filed as
PR29909, so KFAIL it, and modify the test-case to expect the out_of_scope
callback.
Also add a breakpoint after setting the finish breakpoint but before throwing
the exception, to check that we don't call the out_of_scope callback too early.
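For reference, the finish breakpoint used by the test has roughly this
shape (a sketch; the printed messages match the expected output quoted
above):
  import gdb

  class ExceptionFinishBreakpoint(gdb.FinishBreakpoint):
      def __init__(self, frame):
          super().__init__(frame, internal=True)
          self.silent = True

      def stop(self):
          print("stopped at ExceptionFinishBreakpoint")
          return True

      def out_of_scope(self):
          print("exception did not finish ...")

  ExceptionFinishBreakpoint(gdb.newest_frame())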
Tested on x86_64-linux, with target board unix/-m32.
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
PR python/27247
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=27247
|
|
This changes the uses of value_print_options to use 'true' and 'false'
rather than integers.
|
|
[I sent this earlier today, but I don't see it in the archives.
Resending it through a different computer / SMTP.]
The use of the static buffer in command_line_input is becoming
problematic, as explained here [1]. In short, with this patch [2] that
attempt to fix a post-hook bug, when running gdb.base/commands.exp, we
hit a case where we read a "define" command line from a script file
using command_command_line_input. The command line is stored in
command_line_input's static buffer. Inside the define command's
execution, we read the lines inside the define using command_line_input,
which overwrites the define command, in command_line_input's static
buffer. After the execution of the define command, execute_command does
a command look up to see if a post-hook is registered. For that, it
uses a now stale pointer that used to point to the define command, in
the static buffer, causing a use-after-free. Note that the pointer in
execute_command points to the dynamically-allocated buffer held by the
static buffer in command_line_input, not to the static object itself,
hence why we see a use-after-free.
Fix that by removing the static buffer. I initially changed
command_line_input and other related functions to return an std::string,
which is the obvious but naive solution. The thing is that some callees
don't need to return an allocated string, so this is an unnecessary
pessimization. I changed it to passing in a reference to an std::string
buffer, which the callee can use if it needs to return
dynamically-allocated content. It fills the buffer and returns a
pointer to the C string inside. The callees that don't need to return
dynamically-allocated content simply don't use it.
So, it started with modifying command_line_input as described above, all
the other changes derive directly from that.
One slightly shady thing is in handle_line_of_input, where we now pass a
pointer to an std::string's internal buffer to readline's history_value
function, which takes a `char *`. I'm pretty sure that this function
does not modify the input string, because I was able to change it (with
enough massaging) to take a `const char *`.
A subtle change is that we now clear a UI's line buffer using a
SCOPE_EXIT in command_line_handler, after executing the command.
This was previously done by this line in handle_line_of_input:
  /* We have a complete command line now.  Prepare for the next
     command, but leave ownership of memory to the buffer.  */
  cmd_line_buffer->used_size = 0;
I think the new way is clearer.
[1] https://inbox.sourceware.org/gdb-patches/becb8438-81ef-8ad8-cc42-fcbfaea8cddd@simark.ca/
[2] https://inbox.sourceware.org/gdb-patches/20221213112241.621889-1-jan.vrany@labware.com/
Change-Id: I8fc89b1c69870c7fc7ad9c1705724bd493596300
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
In 2014, the function `gdbpy_should_stop' was replaced with
`gdbpy_breakpoint_cond_says_stop'.
This commit replaces `gdbpy_should_stop' with
`gdbpy_breakpoint_cond_says_stop' in the comments, matching the rename
noted in `gdb/ChangeLog-2014':
* python/py-breakpoint.c (gdbpy_breakpoint_cond_says_stop): Renamed
from gdbpy_should_stop. Change result type to enum scr_bp_stop.
Change-Id: I0ef3491ce5e057c5e75ef8b569803b30a5838575
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
I saw that bppy_init used a non-const "char *". Fixing this revealed
that the xstrdup here was also unnecessary, so this patch removes it.
|
|
While working on another patch, Simon pointed out that GDB could be
improved by marking the functions passed to the disassembler as
noexcept.
https://sourceware.org/pipermail/gdb-patches/2022-October/193084.html
The reason this is important is that on some hosts, libopcodes, being C
code, will not be compiled with support for handling exceptions. As
such, an attempt to throw an exception over libopcodes code will cause
GDB to terminate.
See bug gdb/29712 for an example of when this happened.
In this commit all the functions that are passed to the disassembler,
and which might be used as callbacks by libopcodes are marked
noexcept.
Ideally, I would have liked to change these typedefs:
using read_memory_ftype = decltype (disassemble_info::read_memory_func);
using memory_error_ftype = decltype (disassemble_info::memory_error_func);
using print_address_ftype = decltype (disassemble_info::print_address_func);
using fprintf_ftype = decltype (disassemble_info::fprintf_func);
using fprintf_styled_ftype = decltype (disassemble_info::fprintf_styled_func);
which are declared in disasm.h, as including the noexcept keyword.
However, when I tried this, I ran into this warning/error:
In file included from ../../src/gdb/disasm.c:25:
../../src/gdb/disasm.h: In constructor ‘gdb_printing_disassembler::gdb_printing_disassembler(gdbarch*, ui_file*, gdb_disassemble_info::read_memory_ftype, gdb_disassemble_info::memory_error_ftype, gdb_disassemble_info::print_address_ftype)’:
../../src/gdb/disasm.h:116:3: error: mangled name for ‘gdb_printing_disassembler::gdb_printing_disassembler(gdbarch*, ui_file*, gdb_disassemble_info::read_memory_ftype, gdb_disassemble_info::memory_error_ftype, gdb_disassemble_info::print_address_ftype)’ will change in C++17 because the exception specification is part of a function type [-Werror=noexcept-type]
116 | gdb_printing_disassembler (struct gdbarch *gdbarch,
| ^~~~~~~~~~~~~~~~~~~~~~~~~
So I've left that change out. This does mean that if somebody adds a
new use of the disassembler classes in the future, and forgets to mark
the callbacks as noexcept, this will compile fine. We'll just have to
manually check for that during review.
|
|
Bug gdb/29712 identifies a problem with the Python disassembler API.
In some cases GDB will try to throw an exception through the
libopcodes disassembler code, however, not all targets include
exception unwind information when compiling C code, for targets that
don't include this information GDB will terminate when trying to pass
the exception through libopcodes.
To explain what GDB is trying to do, consider the following trivial
use of the Python disassembler API:
  class ExampleDisassembler(gdb.disassembler.Disassembler):

      class MyInfo(gdb.disassembler.DisassembleInfo):
          def __init__(self, info):
              super().__init__(info)

          def read_memory(self, length, offset):
              return super().read_memory(length, offset)

      def __init__(self):
          super().__init__("ExampleDisassembler")

      def __call__(self, info):
          info = self.MyInfo(info)
          return gdb.disassembler.builtin_disassemble(info)
This disassembler doesn't add any value; it defers back to GDB to do
all the actual work, but it serves to allow us to discuss the problem.
The problem occurs when a Python exception is raised by the
MyInfo.read_memory method. The MyInfo.read_memory method is called
from the C++ function gdbpy_disassembler::read_memory_func. The C++
stack at the point this function is called looks like this:
#0 gdbpy_disassembler::read_memory_func (memaddr=4198805, buff=0x7fff9ab9d2a8 "\220ӹ\232\377\177", len=1, info=0x7fff9ab9d558) at ../../src/gdb/python/py-disasm.c:510
#1 0x000000000104ba06 in fetch_data (info=0x7fff9ab9d558, addr=0x7fff9ab9d2a9 "ӹ\232\377\177") at ../../src/opcodes/i386-dis.c:305
#2 0x000000000104badb in ckprefix (ins=0x7fff9ab9d100) at ../../src/opcodes/i386-dis.c:8571
#3 0x000000000104e28e in print_insn (pc=4198805, info=0x7fff9ab9d558, intel_syntax=-1) at ../../src/opcodes/i386-dis.c:9548
#4 0x000000000104f4d4 in print_insn_i386 (pc=4198805, info=0x7fff9ab9d558) at ../../src/opcodes/i386-dis.c:9949
#5 0x00000000004fa7ea in default_print_insn (memaddr=4198805, info=0x7fff9ab9d558) at ../../src/gdb/arch-utils.c:1033
#6 0x000000000094fe5e in i386_print_insn (pc=4198805, info=0x7fff9ab9d558) at ../../src/gdb/i386-tdep.c:4072
#7 0x0000000000503d49 in gdbarch_print_insn (gdbarch=0x5335560, vma=4198805, info=0x7fff9ab9d558) at ../../src/gdb/gdbarch.c:3351
#8 0x0000000000bcc8c6 in disasmpy_builtin_disassemble (self=0x7f2ab07f54d0, args=0x7f2ab0789790, kw=0x0) at ../../src/gdb/python/py-disasm.c:324
### ... snip lots of frames as we pass through Python itself ...
#22 0x0000000000bcd860 in gdbpy_print_insn (gdbarch=0x5335560, memaddr=0x401195, info=0x7fff9ab9e3c8) at ../../src/gdb/python/py-disasm.c:783
#23 0x00000000008995a5 in ext_lang_print_insn (gdbarch=0x5335560, address=0x401195, info=0x7fff9ab9e3c8) at ../../src/gdb/extension.c:939
#24 0x0000000000741aaa in gdb_print_insn_1 (gdbarch=0x5335560, vma=0x401195, info=0x7fff9ab9e3c8) at ../../src/gdb/disasm.c:1078
#25 0x0000000000741bab in gdb_disassembler::print_insn (this=0x7fff9ab9e3c0, memaddr=0x401195, branch_delay_insns=0x0) at ../../src/gdb/disasm.c:1101
So gdbpy_disassembler::read_memory_func is called from the libopcodes
disassembler to read memory, this C++ function then calls into user
supplied Python code to do the work.
If the user supplied Python code raises a gdb.MemoryError exception
indicating the memory read failed, this is fine. The C++ code
converts this exception back into a return value that libopcodes can
understand, and returns to libopcodes.
However, if the user supplied Python code raises some other exception,
what we want is for this exception to propagate through GDB and appear
as if raised by the call to gdb.disassembler.builtin_disassemble. To
achieve this, when gdbpy_disassembler::read_memory_func spots an
unknown Python exception, we must pass the information about this
exception from frame #0 to frame #8 in the above backtrace. Frame #8
is the C++ implementation of gdb.disassembler.builtin_disassemble, and
so it is this function that we want to re-raise the unknown Python
exception, so the user can, if they want, catch the exception in their
code.
The previous mechanism by which the exception was passed was to pack
the details of the Python exception into a C++ exception, then throw
the exception from frame #0, and catch the exception in frame #8,
unpack the details of the Python exception, and re-raise it.
However, this relies on the exception passing through frames #1 to #7,
some of which are in libopcodes, which is C code, and so, might not be
compiled with exception support.
This commit proposes an alternative solution that does not rely on
throwing a C++ exception.
When we spot an unhandled Python exception in frame #0, we will store
the details of this exception within the gdbpy_disassembler object
currently in use. Then we return to libopcodes a value indicating
that the memory_read failed.
libopcodes will now continue to disassemble as though that memory read
failed (with one special case described below), then, when we
eventually return to disasmpy_builtin_disassemble we check to see if
there is an exception stored in the gdbpy_disassembler object. If
there is then this exception can immediately be installed, and then we
return back to Python, when the user will be able to catch the
exception.
There is one extra change in gdbpy_disassembler::read_memory_func.
After the first call that results in an exception being stored on the
gdbpy_disassembler object, any future calls to the ::read_memory_func
function will immediately return as if the read failed. This avoids
any additional calls into user supplied Python code.
My thinking here is that should the first call fail with some unknown
error, GDB should not keep trying with any additional calls. This
maintains the illusion that the exception raised from
MyInfo.read_memory is immediately raised by
gdb.disassembler.builtin_disassemble. I have no tests for this change
though - to trigger this issue would rely on a libopcodes disassembler
that will try to read further memory even after the first failed
read. I'm not aware of any such disassembler that currently does
this, but that doesn't mean such a disassembler couldn't exist in the
future.
With this change in place the gdb.python/py-disasm.exp test should now
pass on AArch64.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29712
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
Currently, FinishBreakpoints are set at the return address of a frame based on
the `finish' command, and are meant to be temporary breakpoints. However, they
are not being cleaned up after use, as reported in PR python/18655. This was
happening because the disposition of the breakpoint was not being set
correctly.
This commit fixes this issue by correctly setting the disposition in the
post-stop hook of the breakpoint. It also adds a test to ensure this feature
isn't regressed in the future.
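A sketch of what the new test verifies (assuming GDB is stopped inside
some function):
  fb = gdb.FinishBreakpoint()   # temporary, at the caller's resume address
  gdb.execute("continue")       # run until the function returns
  assert not fb.is_valid()      # the breakpoint was cleaned up after use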
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=18655
|
|
The python code maintains a list of threads for each inferior. This
list is implemented as a linked list. When the number of threads grows
high, this implementation can begin to be a performance bottleneck as
finding a particular thread_object in the list has a complexity of O(N).
We see this in ROCgdb[1], a downstream port of GDB for AMDGPU. On
AMDGPU devices, the number of threads can get significantly higher than
on usual GDB workloads.
In some situations, we can reach the end of the inferior process with
GDB still having a substantial list of known threads. While running
target_mourn_inferior, we end up in inferior::clear_thread_list which
iterates over all remaining threads and marks each thread exited. This
fires the gdb::observers::thread_exit observer and eventually
py-inferior.c:set_thread_exited gets called. This function searches in
the linked list with poor performance.
This patch proposes to change the linked list that keeps the per
inferior_object list of thread_objects into a std::unordered_map. This
allows the search operation to be O(1) on average
instead of O(N).
With this patch, we can complete clear_thread_list in about 2.5 seconds
compared to 10 minutes without it.
Except for the performance change, no user visible change is expected.
Regression tested on Ubuntu-22.04 x86_64.
[1] https://github.com/ROCm-Developer-Tools/ROCgdb
|
|
A user noticed that TYPE_CODE_FIXED_POINT was not exported by the gdb
Python layer. This patch fixes the bug, and prevents future
occurrences of this type of bug.
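With the fix, the constant is available from Python like its siblings:
  import gdb
  print(gdb.TYPE_CODE_FIXED_POINT)   # now defined, like gdb.TYPE_CODE_INT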
|
|
Similarly to booleans and following the fix for PR python/29217 make
`gdb.parameter' accept `None' for `unlimited' with parameters of the
PARAM_UINTEGER, PARAM_INTEGER, and PARAM_ZUINTEGER_UNLIMITED types, as
`None' is already returned by parameters of the two former types, so
one might expect to be able to feed it back. It also makes it possible
to avoid the need to know what the internal integer representation is
for the special setting of `unlimited'.
Expand the testsuite accordingly.
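A sketch of the round trip this enables (`height' is just an example of
an affected parameter type):
  gdb.set_parameter("height", None)        # same as 'set height unlimited'
  assert gdb.parameter("height") is None   # None also comes back out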
Approved-By: Simon Marchi <simon.marchi@polymtl.ca>
|
|
In a later commit in this series I will propose removing all of the
explicit gdbpy_initialize_* calls from python.c and replace these
calls with a more generic mechanism.
One of the side effects of this generic mechanism is that the order in
which the various Python sub-systems within GDB are initialized is no
longer guaranteed.
On the whole I don't think this matters, most of the sub-systems are
independent of each other, though testing did reveal a few places
where we did have dependencies, though I don't think those
dependencies were explicitly documented in a comment anywhere.
This commit is similar to the previous one, and fixes the second
dependency issue that I found.
In this case the finish_breakpoint_object_type uses the
breakpoint_object_type as its tp_base, this means that
breakpoint_object_type must have been initialized with a call to
PyType_Ready before finish_breakpoint_object_type can be initialized.
Previously we depended on the ordering of calls to
gdbpy_initialize_breakpoints and gdbpy_initialize_finishbreakpoints in
python.c.
After this commit a new function gdbpy_breakpoint_init_breakpoint_type
exists, this function ensures that breakpoint_object_type has been
initialized, and can be called from any gdbpy_initialize_* function.
I feel that this change makes the dependency explicit, which I think
is a good thing.
There should be no user visible changes after this commit.
|
|
In a later commit in this series I will propose removing all of the
explicit gdbpy_initialize_* calls from python.c and replace these
calls with a more generic mechanism.
One of the side effects of this generic mechanism is that the order in
which the various Python sub-systems within GDB are initialized is no
longer guaranteed.
On the whole I don't think this matters, most of the sub-systems are
independent of each other, though testing did reveal a few places
where we did have dependencies, though I don't think those
dependencies were explicitly documented in a comment anywhere.
This commit removes the first dependency issue, with this and the next
commit, all of the implicit inter-sub-system dependencies will be
replaced by explicit dependencies, which will allow me to, I think,
clean up how the sub-systems are initialized.
The dependency is around the py_insn_type. This type is setup in
gdbpy_initialize_instruction and used in gdbpy_initialize_record.
Rather than depend on the calls to these two functions being in a
particular order, in this commit I propose adding a new function
py_insn_get_insn_type. This function will take care of setting up the
py_insn_type type and calling PyType_Ready. This helper function can
be called from gdbpy_initialize_record and
gdbpy_initialize_instruction, and the py_insn_type will be initialized
just once.
To me this is better: the dependency is now really obvious, but also,
we no longer care in which order gdbpy_initialize_record and
gdbpy_initialize_instruction are called.
There should be no user visible changes after this commit.
|
|
PR python/16324 points out that comparing a frame id to null_frame_id
can never succeed, and proposes simply removing the dead code. That
is what this patch does.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=16324
|
|
The implementation of gdb.lookup_objfile() iterates over all objfiles and
compares their name or build id to the user-provided search string.
This will cause problems when supporting linker namespaces as the first
objfile in any namespace will be found. Instead, use
gdbarch_iterate_over_objfiles_in_search_order to only consider the
namespace of gdb.current_objfile() for the search, which defaults to the
initial namespace when gdb.current_objfile() is None.
|
|
This changes GDB to use frame_info_ptr instead of frame_info *
The substitution was done with multiple sequential `sed` commands:
sed 's/^struct frame_info;/class frame_info_ptr;/'
sed 's/struct frame_info \*/frame_info_ptr /g' - which left some
issues in a few files, which were manually fixed.
sed 's/\<frame_info \*/frame_info_ptr /g'
sed 's/frame_info_ptr $/frame_info_ptr/g' - used to remove whitespace
problems.
The changed files were then manually checked, some 'sed' changes were
undone, and some constructors and getters were added where that made
sense, following what Tromey originally did.
Co-Authored-By: Bruno Larsen <blarsen@redhat.com>
Approved-by: Tom Tromey <tom@tromey.com>
|
|
This replaces frame_id_eq with operator== and operator!=. I wrote
this for a version of this series that I later abandoned; but since it
simplifies the code, I left this patch in.
Approved-by: Tom Tromey <tom@tromey.com>
|
|
This commit was inspired by this stackoverflow post:
https://stackoverflow.com/questions/73491793/why-is-there-a-%C2%B1-in-lea-rax-rip-%C2%B1-0xeb3
One of the comments helpfully links to this Python test case:
  from pygments import formatters, lexers, highlight

  def colorize_disasm(content, gdbarch):
      try:
          lexer = lexers.get_lexer_by_name("asm")
          formatter = formatters.TerminalFormatter()
          return highlight(content, lexer, formatter).rstrip().encode()
      except:
          return None

  print(colorize_disasm("lea [rip+0x211] # COMMENT", None).decode())
Run the test case and you should see that the '+' character is
underlined, and could be confused with a combined +/- symbol.
What's happening is that Pygments is failing to parse the input text,
and the '+' is actually being marked in the error style. The error
style is red and underlined.
It is worth noting that the assembly instruction being disassembled
here is an x86-64 instruction in the 'intel' disassembly style, rather
than the default att style. Clearly the Pygments module expects the
att syntax by default.
If we change the test case to this:
  from pygments import formatters, lexers, highlight

  def colorize_disasm(content, gdbarch):
      try:
          lexer = lexers.get_lexer_by_name("asm")
          lexer.add_filter('raiseonerror')
          formatter = formatters.TerminalFormatter()
          return highlight(content, lexer, formatter).rstrip().encode()
      except:
          return None

  res = colorize_disasm("lea rax,[rip+0xeb3] # COMMENT", None)
  if res:
      print(res.decode())
  else:
      print("No result!")
Here I've added the call: lexer.add_filter('raiseonerror'), and I am
now checking to see if the result is None or not. Running this, the
test now prints 'No result!' - instead of styling the '+' in the
error style, we instead give up on the styling attempt.
There are two things we need to fix relating to this disassembly
text. First, Pygments is expecting att style disassembly, not the
intel style that this example uses. Fortunately, Pygments also
supports the intel style, all we need to do is use the 'nasm' lexer
instead of the 'asm' lexer.
However, this leads to the second problem; in our disassembler line we
have '# COMMENT'. The "official" Intel disassembler style uses ';'
for its comment character, however, gas and libopcodes use '#' as the
comment character, as gas uses ';' for an instruction separator.
Unfortunately, Pygments expects ';' as the comment character, and
treats '#' as an error, which means, with the addition of the
'raiseonerror' filter, that any line containing a '#' comment, will
not get styled correctly.
However, as the i386 disassembler never produces a '#' character other
than for comments, we can easily "fix" Pygments parsing of the
disassembly line. This is done by creating a filter. This filter
looks for an Error token with the value '#', we then change this into
a comment token. Every token after this (until the end of the line)
is also converted into a comment.
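Such a filter can be sketched with Pygments' public filter API (an
illustration, not the exact code added to GDB):
  from pygments.filter import Filter
  from pygments.token import Comment, Error

  class HashCommentFilter(Filter):
      # Re-tag a '#' flagged as an Error, plus the rest of its line,
      # as a comment.
      def filter(self, lexer, stream):
          in_comment = False
          for ttype, value in stream:
              if ttype is Error and value == '#':
                  in_comment = True
              yield (Comment.Single if in_comment else ttype), value
              if '\n' in value:
                  in_comment = False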
In this commit I do the following:
1. Check the 'disassembly-flavor' setting and select between the
'asm' and 'nasm' lexers based on the setting. If the setting is not
available then the 'asm' lexer is used by default,
2. Use "add_filter('raiseonerror')" to ensure that the formatted
output will not include any error text, which would be underlined,
and might be confusing,
3. If the 'nasm' lexer is selected, then add an additional filter
that will format '#' and all other text on the line, as a comment,
and
4. If Pygments throws an exception, instead of returning None,
return the original, unmodified content. This will mean that this
one instruction is printed without styling, but GDB will continue to
call into the Python code to style later instructions.
I haven't included a test specifically for the above error case,
though I have manually checked that the above case now styles
correctly (with no underline). The existing style tests check that
the disassembler styling still works though, so I know I've not
generally broken things.
One final thought I have after looking at this issue is that I wonder
now if using Pygments for styling disassembly from every architecture
is actually a good idea?
Clearly, the 'asm' lexer is OK with att style x86-64, but not OK with
intel style x86-64, so who knows how well it will handle other random
architectures?
When I first added this feature I tested it against some random
RISC-V, ARM, and X86-64 (att style) code, and it seemed fine, but I
never tried to make an exhaustive check of all instructions, so it's
quite possible that there are corner cases where things are styled
incorrectly.
With the above changes I think that things should be a bit better
now. If a particular instruction doesn't parse correctly then our
Pygments based styling code will just not style that one instruction.
This is combined with the fact that many architectures are now moving
to libopcodes based styling, which is much more reliable.
So, I think it is fine to keep using Pygments as a fallback mechanism
for styling all architectures, even if we know it might not be perfect
in all cases.
|
|
Remove the macro, replace all uses with calls to type::length.
Change-Id: Ib9bdc954576860b21190886534c99103d6a47afb
|
|
Remove the macro, replace all uses by calls to type::target_type.
Change-Id: Ie51d3e1e22f94130176d6abd723255282bb6d1ed
|
|
When running the gdb/configure script on ubuntu 22.04 with
python-3.10.4, I see:
checking for python... no
checking for python3... /usr/bin/python3
[...]/gdb/python/python-config.py:7: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils import sysconfig
[...]/gdb/python/python-config.py:7: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead
from distutils import sysconfig
[...]/gdb/python/python-config.py:7: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils import sysconfig
[...]/gdb/python/python-config.py:7: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead
from distutils import sysconfig
[...]/gdb/python/python-config.py:7: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils import sysconfig
[...]/gdb/python/python-config.py:7: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead
from distutils import sysconfig
checking for python... yes
The distutils module is deprecated as per the PEP 632[1] and will be
removed in python-3.12.
This patch migrates gdb/python/python-config.py from distutils.sysconfig
to the sysconfig module[2].
The sysconfig module was introduced in the standard library in
python 3.2. Given that support for python < 3.2 has been removed by
edae3fd6600f: "gdb/python: remove Python 2 support", this patch does not
need to support both implementations for backward compatibility.
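The substitution is essentially mechanical; a before/after sketch
(function names are from the two modules' documented APIs):
  # before (deprecated):
  #   from distutils import sysconfig
  #   sysconfig.get_python_inc()
  # after:
  import sysconfig
  sysconfig.get_path("include")       # replaces get_python_inc()
  sysconfig.get_config_var("LIBS")    # get_config_var keeps its name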
Tested on ubuntu-22.04 and ubuntu 20.04.
[1] https://peps.python.org/pep-0632/
[2] https://docs.python.org/3/library/sysconfig.html
Change-Id: Id0df2baf3ee6ce68bd01c236b829ab4c0a4526f6
|
|
This changes ui_out_redirect_pop to also perform the redirection, and
then updates several sites to use this, rather than explicit
redirects.
|
|
GDB overwrites Python's sys.stdout and sys.stderr, but does not
properly implement the 'flush' method -- it only ever will flush
stdout. This patch fixes the bug. I couldn't find a straightforward
way to write a test for this.
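The user-visible effect can be sketched as (not GDB's internal class):
  import sys
  print("to stderr", file=sys.stderr)
  sys.stderr.flush()   # before the fix, this flushed gdb's stdout instead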
|
|
I noticed that gdbpy_parse_register_id would assert if passed a Python
object of a type it was not expecting. The included test case shows
this crash. This patch fixes the problem and also changes
gdbpy_parse_register_id to be more "Python-like" -- it always ensures
the Python error is set when it fails, and the callers now simply
propagate the existing exception.
|
|
gdbarch implements its own registry-like approach. This patch changes
it to instead use registry.h. It's a rather large patch but largely
uninteresting -- it's mostly a straightforward conversion from the old
approach to the new one.
The main benefit of this change is that it introduces type safety to
the gdbarch registry. It also removes a bunch of code.
One possible drawback is that, previously, the gdbarch registry
differentiated between pre- and post-initialization setup. This
doesn't seem very important to me, though.
|
|
This changes struct objfile to use a gdb_bfd_ref_ptr. In addition to
removing some manual memory management, this fixes a use-after-free
that was introduced by the registry rewrite series. The issue there
was that, in some cases, registry shutdown could refer to memory that
had already been freed. This helps fix the bug by delaying the
destruction of the BFD reference (and thus the per-bfd object) until
after the registry has been shut down.
|
|
This rewrites registry.h, removing all the macros and replacing it
with relatively ordinary template classes. The result is less code
than the previous setup. It replaces large macros with a relatively
straightforward C++ class, and now manages its own cleanup.
The existing type-safe "key" class is replaced with the equivalent
template class. This approach ended up requiring relatively few
changes to the users of the registry code in gdb -- code using the key
system just required a small change to the key's declaration.
All existing users of the old C-like API are now converted to use the
type-safe API. This mostly involved changing explicit deletion
functions to be an operator() in a deleter class.
The old "save/free" two-phase process is removed, and replaced with a
single "free" phase. No existing code used both phases.
The old "free" callbacks took a parameter for the enclosing container
object. However, this wasn't truly needed and is removed here as
well.
|
|
When an objfile is destroyed, types that are still in use and
allocated on that objfile are copied. A temporary hash map is created
during this process, and it is allocated on the destroyed objfile's
obstack -- which normally is fine, as that is going to be destroyed
shortly anyway.
However, this approach requires that the objfile be passed to registry
destruction, and this won't be possible in the rewritten registry.
This patch changes the copied type hash table to simply use the heap
instead. It also removes the 'objfile' parameter from
copy_type_recursive, to make this all more clear.
This patch also fixes an apparent bug in copy_type_recursive.
Previously it was copying the dynamic property list to the dying
objfile's obstack:
- = copy_dynamic_prop_list (&objfile->objfile_obstack,
However I think this is incorrect -- that obstack is about to be
destroyed.
|
|
PR python/18385
v7:
This version addresses the issues pointed out by Tom.
Added nullchecks for Python object creations.
Changed from using PyLong_FromLong to the gdb_py-versions.
Re-factored some code to make it look more cohesive.
Also added the safer Python reference count decrement Py_XDECREF;
even though the BreakpointLocation type is never instantiated by the
user (explicitly documented in the docs), decrementing < 0 is made
impossible with the safe call.
Tom pointed out that using the policy class explicitly to decrement a
reference counted object was not the way to go, so this has instead been
wrapped in a ref_ptr that handles that for us in blocpy_dealloc.
Moved macro from py-internal to py-breakpoint.c.
Renamed section at the bottom of commit message "Patch Description".
v6:
This version addresses the points Pedro gave in review to this patch.
Added the attributes `function`, `fullname` and `thread_groups`
as per request by Pedro with the argument that it more resembles the
output of the MI-command "-break-list". Added documentation for these attributes.
Cleaned up left overs from copy+paste in test suite, removed hard coding
of line numbers where possible.
Refactored some code to use more c++-y style range for loops
wrt breakpoint locations.
Changed terminology, naming was very inconsistent. Used a variety of "parent",
"owner". Now "owner" is the only term used, and the field in the
gdb_breakpoint_location_object is now also called "owner".
v5:
Changes in response to review by Tom Tromey:
- Replaced manual INCREF/DECREF calls with
gdbpy_ref ptrs in places where possible.
- Fixed non-gdb style conforming formatting
- Get parent of bploc increases ref count of parent.
- moved bploc Python definition to py-breakpoint.c
The INCREF of self in bppy_get_locations is due
to the individual locations holding a reference to
its owner. This is decremented at de-alloc time.
The reason why this needs to be here is, if the user writes
for instance;
py loc = gdb.breakpoints()[X].locations[Y]
The breakpoint owner object is immediately going
out of scope (GC'd/dealloced), and the location
object requires its owner to stay alive for as long as the location is alive.
Thanks for your review, Tom!
v4:
Fixed remaining doc issues as per request
by Eli.
v3:
Rewritten commit message, shortened + reworded,
added tests.
Patch Description
Currently, the Python API lacks the ability to
query breakpoints for their installed locations,
and consequently can't query any information about them, or
enable/disable individual locations.
This patch solves this by adding Python type gdb.BreakpointLocation.
The type is never instantiated by the user of the Python API directly,
but is produced by the gdb.Breakpoint.locations attribute returning
a list of gdb.BreakpointLocation.
gdb.Breakpoint.locations:
The attribute for retrieving the currently installed breakpoint
locations for gdb.Breakpoint. Matches behavior of
the "info breakpoints" command in that it only
returns the last known or currently inserted breakpoint locations.
BreakpointLocation contains 7 attributes
6 read-only attributes:
owner: location owner's Python companion object
source: file path and line number tuple: (string, long) / None
address: installed address of the location
function: function name where location was set
fullname: fullname where location was set
thread_groups: thread groups (inferiors) where location was set.
1 writeable attribute:
enabled: get/set enable/disable this location (bool)
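A short sketch exercising the attributes listed above:
  bp = gdb.breakpoints()[0]
  for loc in bp.locations:
      print(loc.address, loc.function, loc.source)
      loc.enabled = False   # disable just this location, not the breakpoint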
Access/calls to these can all throw Python exceptions (documented in
the online documentation), and that's due to the nature
of how breakpoint locations can be invalidated
"behind the scenes", either by them being removed
from the original breakpoint or changed,
like for instance when a new symbol file is loaded, at
which point all breakpoint locations are re-created by GDB.
Therefore this patch has chosen to be non-intrusive:
it's up to the Python user to re-request the locations if
they become invalid.
Also, there are event handlers that handle new object files etc.; if a Python
user is storing breakpoint locations in some larger state they've
built up, refreshing the locations is easy and it only comes
with runtime overhead when the Python user wants to use them.
gdb.BreakpointLocation Python type
struct "gdbpy_breakpoint_location_object" is found in python-internal.h
Its definition, layout, methods and functions
are found in the same file as gdb.Breakpoint (py-breakpoint.c)
1 change was also made to breakpoint.h/c to make it possible
to enable and disable a bp_location* specifically,
without having its LOC_NUM, as this number
also can change arbitrarily behind the scenes.
Updated docs & news file as per request.
Testsuite: tests the .source attribute and the disabling of
individual locations.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=18385
Change-Id: I302c1c50a557ad59d5d18c88ca19014731d736b0
|
|
GDB uses the environment variable PYTHONDONTWRITEBYTECODE to
determine whether or not to write the result of byte-compiling
python modules when the "python dont-write-bytecode" setting
is "auto". Simon noticed that GDB's implementation doesn't
follow the Python documentation.
At present, GDB only checks for the existence of this environment
variable. That is not sufficient though. Regarding
PYTHONDONTWRITEBYTECODE, this document...
https://docs.python.org/3/using/cmdline.html
...says:
If this is set to a non-empty string, Python won't try to write
.pyc files on the import of source modules.
This commit fixes GDB's handling of PYTHONDONTWRITEBYTECODE by adding
an empty string check.
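In Python terms, the corrected check is equivalent to this (GDB's
actual check is in C):
  import os
  val = os.environ.get("PYTHONDONTWRITEBYTECODE")
  dont_write_bytecode = val is not None and val != ""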
This commit also corrects the set/show command documentation for
"python dont-write-bytecode". The current doc was just a copy
of that for set/show python ignore-environment.
During his review of an earlier version of this patch, Eli Zaretskii
asked that the help text that I proposed for "set/show python
dont-write-bytecode" be expanded. I've done that in addition to
clarifying the documentation of this option in the GDB manual.
|
|
After this commit:
commit 81384924cdcc9eb2676dd9084b76845d7d0e0759
Date: Tue Apr 5 11:06:16 2022 +0100
gdb: have gdb_disassemble_info carry 'this' in its stream pointer
The disassemble_info::stream field will no longer be a ui_file*. That
commit failed to update one location in py-disasm.c though.
While running some tests using the Python disassembler API, I
triggered a call to gdbpy_disassembler::print_address_func, and, as I
had compiled GDB with the undefined behaviour sanitizer, GDB crashed
as the code currently (incorrectly) casts the stream field to be a
ui_file*.
In this commit I fix this error.
In order to test this case I had to tweak the existing test case a
little. I also spotted some debug printf statements in py-disasm.py,
which I have removed.
|
|
Fix typo in ref_output_0 variable in test_python.
Tested by running the selftest on x86_64-linux with python 3.11.
|
|
With python 3.11 I noticed:
...
$ gdb -q -batch -ex "maint selftest python"
Running selftest python.
Self test failed: self-test failed at gdb/python/python.c:2246
Ran 1 unit tests, 1 failed
...
In more detail:
...
(gdb) p output
$5 = "Traceback (most recent call last):\n File \"<string>\", line 0, \
in <module>\nKeyboardInterrupt\n"
(gdb) p ref_output
$6 = "Traceback (most recent call last):\n File \"<string>\", line 1, \
in <module>\nKeyboardInterrupt\n"
...
Fix this by also allowing line number 0.
Tested on x86_64-linux.
This should hopefully fix buildbot builder gdb-rawhide-x86_64.
|
|
This commit fixes a build error on machines lacking python headers
and/or libraries.
|
|
Python 3.11 deprecates PySys_SetPath and Py_SetProgramName. The
PyConfig API replaces these and other functions. This commit uses the
PyConfig API to provide equivalent functionality while also preserving
support for older versions of Python, i.e. those before Python 3.8.
A beta version of Python 3.11 is available in Fedora Rawhide. Both
Fedora 35 and Fedora 36 use Python 3.10, while Fedora 34 still used
Python 3.9. I've tested these changes on Fedora 34, Fedora 36, and
rawhide, though complete testing was not possible on rawhide due to
a kernel bug. That being the case, I decided to enable the newer
PyConfig API by testing PY_VERSION_HEX against 0x030a0000. This
corresponds to Python 3.10.
We could try to use the PyConfig API for Python versions as early as 3.8,
but I'm reluctant to do this as there may have been PyConfig related
bugs in earlier versions which have since been fixed. Recent linux
distributions should have support for Python 3.10. This should be
more than adequate for testing the new Python initialization code in
GDB.
Information about the PyConfig API as well as the motivation behind
deprecating the old interface can be found at these links:
https://github.com/python/cpython/issues/88279
https://peps.python.org/pep-0587/
https://docs.python.org/3.11/c-api/init_config.html
The v2 commit also addresses several problems that Simon found in
the v1 version.
In v1, I had used Py_DontWriteBytecodeFlag in the new initialization
code, but Simon pointed out that this global configuration variable
will be deprecated in Python 3.12. This version of the patch no longer
uses Py_DontWriteBytecodeFlag in the new initialization code.
Additionally, both Py_DontWriteBytecodeFlag and Py_IgnoreEnvironmentFlag
will no longer be used when building GDB against Python 3.10 or higher.
While it's true that both of these global configuration variables are
deprecated in Python 3.12, it makes sense to disable their use for
gdb builds against 3.10 and higher since those are the versions for
which the PyConfig API is now being used by GDB. (The PyConfig API
includes different mechanisms for making the same settings afforded
by use of the soon-to-be deprecated global configuration variables.)
Simon also noted that PyConfig_Clear() would not have be called for
one of the failure paths. I've fixed that problem and also made the
rest of the "bail out" code more direct. In particular,
PyConfig_Clear() will always be called, both for success and failure.
The v3 patch addresses some rebase conflicts related to module
initialization. Commit 3acd9a692dd ("Make 'import gdb.events' work")
uses PyImport_ExtendInittab instead of PyImport_AppendInittab. That
commit also initializes a struct for each module to import. Both the
initialization and the call to PyImport_ExtendInittab were moved ahead
of the ifdefs to avoid
having to replicate (at least some of) the code three times in various
portions of the ifdefs.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28668
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29287
|
|
Currently, Python code can use event registries to detect when gdb
loads a new objfile, and when gdb clears the objfile list. However,
there's no way to detect the removal of an objfile, say when the
inferior calls dlclose.
This patch adds a gdb.free_objfile event registry and arranges for an
event to be emitted in this case.
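A sketch of using the new registry (the event's 'objfile' attribute
follows the convention of the other objfile events):
  import gdb

  def on_free_objfile(event):
      print("objfile being removed:", event.objfile.filename)

  gdb.events.free_objfile.connect(on_free_objfile)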
|
|
When I rebased and updated the print_options patch, I forgot to update
print_options to add the new 'nibbles' feature to the result. This
patch fixes the oversight. I'm checking this in.
|
|
This adds a 'summary' mode to Value.format_string and to
gdb.print_options. For the former, it lets Python code format values
using this mode. For the latter, it lets a printer potentially detect
if it is being called in a backtrace with 'set print frame-arguments'
set to 'scalars'.
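A sketch of both halves ('summary' as keyword and option key is taken
from the description above; the value name is hypothetical):
  val = gdb.parse_and_eval("some_value")
  print(val.format_string(summary=True))   # format in summary mode

  # In a pretty-printer: detect 'set print frame-arguments scalars'.
  if gdb.print_options().get("summary"):
      pass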
I considered adding a new mode here to let a pretty-printer see
whether it was being called in a 'backtrace' context at all, but I'm
not sure if this is really desirable.
|
|
PR python/17291 asks for access to the current print options. While I
think this need is largely satisfied by the existence of
Value.format_string, it seemed to me that a bit more could be done.
First, while Value.format_string uses the user's settings, it does not
react to temporary settings such as "print/x". This patch changes
this.
Second, there is no good way to examine the current settings (in
particular the temporary ones in effect for just a single "print").
This patch adds this as well.
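A sketch of examining the in-effect options from a printer (the exact
keys of the returned dictionary are assumptions):
  class Printer:
      def __init__(self, val):
          self.val = val

      def to_string(self):
          # Reflects one-shot settings such as 'print/x' as well as the
          # permanent 'set print ...' options.
          opts = gdb.print_options()
          return "opts: %s" % (opts,)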
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=17291
|
|
Running 'black' on gdb fixed a couple of small issues. This patch is
the result.
|