|
I believe that many calls to fprintf_symbol_filtered are incorrect.
In particular, there are some that pass a symbol's print name, like:
fprintf_symbol_filtered (gdb_stdout, sym->print_name (),
                         current_language->la_language, DMGL_ANSI);
fprintf_symbol_filtered uses the "demangle" global to decide whether
or not to demangle -- but print_name does this as well. This can lead
to double-demangling. Normally this could be innocuous, except I also
plan to change Ada demangling in a way that causes this to fail.
|
|
I don't understand what the sfunc function type in
cmd_list_element::function is for. Compared to cmd_simple_func_ftype,
it has an extra cmd_list_element parameter, giving the callback access
to the cmd_list_element for the command being invoked. This allows
registering the same callback with many commands and altering the
behavior using the cmd_list_element's context.
From the comment in cmd_list_element, it sounds like at some point it
was the callback function type for set and show functions, hence the
"s". But nowadays, it's used for many more commands that need to access
the cmd_list_element object (see add_catch_command for example).
I don't really see the point of having sfunc at all, since do_sfunc is
just a trivial shim that changes the order of the arguments. All
commands using sfunc could just as well set cmd_list_element::func to
their callback directly.
Therefore, remove the sfunc field in cmd_list_element and everything
that goes with it. Rename cmd_const_sfunc_ftype to cmd_func_ftype and
use it for cmd_list_element::func, as well as for the add_setshow
commands.
Change-Id: I1eb96326c9b511c293c76996cea0ebc51c70fac0
|
|
A following patch will want to take some action when a pending wait
status is set on or removed from a thread. Add a getter and a setter on
thread_info for the pending waitstatus, so that we can add some code in
the setter later.
The thing is, the pending wait status field is in the
thread_suspend_state, along with other fields that we need to backup
before and restore after the thread does an inferior function call.
Therefore, make the thread_suspend_state member private
(thread_info::suspend becomes thread_info::m_suspend), and add getters /
setters for all of its fields:
- pending wait status
- stop signal
- stop reason
- stop pc
For the pending wait status, add the additional has_pending_waitstatus
and clear_pending_waitstatus methods.
I think this makes the thread_info interface a bit nicer, because we
now access the fields as:
thread->stop_pc ()
rather than
thread->suspend.stop_pc
The stop_pc field being in the `suspend` structure is an implementation
detail of thread_info that callers don't need to be aware of.
For the backup / restore of the thread_suspend_state structure, add
save_suspend_to and restore_suspend_from methods. You might wonder why
`save_suspend_to`, as opposed to a simple getter like
thread_suspend_state &suspend ();
I want to make it clear that this is to be used only for backing up and
restoring the suspend state, _not_ to access fields like:
thread->suspend ()->stop_pc
Adding some getters / setters allows adding some assertions. I find
that this helps understand how things are supposed to work. Add:
- When getting the pending status (pending_waitstatus method), ensure
that there is a pending status.
- When setting a pending status (set_pending_waitstatus method), ensure
there is no pending status.
There is one case I found where this wasn't true - in
remote_target::process_initial_stop_replies - which needed adjustments
to respect that contract. I think it's because
process_initial_stop_replies is kind of (ab)using the
thread_info::suspend::waitstatus to store some statuses temporarily, for
its internal use (statuses it doesn't intend on leaving pending).
process_initial_stop_replies pulls out stop replies received during the
initial connection using target_wait. It always stores the received
event in `evthread->suspend.waitstatus`. But it only sets
waitstatus_pending_p if it deems the event interesting enough to leave
pending, to be reported to the core:
if (ws.kind != TARGET_WAITKIND_STOPPED
    || ws.value.sig != GDB_SIGNAL_0)
  evthread->suspend.waitstatus_pending_p = 1;
It later uses this flag a bit below, to choose which thread to make the
"selected" one:
if (selected == NULL
    && thread->suspend.waitstatus_pending_p)
  selected = thread;
And ultimately that's used if the user-visible mode is all-stop, so that
we print the stop for that interesting thread:
/* In all-stop, we only print the status of one thread, and leave
   others with their status pending.  */
if (!non_stop)
  {
    thread_info *thread = selected;
    if (thread == NULL)
      thread = lowest_stopped;
    if (thread == NULL)
      thread = first;
    print_one_stopped_thread (thread);
  }
But in any case (all-stop or non-stop), print_one_stopped_thread needs
to access the waitstatus value of these threads that don't have a
pending waitstatus (those that had TARGET_WAITKIND_STOPPED +
GDB_SIGNAL_0). This doesn't work with the assertions I've
added.
So, change the code to only set the thread's wait status if it is an
interesting one that we are going to leave pending. If the thread
stopped due to a non-interesting event (TARGET_WAITKIND_STOPPED +
GDB_SIGNAL_0), don't store it. Adjust print_one_stopped_thread to
understand that if a thread has no pending waitstatus, it's because it
stopped with TARGET_WAITKIND_STOPPED + GDB_SIGNAL_0.
The call to set_last_target_status also uses the pending waitstatus.
However, given that the pending waitstatus for the thread may have been
cleared in print_one_stopped_thread (and that there might not even be a
pending waitstatus in the first place, as explained above), it is no
longer possible to do it at this point. To fix that, move the call to
set_last_target_status into print_one_stopped_thread. I think this will
preserve the existing behavior, because set_last_target_status is
currently using the current thread's wait status. And the current
thread is the last one for which print_one_stopped_thread is called. So
by calling set_last_target_status in print_one_stopped_thread, we'll get
the same result. set_last_target_status will possibly be called
multiple times, but only the last call will matter. It just means
possibly more calls to set_last_target_status, but those are cheap.
Change-Id: Iedab9653238eaf8231abcf0baa20145acc8b77a7
|
|
I wrote this while debugging a problem where the expected unwinder for a
frame wasn't used. It adds messages to show which unwinders are
considered for a frame, why they are not selected (if an exception is
thrown), and finally which unwinder is selected in the end.
To be able to show a meaningful, human-readable name for the unwinders,
add a "name" field to struct frame_unwind, and update all instances to
include a name.
Here's an example of the output:
[frame] frame_unwind_find_by_frame: this_frame=0
[frame] frame_unwind_try_unwinder: trying unwinder "dummy"
[frame] frame_unwind_try_unwinder: no
[frame] frame_unwind_try_unwinder: trying unwinder "dwarf2 tailcall"
[frame] frame_unwind_try_unwinder: no
[frame] frame_unwind_try_unwinder: trying unwinder "inline"
[frame] frame_unwind_try_unwinder: no
[frame] frame_unwind_try_unwinder: trying unwinder "jit"
[frame] frame_unwind_try_unwinder: no
[frame] frame_unwind_try_unwinder: trying unwinder "python"
[frame] frame_unwind_try_unwinder: no
[frame] frame_unwind_try_unwinder: trying unwinder "amd64 epilogue"
[frame] frame_unwind_try_unwinder: no
[frame] frame_unwind_try_unwinder: trying unwinder "i386 epilogue"
[frame] frame_unwind_try_unwinder: no
[frame] frame_unwind_try_unwinder: trying unwinder "dwarf2"
[frame] frame_unwind_try_unwinder: yes
gdb/ChangeLog:
* frame-unwind.h (struct frame_unwind) <name>: New. Update
instances everywhere to include this field.
* frame-unwind.c (frame_unwind_try_unwinder,
frame_unwind_find_by_frame): Add debug messages.
Change-Id: I813f17777422425f0d08b22499817b23922e8ddb
|
|
Straightforward replacement of get_cmd_context / set_cmd_context with
cmd_list_element methods.
gdb/ChangeLog:
* cli/cli-decode.h (struct cmd_list_element) <set_context,
context>: New.
<context>: Rename to...
<m_context>: ... this.
* cli/cli-decode.c (set_cmd_context, get_cmd_context): Remove.
* command.h (set_cmd_context, get_cmd_context): Remove, use
cmd_list_element::set_context and cmd_list_element::context
everywhere instead.
Change-Id: I5016b0079014e3f17d1aa449ada7954473bf2b5d
|
|
Following on from the previous commit, this commit changes the API of
value_struct_elt to take gdb::optional<gdb::array_view<value *>>
instead of a pointer to the gdb::array_view.
This makes the optional nature of the array_view parameter explicit.
This commit is purely a refactoring commit; there should be no user
visible change after this commit.
I have deliberately kept this refactor separate from the previous two
commits as this is a more extensive change, and I'm not 100% sure that
using gdb::optional for the parameter type, instead of a pointer, is
going to be to everyone's taste. If there's push back on this patch
then this one can be dropped from the series.
gdb/ChangeLog:
* ada-lang.c (desc_bounds): Use '{}' instead of NULL to indicate
an empty gdb::optional when calling value_struct_elt.
(desc_data): Likewise.
(desc_one_bound): Likewise.
* eval.c (structop_base_operation::evaluate_funcall): Pass
gdb::array_view, not a gdb::array_view* to value_struct_elt.
(eval_op_structop_struct): Use '{}' instead of NULL to indicate
an empty gdb::optional when calling value_struct_elt.
(eval_op_structop_ptr): Likewise.
* f-lang.c (fortran_structop_operation::evaluate): Likewise.
* guile/scm-value.c (gdbscm_value_field): Likewise.
* m2-lang.c (eval_op_m2_high): Likewise.
(eval_op_m2_subscript): Likewise.
* opencl-lang.c (opencl_structop_operation::evaluate): Likewise.
* python/py-value.c (valpy_getitem): Likewise.
* rust-lang.c (rust_val_print_str): Likewise.
(rust_range): Likewise.
(rust_subscript): Likewise.
(eval_op_rust_structop): Likewise.
(rust_aggregate_operation::evaluate): Likewise.
* valarith.c (value_user_defined_op): Likewise.
* valops.c (search_struct_method): Change parameter type, update
function body accordingly, and update header comment.
(value_struct_elt): Change parameter type, update function body
accordingly.
* value.h (value_struct_elt): Update declaration.
|
|
This commit adds initial support for catchpoints to the python
breakpoint API.
This commit adds a BP_CATCHPOINT constant which corresponds to
GDB's internal bp_catchpoint. The new constant is documented in the
manual.
The user can't create breakpoints with type BP_CATCHPOINT after this
commit, but breakpoints that already exist, obtained with the
`gdb.breakpoints` function, can now have this type. Additionally,
when a stop event is reported for hitting a catchpoint, GDB will now
report a BreakpointEvent with the attached breakpoint being of type
BP_CATCHPOINT - previously GDB would report a generic StopEvent in
this situation.
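For illustration, a minimal sketch of inspecting existing breakpoints
for the new type (this assumes a catchpoint, e.g. from "catch throw",
has already been created at the CLI):

import gdb

# Catchpoints created at the CLI now show up with the new type
# constant when iterating over existing breakpoints.
for bp in gdb.breakpoints():
    if bp.type == gdb.BP_CATCHPOINT:
        print("breakpoint %d is a catchpoint" % bp.number)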
gdb/ChangeLog:
* NEWS: Mention Python BP_CATCHPOINT feature.
* python/py-breakpoint.c (pybp_codes): Add bp_catchpoint support.
(bppy_init): Likewise.
(gdbpy_breakpoint_created): Likewise.
gdb/doc/ChangeLog:
* python.texinfo (Breakpoints In Python): Add BP_CATCHPOINT
description.
gdb/testsuite/ChangeLog:
* gdb.python/py-breakpoint.c (do_throw): New function.
(main): Call do_throw.
* gdb.python/py-breakpoint.exp (test_catchpoints): New proc.
|
|
GNAT emits encoded type names, but these aren't usually of interest to
users. The Ada language code in gdb hides this oddity -- but the
Python layer does not. This patch changes the Python code to use the
decoded Ada type name, when appropriate.
I looked at decoding Ada type names during construction, as that would
be cleaner. However, the Ada support in gdb relies on the encodings
at various points, so this isn't really doable right now.
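A minimal sketch of the user-visible effect (the variable name
some_ada_object is hypothetical):

import gdb

# With this change, Type.name yields the decoded Ada name rather than
# the GNAT-encoded form, when a decoded name is appropriate.
t = gdb.parse_and_eval("some_ada_object").type
print(t.name)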
2021-06-25 Tom Tromey <tromey@adacore.com>
* python/py-type.c (typy_get_name): Decode an Ada type name.
gdb/testsuite/ChangeLog
2021-06-25 Tom Tromey <tromey@adacore.com>
* gdb.ada/py_range.exp: Add type name test cases.
|
|
Run black to fix this formatting.
gdb/ChangeLog:
* python/lib/gdb/__init__.py: Format.
Change-Id: I68ea306d1991bf7243b2c8aeeb11719d668851e5
|
|
If we have multiple registered unwinders, this will help identify which
unwinder was chosen and make it easier to track down potential problems.
Unwinders have a mandatory name argument, which we can use in the
message.
First, make gdb._execute_unwinders return a tuple containing the name,
in addition to the UnwindInfo. Then, make pyuw_sniffer include the name
in the debug message.
I moved the debug message earlier. I think it's good to print it as
early as possible, so that we see it in case an assert is hit in the
loop below, for example.
gdb/ChangeLog:
* python/lib/gdb/__init__.py (_execute_unwinders): Return tuple
with name of chosen unwinder.
* python/py-unwind.c (pyuw_sniffer): Print name of chosen
unwinder in debug message.
Change-Id: Id603545b44a97df2a39dd1872fe1f38ad5059f03
|
|
Add new methods to the PendingFrame and Frame classes to obtain the
stack frame level for each object.
The use of 'level' as the method name is consistent with the existing
attribute RecordFunctionSegment.level (though this is an attribute
rather than a method).
For Frame/PendingFrame I went with methods as these classes currently
only use methods, including for simple data like architecture, so I
want to be consistent with this interface.
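A minimal sketch using the new Frame.level method (PendingFrame.level
is used the same way from within an unwinder):

import gdb

# Walk the stack from the innermost frame outwards, printing each
# frame's level; level 0 is the frame in which execution stopped.
frame = gdb.newest_frame()
while frame is not None:
    print("level %d: %s" % (frame.level(), frame.name()))
    frame = frame.older()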
gdb/ChangeLog:
* NEWS: Mention the two new methods.
* python/py-frame.c (frapy_level): New function.
(frame_object_methods): Register 'level' method.
* python/py-unwind.c (pending_framepy_level): New function.
(pending_frame_object_methods): Register 'level' method.
gdb/doc/ChangeLog:
* python.texi (Unwinding Frames in Python): Mention
PendingFrame.level.
(Frames In Python): Mention Frame.level.
gdb/testsuite/ChangeLog:
* gdb.python/py-frame.exp: Add Frame.level tests.
* gdb.python/py-pending-frame-level.c: New file.
* gdb.python/py-pending-frame-level.exp: New file.
* gdb.python/py-pending-frame-level.py: New file.
|
|
We already have two helper functions in py-utils.c:
gdb_py_object_from_longest (LONGEST l)
gdb_py_object_from_ulongest (ULONGEST l)
these wrap around calls to either PyLong_FromLongLong,
PyLong_FromLong, or PyInt_FromLong (if Python 2 is being used).
There is one place in gdb/python/* where a call to PyLong_FromLong was
added outside of the above utility functions; this was done in the
recent commit:
commit 55789354fcbaf879f3ca8475b647b2747dec486e
Date: Fri May 14 11:56:31 2021 +0200
gdb/python: add a 'connection_num' attribute to Inferior objects
In this commit I replace the direct use of PyLong_FromLong with a call
to gdb_py_object_from_longest. The only real change with this commit
is that, for Python 2, we will now end up calling PyInt_FromLong
instead of PyLong_FromLong, but this should be invisible to the user.
For Python 3 there should be absolutely no change.
gdb/ChangeLog:
* python/py-inferior.c (infpy_get_connection_num): Call
gdb_py_object_from_longest instead of PyLong_FromLong directly.
|
|
This patch came about because I wanted to write a frame unwinder that
would corrupt the backtrace in a particular way. In order to achieve
what I wanted I ended up trying to write an unwinder like this:
class FrameId(object):
    .... snip class definition ....

class TestUnwinder(Unwinder):
    def __init__(self):
        Unwinder.__init__(self, "some name")

    def __call__(self, pending_frame):
        pc_desc = pending_frame.architecture().registers().find("pc")
        pc = pending_frame.read_register(pc_desc)
        sp_desc = pending_frame.architecture().registers().find("sp")
        sp = pending_frame.read_register(sp_desc)
        # ... snip code to decide if this unwinder applies or not.
        fid = FrameId(pc, sp)
        unwinder = pending_frame.create_unwind_info(fid)
        unwinder.add_saved_register(pc_desc, pc)
        unwinder.add_saved_register(sp_desc, sp)
        return unwinder
The important things here are the two calls:
unwinder.add_saved_register(pc_desc, pc)
unwinder.add_saved_register(sp_desc, sp)
On x86-64 these would fail with an assertion error:
gdb/regcache.c:168: internal-error: int register_size(gdbarch*, int): Assertion `regnum >= 0 && regnum < gdbarch_num_cooked_regs (gdbarch)' failed.
What happens is that in unwind_infopy_add_saved_register (py-unwind.c)
we call register_size. As register_size should only be called on
cooked (real or pseudo) registers, and 'pc' and 'sp' are implemented
as user registers (at least on x86-64), we trigger the assertion.
A simple fix would be to check in unwind_infopy_add_saved_register
whether the register number we are handling is a cooked register or
not; if not, we could throw a 'Bad register' error back to the Python
code.
However, I think we can do better.
Consider that at the CLI we can do this:
(gdb) set $pc=0x1234
This works because GDB first evaluates '$pc' to get a register value,
then evaluates '0x1234' to create a value encapsulating the
immediate. The contents of the immediate value are then copied back
to the location of the register value representing '$pc'.
The value location for a user-register will (usually) be the location
of the real register that was accessed, so on x86-64 we'd expect this
to be $rip.
So, in this patch I propose that in the unwinder code, when
add_saved_register is called, if it is passed a
user-register (i.e. non-cooked) then we first fetch the register,
extract the real register number from the value's location, and use
that new register number when handling the add_saved_register call.
If the value location that we get for the user-register is not a
cooked register then we can throw a 'Bad register' error back to the
Python code, but in most cases this will not happen.
gdb/ChangeLog:
* python/py-unwind.c (unwind_infopy_add_saved_register): Handle
saving user registers.
gdb/testsuite/ChangeLog:
* gdb.python/py-unwind-user-regs.c: New file.
* gdb.python/py-unwind-user-regs.exp: New file.
* gdb.python/py-unwind-user-regs.py: New file.
|
|
While reviewing a patch sent to the mailing list, I noticed there are a few
places where Python code checks if a variable is 'None' or not by using the
comparison operators '==' and '!='. PEP8 [1], which is used as the coding
standard in GDB for Python code, recommends using 'is' / 'is not' when
comparing to a singleton such as 'None'.
This patch proposes to replace the instances of '== None' with 'is None' and
'!= None' with 'is not None'.
[1] https://www.python.org/dev/peps/pep-0008/
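For example, the preferred forms after this change (a standalone
sketch):

def describe(val):
    # PEP8: test identity against the None singleton with 'is' /
    # 'is not' instead of the '==' / '!=' operators.
    if val is None:
        return "no value"
    if val is not None:
        return "value: %s" % val

print(describe(None))
print(describe(42))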
gdb/doc/ChangeLog:
* python.texi (Writing a Pretty-Printer): Use 'is None' instead of
'== None'.
gdb/ChangeLog:
* python/lib/gdb/FrameDecorator.py (FrameDecorator): Use 'is None' instead of
'== None'.
(FrameVars): Use 'is not None' instead of '!= None'.
* python/lib/gdb/command/frame_filters.py (SetFrameFilterPriority): Use 'is None'
instead of '== None' and 'is not None' instead of '!= None'.
gdb/testsuite/ChangeLog:
* gdb.base/premature-dummy-frame-removal.py (TestUnwinder): Use
'is None' instead of '== None' and 'is not None' instead of
'!= None'.
* gdb.python/py-frame-args.py (lookup_function): Same.
* gdb.python/py-framefilter-invalidarg.py (Reverse_Function): Same.
* gdb.python/py-framefilter.py (Reverse_Function): Same.
* gdb.python/py-nested-maps.py (lookup_function): Same.
* gdb.python/py-objfile-script-gdb.py (lookup_function): Same.
* gdb.python/py-prettyprint.py (lookup_function): Same.
* gdb.python/py-section-script.py (lookup_function): Same.
* gdb.python/py-unwind-inline.py (dummy_unwinder): Same.
* gdb.python/python.exp: Same.
* gdb.rust/pp.py (lookup_function): Same.
|
|
If the TUI window object implements the click method, it is called for each
mouse click event in this window.
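A minimal sketch of a TUI window implementing the new method (the
window name and class are arbitrary):

import gdb

class ClickWindow:
    """A TUI window that reports mouse clicks made inside it."""

    def __init__(self, tui_window):
        self._tui_window = tui_window
        self._tui_window.title = "clicks"

    def render(self):
        self._tui_window.erase()
        self._tui_window.write("click me")

    def click(self, x, y, button):
        # Called for each mouse click in this window; x and y are
        # coordinates within the window, button identifies the mouse
        # button used.
        self._tui_window.erase()
        self._tui_window.write("click at (%d, %d), button %d"
                               % (x, y, button))

gdb.register_window_type("clicks", ClickWindow)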
gdb/ChangeLog:
2021-06-04 Hannes Domani <ssbssa@yahoo.de>
* python/py-tui.c (class tui_py_window): Add click function.
(tui_py_window::click): Likewise.
gdb/doc/ChangeLog:
2021-06-04 Hannes Domani <ssbssa@yahoo.de>
* python.texi (TUI Windows In Python): Document Window.click.
|
|
It was removed (probably by mistake) in
51e78fc5fa21870d415c52f90b93e3c6ad57be46.
gdb/ChangeLog:
2021-06-03 Hannes Domani <ssbssa@yahoo.de>
* python/py-symbol.c (gdbpy_initialize_symbols): Restore
gdb.SYMBOL_LABEL_DOMAIN constant.
gdb/testsuite/ChangeLog:
2021-06-03 Hannes Domani <ssbssa@yahoo.de>
* gdb.python/py-symbol.exp: Test symbol constants.
|
|
I wrote a small script to spot a pattern of indentation mistakes I had
seen in breakpoint.c. And while at it I ran it on all files and
fixed what I found. No behavior changes intended, just indentation and
addition / removal of curly braces.
gdb/ChangeLog:
* Fix some indentation mistakes throughout.
gdbserver/ChangeLog:
* Fix some indentation mistakes throughout.
Change-Id: Ia01990c26c38e83a243d8f33da1d494f16315c6e
|
|
Now that we have range functions that let us use ranged for loops, we
can remove iterate_over_breakpoints in favor of those, which are easier
to read and write. This requires exposing the declaration of
all_breakpoints and all_breakpoints_safe in breakpoint.h, as well as the
supporting types.
Change some users of iterate_over_breakpoints to use all_breakpoints,
when they don't need to delete the breakpoint, and all_breakpoints_safe
otherwise.
gdb/ChangeLog:
* breakpoint.h (iterate_over_breakpoints): Remove. Update
callers to use all_breakpoints or all_breakpoints_safe.
(breakpoint_range, all_breakpoints, breakpoint_safe_range,
all_breakpoints_safe): Move here.
* breakpoint.c (all_breakpoints, all_breakpoints_safe): Make
non-static.
(iterate_over_breakpoints): Remove.
* python/py-finishbreakpoint.c (bpfinishpy_detect_out_scope_cb):
Return void.
* python/py-breakpoint.c (build_bp_list): Add comment, reverse
return value logic.
* guile/scm-breakpoint.c (bpscm_build_bp_list): Return void.
Change-Id: Idde764a1f577de0423e4f2444a7d5cdb01ba5e48
|
|
To prevent flickering when first calling erase, then write, this new
argument indicates that the passed string contains the full contents of
the window. This fills every unused cell of the window with a space, so
it's not necessary to call erase beforehand.
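A minimal sketch of a window using the new argument (class and window
names are arbitrary):

import gdb

class FullRedrawWindow:
    """A TUI window that redraws itself with a single write call."""

    def __init__(self, tui_window):
        self._tui_window = tui_window

    def render(self):
        # The second argument marks the string as the full window
        # contents: unused cells are filled with spaces, so no prior
        # erase() call is needed and the window does not flicker.
        self._tui_window.write("the full contents of the window", True)

gdb.register_window_type("full-redraw", FullRedrawWindow)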
gdb/ChangeLog:
2021-05-27 Hannes Domani <ssbssa@yahoo.de>
* python/py-tui.c (tui_py_window::output): Add full_window
argument.
(gdbpy_tui_write): Parse "full_window" argument.
gdb/doc/ChangeLog:
2021-05-27 Hannes Domani <ssbssa@yahoo.de>
* python.texi (TUI Windows In Python): Document "full_window"
argument.
|
|
The alias creation functions currently accept a name to specify the
target command. They pass this to add_alias_cmd, which needs to lookup
the target command by name.
Given that:
- We don't support creating an alias for a command before that command
exists.
- We always use add_info_alias just after creating that target command,
and therefore have access to the target command's cmd_list_element.
... change add_com_alias to accept the target command as a
cmd_list_element (other functions are done in subsequent patches). This
ensures we don't create the alias before the target command, because you
need to get the cmd_list_element from somewhere when you call the alias
creation function. And it avoids an unnecessary command lookup. So it
seems better to me in every aspect.
gdb/ChangeLog:
* command.h (add_com_alias): Accept target as
cmd_list_element. Update callers.
Change-Id: I24bed7da57221cc77606034de3023fedac015150
|
|
In add_setshow_generic, we create set/show commands using add_setshow_*
functions, then look up the commands by name to set the context pointer.
It would be simpler and more efficient to use the return values of the
add_setshow_* functions, so do that.
gdb/ChangeLog:
* python/py-param.c (add_setshow_generic): Use return values of
add_setshow functions.
Change-Id: I04d50736e1001ddb732d81e088468876df9c88ff
|
|
Remove a few instances where we look up a command by name, but could
just use the return value of a previous "add command" function call
instead.
gdb/ChangeLog:
* mi/mi-main.c (_initialize_mi_main):
* python/py-auto-load.c (gdbpy_initialize_auto_load):
* remote.c (_initialize_remote):
Change-Id: I6d06f9ca636e340c88c1064ae870483ad392607d
|
|
tui_win_info::refresh_window first redraws the background window, then
tui_wrefresh draws the python text on top of it, which flickers.
By using wnoutrefresh for the background window, the actual drawing on
the screen is only done once, without flickering.
gdb/ChangeLog:
2021-05-24 Hannes Domani <ssbssa@yahoo.de>
* python/py-tui.c (tui_py_window::refresh_window):
Avoid flickering.
|
|
Same idea as the previous patch, but for prefix instead of alias.
gdb/ChangeLog:
* cli/cli-decode.h (cmd_list_element) <is_prefix>: New, use it.
Change-Id: I76a9d2e82fc8d7429904424674d99ce6f9880e2b
|
|
While browsing this code, I found the name "prefixlist" really
confusing. I kept reading it as "list of prefixes". Which it isn't:
it's a list of sub-commands, for a prefix command. I think that
renaming it to "subcommands" would make things clearer.
gdb/ChangeLog:
* Rename "prefixlist" parameters to "subcommands" throughout.
* cli/cli-decode.h (cmd_list_element) <prefixlist>: Rename to...
<subcommands>: ... this.
* cli/cli-decode.c (lookup_cmd_for_prefixlist): Rename to...
(lookup_cmd_with_subcommands): ... this.
Change-Id: I150da10d03052c2420aa5b0dee41f422e2a97928
|
|
Define a 'connection_num' attribute for Inferior objects. The
read-only attribute is the ID of the connection of an inferior, as
printed by "info inferiors". In GDB's internal terminology, that's
the process stratum target of the inferior. If the inferior has no
target connection, the attribute is None.
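A minimal usage sketch:

import gdb

inf = gdb.selected_inferior()
# The connection ID as shown by "info inferiors", or None if the
# inferior has no target connection.
print("connection number: %s" % inf.connection_num)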
gdb/ChangeLog:
2021-05-14 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* python/py-inferior.c (infpy_get_connection_num): New function.
(inferior_object_getset): Add a new element for 'connection_num'.
* NEWS: Mention the 'connection_num' attribute of Inferior objects.
gdb/doc/ChangeLog:
2021-05-14 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* python.texi (Inferiors In Python): Mention the 'connection_num'
attribute.
gdb/testsuite/ChangeLog:
2021-05-14 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* gdb.python/py-inferior.exp: Add test cases for 'connection_num'.
|
|
The 'print max-depth' feature incorrectly causes GDB to skip printing
the string representation of pretty printed variables if the variable
is stored at a nested depth corresponding to the set max-depth value.
This change ensures that it is always printed before checking whether
the maximum print depth has been reached.
Regression tested with GCC 7.3.0 on x86_64, ppc64le, aarch64.
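For illustration, a printer of the affected kind (the type name
"struct point" is hypothetical); with e.g. 'set print max-depth 1', a
nested value of this type now still shows the to_string result, only
its children are elided:

import gdb

class PointPrinter:
    """Printer whose to_string output should survive 'print max-depth'."""

    def __init__(self, val):
        self.val = val

    def to_string(self):
        return "a point"

    def children(self):
        yield "x", self.val["x"]
        yield "y", self.val["y"]

def point_lookup(val):
    if str(val.type.strip_typedefs()) == "struct point":
        return PointPrinter(val)
    return None

gdb.pretty_printers.append(point_lookup)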
gdb/ChangeLog:
* cp-valprint.c (cp_print_value): Replaced duplicate code.
* guile/scm-pretty-print.c (ppscm_print_children): Check max_depth
just before printing child values.
(gdbscm_apply_val_pretty_printer): Don't check max_depth before
printing string representation.
* python/py-prettyprint.c (print_children): Check max_depth just
before printing child values.
(gdbpy_apply_val_pretty_printer): Don't check max_depth before
printing string representation.
gdb/testsuite/ChangeLog:
* gdb.python/py-format-string.c: Added a variable to test.
* gdb.python/py-format-string.exp: Check string representation is
printed at appropriate max_depth settings.
* gdb.python/py-nested-maps.exp: Likewise.
* gdb.guile/scm-pretty-print.exp: Add additional tests.
|
|
This avoids some manual memory management.
cmdpy_init correctly transfers ownership of the name to the
cmd_list_element, as it sets the name_allocated flag. However,
parmpy_init (and add_setshow_generic) doesn't; it looks like the name is
just leaked. This is a bit tricky, because it actually creates two
commands (one set and one show); it would take a bit of refactoring of
the command code to give each their own allocated copy. For now, just
keep doing what the current code does but in a more explicit fashion,
with an explicit release.
gdb/ChangeLog:
* python/python-internal.h (gdbpy_parse_command_name): Return
gdb::unique_xmalloc_ptr.
* python/py-cmd.c (gdbpy_parse_command_name): Likewise.
(cmdpy_init): Adjust.
* python/py-param.c (parmpy_init): Adjust.
(add_setshow_generic): Take gdb::unique_xmalloc_ptr, release it
when done.
Change-Id: Iae5bc21fe2b22f12d5f954057b0aca7ca4cd3f0d
|
|
Previously, the prefixname field of struct cmd_list_element was manually
set for prefix commands. This seems verbose and error prone, as it
required every single call to functions adding prefix commands to
specify the prefix name, while the same information can be easily
generated.
Historically, this was not possible as the prefix field was null for
many commands, but this was fixed in commit
3f4d92ebdf7f848b5ccc9e8d8e8514c64fde1183 by Philippe Waroquiers, so
we can rely on the prefix field being set when generating the prefix
name.
This commit also fixes a use after free in this scenario:
* A command gets created via Python (using the gdb.Command class).
The prefix name member is dynamically allocated.
* An alias to the new command is created. The alias's prefixname is set
to point to the prefixname for the original command with a direct
assignment.
* A new command with the same name as the Python command is created.
* The object for the original Python command gets freed and its
prefixname gets freed as well.
* The alias is updated to point to the new command, but its prefixname
is not updated so it keeps pointing to the freed one.
gdb/ChangeLog:
* command.h (add_prefix_cmd): Remove the prefixname argument as
it can now be generated automatically. Update all callers.
(add_basic_prefix_cmd): Ditto.
(add_show_prefix_cmd): Ditto.
(add_prefix_cmd_suppress_notification): Ditto.
(add_abbrev_prefix_cmd): Ditto.
* cli/cli-decode.c (add_prefix_cmd): Ditto.
(add_basic_prefix_cmd): Ditto.
(add_show_prefix_cmd): Ditto.
(add_prefix_cmd_suppress_notification): Ditto.
(add_prefix_cmd_suppress_notification): Ditto.
(add_abbrev_prefix_cmd): Ditto.
* cli/cli-decode.h (struct cmd_list_element): Replace the
prefixname member variable with a method which generates the
prefix name at runtime. Update all code reading the prefix
name to use the method, and remove all code setting it.
* python/py-cmd.c (cmdpy_destroyer): Remove code to free the
prefixname member as it's now a method.
(cmdpy_function): Determine if the command is a prefix by
looking at prefixlist, not prefixname.
|
|
Adds some new debugging to python/py-breakpoint.c.
gdb/ChangeLog:
* python/py-breakpoint.c (pybp_debug): New static global.
(show_pybp_debug): New function.
(pybp_debug_printf): Define.
(PYBP_SCOPED_DEBUG_ENTER_EXIT): Define.
(gdbpy_breakpoint_created): Add some debugging.
(gdbpy_breakpoint_deleted): Likewise.
(gdbpy_breakpoint_modified): Likewise.
(_initialize_py_breakpoint): New function.
gdb/doc/ChangeLog:
* python.texinfo (Python Commands): Document 'set debug
py-breakpoint' and 'show debug py-breakpoint'.
|
|
Converts the debug print out in python/py-unwind.c to use the new
debug printing scheme. I have also modified what is printed in a few
places, for example, rather than printing frame pointers, I now print
the frame level, this matches what we do in the general 'set debug
frame' tracing, and is usually more helpful (I think).
I also added a couple of ENTER/EXIT scope printers.
gdb/ChangeLog:
* python/py-unwind.c (pyuw_debug): Convert to bool.
(show_pyuw_debug): New function.
(pyuw_debug_printf): Define.
(PYUW_SCOPED_DEBUG_ENTER_EXIT): Define.
(pyuw_this_id): Convert to new debug print macros.
(pyuw_prev_register): Likewise.
(pyuw_sniffer): Likewise.
(pyuw_dealloc_cache): Likewise.
(_initialize_py_unwind): Update now pyuw_debug is a bool, and add
show function when registering.
|
|
Replace fprint_frame_id with a member function frame_id::to_string
that returns a std::string. Convert all of the previous users of
fprint_frame_id to use the new member function. This means that
instead of writing things like this:
fprintf_unfiltered (file, " id=");
fprint_frame_id (file, s->id.id);
We can write this:
fprintf_unfiltered (file, " id=%s", s->id.id.to_string ().c_str ());
There should be no user visible changes after this commit.
gdb/ChangeLog:
* dummy-frame.c (fprint_dummy_frames): Convert use of
fprint_frame_id to use frame_id::to_string.
* frame.c (fprint_field): Delete.
(fprint_frame_id): Moved to...
(frame_id::to_string): ...this, rewritten to return a string.
(fprint_frame): Convert use of fprint_frame_id to use
frame_id::to_string.
(compute_frame_id): Likewise.
(frame_id_p): Likewise.
(frame_id_eq): Likewise.
(frame_id_inner): Likewise.
* frame.h (struct frame_id) <to_string>: New member function.
(fprint_frame_id): Delete declaration.
* guile/scm-frame.c (frscm_print_frame_smob): Convert use of
fprint_frame_id to use frame_id::to_string.
* python/py-frame.c (frame_object_to_frame_info): Likewise.
* python/py-unwind.c (unwind_infopy_str): Likewise.
(pyuw_this_id): Likewise.
|
|
Re-format all Python files using black [1] version 21.4b0. The goal is
that from now on, we keep all Python files formatted using black. And
that we never have to discuss formatting during review (for these files
at least) ever again.
One change is needed in gdb.python/py-prettyprint.exp, because it
matches the string representation of an exception, which shows source
code. So the change in formatting must be replicated in the expected
regexp.
To document our usage of black I plan on adding this to the "GDB Python
Coding Standards" wiki page [2]:
--8<--
All Python source files under the `gdb/` directory must be formatted
using black version 21.4b0.
This specific version can be installed using:
$ pip3 install 'black == 21.4b0'
All you need to do to re-format files is run `black <file/directory>`,
and black will re-format any Python file it finds in there. It runs
quite fast, so the simplest is to do:
$ black gdb/
from the top-level.
If you notice that black produces changes unrelated to your patch, it's
probably because someone forgot to run it before you. In this case,
don't include unrelated hunks in your patch. Push an obvious patch
fixing the formatting and rebase your work on top of that.
-->8--
Once this is merged, I plan on setting up an `ignoreRevsFile`
config so that git-blame ignores this commit, as described here:
https://github.com/psf/black#migrating-your-code-style-without-ruining-git-blame
I also plan on working on a git commit hook (checked in the repo) to
automatically check the formatting of the Python files on commit.
[1] https://pypi.org/project/black/
[2] https://sourceware.org/gdb/wiki/Internals%20GDB-Python-Coding-Standards
gdb/ChangeLog:
* Re-format all Python files using black.
gdb/testsuite/ChangeLog:
* Re-format all Python files using black.
* gdb.python/py-prettyprint.exp (run_lang_tests): Adjust.
Change-Id: I28588a22c2406afd6bc2703774ddfff47cd61919
|
|
Add two new commands to GDB that can be placed into the early
initialization file to control how Python starts up. The new options are:
set python ignore-environment on|off
set python dont-write-bytecode auto|on|off
show python ignore-environment
show python dont-write-bytecode
These can be used from GDB's startup file to control how the Python
extension language behaves. These options are equivalent to the -E
and -B flags to python respectively, their descriptions from the
Python man page:
-E   Ignore environment variables like PYTHONPATH and PYTHONHOME
     that modify the behavior of the interpreter.
-B   Don't write .pyc files on import.
gdb/ChangeLog:
* NEWS: Mention new commands.
* python/python.c (python_ignore_environment): New static global.
(show_python_ignore_environment): New function.
(set_python_ignore_environment): New function.
(python_dont_write_bytecode): New static global.
(show_python_dont_write_bytecode): New function.
(set_python_dont_write_bytecode): New function.
(_initialize_python): Register new commands.
gdb/doc/ChangeLog:
* python.texinfo (Python Commands): Mention new commands.
gdb/testsuite/ChangeLog:
* gdb.python/py-startup-opt.exp: New file.
|
|
Now that both Python and Guile are fully initialized from their
respective finish_initialization methods, the "finish" in the method
name doesn't really make sense; initialization starts _and_ finishes
with that method.
As such, this commit renames finish_initialization to just initialize.
There should be no user visible changes after this commit.
gdb/ChangeLog:
* extension-priv.h (struct extension_language_ops): Rename
'finish_initialization' to 'initialize'.
* extension.c (finish_ext_lang_initialization): Renamed to...
(ext_lang_initialization): ...this, update comment, and updated
the calls to reflect the change in struct extension_language_ops.
* extension.h (finish_ext_lang_initialization): Renamed to...
(ext_lang_initialization): ...this.
* guile/guile.c (gdbscm_finish_initialization): Renamed to...
(gdbscm_initialize): ...this, update comment at definition.
(guile_extension_ops): Update.
* main.c (captured_main_1): Update call to
finish_ext_lang_initialization.
* python/python.c (gdbpy_finish_initialization): Rename to...
(gdbpy_initialize): ...this, update comment at definition, and
update call to do_finish_initialization.
(python_extension_ops): Update.
(do_finish_initialization): Rename to...
(do_initialize): ...this, and update comment.
|
|
Delay Python initialisation until gdbpy_finish_initialization.
This is mostly about splitting the existing gdbpy_initialize_*
functions in two, all the calls to register_objfile_data_with_cleanup,
gdbarch_data_register_post_init, etc are moved into new _initialize_*
functions, but everything else is left in the gdbpy_initialize_*
functions.
Then the call to do_start_initialization (in python/python.c) is moved
from the _initialize_python function into gdbpy_finish_initialization.
There should be no user visible changes after this commit.
gdb/ChangeLog:
* python/py-arch.c (_initialize_py_arch): New function.
(gdbpy_initialize_arch): Move code to _initialize_py_arch.
* python/py-block.c (_initialize_py_block): New function.
(gdbpy_initialize_blocks): Move code to _initialize_py_block.
* python/py-inferior.c (_initialize_py_inferior): New function.
(gdbpy_initialize_inferior): Move code to _initialize_py_inferior.
* python/py-objfile.c (_initialize_py_objfile): New function.
(gdbpy_initialize_objfile): Move code to _initialize_py_objfile.
* python/py-progspace.c (_initialize_py_progspace): New function.
(gdbpy_initialize_pspace): Move code to _initialize_py_progspace.
* python/py-registers.c (_initialize_py_registers): New function.
(gdbpy_initialize_registers): Move code to
_initialize_py_registers.
* python/py-symbol.c (_initialize_py_symbol): New function.
(gdbpy_initialize_symbols): Move code to _initialize_py_symbol.
* python/py-symtab.c (_initialize_py_symtab): New function.
(gdbpy_initialize_symtabs): Move code to _initialize_py_symtab.
* python/py-type.c (_initialize_py_type): New function.
(gdbpy_initialize_types): Move code to _initialize_py_type.
* python/py-unwind.c (_initialize_py_unwind): New function.
(gdbpy_initialize_unwind): Move code to _initialize_py_unwind.
* python/python.c (_initialize_python): Move call to
do_start_initialization to gdbpy_finish_initialization.
(gdbpy_finish_initialization): Add call to
do_start_initialization.
|
|
Without any explicit dependencies specified, the observers attached
to the 'gdb::observers::new_objfile' observable are always notified
in the order in which they have been attached.
The new_objfile observer callback to auto-load scripts is attached in
'_initialize_auto_load'.
The new_objfile observer callback that propagates the new_objfile event
to the Python side is attached in 'gdbpy_initialize_inferior', which is
called via '_initialize_python'.
With '_initialize_python' happening before '_initialize_auto_load',
the consequence was that the new_objfile event was emitted on the Python
side before autoloaded scripts had been executed when a new objfile was
loaded.
As a result, trying to access the objfile's pretty printers (defined in
the autoloaded script) from a handler for the Python-side
'new_objfile' event would fail. Those would only be initialized later on
(when the 'auto_load_new_objfile' callback was called).
To make sure that the objfile passed to the Python event handler
is properly initialized (including its 'pretty_printers' member),
make sure that the 'auto_load_new_objfile' observer is notified
before the 'python_new_objfile' one that propagates the event
to the Python side.
To do this, make use of the mechanism to explicitly specify
dependencies between observers (introduced in a preparatory commit).
Add a corresponding testcase that involves a test library with an autoloaded
Python script and a handler for the Python 'new_objfile' event.
(The real world use case where I came across this issue was in an attempt
to extend handling for GDB pretty printers for dynamically loaded
objfiles in the Qt Creator IDE, see [1] and [2] for more background.)
[1] https://bugreports.qt.io/browse/QTCREATORBUG-25339
[2] https://codereview.qt-project.org/c/qt-creator/qt-creator/+/333857/1
Tested on x86_64-linux (Debian testing).
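With the ordering fixed, a handler along these lines (the handler body
is only illustrative) can rely on the autoloaded script having run
already:

import gdb

def on_new_objfile(event):
    objfile = event.new_objfile
    # The auto_load_new_objfile observer has already run, so printers
    # registered by the objfile's autoloaded script are visible here.
    print("%s has %d pretty printer(s)"
          % (objfile.filename, len(objfile.pretty_printers)))

gdb.events.new_objfile.connect(on_new_objfile)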
gdb/ChangeLog:
* gdb/auto-load.c (_initialize_auto_load): Specify token
when attaching the 'auto_load_new_objfile' observer, so
other observers can specify it as a dependency.
* gdb/auto-load.h (struct token): Declare
'auto_load_new_objfile_observer_token' as token to be used
for the 'auto_load_new_objfile' observer.
* gdb/python/py-inferior.c (gdbpy_initialize_inferior): Make
'python_new_objfile' observer depend on 'auto_load_new_objfile'
observer, so it gets notified after the latter.
gdb/testsuite/ChangeLog:
* gdb.python/libpy-autoloaded-pretty-printers-in-newobjfile-event.so-gdb.py: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-lib.cc: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-lib.h: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-main.cc: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event.exp: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event.py: New test.
Change-Id: I8275b3f4c3bec32e56dd7892f9a59d89544edf89
|
|
Give a name to each observer, this will help produce more meaningful
debug message.
gdbsupport/ChangeLog:
* observable.h (class observable) <struct observer> <observer>:
Add name parameter.
<name>: New field.
<attach>: Add name parameter, update all callers.
Change-Id: Ie0cc4664925215b8d2b09e026011b7803549fba0
|
|
As reported in bug 27757, we get an internal error when doing:
$ cat test.c
struct foo {
  int len;
  int items[];
};

struct foo *p;

int main() {
  return 0;
}
$ gcc test.c -g -O0 -o test
$ ./gdb -q -nx --data-directory=data-directory ./test -ex 'python gdb.parse_and_eval("p").type.target()["items"].type.range()'
Reading symbols from ./test...
/home/simark/src/binutils-gdb/gdb/gdbtypes.h:435: internal-error: LONGEST dynamic_prop::const_val() const: Assertion `m_kind == PROP_CONST' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n)
This is because the Python code (typy_range) blindly reads the high
bound of the type of `items` as a constant value. Since it is a
flexible array member, it has no high bound, so the property is undefined.
Since commit 8c2e4e0689 ("gdb: add accessors to struct dynamic_prop"),
the getters check that you are not getting a property value of the wrong
kind, so this causes a failed assertion.
Fix it by checking if the property is indeed a constant value before
accessing it as such. Otherwise, use 0. This restores the previous GDB
behavior: because the structure was zero-initialized, this is what was
returned before. But now this behavior is explicit and not accidental.
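After the fix, the reproducer from above runs without the internal
error; the undefined high bound is reported as 0 (a sketch of the
equivalent Python, run against the test program above):

import gdb

items_type = gdb.parse_and_eval("p").type.target()["items"].type
# For a flexible array member the high bound is undefined; the Python
# API now reports it as 0 instead of hitting the failed assertion.
print(items_type.range())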
Add a test, gdb.python/flexible-array-member.exp, that is derived from
gdb.base/flexible-array-member.exp. It tests the same things, but
through the Python API. It also specifically tests getting the range
from the various kinds of flexible array member types (AFAIK it wasn't
possible to do the equivalent through the CLI).
gdb/ChangeLog:
PR gdb/27757
* python/py-type.c (typy_range): Check that bounds are constant
before accessing them as such.
* guile/scm-type.c (gdbscm_type_range): Likewise.
gdb/testsuite/ChangeLog:
PR gdb/27757
* gdb.python/flexible-array-member.c: New test.
* gdb.python/flexible-array-member.exp: New test.
* gdb.guile/scm-type.exp (test_range): Add test for flexible
array member.
* gdb.guile/scm-type.c (struct flex_member): New.
(main): Use it.
Change-Id: Ibef92ee5fd871ecb7c791db2a788f203dff2b841
|
|
The 'create_breakpoint' function takes a 'parse_extra' argument that
determines whether the condition, thread, and force-condition
specifiers should be parsed from the extra string or be used from the
function arguments. However, for the case when 'parse_extra' is
false, there is no way to pass the force-condition specifier. This
patch adds it as a new argument.
Also, in the case when parse_extra is false, the current behavior is
as if the condition is being forced. This is a bug. The default
behavior should reject the breakpoint. See below for a demo of this
incorrect behavior. (The MI command '-break-insert' uses the
'create_breakpoint' function with parse_extra=0.)
$ gdb -q --interpreter=mi3 /tmp/simple
=thread-group-added,id="i1"
=cmd-param-changed,param="history save",value="on"
=cmd-param-changed,param="auto-load safe-path",value="/"
~"Reading symbols from /tmp/simple...\n"
(gdb)
-break-insert -c junk -f main
&"warning: failed to validate condition at location 1, disabling:\n "
&"No symbol \"junk\" in current context.\n"
^done,bkpt={number="1",type="breakpoint",disp="keep",enabled="y",addr="<MULTIPLE>",cond="junk",times="0",original-location="main",locations=[{number="1.1",enabled="N",addr="0x000000000000114e",func="main",file="/tmp/simple.c",fullname="/tmp/simple.c",line="2",thread-groups=["i1"]}]}
(gdb)
break main if junk
&"break main if junk\n"
&"No symbol \"junk\" in current context.\n"
^error,msg="No symbol \"junk\" in current context."
(gdb)
break main -force-condition if junk
&"break main -force-condition if junk\n"
~"Note: breakpoint 1 also set at pc 0x114e.\n"
&"warning: failed to validate condition at location 1, disabling:\n "
&"No symbol \"junk\" in current context.\n"
~"Breakpoint 2 at 0x114e: file /tmp/simple.c, line 2.\n"
=breakpoint-created,bkpt={number="2",type="breakpoint",disp="keep",enabled="y",addr="<MULTIPLE>",cond="junk",times="0",original-location="main",locations=[{number="2.1",enabled="N",addr="0x000000000000114e",func="main",file="/tmp/simple.c",fullname="/tmp/simple.c",line="2",thread-groups=["i1"]}]}
^done
(gdb)
After applying this patch, we get the behavior below:
(gdb)
-break-insert -c junk -f main
^error,msg="No symbol \"junk\" in current context."
This restores the behavior that is present in the existing releases.
gdb/ChangeLog:
2021-04-21 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* breakpoint.h (create_breakpoint): Add a new parameter,
'force_condition'.
* breakpoint.c (create_breakpoint): Use the 'force_condition'
argument when 'parse_extra' is false to check if the condition
is invalid at all of the breakpoint locations.
Update the users below.
(break_command_1)
(dprintf_command)
(trace_command)
(ftrace_command)
(strace_command)
(create_tracepoint_from_upload): Update.
* guile/scm-breakpoint.c (gdbscm_register_breakpoint_x): Update.
* mi/mi-cmd-break.c (mi_cmd_break_insert_1): Update.
* python/py-breakpoint.c (bppy_init): Update.
* python/py-finishbreakpoint.c (bpfinishpy_init): Update.
gdb/testsuite/ChangeLog:
2021-04-21 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* gdb.mi/mi-break.exp: Extend with checks for invalid breakpoint
conditions.
|
|
This adds a block search flags parameter to expand_symtabs_matching.
All callers are updated to search both the static and global blocks,
as that was the implied behavior before this patch.
This is a step toward replacing lookup_symbol with
expand_symtabs_matching.
gdb/ChangeLog
2021-04-17 Tom Tromey <tom@tromey.com>
* symtab.c (global_symbol_searcher::expand_symtabs)
(default_collect_symbol_completion_matches_break_on): Update.
* symmisc.c (maintenance_expand_symtabs): Update.
* symfile.h (expand_symtabs_matching): Add search_flags
parameter.
* symfile.c (expand_symtabs_matching): Add search_flags
parameter.
* symfile-debug.c (objfile::expand_symtabs_matching): Add
search_flags parameter.
* quick-symbol.h (struct quick_symbol_functions)
<expand_symtabs_matching>: Add search_flags parameter.
* python/py-symbol.c (gdbpy_lookup_static_symbols): Update.
* psymtab.c (recursively_search_psymtabs)
(psymbol_functions::expand_symtabs_matching): Add search_flags
parameter.
* psympriv.h (struct psymbol_functions) <expand_symtabs_matching>:
Add search_flags parameter.
* objfiles.h (struct objfile) <expand_symtabs_matching>: Add
search_flags parameter.
* linespec.c (iterate_over_all_matching_symtabs): Update.
* dwarf2/read.c (struct dwarf2_gdb_index)
<expand_symtabs_matching>: Add search_flags parameter.
(struct dwarf2_debug_names_index) <expand_symtabs_matching>: Add
search_flags parameter.
(dw2_map_matching_symbols): Update.
(dw2_expand_marked_cus, dw2_expand_symtabs_matching)
(dwarf2_gdb_index::expand_symtabs_matching): Add search_flags
parameter.
(dw2_debug_names_iterator): Change block_index to search flags.
<m_block_index>: Likewise.
(dw2_debug_names_iterator::next)
(dwarf2_debug_names_index::lookup_symbol)
(dwarf2_debug_names_index::expand_symtabs_for_function)
(dwarf2_debug_names_index::map_matching_symbols)
(dwarf2_debug_names_index::map_matching_symbols): Update.
(dwarf2_debug_names_index::expand_symtabs_matching): Add
search_flags parameter.
* ada-lang.c (ada_add_global_exceptions)
(collect_symbol_completion_matches): Update.
|
|
Python 3.4 has deprecated the imp module in favour of importlib. This
patch avoids the DeprecationWarning. This warning is visible to users
whose libpython.so has been compiled with --with-pydebug.
Considering that even python 3.5 has reached end of life, would it be
better to just use importlib and drop support for python 3.0 to 3.3?
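For reference, the importlib-based replacement for the imp.load_module
pattern looks roughly like this (a sketch assuming Python 3.5 or
later, not the exact code used in gdb/__init__.py):

import importlib.util

def load_module_from_path(name, path):
    # importlib equivalent of the deprecated imp.load_module: build a
    # spec from the file location, create the module from the spec,
    # then execute the module body.
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module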
2021-02-28 Boris Staletic <boris.staletic@gmail.com>
* gdb/python/lib/gdb/__init__.py: Use importlib on python 3.4+
to avoid deprecation warnings.
|
|
The small example for gdb.Parameter.get_set_string does not return a
string. The documentation is very clear that this method must return
a string, and indeed, inspecting the code in gdb/python/py-param.c
shows that a string return value is required (if an exception is not
thrown).
While inspecting the code in gdb/python/py-param.c I noticed that the
comment for the C++ code that invokes the Python get_set_string method
is wrong, so I updated that too.
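A minimal parameter following the documented contract, returning the
empty string so that setting it prints nothing extra (the parameter
name is arbitrary):

import gdb

class ExampleParam(gdb.Parameter):
    """A boolean parameter: set/show example-param."""

    set_doc = "Set the example parameter."
    show_doc = "Show the example parameter."

    def __init__(self):
        super(ExampleParam, self).__init__(
            "example-param", gdb.COMMAND_DATA, gdb.PARAM_BOOLEAN)
        self.value = True

    def get_set_string(self):
        # Must return a string; the empty string means GDB prints
        # nothing after "set example-param ...".
        return ""

ExampleParam()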
gdb/ChangeLog:
* python/py-param.c (get_set_value): Update header comment.
gdb/doc/ChangeLog:
* python.texinfo (Parameters In Python): Return empty string in
small example code.
|
|
This commit:
commit d1cab9876d72d867b2de82688f5f5a2a4b655edb
Date: Tue Sep 15 11:08:56 2020 -0600
Don't use gdb_py_long_from_ulongest
Introduced a regression when GDB is compiled with Python 2. The frame
filter API expects the gdb.FrameDecorator.function () method to return
either a string (the name of a function) or an address, which GDB then
uses to lookup a msymbol.
If the address returned from gdb.FrameDecorator.function () comes from
gdb.Frame.pc () then before the above commit we would always expect to
see a PyLong object.
After the above commit we might (on Python 2) get a PyInt object.
The GDB code does not expect to see a PyInt, and only checks for a
PyLong; we then see an error message like:
RuntimeError: FrameDecorator.function: expecting a String, integer or None.
This commit just adds an additional call to PyInt_Check which handles
the missing case.
I had already written a test case to cover this issue before spotting
that the gdb.python/py-framefilter.exp test also triggers this
failure. As the new test case is slightly different I have kept it
in.
The new test forces the behaviour of gdb.FrameDecorator.function
returning an address. The reason the existing test case hits this is
due to the behaviour of the builtin gdb.FrameDecorator base class. If
the base class behaviour ever changed then the return an address case
would only be tested by the new test case.
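For illustration, a decorator of the kind that exercises this path,
returning the frame's PC rather than a function name (a sketch; it
would normally be returned from a frame filter's decorator iterator):

import gdb
from gdb.FrameDecorator import FrameDecorator

class AddressFunctionDecorator(FrameDecorator):
    """A decorator whose function() returns an address, not a name."""

    def __init__(self, frame_iter):
        super(AddressFunctionDecorator, self).__init__(frame_iter)

    def function(self):
        # Returning the PC as an integer makes GDB look the address up
        # as a minimal symbol; with Python 2 this integer could arrive
        # as a PyInt rather than a PyLong.
        return int(self.inferior_frame().pc())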
gdb/ChangeLog:
* python/py-framefilter.c (py_print_frame): Use PyInt_Check as
well as PyLong_Check for Python 2.
gdb/testsuite/ChangeLog:
* gdb.python/py-framefilter-addr.c: New file.
* gdb.python/py-framefilter-addr.exp: New file.
* gdb.python/py-framefilter-addr.py: New file.
|
|
The current mechanism by which the Python gdb.current_objfile is
maintained does not allow for nested auto-load events. It is assumed
that once an auto-load script has finished loading then the current
objfile should be set back to NULL. In a nested situation, we should
be restoring the previous value.
We already have an RAII class to handle save/restore type behaviour,
so let's just switch to using that.
The test is a little contrived, but is simple enough, and triggers the
bug. The real use case might involve the auto-load script calling
functions (either in the just-loaded object file, or in the main
executable), which in turn trigger further auto-loads to occur.
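For illustration, an autoloaded script of the kind involved (a file
such as libfoo.so-gdb.py, hypothetical name); with the scoped restore,
gdb.current_objfile() stays correct even if sourcing this script ends
up triggering a nested auto-load:

import gdb

# Runs while GDB sources this auto-load script; current_objfile() is
# the objfile whose loading triggered the auto-load.
objfile = gdb.current_objfile()
print("auto-loading for %s" % objfile.filename)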
gdb/ChangeLog:
* python/python.c (gdbpy_source_objfile_script): Use
make_scoped_restore to restore gdbpy_current_objfile.
(gdbpy_execute_objfile_script): Likewise.
gdb/testsuite/ChangeLog:
* gdb.python/py-auto-load-chaining-f1.c: New file.
* gdb.python/py-auto-load-chaining-f1.o-gdb.py: New file.
* gdb.python/py-auto-load-chaining-f2.c: New file.
* gdb.python/py-auto-load-chaining-f2.o-gdb.py: New file.
* gdb.python/py-auto-load-chaining.c: New file.
* gdb.python/py-auto-load-chaining.exp: New file.
|
|
If the user implements a TUI window in Python, and this window
responds to GDB events and then redraws its window contents then there
is currently an edge case which can lead to problems.
The Python API documentation suggests that calling methods like erase
or write on a TUI window (from Python code) will raise an exception if
the window is not valid.
And the description for is_valid says:
This method returns True when this window is valid. When the user
changes the TUI layout, windows no longer visible in the new layout
will be destroyed. At this point, the gdb.TuiWindow will no longer
be valid, and methods (and attributes) other than is_valid will
throw an exception.
From this I, as a user, would expect that if I did 'tui disable' to
switch back to CLI mode, then the window would no longer be valid.
However, this is not the case.
When the TUI is disabled the windows in the TUI are not deleted; they
are simply hidden. As such, currently, the is_valid method continues
to return true.
This means that if the user's Python code does something like:
def event_handler (e):
    global tui_window_object
    if tui_window_object.is_valid ():
        tui_window_object.erase ()
        tui_window_object.write ("Hello World")

gdb.events.stop.connect (event_handler)
Then when a stop event arrives GDB will try to draw the TUI window,
even when the TUI is disabled.
This exposes two bugs. First, is_valid should be returning false in
this case; second, if the user forgot to add the is_valid call, then I
believe the erase and write calls should be throwing an
exception (when the TUI is disabled).
The solution to both of these issues is I think bound together, as it
depends on having a working 'is_valid' check.
There's a rogue assert added into tui-layout.c as part of this
commit. While working on this commit I managed to break GDB such that
TUI_CMD_WIN was nullptr, which was causing GDB to abort. I'm leaving
the assert in as it might help people catch issues in the future.
This patch is inspired by the work done here:
https://sourceware.org/pipermail/gdb-patches/2020-December/174338.html
gdb/ChangeLog:
* python/py-tui.c (gdbpy_tui_window) <is_valid>: New member
function.
(REQUIRE_WINDOW): Call is_valid member function.
(REQUIRE_WINDOW_FOR_SETTER): New define.
(gdbpy_tui_is_valid): Call is_valid member function.
(gdbpy_tui_set_title): Call REQUIRE_WINDOW_FOR_SETTER instead.
* tui/tui-data.h (struct tui_win_info) <is_visible>: Check
tui_active too.
* tui/tui-layout.c (tui_apply_current_layout): Add an assert.
* tui/tui.c (tui_enable): Move setting of tui_active earlier in
the function.
gdb/doc/ChangeLog:
* python.texinfo (TUI Windows In Python): Extend description of
TuiWindow.is_valid.
gdb/testsuite/ChangeLog:
* gdb.python/tui-window-disabled.c: New file.
* gdb.python/tui-window-disabled.exp: New file.
* gdb.python/tui-window-disabled.py: New file.
|
|
There's a bug in the python tui API. If the user tries to delete the
window title attribute then this will trigger undefined behaviour in
GDB due to a missing nullptr check.
gdb/ChangeLog:
* python/py-tui.c (gdbpy_tui_set_title): Check that the new value
for the title is not nullptr.
gdb/testsuite/ChangeLog:
* gdb.python/tui-window.exp: Add new tests.
* gdb.python/tui-window.py (TestWindow) <__init__>: Store
TestWindow object into global the_window.
<remote_title>: New method.
(delete_window_title): New function.
|
|
While working on another patch I noticed an oddly formatted error
message in the Python code.
When 'set python print-stack message' is in effect then consider this
Python script:
class TestCommand (gdb.Command):
    def __init__ (self):
        gdb.Command.__init__ (self, "test-cmd", gdb.COMMAND_DATA)

    def invoke(self, args, from_tty):
        raise RuntimeError ("bad")

TestCommand ()
And this GDB session:
(gdb) source path/to/python/script.py
(gdb) test-cmd
Python Exception <class 'RuntimeError'> bad:
Error occurred in Python: bad
The line 'Python Exception <class 'RuntimeError'> bad:' doesn't look
terrible in this situation; the colon at the end of the first line
makes sense given the second line.
However, there are places in GDB where there is no second line
printed, for example consider this python script:
def stop_listener (e):
    raise RuntimeError ("bad")

gdb.events.stop.connect (stop_listener)
Then this GDB session:
(gdb) file helloworld.exe
(gdb) start
Temporary breakpoint 1 at 0x40112a: file hello.c, line 6.
Starting program: helloworld.exe
Temporary breakpoint 1, main () at hello.c:6
6 printf ("Hello World\n");
Python Exception <class 'RuntimeError'> bad:
(gdb) si
0x000000000040112f 6 printf ("Hello World\n");
Python Exception <class 'RuntimeError'> bad:
In this case there is no auxiliary information displayed after the
warning, and the line ending in the colon looks weird to me.
A quick survey of the code seems to indicate that it is not uncommon
for there to be no auxiliary information line printed; it's not just
the one case I found above.
I propose that the line that currently looks like this:
Python Exception <class 'RuntimeError'> bad:
Be reformatted like this:
Python Exception <class 'RuntimeError'>: bad
I think this looks fine then in either situation. The first now looks
like this:
(gdb) test-cmd
Python Exception <class 'RuntimeError'>: bad
Error occurred in Python: bad
And the second like this:
(gdb) si
0x000000000040112f 6 printf ("Hello World\n");
Python Exception <class 'RuntimeError'>: bad
There's just two tests that needed updating. Errors are checked for
in many more tests, but most of the time the pattern doesn't care
about the colon.
gdb/ChangeLog:
* python/python.c (gdbpy_print_stack): Reformat an error message.
gdb/testsuite/ChangeLog:
* gdb.python/py-framefilter.exp: Update expected results.
* gdb.python/python.exp: Update expected results.
|
|
The last frame in a corrupt stack stores the frame_id of the next frame,
so these two frames currently compare as equal.
So if you have a backtrace where the oldest frame is corrupt, this happens:
(gdb) py
>f = gdb.selected_frame()
>while f.older():
> f = f.older()
>print(f == f.newer())
>end
True
With this change, that same example returns False.
gdb/ChangeLog:
2021-02-07 Hannes Domani <ssbssa@yahoo.de>
* python/py-frame.c (frapy_richcompare): Compare frame_id_is_next.
|
|
... and update all users.
gdb/ChangeLog:
* gdbtypes.h (get_type_arch): Rename to...
(struct type) <arch>: ... this, update all users.
Change-Id: I0e3ef938a0afe798ac0da74a9976bbd1d082fc6f
|