|
The gdb docs promise that methods with two or more arguments will
accept keywords. However, I found that TuiWindow.write didn't
allow them. This patch adds the missing support.
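For illustration, a minimal sketch of a TUI window that passes
full_window as a keyword; the window name "demo" and the class are
made up for this example:
```
import gdb

class DemoWindow:
    """Trivial TUI window used only to demonstrate keyword arguments."""

    def __init__(self, tui_window):
        self._win = tui_window
        self._win.title = "demo"

    def render(self):
        # With keyword support, full_window can be passed by name.
        self._win.write("hello from Python\n", full_window=True)

gdb.register_window_type("demo", DemoWindow)
```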
|
|
Remove VALUE_REGNUM, replace it with a method on struct value. Set
`m_location.reg.regnum` directly from value::allocate_register_lazy,
which is fine because allocate_register_lazy is a static creation
function for struct value.
Change-Id: Id632502357da971617d9dce1e2eab9b56dbcf52d
|
|
I noticed that the DAP attach test case (and similarly
remoted-dap.exp) had a rogue exception stack trace in the log. It
turns out that an attach will generate a stop that does not have a
reason.
This patch fixes the problem in the _on_stop event listener by making
it a bit more careful when examining the event reason. It also adds
some machinery so that attach stops can be suppressed, which I think
is the right thing to do.
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
This adds a new parameter to control the DAP logging level. By
default, "expected" exceptions are not logged, but the parameter lets
the user change this when more logging is desired.
This also changes a couple of spots to avoid logging the stack trace
for a DAPException.
This patch also documents the existing DAP logging parameter. I
forgot to document this before.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
This introduces a new DAPException class, and then changes various
spots in the DAP implementation to wrap "expected" exceptions in this.
This class will help detect rogue exceptions caused by bugs in the
implementation.
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
In many cases, it's not possible for gdb to discover the executable
when a DAP 'attach' request is used. This patch lets the IDE supply
this information.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
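As a sketch of what a client would send, expressed as a Python dict;
the exact shape below, including the "program" argument name, is an
assumption for illustration rather than text from this patch:
```
# Hypothetical DAP attach request letting the IDE name the executable.
attach_request = {
    "seq": 3,
    "type": "request",
    "command": "attach",
    "arguments": {
        "pid": 1234,                      # process to attach to
        "program": "/path/to/executable"  # executable gdb could not discover
    },
}
```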
|
|
Commit d5cebea18e7a ("Make cached_reg_t own its data") added a
destructor to cached_reg_t.
That caused a build problem with Clang, which errors out like so:
> CXX python/py-unwind.o
> gdb/python/py-unwind.c:126:16: error: flexible array member 'reg' of type 'cached_reg_t[]' with non-trivial destruction
> 126 | cached_reg_t reg[];
> | ^
This is not really a problem for our code, which allocates the
whole structure with xmalloc, and then initializes the array elements
with in-place new, and then takes care to call the destructor
manually, as commit d5cebea18e7a did:
@@ -928,7 +927,7 @@ pyuw_dealloc_cache (frame_info *this_frame, void *cache)
cached_frame_info *cached_frame = (cached_frame_info *) cache;
for (int i = 0; i < cached_frame->reg_count; i++)
- xfree (cached_frame->reg[i].data);
+ cached_frame->reg[i].~cached_reg_t ();
Maybe we should get rid of the flexible array member and use a bog
standard std::vector. I doubt this would cause any visible
performance issue.
Meanwhile, to unbreak the build, this commit switches from C99-style
flexible array member to 0-length array. It behaves the same, and
Clang doesn't complain. I got the idea from here:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70932#c11
GCC 9, our oldest supported version, already supported this:
https://gcc.gnu.org/onlinedocs/gcc-9.1.0/gcc/Zero-Length.html
but the extension is actually much older than that. Note that
C99-style flexible array members are not standard C++ either.
Change-Id: I37dda18f367e238a41d610619935b2a0f2acacce
|
|
struct cached_reg_t owns its data buffer, but currently that is
managed manually. Convert it to use a unique_xmalloc_ptr.
Approved-By: Tom Tromey <tom@tromey.com>
Change-Id: I05a107098b717299e76de76aaba00d7fbaeac77b
|
|
Some functions related to the handling of registers in frames accept
"this frame", for which we want to read or write the register values,
while other functions accept "the next frame", which is the frame next
to that. The latter is needed because we sometimes need to read register
values for a frame that does not exist yet (usually when trying to
unwind that frame-to-be).
value_of_register and value_of_register_lazy both take "this frame",
even if what they ultimately want internally is "the next frame". This
is annoying if you are in a spot that currently has "the next frame" and
need to call one of these functions (which happens later in this
series). You need to get the previous frame only for those functions to
get the next frame again. That means more manipulation, and more
chances of making a mistake.
I propose to change these functions (and a few more functions in the
subsequent patches) to operate on "the next frame". Things become a bit
less awkward when all these functions agree on which frame they take.
So, in this patch, change value_of_register_lazy and value_of_register
to take "the next frame" instead of "this frame". This adds a lot of
get_next_frame_sentinel_okay, but if we convert the user registers API
to also use "the next frame" instead of "this frame", it will get simple
again.
Change-Id: Iaa24815e648fbe5ae3c214c738758890a91819cd
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
|
|
This changes explicit_location_spec to use unique_xmalloc_ptr,
removing some manual memory management.
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
|
|
In python/py-gdb-readline.c we make use of _PyOS_ReadlineTState,
however, this variable is no longer public in Python 3.13, and so GDB
no longer builds.
We are making use of _PyOS_ReadlineTState in order to re-acquire the
Python Global Interpreter Lock (GIL). The _PyOS_ReadlineTState
variable is set in Python's outer readline code prior to calling the
user (GDB) supplied readline callback function, which for us is
gdbpy_readline_wrapper. The gdbpy_readline_wrapper function is called
without the GIL held.
Instead of using _PyOS_ReadlineTState, I propose that we switch to
calling PyGILState_Ensure() and PyGILState_Release(). These functions
will acquire the GIL based on the current thread. I think this should
be sufficient; I can't imagine why we'd be running
gdbpy_readline_wrapper on one thread on behalf of a different Python
thread.... that would be unexpected I think.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Move the gdbpy_gil class into python-internal.h; the next
commit wants to make use of this class from a file other
than python.c.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Currently, when creating a gdb.FinishBreakpoint in a function
called from an inline frame, it will never be hit:
```
(gdb) py fb=gdb.FinishBreakpoint()
Temporary breakpoint 1 at 0x13f1917b4: file C:/src/repos/binutils-gdb.git/gdb/testsuite/gdb.python/py-finish-breakpoint.c, line 47.
(gdb) c
Continuing.
Thread-specific breakpoint 1 deleted - thread 1 no longer in the thread list.
[Inferior 1 (process 1208) exited normally]
```
The reason is that the frame_id of a breakpoint has to be the
ID of a real frame, ignoring any inline frames.
With this fixed, it's working correctly:
```
(gdb) py fb=gdb.FinishBreakpoint()
Temporary breakpoint 1 at 0x13f5617b4: file C:/src/repos/binutils-gdb.git/gdb/testsuite/gdb.python/py-finish-breakpoint.c, line 47.
(gdb) c
Continuing.
Breakpoint 1, increase_inlined (a=0x40fa5c) at C:/src/repos/binutils-gdb.git/gdb/testsuite/gdb.python/py-finish-breakpoint.c:47
(gdb) py print(fb.return_value)
-8
```
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This implements DAP cancellation. A new object is introduced that
handles the details of cancellation. While cancellation is inherently
racy, this code attempts to make it so that gdb doesn't inadvertently
cancel the wrong request.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30472
Approved-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
Cancellation will generally be seen by the DAP code as a
KeyboardInterrupt. However, this derives from BaseException and not
Exception, so a small change is needed to send_gdb_with_response, to
forward the exception to the DAP server thread.
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
DAP cancellation needs a way to interrupt whatever is happening on
gdb's main thread -- whether that is the inferior, a gdb CLI command,
or Python code.
This patch adds a new gdb.interrupt() function for this purpose. It
simply sets the quit flag and lets gdb do the rest.
No tests in this patch -- instead this is tested via the DAP
cancellation tests.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
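A minimal sketch of how an extension might use it; the helper below
and its name are illustrative and not part of the patch:
```
import threading
import gdb

def interrupt_after(seconds):
    """Ask gdb's main thread to stop what it is doing after SECONDS."""
    def worker():
        # gdb.interrupt() just sets the quit flag; gdb notices it the
        # next time it checks for interrupts on the main thread.
        gdb.interrupt()
    timer = threading.Timer(seconds, worker)
    timer.start()
    return timer
```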
|
|
This changes the DAP server to move the JSON reader to a new thread.
This is key to implementing request cancellation, as now requests can
be read while an earlier one is being serviced.
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
This patch introduces a new NotStoppedException type and changes the
DAP implementation of "not stopped" to use it. I was already touching
some code in this area and I thought this looked a little cleaner.
This also has the advantage that we can now choose not to log the
exception -- previously I was sometimes a bit alarmed when seeing this
in the logs, even though it is harmless.
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
Now that gdb adds stop-reason details to stop events, we can simplify
the DAP code to emit correct stop reasons in its own events. For the
most part a simple renaming of gdb reasons is sufficient; however,
"pause" must still be handled specially.
|
|
This changes Python stop events to carry a "details" dictionary that
holds any relevant information about the stop. The details are
constructed using more or less the same procedure as is done for MI.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=13587
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
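For example, a listener can now inspect the dictionary like this (a
sketch; the exact keys depend on the kind of stop):
```
import gdb

def on_stop(event):
    # "details" holds the same kind of information MI reports for a stop,
    # e.g. a "reason" entry when one is available.
    details = getattr(event, "details", {})
    print("stop reason:", details.get("reason"))

gdb.events.stop.connect(on_stop)
```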
|
|
This moves the declaration of py_ui_out to a new header, so that it
can more readily be used by other code.
|
|
Now that DAP requests are normally run on the gdb thread, some DAP
helper functions are no longer needed. Removing these simplifies the
code.
|
|
This commit makes the gdb.Command.complete methods more verbose when
it comes to error handling.
Previous to this commit if any commands implemented in Python
implemented the complete method, and if there were any errors
encountered when calling that complete method, then GDB would silently
hide the error and continue as if there were no completions.
This makes it difficult to debug any errors encountered when writing
completion methods, and encourages the idea that Python extensions can
be broken, and GDB will just silently work around them.
I don't think this is a good idea. GDB should encourage extensions to
be written correctly, and robustly, and one way in which GDB can (I
think) support this, is by pointing out when an extension goes wrong.
In this commit I've gone through the Python command completion code,
and added calls to gdbpy_print_stack() or gdbpy_print_stack_or_quit()
in places where we were either clearing the Python error, or, in some
cases, just not handling the error at all.
One thing I have not changed is in cmdpy_completer (py-cmd.c) where we
process the list of completions returned from the Command.complete
method; this routine includes a call to gdbpy_is_string to check that a
possible completion is a string; if not, the completion is ignored.
I was tempted to remove this check, attempt to convert each result to
a string, and display an error if the conversion fails. After all,
returning anything but a string is surely a mistake by the extension
author.
However, the docs clearly say that only strings within the returned
list will be considered as completions. Anything else is ignored. As
such, and to avoid (what I think is pretty unlikely) breakage of
existing code, I've retained the gdbpy_is_string check.
After the gdbpy_is_string check we call python_string_to_host_string,
if this call fails then I do now print the error, where before we
ignored the error. I think this is OK; if GDB thinks something is a
string, but still can't convert it to a string, then I think it's OK
to display the error in that case.
Another case which I was a little unsure about was in
cmdpy_completer_helper, and the call to PyObject_CallMethodObjArgs,
which is when we actually call Command.complete. Previously, if this
call resulted in an exception then we would ignore this and just
pretend there were no completions.
Of all the changes, this is possibly the one with the biggest
potential for breaking existing scripts, but it is also, I think, the
most useful change. If the user code is wrong in some way, such that
an exception is raised, then previously the user would have no obvious
feedback about this breakage. Now GDB will print the exception for
them, making it, I think, much easier to debug their extension. But,
if there is user code in the wild that relies on raising an exception
as a means to indicate there are no completions .... well, that code
is going to break after this commit. I think we can live with this
though; the "exception means no completions" behaviour was never
documented.
I also added a new error() call if the PyObject_CallMethodObjArgs call
raises an exception. This causes the completion mechanism within GDB
to stop. Within GDB the completion code is called twice, the first
time to compute the word break characters, and then a second time to
compute the actual completions.
If PyObject_CallMethodObjArgs raises an exception when computing the
word break character, and we print it by calling
gdbpy_print_stack_or_quit(), but then carry on as if
PyObject_CallMethodObjArgs had returned no completions, GDB will
call the Python completion code again, which results in another call
to PyObject_CallMethodObjArgs, which might raise the same exception
again. This results in the Python exception being printed twice.
By throwing a C++ exception after the failed
PyObject_CallMethodObjArgs call, the completion mechanism is aborted,
and no completions are offered. But importantly, the Python exception
is only printed once. I think this gives a much better user
experience.
I've added some tests to cover this case, as I think this is the most
likely case that a user will run into.
Approved-By: Tom Tromey <tom@tromey.com>
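A small illustration of the kind of extension this affects (the
command name is made up); after this commit the TypeError below is
printed rather than silently swallowed:
```
import gdb

class BuggyCommand(gdb.Command):
    """Command whose completer always raises an exception."""

    def __init__(self):
        super().__init__("buggy-command", gdb.COMMAND_USER)

    def complete(self, text, word):
        # GDB now prints this exception (once) and aborts completion.
        raise TypeError("completer is broken")

    def invoke(self, arg, from_tty):
        pass

BuggyCommand()
```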
|
|
DAP specifies a "process" event that is sent when a process is started
or attached to. gdb was not emitting this (several DAP clients appear
to ignore it entirely), but it looked easy and harmless to implement.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30473
|
|
While working on cancellation, I noticed that a DAP 'pause' request
would set the "do not emit the continue" flag. This meant that a
subsequent request that should provoke a 'continue' event would
instead suppress the event.
I then tried writing a more obvious test case for this, involving an
inferior call -- and discovered that gdb.events.cont does not fire for
an inferior call.
This patch installs a new event listener for gdb.events.inferior_call
and arranges for this to emit continue and stop events when
appropriate. It also fixes the original bug, by adding a check to
exec_and_expect_stop.
|
|
GDB's Python API documentation for gdb.Command.complete() says:
The 'complete' method can return several values:
* If the return value is a sequence, the contents of the
sequence are used as the completions. It is up to 'complete'
to ensure that the contents actually do complete the word. A
zero-length sequence is allowed, it means that there were no
completions available. Only string elements of the sequence
are used; other elements in the sequence are ignored.
* If the return value is one of the 'COMPLETE_' constants
defined below, then the corresponding GDB-internal completion
function is invoked, and its result is used.
* All other results are treated as though there were no
available completions.
So, returning a non-sequence, non-integer value from a complete method
should be fine; it should just be treated as though there are no
completions.
However, if I write a complete method that returns None, I see this
behaviour:
(gdb) complete completefilenone x
Python Exception <class 'TypeError'>: 'NoneType' object is not iterable
warning: internal error: Unhandled Python exception
(gdb)
This is because we currently assume that anything that is not
an integer must be iterable, and we call PyObject_GetIter on it. When
this call fails a Python exception is set, but instead of
clearing (and therefore ignoring) this exception as we do everywhere
else in the Python completion code, we instead just return with the
exception set.
In this commit I add a PySequence_Check call. If this call returns
false (and we've already checked the integer case) then we can assume
there are no completion results.
I've added a test which checks returning a non-sequence.
Approved-By: Tom Tromey <tom@tromey.com>
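A sketch of the sort of command used above, reconstructed for
illustration (not the actual test code):
```
import gdb

class CompleteFileNone(gdb.Command):
    """Command whose completer returns a non-sequence."""

    def __init__(self):
        super().__init__("completefilenone", gdb.COMMAND_USER)

    def complete(self, text, word):
        # Neither a sequence nor a COMPLETE_* constant; with this fix it
        # is simply treated as "no completions".
        return None

    def invoke(self, arg, from_tty):
        pass

CompleteFileNone()
```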
|
|
Reformat gdb/python/lib/gdb/missing_debug.py with black after commit
e8c3dafa5f5 ("[gdb/python] Don't import curses.ascii module unless necessary").
|
|
I ran into a failure in test-case gdb.python/py-missing-debug.exp with python
3.6, which was fixed by commit 7db795bc67a ("gdb/python: remove use of
str.isascii()").
However, I subsequently ran into a failure with python 3.11:
...
(gdb) PASS: $exp: initial checks: debug info no longer found
source py-missing-debug.py^M
Traceback (most recent call last):^M
File "py-missing-debug.py", line 17, in <module>^M
from gdb.missing_debug import MissingDebugHandler^M
File "missing_debug.py", line 21, in <module>^M
from curses.ascii import isascii, isalnum^M
File "/usr/lib64/python3.11/_import_failed/curses.py", line 16, in <module>^M
raise ImportError(f"""Module '{failed_name}' is not installed.^M
ImportError: Module 'curses' is not installed.^M
Use:^M
sudo zypper install python311-curses^M
to install it.^M
(gdb) FAIL: $exp: source python script
...
Apparently I have the curses module installed for 3.6, but not 3.11.
I could just install it, but the test-case worked fine with 3.11 before commit
7db795bc67a.
Fix this by only using the curses module when necessary, for python <= 3.7.
Tested on x86_64-linux, with both python 3.6 and 3.11.
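A sketch of the approach (not the exact patch): only reach for
curses.ascii on interpreters old enough to lack str.isascii:
```
import sys

if sys.version_info >= (3, 7):
    def _isascii(ch):
        return ch.isascii()
else:
    # Only imported on Python < 3.7, where str.isascii does not exist.
    from curses.ascii import isascii as _isascii
```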
|
|
A couple of spots in the DAP code use the same workaround for the
absence of queue.SimpleQueue before Python 3.7. This patch
consolidates these into a single spot.
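The shared workaround amounts to something like the sketch below; the
helper name is illustrative:
```
import queue

def make_queue():
    """Return a SimpleQueue where available, else a plain Queue."""
    if hasattr(queue, "SimpleQueue"):
        return queue.SimpleQueue()
    # Fallback for older Python versions without SimpleQueue.
    return queue.Queue()
```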
|
|
Since GDB now requires C++17, we don't need the internally maintained
gdb::optional implementation. This patch does the following replacements:
- gdb::optional -> std::optional
- gdb::in_place -> std::in_place
- #include "gdbsupport/gdb_optional.h" -> #include <optional>
This change has mostly been done automatically. One exception is
gdbsupport/thread-pool.* which did not use the gdb:: prefix as it
already lives in the gdb namespace.
Change-Id: I19a92fa03e89637bab136c72e34fd351524f65e9
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Pedro Alves <pedro@palves.net>
|
|
gdb::make_unique is a wrapper around std::make_unique when compiled with
C++17. Now that C++17 is required, use std::make_unique directly in the
codebase, and remove gdb::make_unique.
Change-Id: I80b615e46e4b7c097f09d78e579a9bdce00254ab
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Pedro Alves <pedro@palves.net>
|
|
Hannes' patch to show local variables in the TUI pointed out that
NoOpStructPrinter should ignore static members. This patch implements
this.
|
|
DAP specifies that a request can fail with the "notStopped" message if
the inferior is running but the request requires that it first be
stopped.
This patch implements this for gdb. Most requests are assumed to
require a stopped inferior, and the exceptions are noted by a new
'request' parameter.
You may notice that the implementation is a bit racy. I think this is
inherent -- unless the client waits for a stop event before sending a
request, the request may be processed at any time relative to a stop.
https://sourceware.org/bugzilla/show_bug.cgi?id=31037
Reviewed-by: Kévin Le Gouguec <legouguec@adacore.com>
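A sketch of how a request opts out of the check; the request name, the
module path and the expect_stopped parameter name are assumptions for
illustration:
```
from gdb.dap.server import request

@request("pingInferior", expect_stopped=False)
def _ping_inferior(**args):
    # A request like this is allowed while the inferior is running;
    # requests without the flag get a "notStopped" error instead.
    return {}
```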
|
|
ExecutionInvoker is no longer really needed, due to the previous DAP
refactoring. This patch removes it in favor of an ordinary function.
One spot (the 'continue' request) could still have used it, but is
more succinctly expressed as a lambda.
Reviewed-by: Kévin Le Gouguec <legouguec@adacore.com>
|
|
Nearly every DAP request implementation forwards its work to the gdb
thread, using send_gdb_with_response. This patch refactors the
'request' decorator to make this automatic, and to provide some
parameters so that the unusual requests can express their needs as
well.
In a few spots this simplifies the code by removing an unnecessary
helper function. This could be done in more places as well if we
wanted.
The main motivation for this patch is that I thought it would be
helpful for cancellation. I am still working on that, but meanwhile
the parameterization of 'request' makes it easy to handle the
'notStopped' response as well.
Reviewed-by: Kévin Le Gouguec <legouguec@adacore.com>
|
|
DAP specifies a StackFrameFormat object that can be used to change how
the "name" part of a stack frame is constructed. While this output
can already be done in a nicer way (and also letting the client choose
the formatting), nevertheless it is in the spec, so I figured I'd
implement it.
While implementing this, I discovered that the current code does not
correctly preserve frame IDs across requests. I rewrote frame
iteration to preserve this, and it turned out to be simpler to combine
these patches.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30475
|
|
This commit:
commit 8f6c452b5a4e50fbb55ff1d13328b392ad1fd416
Date: Sun Oct 15 22:48:42 2023 +0100
gdb: implement missing debug handler hook for Python
introduced a use of str.isascii(), which was only added in Python 3.7.
This commit switches to use curses.ascii.isascii(), as this was
available in 3.6.
The same is true for str.isalnum(), which is replaced with
curses.ascii.isalnum().
There should be no user visible changes after this commit.
|
|
If you run gdb in the build tree without --data-directory, on a
program that does not have debug info, it will crash, because
gdbpy_handle_missing_debuginfo unconditionally uses gdb_python_module.
Other code in gdb using gdb_python_module checks it first and it
seems harmless to do the same thing here. (gdb_python_initialized
does not cover this case so that python can be partially initialized
and still somewhat work.)
|
|
A co-worker requested that the DAP scope for a nested function's frame
also show the variables from outer frames. DAP doesn't directly
support this notion, so this patch arranges to put these variables
into the inner frame's "Locals" scope.
I chose to do this only for DAP. For CLI and MI, gdb currently does
not do this, so this preserves the behavior.
Note that an earlier patch (see commit 4a1311ba) removed some code
that seemed to do something similar. However, that code did not
actually work.
|
|
While working on static links, I noticed that the DAP scopes code does
not handle the scenario where a frame decorator returns None. This
situation should be handled identically to a frame decorator returning
an empty iterator.
|
|
This adds a new gdb.Frame.static_link method to the gdb Python layer.
This can be used to find the static link frame for a given frame.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
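For example, a helper like this (illustrative) walks the static chain
of a nested function's frame:
```
import gdb

def lexical_ancestors(frame):
    """Yield the frames lexically enclosing FRAME, innermost first."""
    link = frame.static_link()
    while link is not None:
        yield link
        # static_link() returns None once there is no enclosing frame.
        link = link.static_link()
```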
|
|
This commit builds on the previous commit, and implements the
extension_language_ops::handle_missing_debuginfo function for Python.
This hook will give user supplied Python code a chance to help find
missing debug information.
The implementation of the new hook is pretty minimal within GDB's C++
code; most of the work is out-sourced to a Python implementation which
is modelled heavily on how GDB's Python frame unwinders are
implemented.
The following new commands are added as commands implemented in
Python; this is similar to how the Python unwinder commands are
implemented:
info missing-debug-handlers
enable missing-debug-handler LOCUS HANDLER
disable missing-debug-handler LOCUS HANDLER
To make use of this extension hook a user will create missing debug
information handler objects, and register these handlers with GDB.
When GDB encounters an objfile that is missing debug information, each
handler is called in turn until one is able to help. Here is a
minimal handler that does nothing useful:
import gdb
import gdb.missing_debug

class MyFirstHandler(gdb.missing_debug.MissingDebugHandler):
    def __init__(self):
        super().__init__("my_first_handler")

    def __call__(self, objfile):
        # This handler does nothing useful.
        return None

gdb.missing_debug.register_handler(None, MyFirstHandler())
Returning None from the __call__ method tells GDB that this handler
was unable to find the missing debug information, and GDB should ask
any other registered handlers.
By extending the __call__ method it is possible for the Python
extension to locate the debug information for objfile and return a
value that tells GDB how to use the information that has been located.
Possible return values from a handler:
- None: This means the handler couldn't help. GDB will call other
  registered handlers to see if they can help instead.
- False: The handler has done all it can, but the debug information
  for the objfile still couldn't be found. GDB will not call any
  other handlers, and will continue without the debug information
  for objfile.
- True: The handler has installed the debug information into a
  location where GDB would normally expect to find it. GDB should
  look again for the debug information.
- A string: The handler can return a filename, which is the file
  containing the missing debug information. GDB will load this
  file.
When a handler returns True, GDB will look again for the debug
information, but only using the standard built-in build-id and
.gnu_debuglink based lookup strategies. It is not possible for an
extension to trigger another debuginfod lookup; the assumption is that
the debuginfod server is remote, and out of the control of extensions
running within GDB.
Handlers can be registered globally, or per program space. GDB checks
the handlers for the current program space first, and then all of the
global handlers. The first handler that returns a value that is not
None has "handled" the objfile, at which point GDB continues.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
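As a slightly more concrete (and still illustrative) example, a
handler might return a filename when a separate debug file is found in
a local cache; the paths below are made up:
```
import os

import gdb
import gdb.missing_debug

class LocalCacheHandler(gdb.missing_debug.MissingDebugHandler):
    def __init__(self):
        super().__init__("local_cache")

    def __call__(self, objfile):
        name = os.path.basename(objfile.filename) + ".debug"
        candidate = os.path.join("/opt/debug-cache", name)
        if os.path.exists(candidate):
            # A string names the file holding the debug information;
            # GDB will load it for this objfile.
            return candidate
        # None lets any other registered handlers have a try.
        return None

gdb.missing_debug.register_handler(None, LocalCacheHandler())
```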
|
|
When resizing from a big to a small terminal size, if you have a
TUI python window that would then be outside of the new size,
valgrind shows this error:
==3389== Invalid read of size 1
==3389== at 0xC3DFEE: wnoutrefresh (lib_refresh.c:167)
==3389== by 0xC3E3C9: wrefresh (lib_refresh.c:63)
==3389== by 0xA9766C: tui_unhighlight_win(tui_win_info*) (tui-wingeneral.c:134)
==3389== by 0x98921C: tui_py_window::rerender() (py-tui.c:183)
==3389== by 0xA8C23C: tui_layout_split::apply(int, int, int, int, bool) (tui-layout.c:1030)
==3389== by 0xA8C2A2: tui_layout_split::apply(int, int, int, int, bool) (tui-layout.c:1033)
==3389== by 0xA8C23C: tui_layout_split::apply(int, int, int, int, bool) (tui-layout.c:1030)
==3389== by 0xA8B1F8: tui_apply_current_layout(bool) (tui-layout.c:81)
==3389== by 0xA95CDB: tui_resize_all() (tui-win.c:525)
==3389== by 0xA95D1E: tui_async_resize_screen(void*) (tui-win.c:562)
==3389== by 0x6B855D: invoke_async_signal_handlers() (async-event.c:234)
==3389== by 0xC0CEF8: gdb_do_one_event(int) (event-loop.cc:199)
==3389== Address 0x115cc214 is 1,332 bytes inside a block of size 2,240 free'd
==3389== at 0x4A0A430: free (vg_replace_malloc.c:446)
==3389== by 0xC3CF7D: _nc_freewin (lib_newwin.c:121)
==3389== by 0xA8B1C6: tui_apply_current_layout(bool) (tui-layout.c:78)
==3389== by 0xA95CDB: tui_resize_all() (tui-win.c:525)
==3389== by 0xA95D1E: tui_async_resize_screen(void*) (tui-win.c:562)
==3389== by 0x6B855D: invoke_async_signal_handlers() (async-event.c:234)
==3389== by 0xC0CEF8: gdb_do_one_event(int) (event-loop.cc:199)
==3389== by 0x8E40E9: captured_command_loop() (main.c:407)
==3389== by 0x8E5E54: gdb_main(captured_main_args*) (main.c:1324)
==3389== by 0x62AC04: main (gdb.c:39)
It's because tui_py_window::m_inner_window still has the outside
coordinates, and wnoutrefresh then does an out-of-bounds access.
Fix this by resetting m_inner_window on every resize; it will be
recreated anyway in the next rerender call.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
I found a declaration in py-stopevent.h for which there is no
definition. This patch removes it.
|
|
This patch implements the DAP setVariable request.
setVariable is a bit odd in that it specifies the variable to modify
by passing in the variable's container and the name of the variable.
This approach can't handle variable shadowing (there are a couple of
open DAP bugs on this topic), so this patch renames duplicates to
avoid the problem.
|
|
Add a gdb.Value.bytes attribute. This attribute contains the bytes of
the value (assuming the complete bytes of the value are available).
If the bytes of the gdb.Value are not available then accessing this
attribute raises an exception.
The bytes object returned from gdb.Value.bytes is cached within GDB so
that the same bytes object is returned each time. The bytes object is
created on-demand though to reduce unnecessary work.
For some values we can of course obtain the same information by
reading inferior memory based on gdb.Value.address and
gdb.Value.type.sizeof; however, not every value is in memory, so we
don't always have an address.
The gdb.Value.bytes attribute will convert any value to a bytes
object, so long as the contents are available. The value can be one
created purely in Python code, the value could be in a register,
or (of course) the value could be in memory.
The Value.bytes attribute can also be assigned to. Assigning to this
attribute is similar to calling Value.assign, the value of the
underlying value is updated within the inferior. The value assigned
to Value.bytes must be a buffer which contains exactly the correct
number of bytes (i.e. unlike value creation, we don't allow oversized
buffers).
To support this assignment-like behaviour I've factored out the core
of valpy_assign. I've also updated convert_buffer_and_type_to_value
so that it can (for my use case) check the exact buffer length.
The restrictions for when Value.bytes can or cannot be written to
are exactly the same as for Value.assign.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=13267
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
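A short sketch of both directions; the variable name "counter" is
assumed to exist in the inferior:
```
import gdb

val = gdb.parse_and_eval("counter")

# Reading: a bytes object with exactly val.type.sizeof bytes, cached by GDB.
raw = val.bytes

# Writing: the buffer must be exactly the right size, unlike value creation.
val.bytes = b"\x00" * val.type.sizeof
```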
|
|
I noticed that gdb/python/python.c unconditionally includes
gdbsupport/selftest.h.
Make this conditional on GDB_SELF_TEST.
Tested on x86_64-linux.
|
|
A pretty-printer's 'children' method may return values other than a
gdb.Value -- it may return any value that can be converted to a
gdb.Value.
I noticed that this case did not work for DAP. This patch fixes the
problem.
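For instance, a printer like this (illustrative) yields plain Python
values from children, which gdb converts to gdb.Value and which DAP
now handles:
```
import gdb

class PairPrinter:
    """Pretty-printer whose children are not gdb.Value objects."""

    def __init__(self, val):
        self._val = val

    def to_string(self):
        return "pair"

    def children(self):
        # Plain ints and strings are fine here; gdb converts them.
        yield "first", 1
        yield "second", "two"
```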
|
|
Andry pointed out that the DAP code did not properly handle
gdb.LazyString results from a pretty-printer, yielding:
TypeError: Object of type LazyString is not JSON serializable
This patch fixes the problem, partly with a small patch in varref.py,
but mainly by implementing tp_str for LazyString.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Andry noticed that given a DAP setExpression request, where the
expression to set is a register, DAP will return the wrong value -- it
will return the old value, not the updated one.
This happens because gdb.Value.assign (which was recently added for
DAP) does not update the value.
In this patch, I chose to have the assign method update the Value
in-place. It's also possible to have it return a new value, but this
didn't seem very useful to me.
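A small sketch of the in-place behaviour, assuming an inferior
variable named "counter":
```
import gdb

val = gdb.parse_and_eval("counter")
val.assign(42)
# The same Value now reflects the new contents, so no re-evaluation of
# the expression is needed to see the updated variable or register.
print(int(val))
```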
|