Age | Commit message | Author | Files | Lines |
|
I realized that I had only implemented DAP breakpoint conditions for
exception breakpoints, and not other kinds of breakpoints. This patch
corrects the oversight.
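For illustration, a minimal sketch of setBreakpoints arguments carrying a condition; the field names follow the DAP specification, and the path and expression are placeholders:
# Hypothetical DAP setBreakpoints arguments; "condition" is the field that
# is now honoured for ordinary source breakpoints as well.
set_breakpoints_args = {
    "source": {"path": "/tmp/prog.c"},
    "breakpoints": [
        {"line": 42, "condition": "count == 3"},
    ],
}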
|
|
Currently, gdb will unwind the entire stack in response to the
stackTrace request. I had erroneously thought that the totalFrames
attribute was required in the response. However, the spec says:
If omitted or if `totalFrames` is larger than the available
frames, a client is expected to request frames until a request
returns less frames than requested (which indicates the end of the
stack).
This patch removes this from the response in order to improve
performance when the stack trace is very long.
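To illustrate the paging the spec describes, here is a client-side sketch; send_request stands in for whatever transport a client uses and is an assumed helper, not part of gdb:
# Sketch of a DAP client paging through stack frames; it stops as soon as a
# response returns fewer frames than were requested.
def fetch_all_frames(send_request, thread_id, page=20):
    frames = []
    start = 0
    while True:
        resp = send_request("stackTrace",
                            {"threadId": thread_id,
                             "startFrame": start,
                             "levels": page})
        frames.extend(resp["stackFrames"])
        if len(resp["stackFrames"]) < page:
            return frames
        start += page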
|
|
This implements the DAP breakpointLocations request.
|
|
Co-workers who work on a program that uses DAP asked for the ability
to have gdb stop at the main subprogram when launching. This patch
implements this extension.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This adds a new "target" to the DAP attach request. This is passed to
"target remote". I thought "attach" made the most sense for this,
because in some sense gdb is attaching to a running process. It's
worth noting that all DAP "attach" parameters are defined by the
implementation.
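A sketch of the attach arguments a client might send; the host and port are placeholders:
# Hypothetical DAP "attach" arguments for this extension; the "target"
# string is handed to gdb's "target remote" command.
attach_args = {
    "target": "localhost:2345",
}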
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
A DAP client can report the supportsVariableType capability in the
initialize request. In this case, gdb can include the type of a
variable or expression in various results.
|
|
This implements the DAP setExpression request.
|
|
This adds an 'assign' method to gdb.Value. This allows for assignment
without requiring the use of parse_and_eval.
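A minimal sketch of the new method, assuming a program with an integer variable x:
import gdb

# Fetch a value once, then update it in place -- no need to build an
# assignment expression such as gdb.parse_and_eval("x = 42").
val = gdb.parse_and_eval("x")
val.assign(42)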
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
It occurred to me recently that gdb's DAP implementation should
probably check the types of objects coming from the client. This
patch implements this idea by reusing Python's existing type
annotations, and supplying a decorator that verifies these at runtime.
Python doesn't make it very easy to do runtime type-checking, so the
core of the checker is written by hand. I haven't tried to make a
fully generic runtime type checker. Instead, this only checks the
subset that is needed by DAP. For example, only keyword-only
functions are handled.
Furthermore, in a few spots, it wasn't convenient to spell out the
type that is accepted. I've added a couple of comments to this effect
in breakpoint.py.
I've tried to make this code compatible with older versions of Python,
but I've only been able to try it with 3.9 and 3.10.
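The general shape of such a checker, as a simplified sketch; gdb's actual checker is hand-written and handles more cases (Optional, sequences, and so on):
import functools
import typing

def type_check(func):
    """Verify keyword arguments against the function's type annotations."""
    hints = typing.get_type_hints(func)

    @functools.wraps(func)
    def wrapper(**kwargs):
        for name, value in kwargs.items():
            expected = hints.get(name)
            # Only plain classes are checked here; a real checker also has
            # to handle generic annotations like Optional and Sequence.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}, "
                                f"not {type(value).__name__}")
        return func(**kwargs)

    return wrapper

@type_check
def set_breakpoint(*, source: str, line: int):
    ...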
|
|
My co-worker Kévin taught me that using a mutable object as a default
argument in Python is somewhat dangerous, because the object is
created a single time (when the function is defined), and so if it is
mutated in the body of the function, the changes will stick around.
This patch changes the cases like this in DAP to use () rather than []
as the default. This patch is merely preventative, as no bugs like
this are in the code.
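The pitfall being guarded against, in miniature:
def remember(item, seen=[]):      # one list, created at definition time
    seen.append(item)
    return seen

remember("a")    # ['a']
remember("b")    # ['a', 'b'] -- the earlier call leaked into this one

def remember_fixed(item, seen=()):   # immutable default, as DAP now uses
    return list(seen) + [item]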
|
|
The 'request' decorator is intended to also ensure that the request
function runs in the DAP thread. However, the unwrapped function is
installed in the global request map, so the wrapped version is never
called. This patch fixes the bug.
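In outline the bug looked like this; the names are illustrative rather than gdb's actual code, and send_to_dap_thread is an assumed helper:
import functools

_request_map = {}

def request(name):
    def decorate(func):
        @functools.wraps(func)
        def wrapper(**args):
            # hand the call off to the DAP worker thread (assumed helper)
            return send_to_dap_thread(func, **args)

        # The bug: registering 'func' here meant the thread-switching
        # wrapper was never called.  The fix registers 'wrapper'.
        _request_map[name] = wrapper
        return wrapper
    return decorate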
|
|
When I first started implementing DAP, I had some vague plan of having
the implementation functions use the same name as the request. I
abandoned this idea, but one vestige remained. This patch renames the
one remaining function to be gdb-ish.
|
|
A few DAP requests support a "singleThread" parameter, which is
somewhat similar to scheduler-locking. This patch implements support
for this.
|
|
This implements the DAP "stepOut" request.
|
|
This implements the DAP "attach" request.
Note that the copyright dates on the new test source file are not
incorrect -- this was copied verbatim from another directory.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This implements the DAP setExceptionBreakpoints request for Ada. This
is a somewhat minimal implementation, in that "exceptionOptions" are
not implemented (or advertised) -- I wasn't completely sure how this
feature is supposed to work.
I haven't added C++ exception handling here, but it's easy to do if
needed.
This patch relies on the new MI command execution support to do its
work.
|
|
History Of This Patch
=====================
This commit aims to address PR gdb/21699. There have now been a
couple of attempts to fix this issue. Simon originally posted two
patches back in 2021:
https://sourceware.org/pipermail/gdb-patches/2021-July/180894.html
https://sourceware.org/pipermail/gdb-patches/2021-July/180896.html
Before Pedro then posted a version of his own:
https://sourceware.org/pipermail/gdb-patches/2021-July/180970.html
After this the conversation halted. Then in 2023 I (Andrew) also took
a look at this bug and posted two versions:
https://sourceware.org/pipermail/gdb-patches/2023-April/198570.html
https://sourceware.org/pipermail/gdb-patches/2023-April/198680.html
The approach taken in my first patch was pretty similar to what Simon
originally posted back in 2021. My second attempt was only a slight
variation on the first.
Pedro then pointed out his older patch, and so we arrive at this
patch. The GDB changes here are mostly Pedro's work, but updated by
me (Andrew); any mistakes are mine.
The tests here are a combination of everyone's work, and the commit
message is new, but copies bits from everyone's earlier work.
Problem Description
===================
Bug PR gdb/21699 makes the observation that using $_as_string with
GDB's printf can cause GDB to print unexpected data from the
inferior. The reproducer is pretty simple:
#include <stddef.h>
#include <string.h>
static char arena[100];
/* Override malloc() so value_coerce_to_target() gets a known
   pointer, and we know we'll see an error if $_as_string() gives
   a string that isn't null terminated.  */
void *
malloc (size_t size)
{
  memset (arena, 'x', sizeof (arena));
  if (size > sizeof (arena))
    return NULL;
  return arena;
}
int
main ()
{
  return 0;
}
And then in a GDB session:
$ gdb -q test
Reading symbols from /tmp/test...
(gdb) start
Temporary breakpoint 1 at 0x4004c8: file test.c, line 17.
Starting program: /tmp/test
Temporary breakpoint 1, main () at test.c:17
17 return 0;
(gdb) printf "%s\n", $_as_string("hello")
"hello"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
(gdb) quit
The problem above is caused by how value_cstring is used within
py-value.c, but once we understand the issue, it turns out that
value_cstring is used in an unexpected way in many places within GDB.
Within py-value.c we have a null-terminated C-style string. We then
pass a pointer to this string, along with the length of this
string (so not including the null-character) to value_cstring.
In value_cstring GDB allocates an array value of the given character
type, and copies in the requested number of characters.  However
value_cstring does not add a null-character of its own. This means
that the value created by calling value_cstring is only
null-terminated if the null-character is included in the passed in
length. In py-value.c this is not the case, and indeed, in most uses
of value_cstring, this is not the case.
When GDB tries to print one of these strings the value contents are
pushed to the inferior, and then read back as a C-style string; that
is, GDB reads inferior memory until it finds a null-terminator.  For
the py-value.c case, no null-terminator is pushed into the inferior,
so GDB will continue reading inferior memory until a null-terminator
is found, with unpredictable results.
Patch Description
=================
The first thing this patch does is better define what the arguments
for the two functions value_cstring and value_string should represent.
The comments in the header file are updated to describe whether the
length argument should, or should not, include a null-character.
Also, the data argument is changed to type gdb_byte. The functions as
they currently exist will handle wide-characters, in which case more
than one 'char' would be needed for each character. As such using
gdb_byte seems to make more sense.
To avoid adding casts throughout GDB, I've also added an overload that
still takes a 'char *', but asserts that the character type being used
is of size '1'.
The value_cstring function is now responsible for adding a null
character at the end of the string value it creates.
However, once we start looking at how value_cstring is used, we
realise there's another, related, problem. Not every language's
strings are null terminated. Fortran and Ada strings, for example,
are just arrays of characters; GDB already has the function
value_string, which can be used to create such values.
Consider this example using current GDB:
(gdb) set language ada
(gdb) p $_gdb_setting("arch")
$1 = (97, 117, 116, 111)
(gdb) ptype $
type = array (1 .. 4) of char
(gdb) p $_gdb_maint_setting("test-settings string")
$2 = (0)
(gdb) ptype $
type = array (1 .. 1) of char
This shows two problems, first, the $_gdb_setting and
$_gdb_maint_setting functions are calling value_cstring using the
builtin_char character type, rather than a language-appropriate type.  In
the first call, the 'arch' case, the value_cstring call doesn't
include the null character, so the returned array only contains the
expected characters. But, in the $_gdb_maint_setting example we do
end up including the null-character, even though this is not expected
for Ada strings.
This commit adds a new language method, language_defn::value_string;
this function takes a pointer and length and creates a
language-appropriate value that represents the string.  For C, C++, etc. this
will be a null-terminated string (by calling value_cstring), and for
Fortran and Ada this can be a bounded array of characters with no null
terminator. Additionally, this new language_defn::value_string
function is responsible for selecting a language-appropriate character
type.
After this commit the only calls to value_cstring are from the C
expression evaluator and from the default language_defn::value_string.
And the only calls to value_string are from Fortran, Ada, and
Objective-C related code.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=21699
Co-Authored-By: Simon Marchi <simon.marchi@efficios.com>
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
Co-Authored-By: Pedro Alves <pedro@palves.net>
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
Fix some more typos:
- distinquish -> distinguish
- actualy -> actually
- singe -> single
- frash -> frame
- chid -> child
- dissassembler -> disassembler
- uninitalized -> uninitialized
- precontidion -> precondition
- regsiters -> registers
- marge -> merge
- sate -> state
- garanteed -> guaranteed
- explictly -> explicitly
- prefices (nonstandard plural) -> prefixes
- bondary -> boundary
- formated -> formatted
- ithe -> the
- arrav -> array
- coresponding -> corresponding
- owend -> owned
- fials -> fails
- diasm -> disasm
- ture -> true
- tpye -> type
There's one code change, the name of macro SIG_CODE_BONDARY_FAULT changed to
SIG_CODE_BOUNDARY_FAULT.
Tested on x86_64-linux.
|
|
Fix a few typos:
- implemention -> implementation
- convertion(s) -> conversion(s)
- backlashes -> backslashes
- signoring -> ignoring
- (un)ambigious -> (un)ambiguous
- occured -> occurred
- hidding -> hiding
- temporarilly -> temporarily
- immediatelly -> immediately
- sillyness -> silliness
- similiar -> similar
- porkuser -> pokeuser
- thats -> that
- alway -> always
- supercede -> supersede
- accomodate -> accommodate
- aquire -> acquire
- priveleged -> privileged
- priviliged -> privileged
- priviledges -> privileges
- privilige -> privilege
- recieve -> receive
- (p)refered -> (p)referred
- succesfully -> successfully
- successfuly -> successfully
- responsability -> responsibility
- wether -> whether
- wich -> which
- disasbleable -> disableable
- descriminant -> discriminant
- construcstor -> constructor
- underlaying -> underlying
- underyling -> underlying
- structureal -> structural
- appearences -> appearances
- terciarily -> tertiarily
- resgisters -> registers
- reacheable -> reachable
- likelyhood -> likelihood
- intepreter -> interpreter
- disassemly -> disassembly
- covnersion -> conversion
- conviently -> conveniently
- atttribute -> attribute
- struction -> struct
- resonable -> reasonable
- popupated -> populated
- namespaxe -> namespace
- intialize -> initialize
- identifer(s) -> identifier(s)
- expection -> exception
- exectuted -> executed
- dungerous -> dangerous
- dissapear -> disappear
- completly -> completely
- (inter)changable -> (inter)changeable
- beakpoint -> breakpoint
- automativ -> automatic
- alocating -> allocating
- agressive -> aggressive
- writting -> writing
- reguires -> requires
- registed -> registered
- recuding -> reducing
- opeartor -> operator
- ommitted -> omitted
- modifing -> modifying
- intances -> instances
- imbedded -> embedded
- gdbaarch -> gdbarch
- exection -> execution
- direcive -> directive
- demanged -> demangled
- decidely -> decidedly
- argments -> arguments
- agrument -> argument
- amespace -> namespace
- targtet -> target
- supress(ed) -> suppress(ed)
- startum -> stratum
- squence -> sequence
- prompty -> prompt
- overlow -> overflow
- memember -> member
- languge -> language
- geneate -> generate
- funcion -> function
- exising -> existing
- dinking -> syncing
- destroh -> destroy
- clenaed -> cleaned
- changep -> changedp (name of variable)
- arround -> around
- aproach -> approach
- whould -> would
- symobl -> symbol
- recuse -> recurse
- outter -> outer
- freeds -> frees
- contex -> context
Tested on x86_64-linux.
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
In gdb/python/py-value.c, in the value_object_methods array I noticed:
...
{ "const_value", valpy_const_value, METH_NOARGS,
"Return a 'const' qualied version of the same value." },
...
Fix the qualied -> qualified typo.
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
Remove the breakpoint_pointer_iterator layer. Adjust all users of
all_breakpoints and all_tracepoints to use references instead of
pointers.
Change-Id: I376826f812117cee1e6b199c384a10376973af5d
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
|
|
Remove the bp_location_pointer_iterator layer. Adjust all users of
breakpoint::locations to use references instead of pointers.
Change-Id: Iceed34f5e0f5790a9cf44736aa658be6d1ba1afa
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
|
|
This patch augments the DAP launch request with some optional new
parameters that let the client control the command-line arguments and
the environment of the inferior.
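A sketch of what a client's launch arguments might look like with these additions; the parameter names shown here ("args", "env") are assumptions for illustration, and the values are placeholders:
# Hypothetical DAP "launch" arguments; consult gdb's DAP documentation for
# the authoritative parameter names.
launch_args = {
    "program": "/tmp/prog",
    "args": ["--verbose", "input.txt"],
    "env": {"LANG": "C"},
}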
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This adds two new attributes and three new methods to gdb.Inferior.
The attributes let Python code see the command-line arguments and the
name of "main". Argument setting is also supported.
The methods let Python code manipulate the inferior's environment
variables.
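A rough sketch of how the new API might be used; the attribute and method names below are assumptions for illustration only:
import gdb

inf = gdb.selected_inferior()
# Hypothetical names -- see the gdb.Inferior documentation for the real
# spelling of the new attributes and methods.
print(inf.arguments)          # command-line arguments of the inferior
inf.set_env("LANG", "C")      # manipulate the inferior's environment
inf.unset_env("TMPDIR")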
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
DAP specifies that if an evaluate request does not have a frameID
parameter, then the expression is evaluated in the global scope.
|
|
This adds a 'global_context' parameter to gdb.parse_and_eval.
This lets users request a parse that is done at "global scope".
I considered letting callers pass in a block instead, with None
meaning "global" -- but then there didn't seem to be a clean way to
express the default for this parameter.
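A minimal sketch, assuming a global variable that is shadowed by a local of the same name:
import gdb

# Evaluate in the global scope even when a local "counter" shadows it.
outer = gdb.parse_and_eval("counter", global_context=True)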
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This implements the DAP loadedSources request, using gdb.execute_mi to
avoid having to write another custom Python API.
|
|
This adds a new Python function, gdb.execute_mi, that can be used to
invoke an MI command but get the output as a Python object, rather
than a string. This is done by implementing a new ui_out subclass
that builds a Python object.
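A minimal example of the new function; -data-evaluate-expression is a standard MI command:
import gdb

# The MI output record comes back as a Python dict rather than text.
result = gdb.execute_mi("-data-evaluate-expression", "2 + 3")
print(result["value"])    # "5"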
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=11688
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This changes mi_parse_argv to be a method of mi_parse. This is just a
minor cleanup.
|
|
This changes mi_parse::args to be a private member, retrieved via
accessor. It also changes this member to be a std::string. This
makes it simpler for a subsequent patch to implement different
behavior for argument parsing.
|
|
If an MI command written in Python includes a number in its output,
currently that is simply emitted as a string. However, it's
convenient for a later patch if these are emitted using field_signed.
This does not make a difference to ordinary MI clients.
|
|
I recently added a 'dap' component to bugzilla, and I filed a few bugs
there. This patch removes the corresponding FIXME comments.
A few such comments still exist. In at least one case, I have a fix
I'll be submitting eventually; in others I think I need to do a bit of
investigation to properly file a bug report.
|
|
This commit extends the Python Disassembler API to allow for styling
of the instructions.
Before this commit the Python Disassembler API allowed the user to do
two things:
- They could intercept instruction disassembly requests and return a
string of their choosing, this string then became the disassembled
instruction, or
- They could call builtin_disassemble, which would call back into
libopcodes to perform the disassembly.  As libopcodes printed the
instruction, GDB would collect these print requests and build a
string. This string was then returned from the builtin_disassemble
call, and the user could modify or extend this string as needed.
Neither of these approaches allowed for, or preserved, disassembler
styling, which is now available within libopcodes for many of the more
popular architectures GDB supports.
This commit aims to fill this gap. After this commit a user will be
able to do the following things:
- Implement a custom instruction disassembler entirely in Python
without calling back into libopcodes, the custom disassembler will
be able to return styling information such that GDB will display
the instruction fully styled. All of GDB's existing style
settings will affect how instructions coming from the Python
disassembler are displayed in the expected manner.
- Call builtin_disassemble and receive a result that represents how
libopcodes would like the instruction styled.  The user can then
adjust or extend the disassembled instruction before returning the
result to GDB. Again, the instruction will be styled as expected.
To achieve this I will add two new classes to GDB,
DisassemblerTextPart and DisassemblerAddressPart.
Within builtin_disassemble, instead of capturing the print calls from
libopcodes and building a single string, we will now create either a
text part or address part and store these parts in a vector.
The DisassemblerTextPart will capture a small piece of text along with
the associated style that should be used to display the text. This
corresponds to the disassembler calling
disassemble_info::fprintf_styled_func, or, for disassemblers that don't
support styling, disassemble_info::fprintf_func.
The DisassemblerAddressPart is used when libopcodes requests that an
address be printed, and takes care of printing the address and
associated symbol; this corresponds to the disassembler calling
disassemble_info::print_address_func.
These parts are then placed within the DisassemblerResult when
builtin_disassemble returns.
Alternatively, the user can directly create parts by calling two new
methods on the DisassembleInfo class: DisassembleInfo.text_part and
DisassembleInfo.address_part.
Having created these parts the user can then pass these parts when
initializing a new DisassemblerResult object.
Finally, when we return from Python to gdbpy_print_insn, one way or
another, the result being returned will have a list of parts. Back in
GDB's C++ code we walk the list of parts and call back into GDB's core
to display the disassembled instruction with the correct styling.
The new API lives in parallel with the old API. Any existing code
that creates a DisassemblerResult using a single string immediately
creates a single DisassemblerTextPart containing the entire
instruction and gives this part the default text style. This is also
what happens if the user calls builtin_disassemble for an architecture
that doesn't (yet) support libopcodes styling.
This matches up with what happens when the Python API is not involved:
an architecture without disassembler styling support uses the old
libopcodes printing API (the API that doesn't pass style info), and
GDB just prints everything using the default text style.
The reason that parts are created by calling methods on
DisassembleInfo, rather than calling the class constructor directly,
is DisassemblerAddressPart. Ideally this part would only hold the
address which the part represents, but in order to support backwards
compatibility we need to be able to convert the
DisassemblerAddressPart into a string. To do that we need to call
GDB's internal print_address function, and to do that we need a
gdbarch.
What this means is that the DisassemblerAddressPart needs to take a
gdb.Architecture object at creation time. The only valid place a user
can pull this from is from the DisassembleInfo object, so having the
DisassembleInfo act as a factory ensures that the correct gdbarch is
passed over each time. I implemented both solutions (the one
presented here, and an alternative where parts could be constructed
directly), and this felt like the cleanest solution.
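To give a feel for the resulting API, here is a sketch of a Python disassembler that calls builtin_disassemble and inspects the styled parts before handing them back; the names follow the gdb.disassembler module, but treat the details as a sketch rather than a reference:
import gdb
import gdb.disassembler
from gdb.disassembler import Disassembler, builtin_disassemble

class PassThroughStyled(Disassembler):
    """Disassemble via libopcodes, then look at the styled parts."""

    def __init__(self):
        super().__init__("pass-through-styled")

    def __call__(self, info):
        result = builtin_disassemble(info)
        # Each part is a DisassemblerTextPart or DisassemblerAddressPart;
        # a real disassembler could edit this list, or build its own parts
        # with info.text_part(...) / info.address_part(...).
        for part in result.parts:
            pass
        return result

gdb.disassembler.register_disassembler(PassThroughStyled(), None)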
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
This commit is a refactor ahead of the next change which will make
disassembler styling available through the Python API.
Unfortunately, in order to make the styling support available, I think
the easiest solution is to make a very small change to the existing
API.
The current API relies on returning a DisassemblerResult object to
represent each disassembled instruction. Currently GDB allows the
DisassemblerResult class to be sub-classed, which could mean that a
user tries to override the various attributes that exist on the
DisassemblerResult object.
This commit removes this ability, effectively making the
DisassemblerResult class final.
Though this is a change to the existing API, I'm hoping this isn't
going to cause too many issues:
- The Python disassembler API was only added in the previous release
of GDB, so I don't expect it to be widely used yet, and
- It's not clear to me why a user would need to sub-class the
DisassemblerResult type; I allowed it in the original patch
because at the time I couldn't see any reason to NOT allow it.
Having prevented sub-classing I can now rework the tail end of the
gdbpy_print_insn function; instead of pulling the results out of the
DisassemblerResult object by calling back into Python, I now cast the
Python object back to its C++ type (disasm_result_object), and access
the fields directly from there. In later commits I will be reworking
the disasm_result_object type in order to hold information about the
styled disassembler output.
The tests that dealt with sub-classing DisassemblerResult have been
removed, and a new test that confirms that DisassemblerResult can't be
sub-classed has been added.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
I noticed many spots checking whether a dynamic property's kind is
PROP_CONST. Some spots, I think, are doing a slightly incorrect check
-- checking for != PROP_UNDEFINED where == PROP_CONST is actually
required, the key thing being that const_val may only be called for
PROP_CONST properties.
This patch adds dynamic_prop::is_constant and then updates these checks to
use it.
Regression tested on x86-64 Fedora 36.
|
|
I noticed that gdb's DAP code did not provide a way to see register
values. DAP defines a "register" scope, which this patch implements.
This patch also adds the missing (and optional) "presentationHint" to
scopes.
|
|
Add the DisassemblerResult.__str__ method. This gives the same result
as the DisassemblerResult.string attribute, but can be useful
sometimes depending on how the user is trying to print the object.
There's a test for the new functionality.
|
|
Add a __repr__ method for the DisassembleInfo and DisassemblerResult
types, and add some tests for these new methods.
|
|
The DAP scopes request examines the symbols in a block tree, but
neglects to omit types. This patch fixes the problem.
|
|
Currently, when we add a new python sub-system to GDB,
e.g. py-inferior.c, we end up having to create a new function like
gdbpy_initialize_inferior, which then has to be called from the
function do_start_initialization in python.c.
In some cases (py-micmd.c and py-tui.c), we have two functions
gdbpy_initialize_*, and gdbpy_finalize_*, with the second being called
from finalize_python which is also in python.c.
This commit proposes a mechanism to manage these initialization and
finalization calls, this means that adding a new Python subsystem will
no longer require changes to python.c or python-internal.h; instead,
the initialization and finalization functions will be registered
directly from the sub-system file, e.g. py-inferior.c, or py-micmd.c.
The initialization and finalization functions are managed through a
new class gdbpy_initialize_file in python-internal.h. This class
contains a single global vector of all the initialization and
finalization functions.
In each Python sub-system we create a new gdbpy_initialize_file
object, the object constructor takes care of registering the two
callback functions.
Now from python.c we can call static functions on the
gdbpy_initialize_file class which take care of walking the callback
list and invoking each callback in turn.
To slightly simplify the Python sub-system files I added a new macro
GDBPY_INITIALIZE_FILE, which hides the need to create an object. We
can now just do this:
GDBPY_INITIALIZE_FILE (gdbpy_initialize_registers);
One possible problem with this change is that there is now no
guaranteed ordering of how the various sub-systems are initialized (or
finalized). To try and avoid dependencies creeping in I have added a
use of the environment variable GDB_REVERSE_INIT_FUNCTIONS, this is
the same environment variable used in the generated init.c file.
Just like with init.c, when this environment variable is set we
reverse the list of Python initialization (and finalization)
functions. As there is already a test that starts GDB with the
environment variable set then this should offer some level of
protection against dependencies creeping in - though for full
protection I guess we'd need to run all gdb.python/*.exp tests with
the variable set.
I have tested this patch with the environment variable set, and saw no
regressions, so I think we are fine right now.
One other change of note was for gdbpy_initialize_gdb_readline, this
function previously returned void. In order to make this function
have the correct signature I've updated its return type to int, and we
now return 0 to indicate success.
All of the other initialize (and finalize) functions have been made
static within their respective sub-system files.
There should be no user visible changes after this commit.
|
|
SUMMARY
The '--simple-values' argument to '-stack-list-arguments' and similar
GDB/MI commands does not take reference types into account, so that
references to arbitrarily large structures are considered "simple" and
printed. This means that the '--simple-values' argument cannot be used
by IDEs when tracing the stack due to the time taken to print large
structures passed by reference.
DETAILS
Various GDB/MI commands ('-stack-list-arguments', '-stack-list-locals',
'-stack-list-variables' and so on) take a PRINT-VALUES argument which
may be '--no-values' (0), '--all-values' (1) or '--simple-values' (2).
In the '--simple-values' case, the command is supposed to print the
name, type, and value of variables with simple types, and print only the
name and type of variables with compound types.
The '--simple-values' argument ought to be suitable for IDEs that need
to update their user interface with the program's call stack every time
the program stops. However, it does not take C++ reference types into
account, and this makes the argument unsuitable for this purpose.
For example, consider the following C++ program:
struct s {
  int v[10];
};
int
sum(const struct s &s)
{
  int total = 0;
  for (int i = 0; i < 10; ++i) total += s.v[i];
  return total;
}
int
main(void)
{
  struct s s = { { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 } };
  return sum(s);
}
If we start GDB in MI mode and continue to 'sum', the behaviour of
'-stack-list-arguments' is as follows:
(gdb)
-stack-list-arguments --simple-values
^done,stack-args=[frame={level="0",args=[{name="s",type="const s &",value="@0x7fffffffe310: {v = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}}"}]},frame={level="1",args=[]}]
Note that the value of the argument 's' was printed, even though 's' is
a reference to a structure, which is not a simple value.
See https://github.com/microsoft/MIEngine/pull/673 for a case where this
behaviour caused Microsoft to avoid the use of '--simple-values' in
their MIEngine debug adapter, because it caused Visual Studio Code to
take too long to refresh the call stack in the user interface.
SOLUTIONS
There are two ways we could fix this problem, depending on whether we
consider the current behaviour to be a bug.
1. If the current behaviour is a bug, then we can update the behaviour
of '--simple-values' so that it takes reference types into account:
that is, a value is simple if it is neither an array, struct, or
union, nor a reference to an array, struct or union.
In this case we must add a feature to the '-list-features' command so
that IDEs can detect that it is safe to use the '--simple-values'
argument when refreshing the call stack.
2. If the current behaviour is not a bug, then we can add a new option
for the PRINT-VALUES argument, for example, '--scalar-values' (3),
that would be suitable for use by IDEs.
In this case we must add a feature to the '-list-features' command
so that IDEs can detect that the '--scalar-values' argument is
available for use when refreshing the call stack.
PATCH
This patch implements solution (1) as I think the current behaviour of
not printing structures, but printing references to structures, is
contrary to reasonable expectation.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29554
|
|
I'd like to move some things so they become methods on struct ui. But
first, I think that struct ui and the related things are big enough to
deserve their own file, instead of being scattered through top.{c,h} and
event-top.c.
Change-Id: I15594269ace61fd76ef80a7b58f51ff3ab6979bc
|
|
This changes field_is_static to be a method on struct field, and
updates all the callers. Most of this patch was written by script.
Regression tested on x86-64 Fedora 36.
|
|
This changes apply_ext_lang_type_printers to use unique_xmalloc_ptr,
removing some manual memory management. Regression tested on x86-64
Fedora 36.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This commit allows Frame.read_var to accept named arguments, and also
improves (I think) some of the error messages emitted when values of
the wrong type are passed to this function.
The read_var method takes two arguments: the first is a variable, which is
either a gdb.Symbol or a string, while the second, optional, argument
is always a gdb.Block.
I'm now using 'O!' as the format specifier for the second argument,
which allows the argument type to be checked early on. Currently, if
the second argument is of the wrong type then we get this error:
(gdb) python print(gdb.selected_frame().read_var("a1", "xxx"))
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError: Second argument must be block.
Error while executing Python code.
(gdb)
After this commit, we now get an error like this:
(gdb) python print(gdb.selected_frame().read_var("a1", "xxx"))
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: argument 2 must be gdb.Block, not str
Error while executing Python code.
(gdb)
Changes are:
1. Exception type is TypeError, not RuntimeError.  This is unfortunate,
as user code _could_ be relying on this, but I think the improvement
is worth the risk; user code relying on the exact exception type is
likely to be pretty rare.
2. New error message gives argument position and expected argument
type, as well as the type that was passed.
If the first argument, the variable, has the wrong type then the
previous exception was already a TypeError; however, I've updated the
text of the exception to more closely match the "standard" error
message we see above. If the first argument has the wrong type then
before this commit we saw this:
(gdb) python print(gdb.selected_frame().read_var(123))
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: Argument must be a symbol or string.
Error while executing Python code.
(gdb)
And after we see this:
(gdb) python print(gdb.selected_frame().read_var(123))
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: argument 1 must be gdb.Symbol or str, not int
Error while executing Python code.
(gdb)
For existing code that doesn't use named arguments and doesn't rely on
exceptions, there will be no changes after this commit.
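A sketch of the call using keyword arguments; the argument names "variable" and "block" are assumptions based on the description above:
import gdb

frame = gdb.selected_frame()
# Argument names are illustrative assumptions; the second argument must be
# a gdb.Block, as described above.
value = frame.read_var(variable="a1", block=frame.block())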
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
Following on from the previous commit, this updates
Frame.read_register to accept named arguments. As with the previous
commit there's no huge benefit for the users in accepting named
arguments here -- this function only takes a single argument after
all.
But I do think it is worth keeping the Frame.read_register method in sync
with the PendingFrame.read_register method; this allows for the
possibility that the user has some code that can operate on either a
Frame or a PendingFrame.
Minor update to allow for named arguments, and an extra test to check
the new functionality.
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
Update the two gdb.PendingFrame methods gdb.PendingFrame.read_register
and gdb.PendingFrame.create_unwind_info to accept keyword arguments.
There's no huge benefit for making this change, both of these methods
only take a single argument, so it is (maybe) less likely that a user
will take advantage of the keyword arguments in these cases, but I
think it's nice to be consistent, and I don't see any particular
drawbacks to making this change.
For PendingFrame.read_register I've changed the argument name from
'reg' to 'register' in the documentation and used 'register' as the
argument name in GDB. My preference for APIs is to use full words
where possible, and given we didn't support named arguments before
this change should not break any existing code.
There should be no user visible changes (for existing code) after this
commit.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
Update gdb.UnwindInfo.add_saved_register to accept named keyword
arguments.
As part of this update we now use gdb_PyArg_ParseTupleAndKeywords
instead of PyArg_UnpackTuple to parse the function arguments.
By switching to gdb_PyArg_ParseTupleAndKeywords, we can now use 'O!'
as the argument format for the function's value argument. This means
that we can check the argument type (is gdb.Value) as part of the
argument processing rather than manually performing the check later in
the function. One result of this is that we now get a better error
message (at least, I think so). Previously we would get something
like:
ValueError: Bad register value
Now we get:
TypeError: argument 2 must be gdb.Value, not XXXX
It's unfortunate that the exception type changed, but I think the new
exception type actually makes more sense.
My preference for argument names is to use full words where that's not
too excessive. As such, I've updated the name of the argument from
'reg' to 'register' in the documentation, which is the argument name
I've made GDB look for here.
For existing unwinder code that doesn't throw any exceptions nothing
should change with this commit. It is possible that a user has some
code that throws and catches the ValueError, and this code will break
after this commit, but I think this is going to be sufficiently rare
that we can take the risk here.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
Make find_thread_ptid (the overload that takes an inferior) a method of
struct inferior.
Change-Id: Ie5b9fa623ff35aa7ddb45e2805254fc8e83c9cd4
Reviewed-By: Tom Tromey <tom@tromey.com>
|