|
This set of changes enables support for the ARMv8.1-m PACBTI extensions [1].
The goal of the PACBTI extensions is similar in scope to that of A-profile
PAC/BTI (AArch64 only), but the underlying implementation is different.
One important difference is that the pointer authentication code is stored
in a separate register, thus we don't need to mask/unmask the return address
from a function in order to produce a correct backtrace.
The patch introduces the following modifications:
- Extends the prologue analyser for 32-bit ARM to handle some instructions
from ARMv8.1-m PACBTI: pac, aut, pacg, autg and bti. It also keeps track
of return address signing/authentication instructions.
- Adds code to identify object file attributes that indicate the presence of
ARMv8.1-m PACBTI (Tag_PAC_extension, Tag_BTI_extension, Tag_PACRET_use and
Tag_BTI_use).
- Adds support for DWARF pseudo-register RA_AUTH_CODE, as described in the
aadwarf32 [2].
- Extends the dwarf unwinder to track the value of RA_AUTH_CODE.
- Decorates backtraces with the "[PAC]" identifier when a frame has signed
the return address.
- Makes GDB aware of a new XML feature "org.gnu.gdb.arm.m-profile-pacbti". This
feature is not included as an XML file on GDB's side because it is only
supported for bare metal targets.
- Additional documentation.
[1] https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/armv8-1-m-pointer-authentication-and-branch-target-identification-extension
[2] https://github.com/ARM-software/abi-aa/blob/main/aadwarf32/aadwarf32.rst
|
|
Someone on IRC spotted a bug in qRcmd handling. This looks like an
oversight, or perhaps it is that way for historical reasons.
The code in gdb/remote.c:remote_target::rcmd uses isdigit instead of
isxdigit. One could argue that we are expecting decimal numbers, but further
below we use fromhex ().
Update the function to use isxdigit instead and also update the documentation.
I see there are lots of other cases of undocumented number format for error
messages, mostly described as NN instead of nn. For now I'll just update
this particular function.
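As a purely illustrative sketch (Python, not the actual C++ in remote.c), the
reason hex digits matter: an error reply is "E" followed by two hex digits, so
a reply like "Eab" is valid even though 'a' and 'b' fail a decimal-digit check.

import string

# Illustrative only: an rcmd error reply is "E" followed by two hex digits,
# so a decimal-only check would wrongly reject valid replies such as "Eab".
def is_error_reply(reply):
    return (len(reply) == 3 and reply[0] == "E"
            and all(c in string.hexdigits for c in reply[1:]))

print(is_error_reply("E01"))  # True
print(is_error_reply("Eab"))  # True, but a decimal-digit check would say False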
|
|
The previous patch added support for the DWARF prologue-end flag in the
line table. This flag can be used by DWARF producers to indicate where to
place breakpoints past a function prologue. However, this takes
precedence over prologue analyzers. So if we have to debug a program
with erroneous debug information, the overall debugging experience will
be degraded.
This commit proposes to add a maintenance command to instruct GDB to
ignore the prologue_end flag.
Tested on x86_64-gnu-linux.
Change-Id: Idda6d1b96ba887f4af555b43d9923261b9cc6f82
|
|
Add support for DW_LNS_set_prologue_end when building line-tables. This
attribute can be set by the compiler to indicate that an instruction is
an adequate place to set a breakpoint just after the prologue of a
function.
The compiler might set multiple prologue_end markers but, considering how
the current skip_prologue_using_sal works, this commit modifies it to accept
the first instruction with this marker (if any) as the place where a
breakpoint should be placed to be at the end of the prologue.
The need for this support came from a problematic usecase generated by
hipcc (i.e. clang). The problem is as follows: there's a function
(let's call it foo) which covers PC from 0xa800 to 0xa950. The body of
foo begins with a call to an inlined function, covering from 0xa800 to
0xa94c. The issue is that when placing a breakpoint at 'foo', GDB
inserts the breakpoint at 0xa818. The 0x18 offset is what GDB thinks is
foo's first address past the prologue.
Later, when hitting the breakpoint, GDB reports the stop within the
inlined function because the PC falls in its range, while the user
expects to stop in foo.
Looking at the line-table for this location, we have:
INDEX LINE ADDRESS IS-STMT
[...]
14 293 0x000000000000a66c Y
15 END 0x000000000000a6e0 Y
16 287 0x000000000000a800 Y
17 END 0x000000000000a818 Y
18 287 0x000000000000a824 Y
[...]
For comparison, let's look at llvm-dwarfdump's output for this CU:
Address Line Column File ISA Discriminator Flags
------------------ ------ ------ ------ --- ------------- -------------
[...]
0x000000000000a66c 293 12 2 0 0 is_stmt
0x000000000000a6e0 96 43 82 0 0 is_stmt
0x000000000000a6f8 102 18 82 0 0 is_stmt
0x000000000000a70c 102 24 82 0 0
0x000000000000a710 102 18 82 0 0
0x000000000000a72c 101 16 82 0 0 is_stmt
0x000000000000a73c 2915 50 83 0 0 is_stmt
0x000000000000a74c 110 1 1 0 0 is_stmt
0x000000000000a750 110 1 1 0 0 is_stmt end_sequence
0x000000000000a800 107 0 1 0 0 is_stmt
0x000000000000a800 287 12 2 0 0 is_stmt prologue_end
0x000000000000a818 114 59 81 0 0 is_stmt
0x000000000000a824 287 12 2 0 0 is_stmt
0x000000000000a828 100 58 82 0 0 is_stmt
[...]
The main difference we are interested in here is that llvm-dwarfdump's
output tells us that 0xa800 is an adequate place to place a breakpoint
past a function prologue. Since we know that foo covers from 0xa800 to
0xa94c, 0xa800 is the address at which the breakpoint should be placed
if the user wants to break in foo.
This commit proposes to add support for the prologue_end flag in the
line-program processing.
The processing of this prologue_end flag is made in skip_prologue_sal,
before it calls gdbarch_skip_prologue_noexcept. The intent is that if
the compiler gave information on where the prologue ends, we should use
this information and not try to rely on architecture dependent logic to
guess it.
The testsuite has been executed using this patch on GNU/Linux x86_64.
Testcases have been compiled with both gcc/g++ (version 9.4.0) and
clang/clang++ (version 10.0.0) since, at the time of writing, GCC does not
set the prologue_end marker. Tests done with GCC 11.2.0 (not over the
entire testsuite) show that it does not emit this flag either.
No regressions have been observed with GCC or Clang. Note that when
using Clang, this patch fixes a failure in
gdb.opt/inline-small-func.exp.
Change-Id: I720449a8a9b2e1fb45b54c6095d3b1e9da9152f8
|
|
This commit adds 'set debug tui on|off' and 'show debug tui'. It adds
the control variable, and the printing macros, in tui/tui.h. I've then
added some uses of these in tui.c and
tui-layout.c.
To help produce more useful debug output in tui-layout.c, I've added
some helper member functions in the class tui_layout_split, and also
moved the size_info struct out of tui_layout_split::apply into the
tui_layout_split class.
If tui debug is not turned on, then there should be no user visible
changes after this commit.
One thing to note is that, due to the way that the tui terminal is
often cleared, the only way I've found this useful is when I do:
(gdb) tui enable
(gdb) set logging file /path/to/file
(gdb) set logging debugredirect on
(gdb) set logging enable on
Additionally, gdb has some quirks when it comes to setting up logging
redirect and switching interpreters. Thus, the above only really
works if the logging is enabled after the tui is enabled, and disabled
again before the tui is disabled.
Enabling logging and switching interpreters can cause undefined
results, including crashes. This is an existing bug in gdb[1], and
has nothing directly to do with tui debug, but it is worth mentioning
here I think.
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28948
|
|
This commit adds a new command 'tui window width', and an alias
'winwidth'. This command is equivalent to the old 'winheight'
command (which was recently renamed 'tui window height').
Even though I recently moved the old tui commands under the tui
namespace, and I would strongly encourage all new tui commands to be
added as 'tui ....' only (users can create their own top-level aliases
if they want), I'm breaking that suggestion here, and adding a
'winwidth' alias.
Given that we already have 'winheight', and have done for years, it
just didn't seem right not to have the matching 'winwidth'.
You might notice in the test that the window resizing doesn't quite
work right. I set up a horizontal layout, then grow and shrink the
windows. At the end of the test the windows should be back to their
original size...
... they are not. This isn't my fault, honest! GDB's window resizing
is a little ... temperamental, and is prone to getting things slightly
wrong during resizes, off by 1 type things. This is true for height
resizing, as well as the new width resizing.
Later patches in this series will rework the resizing algorithm, which
should improve things in this area. For now, I'm happy that the width
resizing is as good as the height resizing, given the existing quirks.
For the docs side I include a paragraph that explains how multiple
windows are required before the width can be adjusted. For
completeness, I've added the same paragraph to the winheight
description. With the predefined layouts this extra paragraph is not
really needed for winheight, as there are always multiple windows on
the screen. However, with custom layouts, this might not be true, so
adding the paragraph seems like a good idea.
As for the changes in gdb itself, I've mostly just taken the existing
height adjustment code, changed the name to make it generic 'size'
adjustment, and added a boolean flag to indicate if we are adjusting
the width or the height.
|
|
There are a lot of tui related commands that live in the top-level
command name space, e.g. layout, focus, refresh, winheight.
Having them at the top level means less typing for the user, which is
good, but, I think, makes command discovery harder.
In this commit, I propose moving all of the above mentioned commands
into the tui namespace, so 'layout' becomes 'tui layout', etc. But I
will then add aliases so that the old commands will still work,
e.g. I'll make 'layout' an alias for 'tui layout'.
The benefit I see in this work is that tui related commands can be
more easily discovered by typing 'tui ' and then tab-completing. Also
the "official" command is now a tui-sub-command, this is visible in,
for example, the help output, e.g.:
(gdb) help layout
tui layout, layout
Change the layout of windows.
Usage: tui layout prev | next | LAYOUT-NAME
List of tui layout subcommands:
tui layout asm -- Apply the "asm" layout.
tui layout next -- Apply the next TUI layout.
tui layout prev -- Apply the previous TUI layout.
tui layout regs -- Apply the TUI register layout.
tui layout split -- Apply the "split" layout.
tui layout src -- Apply the "src" layout.
Which I think is a good thing; it makes it clearer that this is a tui
command.
I've added a NEWS entry and updated the docs to mention the new and
old command names, with the new name being mentioned first.
|
|
I noticed that GDB will display URLs in a few spots. This changes
them to be styled. Originally I thought I'd introduce a new "url"
style, but there aren't many places to use this, so I just reused
filename styling instead. This patch also changes the debuginfod URL
list to be printed one URL per line. I think this is probably a bit
easier to read.
|
|
This patch removes gdb's dbx mode. Regression tested on x86-64 Fedora
34.
|
|
New in this version:
- Add a PY_MAJOR_VERSION check in configure.ac / AC_TRY_LIBPYTHON. If
the user passes --with-python=python2, this will cause a configure
failure saying that GDB only supports Python 3.
Support for Python 2 is a maintenance burden for any patches touching
Python support. Among others, the differences between Python 2 and 3
string and integer types are subtle. It requires a lot of effort and
thinking to get something that behaves correctly on both. And that's if
the author and reviewer of the patch even remember to test with Python
2.
See this thread for an example:
https://sourceware.org/pipermail/gdb-patches/2021-December/184260.html
So, remove Python 2 support. Update the documentation to state that GDB
can be built against Python 3 (as opposed to Python 2 or 3).
Update all the spots that use:
- sys.version_info
- IS_PY3K
- PY_MAJOR_VERSION
- gdb_py_is_py3k
... to only keep the Python 3 portions and drop the use of some
now-removed compatibility macros.
I did not update the configure script more than just removing the
explicit references to Python 2. We could maybe do more there, like
check the Python version and reject it if that version is not
supported. Otherwise (with this patch), things will only fail at
compile time, so it won't really be clear to the user that they are
trying to use an unsupported Python version. But I'm a bit lost in the
configure code that checks for Python, so I kept that for later.
Change-Id: I75b0f79c148afbe3c07ac664cfa9cade052c0c62
|
|
Add a new function, gdb.format_address, which is a wrapper around
GDB's print_address function.
This method takes an address, and returns a string with the format:
ADDRESS <SYMBOL+OFFSET>
Where, ADDRESS is the original address, formatted as hexadecimal,
SYMBOL is a symbol with an address lower than ADDRESS, and OFFSET is
the offset from SYMBOL to ADDRESS in decimal.
If there's no SYMBOL suitably close to ADDRESS then the
<SYMBOL+OFFSET> part is not included.
This is useful if a user wants to write a Python script that
pretty-prints addresses: the user no longer needs to do manual symbol
lookup, or worry about correctly formatting addresses.
Additionally, there are some settings that affect how GDB picks
SYMBOL, and whether the file name and line number should be included
with the SYMBOL name; the gdb.format_address function ensures that the
user's Python script also benefits from these settings.
By default, gdb.format_address selects SYMBOL from the current
inferior's program space, and the address is formatted using the
architecture of the current inferior. However, a user can also
explicitly pass a program space and architecture like this:
gdb.format_address(ADDRESS, PROGRAM_SPACE, ARCHITECTURE)
in order to format an address for a different inferior.
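A hedged usage sketch (the printed address and symbol are illustrative and
depend on the program being debugged; the Inferior.progspace and
Inferior.architecture() accessors are assumed to be available):

# Pretty-print the current PC; output looks something like
# "0x401136 <main+22>", depending on the program.
pc = int(gdb.parse_and_eval("$pc"))
print(gdb.format_address(pc))

# The program space and architecture can also be passed explicitly:
inf = gdb.selected_inferior()
print(gdb.format_address(pc, inf.progspace, inf.architecture()))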
Notes on the implementation:
In py-arch.c I extended arch_object_to_gdbarch to add an assertion for
the type of the PyObject being worked on. Prior to this commit all
uses of arch_object_to_gdbarch were guaranteed to pass this function a
gdb.Architecture object, but, with this commit, this might not be the
case.
So, with this commit I've made it a requirement that the PyObject be a
gdb.Architecture, and this is checked with the assert. And in order
that callers from other files can check if they have a
gdb.Architecture object, I've added the new function
gdbpy_is_architecture.
In py-progspace.c I've added two new functions: the first,
progspace_object_to_program_space, converts a PyObject of type
gdb.Progspace to the associated program_space pointer, and the second,
gdbpy_is_progspace, checks whether a PyObject is a gdb.Progspace or not.
|
|
This started as a patch to implement string concatenation for Ada.
However, while working on this, I looked at how this code could
possibly be called. It turns out there are only two users of
concat_operation: Ada and D. So, in addition to implementing this for
Ada, this patch rewrites value_concat, removing the odd "concatenate
or repeat" semantics, which were completely unused. As Ada and D both
seem to represent strings using TYPE_CODE_ARRAY, this removes the
TYPE_CODE_STRING code from there as well.
|
|
This commit allows a user to create custom MI commands using Python
similarly to what is possible for Python CLI commands.
A new subclass of mi_command is defined for Python MI commands,
mi_command_py. A new file, gdb/python/py-micmd.c contains the logic
for Python MI commands.
This commit is based on work linked to from this mailing list thread:
https://sourceware.org/pipermail/gdb/2021-November/049774.html
Which has also been previously posted to the mailing list here:
https://sourceware.org/pipermail/gdb-patches/2019-May/158010.html
And was recently reposted here:
https://sourceware.org/pipermail/gdb-patches/2022-January/185190.html
The version in this patch takes some core code from the previously
posted patches, but also has some significant differences, especially
after the feedback given here:
https://sourceware.org/pipermail/gdb-patches/2022-February/185767.html
A new MI command can be implemented in Python like this:
class echo_args(gdb.MICommand):
    def invoke(self, args):
        return { 'args': args }

echo_args("-echo-args")
The 'args' parameter (to the invoke method) is a list
containing (almost) all command line arguments passed to the MI
command (--thread and --frame are handled before the Python code is
called, and removed from the args list). This list can be empty if
the MI command was passed no arguments.
When used within gdb the above command produced output like this:
(gdb)
-echo-args a b c
^done,args=["a","b","c"]
(gdb)
The 'invoke' method of the new command must return a dictionary. The
keys of this dictionary are then used as the field names in the mi
command output (e.g. 'args' in the above).
The values of the result returned by invoke can be dictionaries,
lists, iterators, or an object that can be converted to a string.
These are processed recursively to create the mi output. And so, this
is valid:
class new_command(gdb.MICommand):
    def invoke(self, args):
        return { 'result_one': { 'abc': 123, 'def': 'Hello' },
                 'result_two': [ { 'a': 1, 'b': 2 },
                                 { 'c': 3, 'd': 4 } ] }
Which produces output like:
(gdb)
-new-command
^done,result_one={abc="123",def="Hello"},result_two=[{a="1",b="2"},{c="3",d="4"}]
(gdb)
I have required that the field names used in mi result output must
match the regexp: "^[a-zA-Z][-_a-zA-Z0-9]*$" (without the quotes).
This restriction was never written down anywhere before, but seems
sensible to me, and we can always loosen this rule later if it proves
to be a problem. Much harder to try and add a restriction later, once
people are already using the API.
What follows are some details about how this implementation differs
from the original patch that was posted to the mailing list.
In this patch, I have changed how the lifetime of the Python
gdb.MICommand objects is managed. In the original patch, these objects
were kept alive by an owned reference within the mi_command_py object.
As such, the Python object would not be deleted until the
mi_command_py object itself was deleted.
This caused a problem: the mi_command_py objects were held in the global
mi command table (in mi/mi-cmds.c), which, as a global, was not cleared
until program shutdown. By this point the Python interpreter has
already been shut down. Attempting to delete the mi_command_py object
at this point was causing GDB to try and invoke Python code after
finalising the Python interpreter, and we would crash.
To work around this problem, the original patch added code in
python/python.c that would search the mi command table, and delete the
mi_command_py objects before the Python environment was finalised.
In contrast, in this patch, I have added a new global dictionary to
the gdb module, gdb._mi_commands. We already have several such global
data stores related to pretty printers, and frame unwinders.
The MICommand objects are placed into the new gdb._mi_commands
dictionary, and it is this reference that keeps the objects alive.
When GDB's Python interpreter is shut down gdb._mi_commands is deleted,
and any MICommand objects within it are deleted at this point.
This change avoids having to make the mi_cmd_table global, and walk
over it from within GDB's python related code.
This patch handles command redefinition entirely within GDB's python
code, though this does impose one small restriction which is not
present in the original code (detailed below); I don't think this is a
big issue. However, the original patch relied on being able to
finish executing the mi_command::do_invoke member function after the
mi_command object had been deleted. Though continuing to execute a
member function after an object is deleted is well defined, it is
also (IMHO) risky; it's too easy for someone to later add a use of the
object without realising that the object might sometimes have been
deleted. The new patch avoids this issue.
The one restriction that is added to avoid this, is that an MICommand
object can't be reinitialised with a different command name, so:
(gdb) python cmd = MyMICommand("-abc")
(gdb) python cmd.__init__("-def")
can't reinitialize object with a different command name
This feels like a pretty weird edge case, and I'm happy to live with
this restriction.
I have also changed how the memory is managed for the command name.
In the most recently posted patch series, the command name is moved
into a subclass of mi_command, the Python-specific mi_command_py, which,
as it inherits from mi_command, is then free to use a smart pointer to
manage the memory for the name.
In this patch, I leave the mi_command class unchanged, and instead
hold the memory for the name within the Python object, as the lifetime
of the Python object always exceeds that of the C++ object stored in the
mi_cmd_table. This adds a little more complexity in py-micmd.c, but
leaves the mi_command class nice and simple.
Next, this patch adds some extra functionality: there's a
MICommand.name read-only attribute containing the name of the command,
and a read-write MICommand.installed attribute that can be used to
install (make the command available for use) and uninstall (remove the
command from the mi_cmd_table so it can't be used) the command. This
attribute will be automatically updated if a second command replaces
an earlier command.
This patch adds additional error handling, and makes more use of the
gdbpy_handle_exception function.
Co-Authored-By: Jan Vrany <jan.vrany@labware.com>
|
|
Currently, "print/x" will display a floating-point value by first
casting it to an integer type. This yields weird results like:
(gdb) print/x 1.5
$1 = 0x1
This has confused users multiple times -- see PR gdb/16242, where
there are several dups. I've also seen some confusion from this
internally at AdaCore.
The manual says:
'x'
Regard the bits of the value as an integer, and print the integer
in hexadecimal.
... which seems more useful. So, perhaps what happened is that this
was incorrectly implemented (or maybe correctly implemented and then
regressed, as there don't seem to be any tests).
This patch fixes the bug.
There was a previous discussion where we agreed to preserve the old
behavior:
https://sourceware.org/legacy-ml/gdb-patches/2017-06/msg00314.html
However, I think it makes more sense to follow the manual.
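For reference, a small Python check (not part of the patch) of what "regard
the bits as an integer" yields for the example above, assuming 1.5 is treated
as a 64-bit IEEE 754 double:

import struct

# The raw bits of 1.5 stored as a double; this is the integer that a
# bits-based print/x should display, rather than the truncated value 0x1.
bits = struct.unpack("<Q", struct.pack("<d", 1.5))[0]
print(hex(bits))  # 0x3ff8000000000000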
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=16242
|
|
Copyright-paperwork-exempt: yes
|
|
Add a new read-only property, Type.is_signed, which is True for signed
types, and False otherwise.
This property should only be read on types for which Type.is_scalar is
true; attempting to read this property for non-scalar types will raise
a ValueError.
I chose 'is_signed' rather than 'is_unsigned' in order to match the
existing Architecture.integer_type method, which takes a 'signed'
parameter. As far as I could find, that was the only existing
signed/unsigned selector in the Python API, so it seemed reasonable to
stay consistent.
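A hedged usage sketch (the looked-up types are assumed to exist in the
program being debugged):

int_t = gdb.lookup_type("int")
uint_t = gdb.lookup_type("unsigned int")
print(int_t.is_signed)    # expected: True
print(uint_t.is_signed)   # expected: False

# Reading is_signed on a non-scalar type should raise ValueError:
try:
    int_t.array(3).is_signed
except ValueError as err:
    print("ValueError:", err)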
|
|
Add a new read-only property, Type.is_scalar, which is True for scalar
types and False otherwise.
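A minimal, hedged example:

int_t = gdb.lookup_type("int")
print(int_t.is_scalar)            # expected: True
print(int_t.array(3).is_scalar)   # an array type; expected: False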
|
|
Following on from the previous commit, where the -add-inferior command
now uses the same connection as the current inferior, this commit adds
a --no-connection option to -add-inferior.
This new option matches the existing option of the same name for the
CLI version of add-inferior; the new inferior is created with no
connection.
I've added a new 'connection' field to the MI output of -add-inferior,
which includes the connection number and short name. I haven't
included the longer description field; this is MI, after all. My
expectation would be that if the frontend wanted to display all the
connection details then this would be looked up from 'info
connection' (or the MI equivalent if/when such a command is added).
The existing -add-inferior tests are updated, as are the docs.
|
|
Sometimes it is convenient to be able to specify the exact bits of a
floating-point literal. For example, you may want to set a
floating-point register to a denormalized value, or to a particular
NaN.
In C, you can do this by combining the "{}" cast with an array
literal, like:
(gdb) p {double}{0x576488BDD2AE9FFE}
$1 = 9.8765449999999996e+112
This patch adds a somewhat similar idea to Ada. It extends the lexer
to allow "l" and "f" suffixes in a based literal. The "f" indicates a
floating-point literal, and the "l"s control the size of the
floating-point type.
Note that this differs from Ada's based real literals. I believe
those can also be used to control the bits of a floating-point value,
but they are a bit more cumbersome to use (simplest is binary but
that's also very lengthy). Also, these aren't implemented in GDB.
I chose not to allow this extension to work with based integer
literals with exponents. That didn't seem very useful.
|
|
Ada allows non-ASCII identifiers, and GNAT supports several such
encodings. This patch adds the corresponding support to gdb.
GNAT encodes non-ASCII characters using special symbol names.
For character sets like Latin-1, where all characters are a single
byte, it uses a "U" followed by the hex for the character. So, for
example, thorn would be encoded as "Ufe" (0xFE being lower case
thorn).
For wider characters, despite what the manual says (it claims
Shift-JIS and EUC can be used), in practice recent versions only
support Unicode. Here, characters in the base plane are represented
using "Wxxxx" and characters outside the base plane using
"WWxxxxxxxx".
GNAT has some further quirks here. Ada is case-insensitive, and GNAT
emits symbols that have been case-folded. For characters in ASCII,
and for all characters in non-Unicode character sets, lower case is
used. For Unicode, however, characters that fit in a single byte are
converted to lower case, but all others are converted to upper case.
Furthermore, there is a bug in GNAT where two symbols that differ only
in the case of "Y WITH DIAERESIS" (and potentially others, I did not
check exhaustively) can be used in one program. I chose to omit
handling this case from gdb, on the theory that it is hard to figure
out the logic, and anyway if the bug is ever fixed, we'll regret
having a heuristic.
This patch introduces a new "ada source-charset" setting. It defaults
to Latin-1, as that is GNAT's default. This setting controls how "U"
characters are decoded -- W/WW are always handled as UTF-32.
The ada_tag_name_from_tsd change is needed because this function will
read memory from the inferior and interpret it -- and this caused an
encoding failure on PPC when running a test that tries to read
uninitialized memory.
This patch implements its own UTF-32-based case folder. This avoids
host platform quirks, and is relatively simple. A short Python
program to generate the case-folding table is included. It simply
relies on whatever version of Unicode is used by the host Python,
which seems basically acceptable.
Test cases for UTF-8, Latin-1, and Latin-3 are included. This
exercises most of the new code paths, aside from Y WITH DIAERESIS as
noted above.
|
|
PR cli/17332, filed around 8 years ago, points out a typo in the docs
-- in one example, the command and its output are obviously out of
sync. This patch fixes it. I'm checking this in as obvious.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=17332
|
|
This adds a new read-only attribute, gdb.InferiorThread.details; this
attribute contains a string, the result of target_extra_thread_info
for the thread, or None, if target_extra_thread_info returns nullptr.
As the string returned by target_extra_thread_info is unstructured,
this attribute is only really useful for echoing straight through to
the user, but, if a user wants to write a command that displays the
same, or a similar 'Thread Id' to the one seen in 'info threads', then
they need access to this string.
Given that the string produced by target_extra_thread_info varies by
target, there's only minimal testing of this attribute: I check that
the attribute can be accessed, and that the return value is either
None, or a string.
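A hedged usage sketch (the contents of details are entirely target specific):

thread = gdb.selected_thread()
if thread is not None and thread.details is not None:
    print("Extra thread info: %s" % thread.details)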
|
|
This patch adds support for wild template parameter list matches, similar
to how ABI tags or function overloads are now handled.
With this patch, users will be able to "gloss over" the details of matching
template parameter lists. This is accomplished by adding (yet more) logic
to strncmp_iw_with_mode to skip parameter lists if none is explicitly given
by the user.
Here's a simple example using gdb.linespec/cpls-ops.exp:
Before
------
(gdb) ptype test_op_call
type = struct test_op_call {
public:
void operator()(void);
void operator()(int);
void operator()(long);
void operator()<int>(int *);
}
(gdb) b test_op_call::operator()
Breakpoint 1 at 0x400583: test_op_call::operator(). (3 locations)
(gdb) i b
Num Type Disp Enb Address What
1 breakpoint keep y <MULTIPLE>
1.1 y 0x400583 in test_op_call::operator()(int)
at cpls-ops.cc:43
1.2 y 0x40058e in test_op_call::operator()()
at cpls-ops.cc:47
1.3 y 0x40059e in test_op_call::operator()(long)
at cpls-ops.cc:51
The breakpoint at test_op_call::operator()<int> was never set.
After
-----
(gdb) b test_op_call::operator()
Breakpoint 1 at 0x400583: test_op_call::operator(). (4 locations)
(gdb) i b
Num Type Disp Enb Address What
1 breakpoint keep y <MULTIPLE>
1.1 y 0x400583 in test_op_call::operator()(int)
at cpls-ops.cc:43
1.2 y 0x40058e in test_op_call::operator()()
at cpls-ops.cc:47
1.3 y 0x40059e in test_op_call::operator()(long)
at cpls-ops.cc:51
1.4 y 0x4008d0 in test_op_call::operator()<int>(int*)
at cpls-ops.cc:57
Similar to how scope lookups work, passing "-qualified" to the break command
will cause a literal lookup of the symbol. In the example immediately above,
this will cause GDB to only find the three non-template functions.
|
|
This commit adds styling support to the disassembler output; as such,
two new commands are added to GDB:
set style disassembler enabled on|off
show style disassembler enabled
In this commit I make use of the Python Pygments package to provide
the styling. I did investigate making use of libsource-highlight,
however, I found the highlighting results to be inferior to those of
Pygments; only some mnemonics were highlighted, and highlighting of
register names such as r9d and r8d (on x86-64) was incorrect.
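For reference, a standalone, hedged illustration of Pygments styling a line of
disassembly text (this exercises the library directly; it is not GDB's
internal hook):

# Requires the Python Pygments package.
from pygments import highlight
from pygments.lexers.asm import GasLexer
from pygments.formatters import TerminalFormatter

line = "mov    %rsp,%rbp"
print(highlight(line, GasLexer(), TerminalFormatter()), end="")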
To enable disassembler highlighting via Pygments, I've added a new
extension language hook, which is then implemented for Python. This
hook is very similar to the existing hook for source code
colorization.
One possibly odd choice I made with the new hook is to pass a
gdb.Architecture through, even though this is currently unused. The
reason this argument is not used is that, currently, styling is
performed identically for all architectures.
However, even though the Python function used to perform styling of
disassembly output is not part of any documented API, I don't want
to close the door on a user overriding this function to provide
architecture specific styling. To do this, the user would inevitably
require access to the gdb.Architecture, and so I decided to add this
field now.
The styling is applied within gdb_disassembler::print_insn. To achieve
this, gdb_disassembler now writes its output into a temporary buffer;
styling is then applied to the contents of this buffer. Finally, the
gdb_disassembler buffer is copied out to its final destination stream.
There's a new test to check that the disassembler output includes some
escape sequences, though I don't check for specific colours; the
precise colors will depend on which instructions are in the
disassembler output, and, I guess, how pygments is configured.
The only negative change with this commit is how we currently style
addresses in GDB.
Currently, when the disassembler wants to print an address, we call
back into GDB, and GDB prints the address value using the `address`
styling, and the symbol name using `function` styling. After this
commit, if pygments is used, then all disassembler styling is done
through pygments, and this includes the address and symbol name parts
of the disassembler output.
I don't know how much of an issue this will be for people. There's
already some precedent for this in GDB when we look at source styling.
For example, function names in styled source listings are not styled
using the `function` style, but instead, either GNU Source Highlight,
or pygments gets to decide how the function name should be styled.
If the Python pygments library is not present then GDB will continue
to behave as it always has: the disassembler output is mostly
unstyled, but the addresses and symbols are styled using the `address`
and `function` styles, as they are today.
However, if the user does `set style disassembler enabled off`, then
all disassembler styling is switched off. This obviously covers the
use of pygments, but also includes the minimal styling done by GDB
when pygments is not available.
|
|
This commit adds initial target description support for LoongArch.
Signed-off-by: Zhensong Liu <liuzhensong@loongson.cn>
Signed-off-by: Qing zhang <zhangqing@loongson.cn>
Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
|
|
Add a new argument to the gdb.Value.format_string method, 'styling'.
This argument is False by default.
When this argument is True, then the returned string can contain output
styling escape sequences.
When this argument is False, then the returned string will not contain
any styling escape sequences.
If the returned string is going to be printed to the user, then it is
often nice to retain the GDB styling.
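A hedged usage sketch (the variable name is hypothetical):

val = gdb.parse_and_eval("some_var")      # 'some_var' stands in for any variable
plain = val.format_string()               # never contains styling escapes
styled = val.format_string(styling=True)  # may contain styling escapes
print(styled)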
For the testing, we need to adjust the TERM environment variable, as
we do for all the styling tests. I'm now running all of the C tests
in gdb.python/py-format-string.exp in an environment where styling
could be generated, but only my new test should actually produce
styled output; hopefully this will catch the case where a bug might
cause format_string to always produce styled output.
|
|
GDB already has a flag to suppress printing notification events, such
as thread and inferior context switches, on the CLI. This is used
internally when executing commands. Make the flag available to the
user via a new command. This is expected to be useful in scripts.
For instance, suppose that when Inferior 1 gets to a certain state,
you want to add and set up a new inferior using the commands below,
but you also want to have a reduced/clean output.
define do-setup
printf "Setting up Inferior 2...\n"
add-inferior -exec a.out
inferior 2
break file.c:3
run
inferior 1
printf "Done\n"
end
Currently, GDB prints
(gdb) do-setup
Setting up Inferior 2...
[New inferior 2]
Added inferior 2 on connection 1 (native)
[Switching to inferior 2 [<null>] (/tmp/a.out)]
Breakpoint 2 at 0x1155: file file.c, line 3.
Thread 2.1 "a.out" hit Breakpoint 2, main () at file.c:3
3 return 0;
[Switching to inferior 1 [process 7670] (/tmp/test)]
[Switching to thread 1.1 (process 7670)]
#0 main () at test.c:2
2 int a = 1;
Done
GDB's Python API makes it possible to capture and return GDB's output,
but this does not work for all the streams. In particular, CLI
notification events are not captured:
(gdb) python gdb.execute("do-setup", False, True)
[Switching to inferior 2 [<null>] (/tmp/a.out)]
Thread 2.1 "a.out" hit Breakpoint 2, main () at file.c:3
3 return 0;
[Switching to inferior 1 [process 8263] (/tmp/test)]
[Switching to thread 1.1 (process 8263)]
#0 main () at test.c:2
2 int a = 1;
You can use the new "set suppress-cli-notifications" command to
suppress the output:
(gdb) set suppress-cli-notifications on
(gdb) do-setup
Setting up Inferior 2...
[New inferior 2]
Added inferior 2 on connection 1 (native)
Breakpoint 2 at 0x1155: file file.c, line 3.
Done
|
|
This started by noticing that the docs for 'winheight' are out of
date: the docs currently give a specific list of possible window
names. However, now that windows can be implemented in Python, it is
not possible to list all possible names.
I now link the user to a mechanism by which they can discover the
valid names for themselves at run time (by using 'info win'). That,
and the fact that gdb provides tab-completion of the name at the
command line, feels good enough.
Finally, I noticed that the docs for 'info win' don't explicitly say
that the name of the window is given in the output. This could
probably have been inferred, but given I'm now linking to this as a
mechanism to find the window name, I'd prefer to mention that the name
can be found in the output.
|
|
This commit attempts to improve the help text that is generated for
gdb.Parameter objects when the user fails to provide their own
documentation.
Documentation for a gdb.Parameter is currently pulled from two
sources: the class documentation string, and the set_doc/show_doc
class attributes. Thus, a fully documented parameter might look like
this:
class Param_All (gdb.Parameter):
    """This is the class documentation string."""
    show_doc = "Show the state of this parameter"
    set_doc = "Set the state of this parameter"

    def get_set_string (self):
        val = "on"
        if (self.value == False):
            val = "off"
        return "Test Parameter has been set to " + val

    def __init__ (self, name):
        super (Param_All, self).__init__ (name, gdb.COMMAND_DATA, gdb.PARAM_BOOLEAN)
        self._value = True

Param_All ('param-all')
Then in GDB we see this:
(gdb) help set param-all
Set the state of this parameter
This is the class documentation string.
Which is fine. But, if the user skips both of the documentation parts
like this:
class Param_None (gdb.Parameter):
    def get_set_string (self):
        val = "on"
        if (self.value == False):
            val = "off"
        return "Test Parameter has been set to " + val

    def __init__ (self, name):
        super (Param_None, self).__init__ (name, gdb.COMMAND_DATA, gdb.PARAM_BOOLEAN)
        self._value = True

Param_None ('param-none')
Now in GDB we see this:
(gdb) help set param-none
This command is not documented.
This command is not documented.
That's not great; the duplicated text looks a bit weird. If we drop
different parts we get different results. Here's what we get if the
user drops the set_doc and show_doc attributes:
(gdb) help set param-doc
This command is not documented.
This is the class documentation string.
That kind of sucks; we say it's undocumented, then proceed to print
the documentation. Finally, if we drop the class documentation but
keep the set_doc and show_doc:
(gdb) help set param-set-show
Set the state of this parameter
This command is not documented.
That seems OK.
So, I think there's room for improvement.
With this patch, for the four cases above we now see this:
# All values provided by the user, no change in this case:
(gdb) help set param-all
Set the state of this parameter
This is the class documentation string.
# Nothing provided by the user, the first string is now different:
(gdb) help set param-none
Set the current value of 'param-none'.
This command is not documented.
# Only the class documentation is provided, the first string is
# changed as in the previous case:
(gdb) help set param-doc
Set the current value of 'param-doc'.
This is the class documentation string.
# Only the set_doc and show_doc are provided, this case is unchanged
# from before the patch:
(gdb) help set param-set-show
Set the state of this parameter
This command is not documented.
The one place where this change might be considered a negative is when
dealing with prefix commands. If we create a prefix command but don't
supply the set_doc / show_doc strings, then this is what we saw before
my patch:
(gdb) python Param_None ('print param-none')
(gdb) help set print
set print, set pr, set p
Generic command for setting how things print.
List of set print subcommands:
... snip ...
set print param-none -- This command is not documented.
... snip ...
And after my patch:
(gdb) python Param_None ('print param-none')
(gdb) help set print
set print, set pr, set p
Generic command for setting how things print.
List of set print subcommands:
... snip ...
set print param-none -- Set the current value of 'print param-none'.
... snip ...
This seems slightly less helpful than before, but I don't think it's
terrible.
Additionally, I've changed what we print when the get_show_string
method is not provided in Python.
Back when gdb.Parameter was first added to GDB, we didn't provide a
show function when registering the internal command object within
GDB. As a result, GDB would make use of its "magic" mangling of the
show_doc string to create a sentence that would display the current
value (see deprecated_show_value_hack in cli/cli-setshow.c).
However, when we added support for the get_show_string method to
gdb.Parameter, there was an attempt to maintain backward compatibility
by displaying the show_doc string with the current value appended, see
get_show_value in py-param.c. Unfortunately, this isn't anywhere
close to what deprecated_show_value_hack does, and the results are
pretty poor. For example, this is GDB before my patch:
(gdb) show param-none
This command is not documented. off
I think we can all agree that this is pretty bad.
After my patch, we now show this:
(gdb) show param-none
The current value of 'param-none' is "off".
Which at least is a real sentence, even if it's not very informative.
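For completeness, a hedged sketch of how a parameter author can sidestep both
generic messages by supplying get_show_string themselves (the class and
parameter names here are hypothetical):

class Param_Shown (gdb.Parameter):
    """This is the class documentation string."""
    set_doc = "Set the state of this parameter."
    show_doc = "Show the state of this parameter."

    def __init__ (self, name):
        super (Param_Shown, self).__init__ (name, gdb.COMMAND_DATA, gdb.PARAM_BOOLEAN)
        self.value = True

    def get_show_string (self, svalue):
        # svalue is the current value, already formatted as a string.
        return "The state of this parameter is currently %s." % svalue

Param_Shown ('param-shown')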
This patch does change the way that the Python API behaves slightly,
but only in the cases when the user has missed providing GDB with some
information. In most cases I think the new behaviour is a lot better,
there's the one case (noted above) which is a bit iffy, but I think is
still OK.
I've updated the existing gdb.python/py-parameter.exp test to cover
the modified behaviour.
Finally, I've updated the documentation to (I hope) make it clearer
how the various bits of help text come together.
|
|
Add a new function, gdb.history_count, to the Python API; this function
returns an integer, the number of items in GDB's value history.
This is useful if you want to pull items from the history by their
absolute number, for example, if you wanted to show a complete history
list. Previously we could figure out how many items are in the
history list by trying to fetch the items, and then catching the
exception when the item is not available, but having this function
seems nicer.
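A hedged sketch of the kind of loop this enables (assuming positive indices
address history items $1 through $N):

count = gdb.history_count()
for i in range(1, count + 1):
    # gdb.history(i) fetches value history item $i.
    print("$%d = %s" % (i, gdb.history(i)))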
|
|
It's sometimes useful to temporarily set some gdb parameter from
Python. Now that the 'endian' crash is fixed, and now that the
current language is no longer captured by the Python layer, it seems
reasonable to add a helper function for this situation.
This adds a new gdb.with_parameter function. This creates a context
manager which temporarily sets some parameter to a specified value.
The old value is restored when the context is exited. This is most
useful with the Python "with" statement:
with gdb.with_parameter('language', 'ada'):
    ... do Ada stuff
This also adds a simple function to set a parameter,
gdb.set_parameter, as suggested by Andrew.
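A hedged sketch showing both helpers together:

# Permanently change a parameter:
gdb.set_parameter('print pretty', True)

# Temporarily change a parameter for the duration of the block:
with gdb.with_parameter('language', 'ada'):
    print(gdb.parameter('language'))   # expected: 'ada' inside the block
print(gdb.parameter('language'))       # restored to its old value afterwards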
This is PR python/10790.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=10790
|
|
The description of the Window.click method doesn't mention where the
coordinates are anchored (it's the top left corner).
This minor tweak just mentions this point.
|
|
I noticed two places in the docs where we appear to be missing @r.
makeinfo seems to do the correct thing despite these being
missing (at least, I couldn't see any difference in the pdf or info
output), but it doesn't hurt to have the @r in place.
|
|
We already have gdb.target_charset and gdb.target_wide_charset. This
commit adds gdb.host_charset along the same lines.
|
|
In a later commit I want to address an issue with the Python pygments
based code styling solution. As this approach is only used when the
GNU Source Highlight library is not available, testing bugs in this
area can be annoying, as it requires GDB to be rebuilt with use of GNU
Source Highlight disabled.
This commit adds a pair of new maintenance commands:
maintenance set gnu-source-highlight enabled on|off
maintenance show gnu-source-highlight enabled
These commands can be used to disable use of the GNU Source Highlight
library, allowing me, in a later commit, to easily test bugs that
would otherwise be masked by GNU Source Highlight being used.
I made this a maintenance command, rather than a general purpose
command, as it didn't seem like this was something a general user
would need to adjust. We can always convert the maintenance command
to a general command later if needed.
There's no test for this here, but this feature will be used in a
later commit.
|
|
This commit adds a new 'maint flush source-cache' command; this
flushes the cache of source file contents.
After flushing, GDB is forced to reread source files the next time any
source lines are to be displayed.
I've added a test for this new feature. The test is a little weird,
in that it modifies a source file after compilation, and makes use of
the cache flush so that the changes show up when listing the source
file. I'm not sure when such a situation would ever crop up in real
life, but maybe we can imagine such cases.
In reality, this command is useful for testing the syntax highlighting
within GDB, we can adjust the syntax highlighting settings, flush the
cache, and then get the file contents re-highlighted using the new
settings.
|
|
Rename 'set debug lin-lwp' to 'set debug linux-nat' and 'show debug
lin-lwp' to 'show debug linux-nat'.
I've updated the documentation and help text to match, as well as
making it clear that the debug that is coming out relates to all
aspects of Linux native inferior support, not just the LWP aspect of
it.
The boundary between general "native" target debug, and the lwp
specific part of that debug, was always a little blurry. But the actual
debug variable inside GDB is debug_linux_nat, and the print routine,
linux_nat_debug_printf, is used throughout the linux-nat.c file, not
just for lwp related debug, so the new name seems to make more sense.
|
|
Building on the previous commit, this makes use of a trailing @ to
split long @deffn lines in the guile.texi source file. This splitting
doesn't change how the document is laid out by texinfo.
I have also wrapped keyword and argument name pairs in @w{...} to
prevent line breaks appearing between the two. I've currently only
done this for the longer @deffn lines, where a line break is
possible. This makes the @deffn lines much nicer to read in the
generated pdf.
|
|
Most guile procedures in the guile.texi file are defined like:
@deffn {Scheme Procedure} name arg1 arg2 arg3
But there are two places where we do this:
@deffn {Scheme Procedure} (name arg1 arg2 arg3)
Notice the added (...). Though this does represent how a procedure
call is written in scheme, it's not the normal style throughout the
manual. I also checked the 'info guile' info page to see how they
wrote their declarations, and they use the first style too.
The second style also has the drawback that index entries are added as
'(name', and so they are grouped in the '(' section of the index,
which is not very user friendly.
In this commit I've changed the definitions of make-command and
make-parameter to use the first style.
The procedure declaration lines can get pretty long with all of the
arguments, and this was true for both of the procedures I am changing
in this commit. I have made use of a trailing '@' to split the deffn
lines, and keep them under 80 characters in the texi source. This
makes no difference to how the final document looks.
Finally, our current style for keyword arguments appears to be:
[#:keyword-name argument-name]
I don't really understand the reason for this, 'info guile' just seems
to use:
[#:keyword-name]
which seems just as good to me. But I don't propose to change
that just now. What I do notice though, is that sometimes, texinfo
will place a line break between the keyword-name and the
argument-name, for example, the pdf of make-command is:
make-command name [#:invoke invoke] [#:command-class
command-class] [#:completer-class completer] [#:prefix? prefix] [#:doc
doc-string]
Notice the line break after '#:command-class' and after '#:doc',
neither of which are ideal. And so, for the two commands I am
changing in this commit, I have made use of @w{...} to prevent line
breaks between the keyword-name and the argument-name. Now the pdf
looks like this:
make-command name [#:invoke invoke]
[#:command-class command-class] [#:completer-class completer]
[#:prefix? prefix] [#:doc doc-string]
Which seems much better. I'll probably update the other deffn lines
at some point.
|
|
This commit updates the copyright year in some files where
we have a copyright year outside of the copyright header,
and which are thus not handled by gdb's copyright.py script.
|
|
This commit brings all the changes made by running gdb/copyright.py
as per GDB's Start of New Year Procedure.
For the avoidance of doubt, all changes in this commit were
performed by the script.
|
|
This commit ensures that the following settings are cloned from one
inferior to the new one when processing the clone-inferior command:
- inferior-tty
- environment variables
- cwd
- args
Some of those parameters can be passed as command line arguments to GDB
(-args and -tty), so one could expect the clone-inferior to respect
those flags. The following debugging session illustrates that:
gdb -nx -quiet -batch \
-ex "show args" \
-ex "show inferior-tty" \
-ex "clone-inferior" \
-ex "inferior 2" \
-ex "show args" \
-ex "show inferior-tty" \
-tty=/some/tty \
-args echo foo bar
Argument list to give program being debugged when it is started is "foo bar".
Terminal for future runs of program being debugged is "/some/tty".
[New inferior 2]
Added inferior 2.
[Switching to inferior 2 [<null>] (/bin/echo)]
Argument list to give program being debugged when it is started is "".
Terminal for future runs of program being debugged is "".
The other properties this commit copies on clone (i.e. CWD and the
environment variables) are included since they are related (in the sense
that they influence the runtime behavior of the program) even if they
cannot be directly set using command line switches.
There is a chance that this patch changes existing user workflow. I
think that this change is mostly harmless. If users want to start a new
inferior based on an existing one, they probably already propagate those
settings to the new inferior in some way.
Tested on x86_64-linux.
Change-Id: I3b1f28b662f246228b37bb24c2ea1481567b363d
|
|
I noticed that the mi-async setting was not referenced from the index
in any way; this commit tries to rectify that a bit.
The @cindex lines I think are not controversial, these same index
entries are used elsewhere in the manual for async related topics (see
@node Background Execution).
The only bit that might be controversial is that I've added a @kindex
entry for 'set mi-async' when the command is documented as '-gdb-set
mi-async' (with a similar difference for the show/-gdb-show).
My reasoning here is that nothing else is indexed under -gdb-set or
-gdb-show, and as -gdb-set/-gdb-show are just the MI equivalents for
set/show, anything that is documented under set/show can be adjusted
using -gdb-set/-gdb-show, and so I've tried to keep the index
consistent for mi-async.
|
|
Add new commands:
set debug threads on|off
show debug threads
Prints additional debug information relating to thread creation and
deletion.
GDB already announces when threads are created of course.... most of
the time, but sometimes threads are added silently, in which case this
debug message is the only mechanism to see the thread being added.
Also, though GDB does announce when a thread exits, it doesn't
announce when the thread object is deleted, I've added a debug message
for that.
Additionally, having messages printed through the debug system will
cause the messages to be nested to an appropriate depth when other
debug sub-systems are turned on (especially things like `infrun` and
`lin-lwp`).
|
|
This commit adds the "exit" command as an alias for the "quit"
command, as requested in PR gdb/28406.
The documentation is also updated to mention this new command.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28406
|
|
The documentation suggests that we implement gdb.Value.__init__,
however, this is not currently true, we really implement
gdb.Value.__new__. This will cause confusion if a user tries to
sub-class gdb.Value. They might write:
class MyVal (gdb.Value):
    def __init__ (self, val):
        gdb.Value.__init__(self, val)

obj = MyVal(123)
print ("Got: %s" % obj)
But, when they source this code they'll see:
(gdb) source ~/tmp/value-test.py
Traceback (most recent call last):
  File "/home/andrew/tmp/value-test.py", line 7, in <module>
    obj = MyVal(123)
  File "/home/andrew/tmp/value-test.py", line 5, in __init__
    gdb.Value.__init__(self, val)
TypeError: object.__init__() takes exactly one argument (the instance to initialize)
(gdb)
The reason for this is that, as we don't implement __init__ for
gdb.Value, Python ends up calling object.__init__ instead, which
doesn't expect any arguments.
The Python docs suggest that the reason why we might take this
approach is because we want gdb.Value to be immutable:
https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_new
But I don't see any reason why we should require gdb.Value to be
immutable when other types defined in GDB are not. This current
immutability can be seen in this code:
obj = gdb.Value(1234)
print("Got: %s" % obj)
obj.__init__ (5678)
print("Got: %s" % obj)
Which currently runs without error, but prints:
Got: 1234
Got: 1234
In this commit I propose that we switch to using __init__ to
initialize gdb.Value objects.
This does introduce some additional complexity: during the __init__
call a gdb.Value might already be associated with a gdb value object,
in which case we need to cleanly break that association before
installing the new gdb value object. However, the cost of doing this
is not great, and the benefit - being able to easily sub-class
gdb.Value - seems worth it.
After this commit the first example above works without error, while
the second example now prints:
Got: 1234
Got: 5678
In order to make it easier to override the gdb.Value.__init__ method,
I have tweaked the definition of gdb.Value.__init__. The second,
optional argument to __init__ is a gdb.Type, if this argument is not
present then GDB figures out a suitable type.
However, if we want to override the __init__ method in a sub-class,
and still support the default argument, it is easier to write:
class MyVal (gdb.Value):
    def __init__ (self, val, type=None):
        gdb.Value.__init__(self, val, type)
Currently, passing None for the Type will result in an error:
TypeError: type argument must be a gdb.Type.
After this commit I now allow the type argument to be None, in which
case GDB figures out a suitable type just as if the type had not been
passed at all.
Unless a user is trying to reinitialize a value, or create sub-classes
of gdb.Value, there should be no user visible changes after this
commit.
|
|
This adds a 'task apply' command, which is the Ada tasking analogue of
'thread apply'. Unlike 'thread apply', it doesn't offer the
'ascending' flag; but otherwise it's essentially the same.
|
|
Breakpoints in gdb can be made specific to an Ada task using the
"task" qualifier. This patch applies this same idea to watchpoints.
|
|
This commit adds a new sub-class of gdb.TargetConnection,
gdb.RemoteTargetConnection. This sub-class is created for all
'remote' and 'extended-remote' targets.
This new sub-class has one additional method over its base class,
'send_packet'. This new method is equivalent to the 'maint
packet' CLI command; it allows a custom packet to be sent to a remote
target.
The outgoing packet can either be a bytes object, or a Unicode string,
so long as the Unicode string contains only ASCII characters.
The result of calling RemoteTargetConnection.send_packet is a bytes
object containing the reply that came from the remote.
|
|
In a later commit I will add a Python API to access the 'maint packet'
functionality, that is, sending a user specified packet to the target.
To make implementing this easier, this commit refactors how this
command is currently implemented so that the packet_command function
is now global.
The new global send_remote_packet function takes an object that is an
implementation of an abstract interface. Two functions within this
interface are then called, one just before a packet is sent to the
remote target, and one when the reply has been received from the
remote target. Using an interface object in this way allows (1) for
the error checking to be done before the first callback is made; this
means we only print out what packet is being sent once we know we are
going to actually send it, and (2) we don't need to make a copy of the
reply if all we want to do is print it.
One user visible change after this commit is to the error
messages, which I've changed to be less 'maint packet' command
focused; this will make them (I hope) better for when
send_remote_packet can be called from Python code.
So: "command can only be used with remote target"
Becomes: "packets can only be sent to a remote target"
And: "remote-packet command requires packet text as argument"
Becomes: "a remote packet must not be empty"
Additionally, in this commit, I've added support for packet replies
that contain binary data. Before this commit, the code that printed
the reply treated the reply as a C string; it assumed that the string
only contained printable characters, and had a null character only at
the end.
One way to show the problem with this is if we try to read the auxv
data from a remote target, the auxv data is binary, so, before this
commit:
(gdb) target remote :54321
...
(gdb) maint packet qXfer:auxv:read::0,1000
sending: "qXfer:auxv:read::0,1000"
received: "l!"
(gdb)
And after this commit:
(gdb) target remote :54321
...
(gdb) maint packet qXfer:auxv:read::0,1000
sending: "qXfer:auxv:read::0,1000"
received: "l!\x00\x00\x00\x00\x00\x00\x00\x00\xf0\xfc\xf7\xff\x7f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\xff\xf>
(gdb)
The binary contents of the reply are now printed as escaped hex.
|