|
The DAP spec recently changed to add a new scope for the return value
from a "stepOut" request. This new scope uses the "returnValue"
presentation hint. See:
https://github.com/microsoft/debug-adapter-protocol/issues/458
This patch implements this for gdb.
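For illustration, a rough sketch (as a Python dictionary, with made-up values) of the kind of scope entry a DAP client would now receive in a "scopes" response:
return_value_scope = {
    "name": "Return value",
    "presentationHint": "returnValue",   # the new hint from the DAP spec
    "variablesReference": 3,             # hypothetical reference id
    "expensive": False,
}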
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31945
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit adds the GDB side of target_ops::fileio_stat. There's an
implementation for inf_child_target, which just calls 'lstat', and
there's an implementation for remote_target, which sends a new
vFile:stat packet.
The new packet is documented.
There are still no users of target_fileio_stat as I have not yet added
support for vFile:stat to gdbserver. If these packets are currently
sent to gdbserver then they will be reported as not supported and the
ENOSYS error code will be returned.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
In commit 1a7d840a216 ("[gdb/tdep] Fix ARM_LINUX_JB_PC_EABI"), in the absence of
osabi settings for newlib and uclibc for arm, I chose a best-effort approach
using ifdefs.
Post-commit review [1] pointed out that this may be causing more problems than
it's worth.
Fix this by removing the ifdefs and simply defining ARM_LINUX_JB_PC_EABI to 1.
Rebuild on x86_64-linux with --enable-targets=all.
Fixes: 1a7d840a216 ("[gdb/tdep] Fix ARM_LINUX_JB_PC_EABI")
[1] https://sourceware.org/pipermail/gdb-patches/2024-June/209779.html
|
|
A co-worker requested that the DAP code emit a scope for global
variables. It's not really practical to do this for all globals, but
it seemed reasonable to do this for globals coming from the frame's
compilation unit. For Ada in particular, this is convenient as it
exposes package-scoped variables.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit adds a new section for the next release branch, and renames
the section of the current branch, now that it has been cut.
|
|
Currently, when a user tries to list the current location, there are 2
different error messages that can happen, either:
(gdb) list .
No symbol table is loaded. Use the "file" command.
or
(gdb) list .
No debug information available to print source lines.
The difference here is whether gdb can find any symtabs at all, which
is not something too important for end-users - and isn't informative at
all. This commit changes it so that the error always says that there
isn't debug information available, with these two variants:
(gdb) list .
Insufficient debug info for showing source lines at current PC (0x55555555511d).
or
(gdb) list .
Insufficient debug info for showing source lines at default location.
The difference now is whether the inferior has started already, which is
controlled by the user and may be useful.
Unfortunately, it isn't as easy to determine whether the symtab found for
other list parameters is correct, so other invocations, such as "list +"
still retain their original error message.
Co-Authored-By: Simon Marchi <simark@simark.ca>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit documents the qIsAddressTagged packet.
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Eli Zaretskii <eliz@gnu.org>
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
I spotted that the two functions:
record_full_open_1
record_full_core_open_1
both took two arguments, neither of which are used.
I stumbled onto this while reviewing how filename_completer is used.
The 'record full restore' command uses filename_completer and invokes
the cmd_record_full_restore function.
The cmd_record_full_restore function calls core_file_command and then
record_full_open, which then calls one of the above functions.
As 'record full restore' takes a filename, this is passed to
cmd_record_full_restore, which forwards the filename to both
core_file_command and record_full_open. However, record_full_open
never actually uses the filename that is passed in.
The record_full_open function is also used for 'target record-full'.
I propose that record_full_open should no longer expect to see any
user supplied arguments passed in (it doesn't use any). In fact, I've
added a check that if we do get any user supplied arguments we'll
throw an error.
Now that we know record_full_open isn't being passed any user
arguments we can stop passing the arguments to record_full_open_1 and
record_full_core_open_1; this will make no user-visible difference as
these arguments were not used.
It is possible that a user was previously doing:
(gdb) target record-full blah blah blah
Previously this would work fine; the 'blah blah blah' was
ignored. Now this will give an error. Other than this case there
should be no user visible changes after this commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
We now have unwind-on-timeout and unwind-on-terminating-exception, and
then the odd one out unwindonsignal.
I'm not a great fan of these squashed together command names, so in
this commit I propose renaming this to unwind-on-signal.
Obviously I've added the hidden alias unwindonsignal so any existing
GDB scripts will keep working.
There's one test that I've extended to test the alias works, but in
most of the other test scripts I've changed over to use the new name.
The docs are updated to reference the new name.
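As a hedged sketch of what the rename means for scripts (using the Python API; both spellings should behave identically thanks to the alias):
import gdb
gdb.execute("set unwind-on-signal on")    # new, preferred spelling
gdb.execute("set unwindonsignal on")      # old spelling, kept as a hidden alias
print(gdb.parameter("unwind-on-signal"))  # True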
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Tested-By: Luis Machado <luis.machado@arm.com>
Tested-By: Keith Seitz <keiths@redhat.com>
|
|
Now that inferior function calls can timeout (see the recent
introduction of direct-call-timeout and indirect-call-timeout), this
commit adds a new setting unwind-on-timeout.
This new setting is just like the existing unwindonsignal and
unwind-on-terminating-exception, but the new setting will cause GDB to
unwind the stack if an inferior function call times out.
The existing inferior function call timeout tests have been updated to
cover the new setting.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Tested-By: Luis Machado <luis.machado@arm.com>
Tested-By: Keith Seitz <keiths@redhat.com>
|
|
In the previous commits I have been working on improving inferior
function call support. One thing that worries me about using inferior
function calls from a conditional breakpoint is: what happens if the
inferior function call fails?
If the failure is obvious, e.g. the thread performing the call
crashes, or hits a breakpoint, then this case is already well handled,
and the error is reported to the user.
But what if the thread performing the inferior call just deadlocks?
If the user made the call from a 'print' or 'call' command, then the
user might have some expectation of when the function call should
complete, and, when this time limit is exceeded, the user
will (hopefully) interrupt GDB and regain control of the debug
session.
But, when the inferior function call is from a breakpoint condition it
is much harder to understand that GDB is deadlocked within an inferior
call. Maybe the breakpoint hasn't been hit yet? Or maybe the
condition was always false? Or maybe GDB is deadlocked in an inferior
call? The only way to know for sure is for the user to periodically
interrupt the inferior, check on the state of all the threads, and
then continue.
Additionally, the focus of the previous commit was inferior function
calls, from a conditional breakpoint, in a multi-threaded inferior.
This opens up a whole new set of potential failure conditions. For
example, what if the function called relies on interaction with some
other thread, and the other thread crashes? Or hits a breakpoint?
Given how inferior function calls work (in a synchronous manner), a
stop event in some other thread is going to be ignored while the
inferior function call is being executed as part of a breakpoint
condition, and this means that GDB could get stuck waiting for the
original condition thread, which will now never complete.
In this commit I propose a solution to this problem. A timeout. For
targets that support async-mode we can install an event-loop timer
before starting the inferior function call. When the timer expires we
will stop the thread performing the inferior function call. With this
mechanism in place a user can be sure that any inferior call they make
will either complete, or timeout eventually.
Adding a timer like this is obviously a change in behaviour for the
more common 'call' and 'print' uses of inferior function calls, so, in
this patch, I propose having two different timers. One I call the
'direct-call-timeout', which is used for 'call' and 'print' commands.
This timeout is by default set to unlimited, which, not surprisingly,
means there is no timeout in place.
A second timer, which I've called 'indirect-call-timeout', is used for
inferior function calls from breakpoint conditions. This timeout has
a default value of 30 seconds. This is a reasonably long time to
wait, and hopefully should be enough in most cases to allow the
inferior call to complete. An inferior call installed on a breakpoint
condition that takes more than 30 seconds is really going
to slow down the debug session, so hopefully this is not a common use
case.
The user is, of course, free to reduce, or increase the timeout value,
and can always use Ctrl-c to interrupt an inferior function call, but
this timeout will ensure that GDB will stop at some point.
The new commands added by this commit are:
set direct-call-timeout SECONDS
show direct-call-timeout
set indirect-call-timeout SECONDS
show indirect-call-timeout
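As a rough usage sketch (the breakpoint location, condition function, and timeout value below are made up):
import gdb
# Give breakpoint-condition inferior calls at most 5 seconds.
gdb.execute("set indirect-call-timeout 5")
# Leave explicit 'call'/'print' inferior calls without a timeout.
gdb.execute("set direct-call-timeout unlimited")
# Hypothetical breakpoint whose condition performs an inferior call.
bp = gdb.Breakpoint("process_request")
bp.condition = "safe_to_stop ()"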
These new timeouts do depend on async-mode, so, if async-mode is
disabled (maint set target-async off), or not supported (e.g. target
sim), then the timeout is treated as unlimited (that is, no timeout is
set).
For targets that "fake" non-async mode, e.g. Linux native, where
non-async mode is really just async mode, but then we park the target
in a sigsuspend, we could easily fix things so that the timeouts still
work. However, for targets that really are not async aware, like the
simulator, fixing things so that timeouts work correctly would be a
much bigger task - that effort would be better spent just making the
target async-aware. And so, I'm happy for now that this feature will
only work on async targets.
The two new show commands will display slightly different text if the
current target is a non-async target, which should allow users to
understand what's going on.
There's a somewhat random test adjustment needed in gdb.base/help.exp:
the test uses a regexp with the apropos command, and expects to find a
single result. Turns out the new settings I added also matched the
regexp, which broke the test. I've updated the regexp a little to
exclude my new settings.
Reviewed-By: Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Tested-By: Luis Machado <luis.machado@arm.com>
Tested-By: Keith Seitz <keiths@redhat.com>
|
|
This commit teaches GDB's gcore command to generate sparse core files
(if supported by the filesystem).
To create a sparse file, all you have to do is skip writing zeros to
the file, instead lseek'ing-ahead over them.
The sparse logic is applied when writing the memory sections, as
that's where the bulk of the data and the zeros are.
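The idea, as a hedged Python sketch rather than GDB's actual C++ implementation: seek over all-zero chunks instead of writing them, so the filesystem can leave holes behind.
import os

CHUNK = 4096

def write_sparse(fd, data):
    """Write DATA to FD, seeking over all-zero chunks to leave holes."""
    offset = 0
    while offset < len(data):
        chunk = data[offset:offset + CHUNK]
        if chunk.count(0) == len(chunk):
            # All zeros: just advance the file offset, storing nothing.
            os.lseek(fd, len(chunk), os.SEEK_CUR)
        else:
            os.write(fd, chunk)
        offset += len(chunk)
    # Make sure the file size is right even if the data ends in a hole.
    os.ftruncate(fd, os.lseek(fd, 0, os.SEEK_CUR))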
The commit also tweaks gdb.base/bigcore.exp to make it exercise
gdb-generated cores in addition to kernel-generated cores. We
couldn't do that before, because GDB's gcore on that test's program
would generate a multi-GB non-sparse core (16GB on my system).
After this commit, gdb.base/bigcore.exp generates, when testing with
GDB's gcore, a much smaller core file, roughly in line with what the
kernel produces:
real sizes:
$ du --hu testsuite/outputs/gdb.base/bigcore/bigcore.corefile.*
2.2M testsuite/outputs/gdb.base/bigcore/bigcore.corefile.gdb
2.0M testsuite/outputs/gdb.base/bigcore/bigcore.corefile.kernel
apparent sizes:
$ du --hu --apparent-size testsuite/outputs/gdb.base/bigcore/bigcore.corefile.*
16G testsuite/outputs/gdb.base/bigcore/bigcore.corefile.gdb
16G testsuite/outputs/gdb.base/bigcore/bigcore.corefile.kernel
Time to generate the core also goes down significantly. On my machine, I get:
when writing to an SSD, from 21.0s, down to 8.0s
when writing to an HDD, from 31.0s, down to 8.5s
The changes to gdb.base/bigcore.exp are smaller than they look at
first sight. It's basically mostly refactoring -- moving most of the
code to a new procedure which takes as argument who should dump the
core, and then calling the procedure twice. I purposely did not
modernize any of the refactored code in this patch.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31494
Reviewed-By: Lancelot Six <lancelot.six@amd.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
Change-Id: I2554a6a4a72d8c199ce31f176e0ead0c0c76cff1
|
|
This patch deprecates the MPX commands "show/set mpx bound".
Intel listed Intel(R) Memory Protection Extensions (MPX) as removed
in 2019. Following gcc v9.1, the Linux kernel v5.6, and glibc v2.35,
MPX is now deprecated in GDB as well.
|
|
This documents the new Python and Guile constants introduced earlier
in this series.
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
The gdb.Objfile, gdb.Progspace, gdb.Type, and gdb.Inferior Python
types already have a __dict__ attribute, which allows users to create
user defined attributes within the objects. This is useful if the
user wants to cache information within an object.
This commit adds the same functionality to the gdb.InferiorThread
type.
After this commit there is a new gdb.InferiorThread.__dict__
attribute, which is a dictionary. A user can, for example, do this:
(gdb) pi
>>> t = gdb.selected_thread()
>>> t._user_attribute = 123
>>> t._user_attribute
123
>>>
There's a new test included.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The gdb.Objfile, gdb.Progspace, and gdb.Type Python types already have
a __dict__ attribute, which allows users to create user defined
attributes within the objects. This is useful if the user wants to
cache information within an object.
This commit adds the same functionality to the gdb.Inferior type.
After this commit there is a new gdb.Inferior.__dict__ attribute,
which is a dictionary. A user can, for example, do this:
(gdb) pi
>>> i = gdb.selected_inferior()
>>> i._user_attribute = 123
>>> i._user_attribute
123
>>>
There's a new test included.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that it is possible for the user to create a new
gdb.Progspace object, like this:
(gdb) pi
>>> p = gdb.Progspace()
>>> p
<gdb.Progspace object at 0x7ffad4219c10>
>>> p.is_valid()
False
As the new gdb.Progspace object is not associated with an actual C++
program_space object within GDB's core, the new gdb.Progspace is
created invalid, and there is no way in which the new object can ever
become valid.
Nor do I believe there's anywhere in the Python API where it makes
sense to consume an invalid gdb.Progspace created in this way. For
example, the gdb.Progspace could be passed as the locus to
register_type_printer, but all that would happen is that the
registered printer would never be used.
In this commit I propose to remove the ability to create new
gdb.Progspace objects. Attempting to do so now gives an error, like
this:
(gdb) pi
>>> gdb.Progspace()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot create 'gdb.Progspace' instances
Of course, there is a small risk here that some existing user code
might break ... but if that happens I don't believe the user code can
have been doing anything useful, so I see this as a small risk.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit adds a new InferiorThread.ptid_string attribute. This
read-only attribute contains the string returned by target_pid_to_str,
which actually converts a ptid (not pid) to a string.
This is the string that appears (at least in part) in the output of
'info threads' in the 'Target Id' column, but also in the thread
exited message that GDB prints.
Having access to this string from Python is useful for allowing
extensions to identify threads in a similar way to how GDB core would
identify the thread.
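A minimal sketch of using the new attribute from an extension (the exact string is target-specific, e.g. something like "Thread 0x... (LWP 1234)" on Linux native targets):
import gdb
for inferior in gdb.inferiors():
    for thread in inferior.threads():
        print(thread.num, thread.ptid_string)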
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
For testing, it's sometimes convenient to be able to request that
DWARF reading be done synchronously. This patch adds a new "maint"
setting for this purpose.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit adds a mechanism for GDB to detect the linetable opcode
DW_LNS_set_epilogue_begin. This opcode is set by compilers to indicate
that a certain instruction marks the point where the frame is destroyed.
While the standard allows for multiple points marked with epilogue_begin
in the same function, for performance reasons, the function that
searches for the epilogue address will only find the last address that
sets this flag for a given block.
This commit also changes amd64_stack_frame_destroyed_p_1 to attempt to
use the epilogue begin directly, and only if an epilogue can't be found
will it attempt heuristics based on the current instruction.
Finally, this commit also changes the dwarf assembler to be able to emit
epilogue-begin instructions, to make it easier to test this patch.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This adds a new parameter to control the DAP logging level. By
default, "expected" exceptions are not logged, but the parameter lets
the user change this when more logging is desired.
This also changes a couple of spots to avoid logging the stack trace
for a DAPException.
This patch also documents the existing DAP logging parameter. I
forgot to document this before.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
In many cases, it's not possible for gdb to discover the executable
when a DAP 'attach' request is used. This patch lets the IDE supply
this information.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This implements DAP cancellation. A new object is introduced that
handles the details of cancellation. While cancellation is inherently
racy, this code attempts to make it so that gdb doesn't inadvertently
cancel the wrong request.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30472
Approved-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
DAP cancellation needs a way to interrupt whatever is happening on
gdb's main thread -- whether that is the inferior, a gdb CLI command,
or Python code.
This patch adds a new gdb.interrupt() function for this purpose. It
simply sets the quit flag and lets gdb do the rest.
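A hedged sketch of how an extension might use it, interrupting GDB's main thread from a worker thread after a deadline (the deadline here is arbitrary):
import gdb
import threading

def interrupt_after(seconds):
    # gdb.interrupt() just sets the quit flag; GDB's main thread acts on it.
    timer = threading.Timer(seconds, gdb.interrupt)
    timer.start()
    return timer

timer = interrupt_after(30)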
No tests in this patch -- instead this is tested via the DAP
cancellation tests.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
This changes Python stop events to carry a "details" dictionary that
holds any relevant information about the stop. The details are
constructed using more or less the same procedure as is done for MI.
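A hedged example of consuming the dictionary from a stop handler (the exact contents depend on the kind of stop):
import gdb

def on_stop(event):
    # For a breakpoint stop this would include something like a
    # "reason" entry, much as MI would report it.
    print("stop details:", event.details)

gdb.events.stop.connect(on_stop)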
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=13587
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Now that DAP is in GDB 14, significant changes to it should be noted
in NEWS. This patch adds a note for a fix that's already gone in. I
started a new section in NEWS because more changes are coming.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30473
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
Building on the last commit, which added a general --debug=COMPONENT
option to the gdbserver command line, this commit updates the monitor
command to allow for general:
(gdb) monitor set debug COMPONENT off|on
style commands. Just like with the previous commit, the COMPONENT can
be any one of all, threads, remote, event-loop, and correspond to the
same set of global debug flags.
While on the command line it is possible to do:
--debug=remote,event-loop,threads
the components have to be entered one at a time with the monitor
command. I guess there's no reason why we couldn't allow component
grouping within the monitor command, but (to me) what I have here
seemed more in the spirit of GDB's existing 'set debug ...' commands.
If people want it then we can always add component grouping later.
Notice in the above that I use 'off' and 'on' instead of '0' and '1',
which is what the 'monitor set debug' command used to use. The 0/1
can still be used, but I now advertise off/on in all the docs and help
text; again, this feels more in line with GDB's existing boolean
settings.
I have removed the two existing monitor commands:
monitor set remote-debug 0|1
monitor set event-loop-debug 0|1
These are replaced by:
monitor set debug remote off|on
monitor set debug event-loop off|on
respectively.
Approved-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Currently, gdbserver has the following command line options related to
debugging output:
--debug
--remote-debug
--event-loop-debug
This doesn't scale well. If I want an extra debug component I need to
add another command line flag.
This commit changes --debug to take a list of components.
The currently supported components are: all, threads, remote, and
event-loop. The 'threads' component represents the debug we currently
get from the --debug option. And if --debug is used without a
component list then the threads component is assumed as the default.
Currently the threads component actually includes a lot of output that
is not really threads related. In the future I'd like to split this
up into some new, separate components. But that is not part of this
commit, or even this series.
The special component 'all' does what you'd expect: enables debug
output from all supported components.
The component list is parsed left to right, and you can prefix a
component with '-' to disable that component, so I can write:
target> gdbserver --debug=all,-event-loop
to get debug for all components except the event-loop component.
I've removed the existing --remote-debug and --event-loop-debug
command line options; these are equivalent to --debug=remote and
--debug=event-loop respectively, or --debug=remote,event-loop to
enable both components.
In this commit I've only updated the command line options; in the next
commit I'll update the monitor commands to support a similar
interface.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Spotted that we'd gained two 'New commands' sections in our 'Changes
since GDB 14' NEWS file block. I've merged them together.
Also, one of these two sections was actually called 'New Commands',
however, all the other places in the NEWS file use 'New commands', so
I've changed it to match.
|
|
This adds a new gdb.Frame.static_link method to the gdb Python layer.
This can be used to find the static link frame for a given frame.
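A minimal usage sketch, assuming the selected frame belongs to a nested function (as in Ada, or GNU C nested functions):
import gdb

frame = gdb.selected_frame()
outer = frame.static_link()   # None if the frame has no static link
if outer is not None:
    print("lexically enclosing frame:", outer.name())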
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit builds on the previous commit, and implements the
extension_language_ops::handle_missing_debuginfo function for Python.
This hook will give user supplied Python code a chance to help find
missing debug information.
The implementation of the new hook is pretty minimal within GDB's C++
code; most of the work is out-sourced to a Python implementation which
is modelled heavily on how GDB's Python frame unwinders are
implemented.
The following new commands are added as commands implemented in
Python; this is similar to how the Python unwinder commands are
implemented:
info missing-debug-handlers
enable missing-debug-handler LOCUS HANDLER
disable missing-debug-handler LOCUS HANDLER
To make use of this extension hook a user creates missing debug
information handler objects and registers these handlers with GDB.
When GDB encounters an objfile that is missing debug information, each
handler is called in turn until one is able to help. Here is a
minimal handler that does nothing useful:
import gdb
import gdb.missing_debug

class MyFirstHandler(gdb.missing_debug.MissingDebugHandler):
    def __init__(self):
        super().__init__("my_first_handler")

    def __call__(self, objfile):
        # This handler does nothing useful.
        return None

gdb.missing_debug.register_handler(None, MyFirstHandler())
Returning None from the __call__ method tells GDB that this handler
was unable to find the missing debug information, and GDB should ask
any other registered handlers.
By extending the __call__ method it is possible for the Python
extension to locate the debug information for objfile and return a
value that tells GDB how to use the information that has been located.
Possible return values from a handler:
- None: This means the handler couldn't help. GDB will call other
registered handlers to see if they can help instead.
- False: The handler has done all it can, but the debug information
for the objfile still couldn't be found. GDB will not call
any other handlers, and will continue without the debug
information for objfile.
- True: The handler has installed the debug information into a
location where GDB would normally expect to find it. GDB
should look again for the debug information.
- A string: The handler can return a filename, which is the file
containing the missing debug information. GDB will load
this file.
When a handler returns True, GDB will look again for the debug
information, but only using the standard built-in build-id and
.gnu_debuglink based lookup strategies. It is not possible for an
extension to trigger another debuginfod lookup; the assumption is that
the debuginfod server is remote, and out of the control of extensions
running within GDB.
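Building on the minimal handler above, a hedged sketch of a handler that returns a filename when a local cache happens to hold the missing debug info (the cache path and naming scheme are made up):
import os

import gdb
import gdb.missing_debug

class LocalCacheHandler(gdb.missing_debug.MissingDebugHandler):
    def __init__(self):
        super().__init__("local_cache_handler")

    def __call__(self, objfile):
        # Hypothetical cache keyed on the objfile's base name.
        candidate = os.path.join("/var/cache/my-debug",
                                 os.path.basename(objfile.filename) + ".debug")
        if os.path.exists(candidate):
            return candidate   # GDB loads the debug info from this file.
        return None            # Let any other handlers try.

gdb.missing_debug.register_handler(None, LocalCacheHandler())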
Handlers can be registered globally, or per program space. GDB checks
the handlers for the current program space first, and then all of the
global handles. The first handler that returns a value that is not
None, has "handled" the objfile, at which point GDB continues.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit documents in both manual and NEWS:
- the new remote clone event stop reply,
- the new QThreadOptions packet and its current defined options,
- the associated "set/show remote thread-events-packet" command,
- and the associated QThreadOptions qSupported feature.
Approved-By: Eli Zaretskii <eliz@gnu.org>
Change-Id: Ic1c8de1fefba95729bbd242969284265de42427e
|
|
If scheduler-locking is in effect, e.g., with "set scheduler-locking
on", and you step over a function that spawns a new thread, the new
thread is allowed to run free, at least until some event is hit, at
which point, whether the new thread is re-resumed depends on a number
of seemingly random factors. E.g., if the target is all-stop, and the
parent thread hits a breakpoint, and GDB decides the breakpoint isn't
interesting to report to the user, then the parent thread is resumed,
but the new thread is left stopped.
I think that letting the new threads run with scheduler-locking
enabled is a defect. This commit fixes that, making use of the new
clone events on Linux, and of target_thread_events() on targets where
new threads have no connection to the thread that spawned them.
Testcase and documentation changes included.
Approved-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: Ie12140138b37534b7fc1d904da34f0f174aa11ce
|
|
This adds a maintenance command that lets you list all the LWPs under
control of the linux-nat target.
For example:
(gdb) maint info linux-lwps
LWP Ptid         Thread ID
560948.561047.0  None
560948.560948.0  1.1
This shows that the "560948.561047.0" LWP doesn't map to any thread_info
object, which is bogus. We'll be using this in a testcase in a
following patch.
Co-Authored-By: Pedro Alves <pedro@palves.net>
Change-Id: Ic4e9e123385976e5cd054391990124b7a20fb3f5
|
|
The disassembler gained a new /b flag in this commit:
commit d4ce49b7ac077a9882d6a5e689e260300045ca88
Date: Tue Jun 21 20:23:35 2022 +0100
gdb: disassembler opcode display formatting
The /b and /r flags result in the instruction opcodes being displayed in
different formats, so it's not possible to have both at the same
time. Currently the /b flag overrides the /r flag.
We have a similar situation with the /m and /s flags, but here, if the
user tries to use both flags then they will get an error.
I think the error is clearer, so in this commit I propose that we add
an error if /r and /b are both used.
Obviously this change breaks backwards compatibility. I don't have a
compelling argument for why we should make the change beyond my
feeling that it was a mistake not to add this error from the start,
and that the new behaviour is better.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This patch proposes to require a C++17 compiler to build gdb /
gdbsupport / gdbserver. Before this patch, GDB required a C++11
compiler.
The general policy regarding bumping C++ language requirement in GDB (as
stated in [1]) is:
Our general policy is to wait until the oldest compiler that
supports C++NN is at least 3 years old.
Rationale: We want to ensure reasonably widespread compiler
availability, to lower barrier of entry to GDB contributions, and to
make it easy for users to easily build new GDB on currently
supported stable distributions themselves. 3 years should be
sufficient for latest stable releases of distributions to include a
compiler for the standard, and/or for new compilers to appear as
easily installable optional packages. Requiring everyone to build a
compiler first before building GDB, which would happen if we
required a too-new compiler, would cause too much inconvenience.
See the policy proposal and discussion
[here](https://sourceware.org/ml/gdb-patches/2016-10/msg00616.html).
The first GCC release with full C++17 support is GCC-9[2],
released in 2019[3], which is over 4 years ago. Clang has had C++17
support since Clang-5[4], released in 2018[5].
Discussions with many distros showed that a C++17-capable compiler is
always available, meaning that there is no hard requirement preventing us
from requiring it going forward.
[1] https://sourceware.org/gdb/wiki/Internals%20GDB-C-Coding-Standards#When_is_GDB_going_to_start_requiring_C.2B-.2B-NN_.3F
[2] https://gcc.gnu.org/projects/cxx-status.html#cxx17
[3] https://gcc.gnu.org/gcc-9/
[4] https://clang.llvm.org/cxx_status.html
[5] https://releases.llvm.org/
Change-Id: Id596f5db17ea346e8a978668825787b3a9a443fd
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Pedro Alves <pedro@palves.net>
|
|
Add a gdb.Value.bytes attribute. This attribute contains the bytes of
the value (assuming the complete bytes of the value are available).
If the bytes of the gdb.Value are not available then accessing this
attribute raises an exception.
The bytes object returned from gdb.Value.bytes is cached within GDB so
that the same bytes object is returned each time. The bytes object is
created on-demand though to reduce unnecessary work.
For some values we can of course obtain the same information by
reading inferior memory based on gdb.Value.address and
gdb.Value.type.sizeof; however, not every value is in memory, so we
don't always have an address.
The gdb.Value.bytes attribute will convert any value to a bytes
object, so long as the contents are available. The value can be one
created purely in Python code, the value could be in a register,
or (of course) the value could be in memory.
The Value.bytes attribute can also be assigned to. Assigning to this
attribute is similar to calling Value.assign: the contents of the
underlying value are updated within the inferior. The value assigned
to Value.bytes must be a buffer which contains exactly the correct
number of bytes (i.e. unlike value creation, we don't allow oversized
buffers).
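A hedged illustration of both directions, using a made-up inferior variable 'buf':
import gdb

val = gdb.parse_and_eval("buf")    # hypothetical inferior variable
data = val.bytes                   # a bytes object, cached by GDB
assert len(data) == val.type.sizeof

# Assignment requires a buffer of exactly the right size.
val.bytes = b"\x00" * val.type.sizeof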
To support this assignment-like behaviour I've factored out the core
of valpy_assign. I've also updated convert_buffer_and_type_to_value
so that it can (for my use case) check the exact buffer length.
The restrictions for when Value.bytes can or cannot be written to
are exactly the same as for Value.assign.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=13267
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed a few more style issues in commit 8b9c08eddac ("[gdb/symtab] Add
name_of_main and language_of_main to the DWARF index"), after checking it
with gcc's check_GNU_style.{sh,py}.
Fix these.
Build on x86_64-linux.
|
|
Andry pointed out that the DAP code did not properly handle
gdb.LazyString results from a pretty-printer, yielding:
TypeError: Object of type LazyString is not JSON serializable
This patch fixes the problem, partly with a small patch in varref.py,
but mainly by implementing tp_str for LazyString.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit adds a new Python function, gdb.notify_mi, that can be used
to emit custom async notifications to the MI channel. This can be used, among
other things, to implement notifications about events MI does not support,
such as a remote connection being closed or a register change.
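A hedged example of emitting such a notification (the notification name and payload are made up):
import gdb

# An MI front end would see something like:
#   =connection-removed,reason="remote closed the connection"
gdb.notify_mi("connection-removed",
              {"reason": "remote closed the connection"})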
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This patch adds a new section to the DWARF index containing the name
and the language of the main function symbol, gathered from
`cooked_index::get_main`, if available. Currently, for lack of a better name,
this section is called the "shortcut table". The way this information is saved and
applied when an index is loaded mirrors how it is done in
`cooked_index_functions`; more specifically, the full name of the main function
symbol is saved and `set_objfile_main_name` is used to apply it after it is
loaded.
The main use case for this patch is in improving startup times when dealing with
large binaries. Currently, when an index is used, GDB has to expand symtabs
until it finds out what the language of the main function symbol is. For some
large executables, this may take a considerable amount of time to complete,
slowing down startup. This patch bypasses that operation by having both the name
and language of the main function symbol be provided ahead of time by the index.
In my testing (a binary with about 1.8GB worth of DWARF data) this change brings
startup time down from about 34 seconds to about 1.5 seconds.
When testing the patch with target board cc-with-gdb-index, test-case
gdb.fortran/nested-funcs-2.exp starts failing, but this is due to a
pre-existing issue, filed as PR symtab/30946.
Tested on x86_64-linux, with target board unix and cc-with-gdb-index.
PR symtab/24549
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=24549
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
This commit adds a new section for the next release branch, and renames
the section of the current branch, now that it has been cut.
|
|
I spotted two entries in the NEWS file that I believe are in the wrong
place, these are:
- An entry about MI v1 being deprecated, this feels like it should
be the first entry under the 'MI changes' heading, and
- An entry for the $_shell convenience function which is currently
under the 'New commands' heading (sort of), when I think this
should be listed in the general news section.
|
|
If you want to install GDB in a custom prefix, have it look for debug info
in that prefix but also in the distro's default location (typically,
/usr/lib/debug) and run the GDB testsuite before doing "make install", you
have a bit of a problem:
Configuring GDB with '--prefix=$PREFIX' sets the GDB 'debug-file-directory'
parameter to $PREFIX/lib/debug. Unfortunately this precludes GDB from
looking for distro-installed debug info in /usr/lib/debug. For regular GDB
use you could set debug-file-directory to $PREFIX:/usr/lib/debug in
$PREFIX/etc/gdbinit so that GDB will look in both places, but if you want
to run the testsuite then that doesn't help because in that case GDB runs
with the '-nx' option.
There's the configure option '--with-separate-debug-dir' to set the default
value for 'debug-file-directory', but it accepts only one directory and not
a list. I considered modifying it to accept a list, but it's not obvious
how to do that because its value is also used by BFD, as well as processed
for "relocatability".
I thought it was simpler to add a new option to specify a list of
additional directories that will be appended to the debug-file-directory
setting.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Document changes introduced by gdb's SME2 support.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org>
|
|
Provide documentation for the SME feature and other information that
should be useful for users who need to debug an SME-capable target.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-by: Thiago Jung Bauermann <thiago.bauermann@linaro.org>
|
|
Initially I just wanted a Python event for when GDB removes a program
space: I'm writing a Python extension that caches information for each
program space, and need to know when I should discard entries for a
particular program space.
But, it seemed easy enough to also add an event for when GDB adds a
new program space, so I went ahead and added both new events.
Of course, we don't currently have an observable for program space
addition or removal, so I first needed to add these. After that it's
pretty simple to add two new Python events and have these trigger.
The two new event registries are:
events.new_progspace
events.free_progspace
These emit NewProgspaceEvent and FreeProgspaceEvent objects
respectively, each of these new event types has a 'progspace'
attribute that contains the relevant gdb.Progspace object.
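A hedged sketch of the cache-eviction use case described above (the cache itself is hypothetical):
import gdb

cache = {}   # hypothetical per-program-space cache

def on_new_progspace(event):
    cache[event.progspace] = {}

def on_free_progspace(event):
    # Drop anything cached for the program space GDB is removing.
    cache.pop(event.progspace, None)

gdb.events.new_progspace.connect(on_new_progspace)
gdb.events.free_progspace.connect(on_free_progspace)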
There are a couple of things to be mindful of.
First, it is not possible to catch the NewProgspaceEvent for the very
first program space, the one that is created when GDB first starts, as
this program space is created before any Python scripts are sourced.
In order to allow this event to be caught we would need to defer
creating the first program space, and as a consequence the first
inferior, until some later time. But, existing scripts could easily
depend on there being an initial inferior, so I really don't think we
should change that -- and so, we end up with the consequence that we
can't catch the event for the first program space.
The second, I think minor, issue is that GDB doesn't clean up its
program spaces upon exit -- or at least, they are not cleaned up
before Python is shut down. As a result, any program spaces in use at
the time GDB exits don't generate a FreeProgspaceEvent. I'm not
particularly worried about this for my use case, I'm using the event
to ensure that a cache doesn't hold stale entries within a single GDB
session. It's also easy enough to add a Python at-exit callback which
can do any final cleanup if needed.
Finally, when testing, I did hit a slightly weird issue with some of
the remote boards (e.g. remote-stdio-gdbserver). As a consequence of
this issue I see some output like this in the gdb.log:
(gdb) PASS: gdb.python/py-progspace-events.exp: inferior 1
step
FreeProgspaceEvent: <gdb.Progspace object at 0x7fb7e1d19c10>
warning: cannot close "target:/lib64/libm.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
warning: cannot close "target:/lib64/libc.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
warning: cannot close "target:/lib64/ld-linux-x86-64.so.2": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
do_parent_stuff () at py-progspace-events.c:41
41 ++global_var;
(gdb) PASS: gdb.python/py-progspace-events.exp: step
The 'FreeProgspaceEvent ...' line is expected, that's my test Python
extension logging the event. What isn't expected are all the blocks
like:
warning: cannot close "target:/lib64/libm.so.6": Cannot execute this command while the target is running.
Use the "interrupt" command to stop the target
and then try again.
It turns out that this has nothing to do with my changes; this is just
a consequence of reading files over the remote protocol. The test
forks a child process which GDB stays attached to. When the child
exits, GDB cleans up by calling prune_inferiors, which in turn can
result in GDB trying to close some files that are open because of the
inferior being deleted.
If the prune_inferiors call occurs when the remote target is
running (and in non-async mode) then GDB will try to send a fileio
packet while the remote target is waiting for a stop reply, and the
remote target will throw an error, see remote_target::putpkt_binary in
remote.c for details.
I'm going to look at fixing this, but, as I said, it has nothing to
do with this change; I just mention it because I ended up needing to
account for these warning messages in one of my tests, and it all
looks a bit weird.
Approved-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
I ran across this site:
https://no-color.org/
... which lobbies for tools to recognize the NO_COLOR environment
variable and disable any terminal styling when it is seen.
This patch implements this for gdb.
Regression tested on x86-64 Fedora 38.
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
Reviewed-by: Kevin Buettner <kevinb@redhat.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit makes the executable_changed observable available through
the Python API as an event. There's nothing particularly interesting
going on here; it just follows the same pattern as many of the other
Python events we support.
The new event registry is called events.executable_changed, and this
emits an ExecutableChangedEvent object which has two attributes, a
gdb.Progspace called 'progspace', this is the program space in which
the executable changed, and a Boolean called 'reload', which is True
if the same executable changed on disk and has been reloaded, or is
False when a new executable has been loaded.
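A hedged example handler using the two attributes described above:
import gdb

def on_exec_changed(event):
    # event.reload is True when the same executable was reloaded from disk.
    action = "reloaded" if event.reload else "new executable loaded"
    print(action, "in", event.progspace)

gdb.events.executable_changed.connect(on_exec_changed)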
One interesting thing did come up during testing though: you'll notice
the test contains a setup_kfail call. During testing I observed that
the executable_changed event would trigger twice when GDB restarted an
inferior. However, the ExecutableChangedEvent object is identical for
both calls, so the wrong information is never sent out, we just see
one too many events.
I tracked this down to how the reload_symbols function (symfile.c)
takes care to also reload the executable; however, I've split fixing
this into a separate commit, so see the next commit for details.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a new Progspace.executable_filename attribute that contains the
path to the executable for this program space, or None if no
executable is set.
The path within this attribute will be set by the "exec-file" and/or
"file" commands.
Accessing this attribute for an invalid program space will raise an
exception.
This new attribute is similar to, but not the same as, the existing
gdb.Progspace.filename attribute. If I could change the past, I'd
change the 'filename' attribute to 'symbol_filename', which is what it
actually represents. The old attribute will be set by the
'symbol-file' command, while the new attribute is set by the
'exec-file' command. Obviously the 'file' command sets both of these
attributes.
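A minimal usage sketch contrasting the two attributes:
import gdb

pspace = gdb.current_progspace()
# None until "exec-file" or "file" has set an executable.
print("executable:", pspace.executable_filename)
# For comparison, the symbol file set by "symbol-file" / "file".
print("symbols from:", pspace.filename)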
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|