|
I noticed that the list of general NEWS items seemed to have gotten
mixed up a bit in the NEWS file. This commit just moves things around
so that the general items all appear at the start of the 'Changes
since GDB 15' section. I've not changed any of the actual content.
|
|
This commit updates GDB so that thread or inferior specific
breakpoints are only inserted into the program space in which the
specific thread or inferior is running.
In terms of implementation, getting this basically working is easy
enough: now that a breakpoint's thread or inferior field is set up
prior to GDB looking for locations, we can easily use this information
to find a suitable program_space and pass this as a filter when
creating the sals.
Or we could if breakpoint_ops::create_sals_from_location_spec allowed
us to pass in a filter program_space.
So, this commit extends breakpoint_ops::create_sals_from_location_spec
to take a program_space argument, and uses this to filter the set of
returned sals. This accounts for about half the change in this patch.
The second set of changes starts from breakpoint_set_thread and
breakpoint_set_inferior, which are called when the thread or inferior
for a breakpoint changes, e.g. from the Python API.
Previously this call would never result in the locations of a
breakpoint changing; after all, locations were inserted in every
program space, and we just used the thread or inferior variable to
decide when we should stop. Now though, changing a breakpoint's
thread or inferior can mean we need to figure out a new set of
breakpoint locations.
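For example, the following Python snippet is a minimal sketch of this
scenario (the breakpoint location 'foo' and the thread number are
illustrative; gdb.Breakpoint and its 'thread' attribute are existing
API):
import gdb
bp = gdb.Breakpoint("foo")   # locations found in every program space
bp.thread = 2                # breakpoint is now thread-specific; if thread 2
                             # lives in a single program space, locations in
                             # other program spaces can now be dropped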
To support this I've added a new breakpoint_re_set_one function, which
is like breakpoint_re_set, but takes a single breakpoint, and just
updates the locations for that one breakpoint. We only need to call
this function if the program_space in which a breakpoint's thread (or
inferior) is running actually changes. If the program_space does
change then we call the new breakpoint_re_set_one function passing in
the program_space which should be used to filter the new locations (or
nullptr to indicate we should set locations in all program spaces).
This filter program_space needs to propagate down to all the re_set
methods; this accounts for the remaining half of the changes in this
patch.
There were a couple of existing tests that created thread or inferior
specific breakpoints and then checked the 'info breakpoints' output,
these needed updating. These were:
gdb.mi/user-selected-context-sync.exp
gdb.multi/bp-thread-specific.exp
gdb.multi/multi-target-continue.exp
gdb.multi/multi-target-ping-pong-next.exp
gdb.multi/tids.exp
gdb.mi/new-ui-bp-deleted.exp
gdb.multi/inferior-specific-bp.exp
gdb.multi/pending-bp-del-inferior.exp
I've also added some additional tests to:
gdb.multi/pending-bp.exp
I've updated the documentation and added a NEWS entry.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
The initial motivation for this commit was to allow thread or inferior
specific breakpoints to only be inserted within the appropriate
inferior's program-space. The benefit of this is that inferiors for
which the breakpoint does not apply will no longer need to stop, and
then resume, for such breakpoints. This commit does not make this
change, but is a refactor to allow this to happen in a later commit.
The problem we currently have is that when a thread-specific (or
inferior-specific) breakpoint is created, the thread (or inferior)
number is only parsed by calling find_condition_and_thread_for_sals.
This function is only called for non-pending breakpoints, and requires
that we know the locations at which the breakpoint will be placed (for
expression checking in case the breakpoint is also conditional).
A consequence of this is that by the time we figure out the breakpoint
is thread-specific we have already looked up locations in all program
spaces. This feels wasteful -- if we knew the thread-id earlier then
we could reduce the work GDB does by only looking up locations within
the program space for which the breakpoint applies.
Another consequence of how find_condition_and_thread_for_sals is
called is that pending breakpoints don't currently know they are
thread-specific, nor even that they are conditional! Additionally, by
delaying parsing the thread-id, pending breakpoints can be created for
non-existent threads; this is different from how non-pending
breakpoints are handled, so I can do this:
$ gdb -q ./gdb/testsuite/outputs/gdb.multi/pending-bp/pending-bp
Reading symbols from ./gdb/testsuite/outputs/gdb.multi/pending-bp/pending-bp...
(gdb) break foo thread 99
Function "foo" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (foo thread 99) pending.
(gdb) r
Starting program: /tmp/gdb/testsuite/outputs/gdb.multi/pending-bp/pending-bp
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Error in re-setting breakpoint 1: Unknown thread 99.
[Inferior 1 (process 3329749) exited normally]
(gdb)
GDB only checked the validity of 'thread 99' at the point the 'foo'
location became non-pending. In contrast, if I try this:
$ gdb -q ./gdb/testsuite/outputs/gdb.multi/pending-bp/pending-bp
Reading symbols from ./gdb/testsuite/outputs/gdb.multi/pending-bp/pending-bp...
(gdb) break main thread 99
Unknown thread 99.
(gdb)
GDB immediately checks if 'thread 99' exists. I think inconsistencies
like this are confusing, and should be fixed if possible.
In this commit the create_breakpoint function is updated so that the
extra_string, which contains the thread, inferior, task, and/or
condition information, is parsed immediately, even for pending
breakpoints.
Obviously, the condition still can't be validated until the breakpoint
becomes non-pending, but the thread, inferior, and task information
can be pulled from the extra-string, and can be validated early on,
even for pending breakpoints. The -force-condition flag is also
parsed as part of this early parsing change.
There are a couple of benefits to doing this:
1. Printing of breakpoints is more consistent now. Consider creating
a conditional breakpoint before this commit:
(gdb) set breakpoint pending on
(gdb) break pendingfunc if (0)
Function "pendingfunc" not defined.
Breakpoint 1 (pendingfunc if (0)) pending.
(gdb) break main if (0)
Breakpoint 2 at 0x401198: file /tmp/hello.c, line 18.
(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y <PENDING> pendingfunc if (0)
2 breakpoint keep y 0x0000000000401198 in main at /tmp/hello.c:18
stop only if (0)
(gdb)
And after this commit:
(gdb) set breakpoint pending on
(gdb) break pendingfunc if (0)
Function "pendingfunc" not defined.
Breakpoint 1 (pendingfunc) pending.
(gdb) break main if (0)
Breakpoint 2 at 0x401198: file /home/andrew/tmp/hello.c, line 18.
(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y <PENDING> pendingfunc
stop only if (0)
2 breakpoint keep y 0x0000000000401198 in main at /home/andrew/tmp/hello.c:18
stop only if (0)
(gdb)
Notice that the display of the condition is now the same for the
pending and non-pending breakpoints.
The same is true for the thread, inferior, or task information in
thread, inferior, or task specific breakpoints; this information is
displayed on its own line rather than being part of the 'What'
field.
2. We can check that the thread exists as soon as the pending
breakpoint is created. Currently there is a weird difference
between pending and non-pending breakpoints when creating a
thread-specific breakpoint.
A pending thread-specific breakpoint only checks its thread when it
becomes non-pending, at which point the thread the breakpoint was
intended for might have exited. Here's the behaviour before this
commit:
(gdb) set breakpoint pending on
(gdb) break foo thread 2
Function "foo" not defined.
Breakpoint 2 (foo thread 2) pending.
(gdb) c
Continuing.
[Thread 0x7ffff7c56700 (LWP 2948835) exited]
Error in re-setting breakpoint 2: Unknown thread 2.
[Inferior 1 (process 2948832) exited normally]
(gdb)
Notice the 'Error in re-setting breakpoint 2: Unknown thread 2.'
line; this was triggered when GDB tried to make the breakpoint
non-pending and discovered that the thread no longer exists.
Compare that to the behaviour after this commit:
(gdb) set breakpoint pending on
(gdb) break foo thread 2
Function "foo" not defined.
Breakpoint 2 (foo) pending.
(gdb) c
Continuing.
[Thread 0x7ffff7c56700 (LWP 2949243) exited]
Thread-specific breakpoint 2 deleted - thread 2 no longer in the thread list.
[Inferior 1 (process 2949240) exited normally]
(gdb)
Now the behaviour for pending breakpoints is identical to
non-pending breakpoints: the thread-specific breakpoint is removed
as soon as the thread the breakpoint is associated with exits.
There is an additional change; when the pending breakpoint is
created prior to this patch we see this line:
Breakpoint 2 (foo thread 2) pending.
While after this patch we get this line:
Breakpoint 2 (foo) pending.
Notice that 'thread 2' has disappeared. This might look like a
regression, but I don't think it is. That we said 'thread 2'
before was just a consequence of the lazy parsing of the breakpoint
specification, while with this patch GDB understands, and has
parsed away the 'thread 2' bit of the spec. If folk think the old
information was useful then this would be trivial to add back in
code_breakpoint::say_where.
As a result of this commit the breakpoints 'extra_string' field is now
only used by bp_dprintf type breakpoints to hold the printf format and
arguments. This string should always be empty for other breakpoint
types. This allows some cleanup in print_breakpoint_location.
In code_breakpoint::code_breakpoint I've changed an error case into an
assert. This is because the error is now handled earlier in
create_breakpoint. As a result we know that by this point, the
extra_string will always be nullptr for anything other than a
bp_dprintf style breakpoint.
The find_condition_and_thread_for_sals function is no longer
needed. It previously did the delayed splitting of the extra
string into thread, task, and condition, but this is now all done in
create_breakpoint, so find_condition_and_thread_for_sals can be
deleted, and the code that called it in
code_breakpoint::location_spec_to_sals can be removed. With this
update that code would only ever be reached for bp_dprintf style
breakpoints, and in these cases the extra_string should not contain
anything other than format and args.
The most interesting changes are all in create_breakpoint and in the
new file break-cond-parse.c. We have a new block of code early on in
create_breakpoint that is responsible for splitting the extra_string
into its component parts by calling create_breakpoint_parse_arg_string,
a function in the new break-cond-parse.c file. This means that some
of the later code can be simplified a little.
The new break-cond-parse.c file implements splitting up the
extra_string and finding all the parts, as well as some self-tests for
the new function.
Finally, now we know all the breakpoint details, these can be stored
within the breakpoint object if we end up creating a deferred
breakpoint. Additionally, if we are creating a deferred bp_dprintf we
can parse the extra_string to build the printf command.
The implementation here aims to maintain backwards compatibility as
much as possible; this means that:
1. We support abbreviations of 'thread', 'task', and 'inferior' in
some places on the breakpoint line. The handling of abbreviations
has (before this patch) been a little weird, so this works:
(gdb) break *main th 1
And creates a breakpoint at '*main' for thread 1 only, while this
does not work:
(gdb) break main th 1
In this case GDB will try to find the symbol 'main th 1'. This
weirdness exists before and after this patch.
2. The handling of '-force-condition' is odd: if this flag appears
immediately after a condition then it will be treated as part of the
condition, e.g.:
(gdb) break main if 0 -force-condition
No symbol "force" in current context.
But we are fine with these alternatives:
(gdb) break main if 0 thread 1 -force-condition
(gdb) break main -force-condition if 0
Again, this is just a quirk of how the breakpoint line used to be
parsed, but I've maintained this for backward compatibility. During
review it was suggested that -force-condition should become an
actual breakpoint flag (i.e. only valid after the 'break' command
but before the function name), and I don't think that would be a
terrible idea; however, that's not currently a trivial change, and I
think it should be done as a separate piece of work. For now, this
patch just maintains the current behaviour.
The implementation works by first splitting the breakpoint condition
string (everything after the location specification) into a list of
tokens, each token has a type and a value. (e.g. we have a THREAD
token where the value is the thread-id string). The list of tokens is
validated, and in some cases, tokens are merged. Then the values are
extracted from the remaining token list.
Consider this breakpoint command:
(gdb) break main thread 1 if argc == 2
The condition string passed to create_breakpoint_parse_arg_string is
going to be 'thread 1 if argc == 2', which is then split into the
tokens:
{ THREAD: "1" } { CONDITION: "argc == 2" }
The thread-id (1) and the condition string 'argc == 2' are extracted
from these tokens and returned to create_breakpoint.
Now consider this breakpoint command:
(gdb) break some_function if ( some_var == thread )
Here the user wants a breakpoint if 'some_var' is equal to the
variable 'thread'. However, when this is initially parsed we will
find these tokens:
{ CONDITION: "( some_var == " } { THREAD: ")" }
This is a consequence of how we have to try and figure out the
contents of the 'if' condition without actually parsing the
expression; parsing the expression requires that we know the location
in order to look up the variables by name, and this can't be done for
pending breakpoints (their location isn't known yet), and one of the
points of this work is that we extract things like thread-id for
pending breakpoints.
And so, it is in this case that token merging takes place. We check
if the value of a token appearing immediately after the CONDITION
token looks valid. In this case, does ')' look like a valid
thread-id? Clearly, in this case ')' does not, and so we merge the
THREAD token into the condition token, giving:
{ CONDITION: "( some_var == thread )" }
Which is what we want.
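As a rough illustration of the merging idea (a Python sketch, not the
actual C++ code in break-cond-parse.c; the digits-only thread-id check
is a simplification):
def merge_trailing_thread_token(tokens):
    # tokens is a list of (kind, value) pairs, e.g.
    # [("CONDITION", "( some_var == "), ("THREAD", ")")].
    merged = []
    for kind, value in tokens:
        if (kind == "THREAD" and merged
                and merged[-1][0] == "CONDITION"
                and not value.isdigit()):
            # ")" is not a valid thread-id, so fold the THREAD token
            # back into the condition text.
            prev_kind, prev_value = merged.pop()
            merged.append((prev_kind, prev_value + "thread " + value))
        else:
            merged.append((kind, value))
    return merged
Running this over the second example above yields the single
{ CONDITION: "( some_var == thread )" } token, as described.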
I'm sure that we might still be able to come up with some edge cases
where the parser makes the wrong choice. I think long term the best
way to work around these would be to move the thread, inferior, task,
and -force-condition flags to be "real" command options for the break
command. I am looking into doing this, but can't guarantee if/when
that work would be completed, so this patch should be reviewed assuming
that the work will never arrive (though I hope it will).
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit changes the 'target ...' commands that accept a filename
to take a quoted or escaped filename rather than a literal filename.
What this means in practice is that if you are specifying a filename
that contains no white space or quote characters, then nothing should
change, e.g.:
target exec /path/to/some/file
works both before and after this commit.
However, if a user wishes to specify a file containing white space
then either the entire filename needs to be quoted, or the special
white space needs to be escaped. Before this patch a user could
write:
target exec /path/to a file/containing spaces
But after this commit the user would have to choose one of:
target exec "/path/to a file/containing spaces"
or
target exec /path/to\ a\ file/containing\ spaces
Obviously this is a potentially breaking change. The benefit of
making this change is consistency. Commands that take multiple
arguments (one of which is a filename), or, in the future, commands that
take filename options, will always need to use quoted/escaped
filenames, so converting all unquoted filename commands to use quoting
or escaping makes the UI more consistent.
Additionally (though this is probably not a common problem), GDB
strips trailing white space from commands that the user enters. As
such it is not possible to reference any file that ends in white space
unless the quoting / escaping style is used. Though I suspect very
few users run into this problem!
The downside obviously is that this is a UI breaking change.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit changes how GDB processes command arguments for the
following commands:
compile file
maint print c-tdesc
save gdb-index
After this commit these commands will now expect their single filename
argument to be (optionally) quoted if it contains any special
characters (e.g. white space or quotes).
If the filename does not contain any special characters then nothing
changes. As an example:
(gdb) save gdb-index /path/to/some/directory/
will work before and after this patch. However, if the directory
name contains white space then before this patch a user would write:
(gdb) save gdb-index /path/to some/directory/
But this will now fail as GDB will consider this as two arguments,
'/path/to' and 'some/directory/'. To pass this single directory name
a user must now do one of these:
(gdb) save gdb-index "/path/to some/directory/"
(gdb) save gdb-index '/path/to some/directory/'
(gdb) save gdb-index /path/to\ some/directory/
This brings these commands into line with commands like 'file' and
'symbol-file', which have supported quoted filenames for a while.
The motivation for this change is to make handling of filename
arguments consistent throughout GDB. We can't move to all commands
taking non-quoted filenames as the non-quoted style only allows for a
single argument. Additionally, the non-quoted style doesn't allow for
filenames that end in white space (though this is probably pretty
rare). So, if we want to have consistency the only choice is to move
towards supporting quoted filenames.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
The 'remove-symbol-file' command doesn't currently offer command
completion. This commit addresses this.
The 'remove-symbol-file' uses gdb_argv to split its command arguments,
this means that the filename the command expects can be quoted.
However, the 'remove-symbol-file' command is a little weird in that it
also has a '-a' option: if this option is passed then the command
expects not a filename, but an address.
Currently the remove_symbol_file_command function splits the command
args using gdb_argv, checks for a '-a' flag by looking at the first
argument value, and then expects the filename or address to occupy a
single entry in the gdb_argv array.
The first thing I do is handle the '-a' flag using GDB's option
system. I model this option as a flag_option_def (a boolean option).
I've dropped the use of gdb_argv and instead use the new(ish) function
extract_single_filename_arg, which was added a couple of commits back,
to parse the filename argument (when '-a' is not given).
If '-a' is given then the remove-symbol-file command expects an address
rather than a filename. As we previously split the arguments using
gdb_argv this meant the address needed to appear as a single
argument. So a user could write:
(gdb) remove-symbol-file 0x1234
Or they could write:
(gdb) remove-symbol-file some_function
Both of these would work fine. But a user could not write:
(gdb) remove-symbol-file some_function + 0x1000
As only the 'some_function' part would be processed. Now the user
could do this:
(gdb) remove-symbol-file "some_function + 0x1000"
By enclosing the address expression in quotes this would be handled as
a single argument. However, this is a little weird; that's not how
commands like 'print' or 'x' work. Also this functionality was
neither documented nor tested.
And so, in this commit, by removing the use of gdb_argv I bring the
'remove-symbol-file' command inline with GDB's other commands that
take an expression, the quotes are no longer needed.
Usually in a completer we call 'complete_options', but don't actually
capture the option values. But for remove-symbol-file I do. This
allows me to spot when the '-a' option has been given; I can then
complete the rest of the command line as either a filename or an
expression.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
In commit:
commit 3055e3d2f13bb84db90b9c19d427c362053775d2
Date: Tue May 21 15:58:02 2024 +0100
gdb: add GDB side target_ops::fileio_stat implementation
I managed to place a NEWS entry in the wrong place. I put the entry
in 'Changes in GDB 15' rather than 'Changes since GDB 15'. This
commit moves the entry to the correct place.
|
|
While reviewing a patch I wanted to understand which blocks existed at
a given address.
The 'maint print symbols' command does provide some of this
information, but that command displays all blocks within a given
symtab. If I want to know which blocks are at a given address I have
to figure that out for myself based on the output of 'maint print
symbols' ... and I'm too lazy for that!
So this command lists just those blocks at a given address, along with
information about each block's type. This new command doesn't list the
symbols within each block; for that my expectation is that you'd cross
reference the output with that of 'maint print symbols'.
The new command format is:
maintenance info blocks
maintenance info blocks ADDRESS
This lists the blocks at ADDRESS, or at the current $pc if ADDRESS is
not given. Blocks are listed starting at the global block, then the
static block, and then the progressively narrower scoped blocks.
For each block we list the internal block pointer (which allows easy
cross referencing with 'maint print symbols'), the inferior address
range, along with other useful information.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
While reviewing a patch I wanted to view GDB's inline frame state. I
don't believe there's currently a maintenance command to view this
information, so in this commit I've added one.
The new command is:
maintenance info inline-frames
maintenance info inline-frames ADDRESS
The command lists the inline frames that start at ADDRESS, or at the
current $pc if no ADDRESS is given. The command also displays the
"outer" function in which the inline functions are present.
An example of the command output:
(gdb) maintenance info inline-frames
Cached inline state information for thread 1.
program counter = 0x401137
skipped frames = 1
bar
> foo
main
(gdb)
This tells us that function 'main' called 'foo' which called 'bar'.
The functions 'foo' and 'bar' are both inline and both start at the
address 0x401137. Currently GDB considers the inferior to be stopped
in frame 'foo' (note the '>' marker), this means that there is 1
skipped frame (function 'bar').
The function 'main' is the outer function. The outer function might
not start at 0x401137, it is simply the function that contains the
inline functions.
If the user does a 'step' then GDB will not actually move the inferior
forward, but will instead simply tell the user that the inferior
entered 'bar'. The output of 'maint info inline-frames' will change
like this:
(gdb) step
bar () at inline.c:6
6 ++global_counter;
(gdb) maintenance info inline-frames
Cached inline state information for thread 1.
program counter = 0x401137
skipped frames = 0
> bar
foo
main
(gdb)
Now GDB is in function 'bar' and there are no skipped frames.
I have renamed skipped_symbols to function_symbols within the
inline_state class. We are now going to carry the "outer"
function (the function that contains all the inlined functions) within
this list (as the last entry), so the old name didn't really make
sense. As a consequence of this rename I've updated some comments.
I've changed stopped_by_user_bp_inline_frame to take a symbol rather
than a block. Previously we just used the block to access the
associated function symbol. After this commit we can just pass in the
function symbol directly, so let's do that.
New function gather_inline_frames contains some of the logic pulled
from skip_inline_frames. This new function builds the list of all
symbols of inlined functions that start at a given $pc value and also
the "outer" function that contains all of the inlined functions.
In skip_inline_frames I've split the loop logic into two. The loop to
build the function symbol list has moved to gather_inline_frames. The
loop to figure out how many of the inlined functions we are skipping
remains in skip_inline_frames and uses the result of calling
gather_inline_frames.
In inline_skipped_symbol there are some minor updates to the comment,
and I've tweaked one of the asserts now that the function symbols list
also contains the "outer" function (a <= becomes <).
The maintenance_info_inline_frames function is new and implements the
new maintenance command.
And _initialize_inline_frame is updated to register the new command.
I've added a basic test for the new command. Please excuse the file
name for the new test, in the next commit I'll be adding additional
tests and at that point the file name will make sense.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
In a record session, when we move backward, GDB switches from normal
execution to simulation. Moving forward again, the emulation continues
until the end of the reverse history. When the end is reached, the
execution stops, and a warning message is shown. This message has been
modified to indicate that the forward emulation has reached the end, but
the execution can continue as normal, and the recording will also continue.
Before this patch, the warning message shown in that case was the same as
in the reverse case. This meant that when the end of history was reached in
either backward or forward emulation, the same message was displayed:
"No more reverse-execution history."
This message has changed for these two cases. Backward emulation:
"Reached end of recorded history; stopping.
Backward execution from here not possible."
Forward emulation:
"Reached end of recorded history; stopping.
Following forward execution will be added to history."
The reason for this change is that the initial message was misleading in
the forward case, making the user believe that forward debugging could not
continue.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31224
Reviewed-By: Markus T. Metzger <markus.t.metzger@intel.com> (btrace)
Approved-By: Guinevere Larsen <blarsen@redhat.com>
|
|
Call the ptwrite filter function whenever a ptwrite event is decoded.
The returned string is written to the aux_data string table and a
corresponding auxiliary instruction is appended to the function segment.
Approved-By: Markus Metzger <markus.t.metzger@intel.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This function allows clearing the trace data from Python, forcing GDB
to re-decode the trace for subsequent commands.
This will be used in future ptwrite patches, to trigger re-decoding when
the ptwrite filter changes.
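A minimal usage sketch from the Python prompt (assuming the function is
exposed as a clear() method on gdb.Record; the exact name is not spelled
out in this message, so treat it as an assumption):
import gdb
rec = gdb.current_recording()   # the active gdb.Record object, or None
if rec is not None:
    rec.clear()                 # discard decoded trace data; it will be
                                # re-decoded for subsequent record commands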
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Markus Metzger <markus.t.metzger@intel.com>
|
|
QNX Neutrino support was removed here [1], but I forgot to mention it in
NEWS.
[1] https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=36fb20fa93484b104d91e42e38930ee8629192ab
Change-Id: I8db7957acdd0be3c1e0b751c7c245870c4cd7101
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit moves aarch64_linux_memtag_matches_p,
aarch64_linux_set_memtags, aarch64_linux_get_memtag, and
aarch64_linux_memtag_to_string hooks (plus the aarch64_mte_get_atag
function used by them), along with the setting of the memtag granule
size, from aarch64-linux-tdep.c to aarch64-tdep.c, making MTE available
on baremetal targets. Since the aarch64-linux-tdep.c layer inherits
these hooks from aarch64-tdep.c, there is no effective change for
aarch64-linux targets.
Helpers used both by aarch64-tdep.c and by aarch64-linux-tdep.c were
moved from arch/aarch64-mte-linux.{c,h} to new arch/aarch64-mte.{c,h}
files.
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
The DAP spec recently changed to add a new scope for the return value
from a "stepOut" request. This new scope uses the "returnValue"
presentation hint. See:
https://github.com/microsoft/debug-adapter-protocol/issues/458
This patch implements this for gdb.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31945
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit adds the GDB side of target_ops::fileio_stat. There's an
implementation for inf_child_target, which just calls 'lstat', and
there's an implementation for remote_target, which sends a new
vFile:stat packet.
The new packet is documented.
There's still no users of target_fileio_stat as I have not yet added
support for vFile::stat to gdbserver. If these packets are currently
sent to gdbserver then they will be reported as not supported and the
ENOSYS error code will be returned.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
In commit 1a7d840a216 ("[gdb/tdep] Fix ARM_LINUX_JB_PC_EABI"), in the absence of
osabi settings for newlib and uclibc for arm, I chose a best-effort approach
using ifdefs.
Post-commit review [1] pointed out that this may be causing more problems than
it's worth.
Fix this by removing the ifdefs and simply defining ARM_LINUX_JB_PC_EABI to 1.
Rebuild on x86_64-linux with --enable-targets=all.
Fixes: 1a7d840a216 ("[gdb/tdep] Fix ARM_LINUX_JB_PC_EABI")
[1] https://sourceware.org/pipermail/gdb-patches/2024-June/209779.html
|
|
A co-worker requested that the DAP code emit a scope for global
variables. It's not really practical to do this for all globals, but
it seemed reasonable to do this for globals coming from the frame's
compilation unit. For Ada in particular, this is convenient as it
exposes package-scoped variables.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit adds a new section for the next release branch, and renames
the section of the current branch, now that it has been cut.
|
|
Currently, when a user tries to list the current location, there are two
different error messages that can occur, either:
(gdb) list .
No symbol table is loaded. Use the "file" command.
or
(gdb) list .
No debug information available to print source lines.
The difference here is whether gdb can find any symtabs at all, which
is not something too important for end-users - and isn't informative at
all. This commit changes it so that the error always says that there
isn't debug information available, with these two variants:
(gdb) list .
Insufficient debug info for showing source lines at current PC (0x55555555511d).
or
(gdb) list .
Insufficient debug info for showing source lines at default location.
The difference now is whether the inferior has started already, which is
controlled by the user and may be useful.
Unfortunately, it isn't as easy to determine whether the symtab found for
other list parameters is correct, so other invocations, such as "list +",
still retain their original error message.
Co-Authored-By: Simon Marchi <simark@simark.ca>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit documents the qIsAddressTagged packet.
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Eli Zaretskii <eliz@gnu.org>
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
I spotted that the two functions:
record_full_open_1
record_full_core_open_1
both took two arguments, neither of which are used.
I stumbled onto this while reviewing how filename_completer is used.
The 'record full restore' command uses filename_completer and invokes
the cmd_record_full_restore function.
The cmd_record_full_restore function calls core_file_command and then
record_full_open, which then calls one of the above functions.
As 'record full restore' takes a filename, this is passed to
cmd_record_full_restore, which forwards the filename to both
core_file_command and record_full_open. However, record_full_open
never actually uses the filename that is passed in.
The record_full_open function is also used for 'target record-full'.
I propose that record_full_open should no longer expect to see any
user supplied arguments passed in (it doesn't use any). In fact, I've
added a check that if we do get any user supplied arguments we'll
throw an error.
Now that we know record_full_open isn't being passed any user
arguments we can stop passing the arguments to record_full_open_1 and
record_full_core_open_1, this will make no user visible difference as
these arguments were not used.
It is possible that a user was previously doing:
(gdb) target record-full blah blah blah
And this previously would work fine; the 'blah blah blah' was
ignored. Now this will give an error. Other than this case there
should be no user visible changes after this commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
We now have unwind-on-timeout and unwind-on-terminating-exception, and
then the odd one out unwindonsignal.
I'm not a great fan of these squashed together command names, so in
this commit I propose renaming this to unwind-on-signal.
Obviously I've added the hidden alias unwindonsignal so any existing
GDB scripts will keep working.
There's one test that I've extended to test the alias works, but in
most of the other test scripts I've changed over to use the new name.
The docs are updated to reference the new name.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Tested-By: Luis Machado <luis.machado@arm.com>
Tested-By: Keith Seitz <keiths@redhat.com>
|
|
Now that inferior function calls can timeout (see the recent
introduction of direct-call-timeout and indirect-call-timeout), this
commit adds a new setting unwind-on-timeout.
This new setting is just like the existing unwindonsignal and
unwind-on-terminating-exception, but the new setting will cause GDB to
unwind the stack if an inferior function call times out.
The existing inferior function call timeout tests have been updated to
cover the new setting.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Tested-By: Luis Machado <luis.machado@arm.com>
Tested-By: Keith Seitz <keiths@redhat.com>
|
|
In the previous commits I have been working on improving inferior
function call support. One thing that worries me about using inferior
function calls from a conditional breakpoint is: what happens if the
inferior function call fails?
If the failure is obvious, e.g. the thread performing the call
crashes, or hits a breakpoint, then this case is already well handled,
and the error is reported to the user.
But what if the thread performing the inferior call just deadlocks?
If the user made the call from a 'print' or 'call' command, then the
user might have some expectation of when the function call should
complete, and, when this time limit is exceeded, the user
will (hopefully) interrupt GDB and regain control of the debug
session.
But, when the inferior function call is from a breakpoint condition it
is much harder to understand that GDB is deadlocked within an inferior
call. Maybe the breakpoint hasn't been hit yet? Or maybe the
condition was always false? Or maybe GDB is deadlocked in an inferior
call? The only way to know for sure is for the user to periodically
interrupt the inferior, check on the state of all the threads, and
then continue.
Additionally, the focus of the previous commit was inferior function
calls, from a conditional breakpoint, in a multi-threaded inferior.
This opens up a whole new set of potential failure conditions. For
example, what if the function called relies on interaction with some
other thread, and the other thread crashes? Or hits a breakpoint?
Given how inferior function calls work (in a synchronous manner), a
stop event in some other thread is going to be ignored while the
inferior function call is being executed as part of a breakpoint
condition, and this means that GDB could get stuck waiting for the
original condition thread, which will now never complete.
In this commit I propose a solution to this problem. A timeout. For
targets that support async-mode we can install an event-loop timer
before starting the inferior function call. When the timer expires we
will stop the thread performing the inferior function call. With this
mechanism in place a user can be sure that any inferior call they make
will either complete, or timeout eventually.
Adding a timer like this is obviously a change in behaviour for the
more common 'call' and 'print' uses of inferior function calls, so, in
this patch, I propose having two different timers. One I call the
'direct-call-timeout', which is used for 'call' and 'print' commands.
This timeout is by default set to unlimited, which, not surprisingly,
means there is no timeout in place.
A second timer, which I've called 'indirect-call-timeout', is used for
inferior function calls from breakpoint conditions. This timeout has
a default value of 30 seconds. This is a reasonably long time to
wait, and hopefully should be enough in most cases to allow the
inferior call to complete. An inferior call that takes more than 30
seconds and is installed on a breakpoint condition is really going
to slow down the debug session, so hopefully this is not a common use
case.
The user is, of course, free to reduce, or increase the timeout value,
and can always use Ctrl-c to interrupt an inferior function call, but
this timeout will ensure that GDB will stop at some point.
The new commands added by this commit are:
set direct-call-timeout SECONDS
show direct-call-timeout
set indirect-call-timeout SECONDS
show indirect-call-timeout
These new timeouts do depend on async-mode, so, if async-mode is
disabled (maint set target-async off), or not supported (e.g. target
sim), then the timeout is treated as unlimited (that is, no timeout is
set).
For targets that "fake" non-async mode, e.g. Linux native, where
non-async mode is really just async mode, but then we park the target
in a sigsuspend, we could easily fix things so that the timeouts still
work, however, for targets that really are not async aware, like the
simulator, fixing things so that timeouts work correctly would be a
much bigger task - that effort would be better spent just making the
target async-aware. And so, I'm happy for now that this feature will
only work on async targets.
The two new show commands will display slightly different text if the
current target is a non-async target, which should allow users to
understand what's going on.
There's a somewhat random test adjustment needed in gdb.base/help.exp,
the test uses a regexp with the apropos command, and expects to find a
single result. Turns out the new settings I added also matched the
regexp, which broke the test. I've updated the regexp a little to
exclude my new settings.
Reviewed-By: Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Tested-By: Luis Machado <luis.machado@arm.com>
Tested-By: Keith Seitz <keiths@redhat.com>
|
|
This commit teaches GDB's gcore command to generate sparse core files
(if supported by the filesystem).
To create a sparse file, all you have to do is skip writing zeros to
the file, instead lseek'ing-ahead over them.
The sparse logic is applied when writing the memory sections, as
that's where the bulk of the data and the zeros are.
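As a rough sketch of the idea (in Python rather than GDB's C++, and
ignoring details such as punching a trailing hole with ftruncate):
import os

def write_sparse(fd, data, block=4096):
    # Write 'data' (bytes) to fd, seeking over all-zero blocks so the
    # filesystem can leave holes instead of storing zeros.
    for off in range(0, len(data), block):
        chunk = data[off:off + block]
        if chunk.count(0) == len(chunk):           # block is all zeros
            os.lseek(fd, len(chunk), os.SEEK_CUR)  # skip: leaves a hole
        else:
            os.write(fd, chunk)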
The commit also tweaks gdb.base/bigcore.exp to make it exercise
gdb-generated cores in addition to kernel-generated cores. We
couldn't do that before, because GDB's gcore on that test's program
would generate a multi-GB non-sparse core (16GB on my system).
After this commit, gdb.base/bigcore.exp generates, when testing with
GDB's gcore, a much smaller core file, roughly in line with what the
kernel produces:
real sizes:
$ du --hu testsuite/outputs/gdb.base/bigcore/bigcore.corefile.*
2.2M testsuite/outputs/gdb.base/bigcore/bigcore.corefile.gdb
2.0M testsuite/outputs/gdb.base/bigcore/bigcore.corefile.kernel
apparent sizes:
$ du --hu --apparent-size testsuite/outputs/gdb.base/bigcore/bigcore.corefile.*
16G testsuite/outputs/gdb.base/bigcore/bigcore.corefile.gdb
16G testsuite/outputs/gdb.base/bigcore/bigcore.corefile.kernel
Time to generate the core also goes down significantly. On my machine, I get:
when writing to an SSD, from 21.0s, down to 8.0s
when writing to an HDD, from 31.0s, down to 8.5s
The changes to gdb.base/bigcore.exp are smaller than they look at
first sight. It's basically mostly refactoring -- moving most of the
code to a new procedure which takes as argument who should dump the
core, and then calling the procedure twice. I purposely did not
modernize any of the refactored code in this patch.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31494
Reviewed-By: Lancelot Six <lancelot.six@amd.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
Change-Id: I2554a6a4a72d8c199ce31f176e0ead0c0c76cff1
|
|
This patch deprecates the MPX commands "show/set mpx bound".
Intel listed Intel(R) Memory Protection Extensions (MPX) as removed
in 2019. Following gcc v9.1, the Linux kernel v5.6, and glibc v2.35,
this patch deprecates MPX in GDB.
|
|
This documents the new Python and Guile constants introduced earlier
in this series.
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
The gdb.Objfile, gdb.Progspace, gdb.Type, and gdb.Inferior Python
types already have a __dict__ attribute, which allows users to create
user defined attributes within the objects. This is useful if the
user wants to cache information within an object.
This commit adds the same functionality to the gdb.InferiorThread
type.
After this commit there is a new gdb.InferiorThread.__dict__
attribute, which is a dictionary. A user can, for example, do this:
(gdb) pi
>>> t = gdb.selected_thread()
>>> t._user_attribute = 123
>>> t._user_attribute
123
>>>
There's a new test included.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The gdb.Objfile, gdb.Progspace, and gdb.Type Python types already have
a __dict__ attribute, which allows users to create user defined
attributes within the objects. This is useful if the user wants to
cache information within an object.
This commit adds the same functionality to the gdb.Inferior type.
After this commit there is a new gdb.Inferior.__dict__ attribute,
which is a dictionary. A user can, for example, do this:
(gdb) pi
>>> i = gdb.selected_inferior()
>>> i._user_attribute = 123
>>> i._user_attribute
123
>>>
There's a new test included.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that it is possible for the user to create a new
gdb.Progspace object, like this:
(gdb) pi
>>> p = gdb.Progspace()
>>> p
<gdb.Progspace object at 0x7ffad4219c10>
>>> p.is_valid()
False
As the new gdb.Progspace object is not associated with an actual C++
program_space object within GDB core, then the new gdb.Progspace is
created invalid, and there is no way in which the new object can ever
become valid.
Nor do I believe there's anywhere in the Python API where it makes
sense to consume an invalid gdb.Progspace created in this way, for
example, the gdb.Progspace could be passed as the locus to
register_type_printer, but all that would happen is that the
registered printer would never be used.
In this commit I propose to remove the ability to create new
gdb.Progspace objects. Attempting to do so now gives an error, like
this:
(gdb) pi
>>> gdb.Progspace()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot create 'gdb.Progspace' instances
Of course, there is a small risk here that some existing user code
might break ... but if that happens I don't believe the user code can
have been doing anything useful, so I see this as a small risk.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit adds a new InferiorThread.ptid_string attribute. This
read-only attribute contains the string returned by target_pid_to_str,
which actually converts a ptid (not pid) to a string.
This is the string that appears (at least in part) in the output of
'info threads' in the 'Target Id' column, but also in the thread
exited message that GDB prints.
Having access to this string from Python is useful for allowing
extensions to identify threads in a similar way to how GDB core would
identify the thread.
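A minimal usage sketch (the loops use existing gdb.Inferior and
gdb.InferiorThread API; only ptid_string is new here):
import gdb

for inf in gdb.inferiors():
    for thr in inf.threads():
        # ptid_string holds the same text shown in the 'Target Id'
        # column of 'info threads' for this thread.
        print(thr.global_num, thr.ptid_string)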
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
For testing, it's sometimes convenient to be able to request that
DWARF reading be done synchronously. This patch adds a new "maint"
setting for this purpose.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit adds a mechanism for GDB to detect the linetable opcode
DW_LNS_set_epilogue_begin. This opcode is set by compilers to indicate
that a certain instruction marks the point where the frame is destroyed.
While the standard allows for multiple points marked with epilogue_begin
in the same function, for performance reasons, the function that
searches for the epilogue address will only find the last address that
sets this flag for a given block.
This commit also changes amd64_stack_frame_destroyed_p_1 to attempt to
use the epilogue begin directly, and only if an epilogue can't be found
will it attempt heuristics based on the current instruction.
Finally, this commit also changes the dwarf assembler to be able to emit
epilogue-begin instructions, to make it easier to test this patch.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This adds a new parameter to control the DAP logging level. By
default, "expected" exceptions are not logged, but the parameter lets
the user change this when more logging is desired.
This also changes a couple of spots to avoid logging the stack trace
for a DAPException.
This patch also documents the existing DAP logging parameter. I
forgot to document this before.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
In many cases, it's not possible for gdb to discover the executable
when a DAP 'attach' request is used. This patch lets the IDE supply
this information.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This implements DAP cancellation. A new object is introduced that
handles the details of cancellation. While cancellation is inherently
racy, this code attempts to make it so that gdb doesn't inadvertently
cancel the wrong request.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30472
Approved-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
DAP cancellation needs a way to interrupt whatever is happening on
gdb's main thread -- whether that is the inferior, a gdb CLI command,
or Python code.
This patch adds a new gdb.interrupt() function for this purpose. It
simply sets the quit flag and lets gdb do the rest.
No tests in this patch -- instead this is tested via the DAP
cancellation tests.
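An illustrative sketch of how an extension might use it (the use of
Python's threading module and the 5 second value are assumptions of this
example, not part of the change; as in the DAP case, the call is made
from a thread other than gdb's main thread):
import gdb
import threading

def interrupt_after(seconds):
    # gdb.interrupt() just sets the quit flag; gdb notices it on the
    # main thread and interrupts whatever is currently running.
    threading.Timer(seconds, gdb.interrupt).start()

interrupt_after(5)
gdb.execute("continue")   # interrupted after ~5 seconds if still running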
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
|
|
This changes Python stop events to carry a "details" dictionary, that
holds any relevant information about the stop. The details are
constructed using more or less the same procedure as is done for MI.
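A minimal sketch of consuming the new dictionary (the exact keys depend
on the stop reason; 'reason' is mentioned here only as an example of an
MI-style field):
import gdb

def on_stop(event):
    # 'details' is the new dictionary describing this stop, built in
    # much the same way as the MI *stopped record.
    print("stop details:", event.details)

gdb.events.stop.connect(on_stop)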
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=13587
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Now that DAP is in GDB 14, significant changes to it should be noted
in NEWS. This patch adds a note for a fix that's already gone in. I
started a new section in NEWS because more changes are coming.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30473
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
Building on the last commit, which added a general --debug=COMPONENT
option to the gdbserver command line, this commit updates the monitor
command to allow for general:
(gdb) monitor set debug COMPONENT off|on
style commands. Just like with the previous commit, the COMPONENT can
be any one of all, threads, remote, event-loop, and correspond to the
same set of global debug flags.
While on the command line it is possible to do:
--debug=remote,event-loop,threads
the components have to be entered one at a time with the monitor
command. I guess there's no reason why we couldn't allow component
grouping within the monitor command, but (to me) what I have here
seemed more in the spirit of GDB's existing 'set debug ...' commands.
If people want it then we can always add component grouping later.
Notice in the above that I use 'off' and 'on' instead of '0' and '1',
which is what the 'monitor set debug' command used to use. The 0/1
can still be used, but I now advertise off/on in all the docs and help
text; again, this feels more in line with GDB's existing boolean
settings.
I have removed the two existing monitor commands:
monitor set remote-debug 0|1
monitor set event-loop-debug 0|1
These are replaced by:
monitor set debug remote off|on
monitor set debug event-loop off|on
respectively.
Approved-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Currently, gdbserver has the following command line options related to
debugging output:
--debug
--remote-debug
--event-loop-debug
This doesn't scale well. If I want an extra debug component I need to
add another command line flag.
This commit changes --debug to take a list of components.
The currently supported components are: all, threads, remote, and
event-loop. The 'threads' component represents the debug we currently
get from the --debug option. And if --debug is used without a
component list then the threads component is assumed as the default.
Currently the threads component actually includes a lot of output that
is not really threads related. In the future I'd like to split this
up into some new, separate components. But that is not part of this
commit, or even this series.
The special component 'all' does what you'd expect: enables debug
output from all supported components.
The component list is parsed left to right, and you can prefix a
component with '-' to disable that component, so I can write:
target> gdbserver --debug=all,-event-loop
to get debug for all components except the event-loop component.
I've removed the existing --remote-debug and --event-loop-debug
command line options, these are equivalent to --debug=remote and
--debug=event-loop respectively, or --debug=remote,event-loop to
enable both components.
In this commit I've only updated the command line options; in the next
commit I'll update the monitor commands to support a similar
interface.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Spotted that we'd gained two 'New commands' sections in our 'Changes
since GDB 14' NEWS file block. I've merged them together.
Also, one of these two sections was actually called 'New Commands',
however, all the other places in the NEWS file use 'New commands', so
I've changed to use that.
|
|
This adds a new gdb.Frame.static_link method to the gdb Python layer.
This can be used to find the static link frame for a given frame.
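A minimal usage sketch (this assumes the method returns the enclosing
frame, or None when the selected frame has no static link; the commit
message doesn't spell out the no-link case):
import gdb

frame = gdb.selected_frame()
outer = frame.static_link()   # lexically enclosing frame, via the static link
if outer is not None:         # assumed None when there is no static link
    print("enclosing frame:", outer.name())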
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This commit builds on the previous commit, and implements the
extension_language_ops::handle_missing_debuginfo function for Python.
This hook will give user supplied Python code a chance to help find
missing debug information.
The implementation of the new hook is pretty minimal within GDB's C++
code; most of the work is out-sourced to a Python implementation which
is modelled heavily on how GDB's Python frame unwinders are
implemented.
The following new commands are added as commands implemented in
Python, this is similar to how the Python unwinder commands are
implemented:
info missing-debug-handlers
enable missing-debug-handler LOCUS HANDLER
disable missing-debug-handler LOCUS HANDLER
To make use of this extension hook a user will create missing debug
information handler objects, and registers these handlers with GDB.
When GDB encounters an objfile that is missing debug information, each
handler is called in turn until one is able to help. Here is a
minimal handler that does nothing useful:
import gdb
import gdb.missing_debug

class MyFirstHandler(gdb.missing_debug.MissingDebugHandler):
    def __init__(self):
        super().__init__("my_first_handler")

    def __call__(self, objfile):
        # This handler does nothing useful.
        return None

gdb.missing_debug.register_handler(None, MyFirstHandler())
Returning None from the __call__ method tells GDB that this handler
was unable to find the missing debug information, and GDB should ask
any other registered handlers.
By extending the __call__ method it is possible for the Python
extension to locate the debug information for objfile and return a
value that tells GDB how to use the information that has been located.
Possible return values from a handler:
- None: This means the handler couldn't help. GDB will call other
registered handlers to see if they can help instead.
- False: The handler has done all it can, but the debug information
for the objfile still couldn't be found. GDB will not call
any other handlers, and will continue without the debug
information for objfile.
- True: The handler has installed the debug information into a
location where GDB would normally expect to find it. GDB
should look again for the debug information.
- A string: The handler can return a filename, which is the file
containing the missing debug information. GDB will load
this file.
When a handler returns True, GDB will look again for the debug
information, but only using the standard built-in build-id and
.gnu_debuglink based lookup strategies. It is not possible for an
extension to trigger another debuginfod lookup; the assumption is that
the debuginfod server is remote, and out of the control of extensions
running within GDB.
Handlers can be registered globally, or per program space. GDB checks
the handlers for the current program space first, and then all of the
global handlers. The first handler that returns a value that is not
None has "handled" the objfile, at which point GDB continues.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit documents in both manual and NEWS:
- the new remote clone event stop reply,
- the new QThreadOptions packet and its current defined options,
- the associated "set/show remote thread-events-packet" command,
- and the associated QThreadOptions qSupported feature.
Approved-By: Eli Zaretskii <eliz@gnu.org>
Change-Id: Ic1c8de1fefba95729bbd242969284265de42427e
|
|
If scheduler-locking is in effect, e.g., with "set scheduler-locking
on", and you step over a function that spawns a new thread, the new
thread is allowed to run free, at least until some event is hit, at
which point, whether the new thread is re-resumed depends on a number
of seemingly random factors. E.g., if the target is all-stop, and the
parent thread hits a breakpoint, and GDB decides the breakpoint isn't
interesting to report to the user, then the parent thread is resumed,
but the new thread is left stopped.
I think that letting the new threads run with scheduler-locking
enabled is a defect. This commit fixes that, making use of the new
clone events on Linux, and of target_thread_events() on targets where
new threads have no connection to the thread that spawned them.
Testcase and documentation changes included.
Approved-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: Ie12140138b37534b7fc1d904da34f0f174aa11ce
|
|
This adds a maintenance command that lets you list all the LWPs under
control of the linux-nat target.
For example:
(gdb) maint info linux-lwps
LWP Ptid Thread ID
560948.561047.0 None
560948.560948.0 1.1
This shows that "560948.561047.0" LWP doesn't map to any thread_info
object, which is bogus. We'll be using this in a testcase in a
following patch.
Co-Authored-By: Pedro Alves <pedro@palves.net>
Change-Id: Ic4e9e123385976e5cd054391990124b7a20fb3f5
|
|
The disassembler gained a new /b flag in this commit:
commit d4ce49b7ac077a9882d6a5e689e260300045ca88
Date: Tue Jun 21 20:23:35 2022 +0100
gdb: disassembler opcode display formatting
The /b and /r flags result in the instruction opcodes being displayed in
different formats, so it's not possible to have both at the same
time. Currently the /b flag overrides the /r flag.
We have a similar situation with the /m and /s flags, but here, if the
user tries to use both flags then they will get an error.
I think the error is clearer, so in this commit I propose that we add
an error if /r and /b are both used.
Obviously this change breaks backwards compatibility. I don't have a
compelling argument for why we should make the change beyond my
feeling that it was a mistake not to add this error from the start,
and that the new behaviour is better.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This patch proposes to require a C++17 compiler to build gdb /
gdbsupport / gdbserver. Before this patch, GDB required a C++11
compiler.
The general policy regarding bumping C++ language requirement in GDB (as
stated in [1]) is:
Our general policy is to wait until the oldest compiler that
supports C++NN is at least 3 years old.
Rationale: We want to ensure reasonably widespread compiler
availability, to lower barrier of entry to GDB contributions, and to
make it easy for users to easily build new GDB on currently
supported stable distributions themselves. 3 years should be
sufficient for latest stable releases of distributions to include a
compiler for the standard, and/or for new compilers to appear as
easily installable optional packages. Requiring everyone to build a
compiler first before building GDB, which would happen if we
required a too-new compiler, would cause too much inconvenience.
See the policy proposal and discussion
[here](https://sourceware.org/ml/gdb-patches/2016-10/msg00616.html).
The first GCC release with full C++17 support is GCC-9[2],
released in 2019[3], which is over 4 years ago. Clang has had C++17
support since Clang-5[4] released in 2018[5].
Discussions with many distros showed that a C++17-able compiler is
always available, meaning that there is no hard blocker preventing us from
requiring it going forward.
[1] https://sourceware.org/gdb/wiki/Internals%20GDB-C-Coding-Standards#When_is_GDB_going_to_start_requiring_C.2B-.2B-NN_.3F
[2] https://gcc.gnu.org/projects/cxx-status.html#cxx17
[3] https://gcc.gnu.org/gcc-9/
[4] https://clang.llvm.org/cxx_status.html
[5] https://releases.llvm.org/
Change-Id: Id596f5db17ea346e8a978668825787b3a9a443fd
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Pedro Alves <pedro@palves.net>
|