This commit:
commit 288712bbaca36bff6578bc839ebcdc3707662f81
Date: Mon Nov 22 15:16:27 2021 +0000
gdb/remote: use scoped_restore to control starting_up flag
introduced a use after free bug. The scoped restore added in the
above commit resets a flag within a remote_target's remote_state
object.
However, in some situations, the remote_target can be unpushed before
the error is thrown. If the only reference to the target is the one
in the target stack, then unpushing the target will cause the
remote_target to be deleted, which, in turn, will delete the
remote_state object. The scoped restore will then try to reset the
flag within a deleted object.
This problem was caught in the gdb.server/server-connect.exp test,
which, when run with the address sanitizer enabled, highlights the
write after free bug described above.
This commit resolves this issue by adding a new class specifically for
the purpose of managing the starting_up flag. As well as setting, and
then clearing the starting_up flag, this new class increments, and
then decrements the reference count on the remote_target object. This
prevents the remote_target from being deleted until after the flag has
been reset.
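The fix has roughly the shape of the following RAII guard. This is a
minimal, self-contained sketch with made-up names (tracked_target,
starting_up_guard); the real class manipulates the remote_target
reference count directly rather than going through std::shared_ptr:
  #include <cassert>
  #include <memory>

  struct tracked_target
  {
    bool starting_up = false;
  };

  class starting_up_guard
  {
  public:
    explicit starting_up_guard (std::shared_ptr<tracked_target> t)
      : m_target (std::move (t))  /* Hold a reference for the guard's lifetime.  */
    {
      m_target->starting_up = true;
    }

    ~starting_up_guard ()
    {
      /* Even if every other reference was dropped while the flag was
         set (e.g. the target was unpushed), the guard's own reference
         keeps the object alive, so this write is safe.  */
      m_target->starting_up = false;
    }

    starting_up_guard (const starting_up_guard &) = delete;
    starting_up_guard &operator= (const starting_up_guard &) = delete;

  private:
    std::shared_ptr<tracked_target> m_target;
  };

  int main ()
  {
    auto target = std::make_shared<tracked_target> ();
    {
      starting_up_guard guard (target);
      assert (target->starting_up);
      /* ... work that may throw or drop other references ... */
    }
    assert (!target->starting_up);
  }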
The gdb.server/server-connect.exp test now runs cleanly with the address
sanitizer enabled.
|
|
That xstrdup is not correct, since we are assigning an std::string. The
result of xstrdup is used to initialize the string, and then lost
forever. Remove it.
Change-Id: Ief7771055e4bfd643ef3b285ec9fb7b1bfd14335
|
|
Commit ab557072b8ec ("gdb: use actual DWARF version in compunit's
debugformat field") changes the debug format string in "info source" to
show the actual DWARF version, rather than always showing "DWARF 2".
However, it failed to consider that some tests checked for the "DWARF 2"
string to see if the test program is compiled with DWARF debug
information. Since everything is compiled with DWARF 4 or 5 nowadays,
that changed the behavior of those tests. Notably, it prevented the
tests using skip_inline_var_tests from running.
I grepped through the testsuite for "DWARF 2" and changed all
occurrences I could find to use "DWARF [0-9]" instead (that string is
passed to TCL's string match).
Change-Id: Ic7fb0217fb9623880c6f155da6becba0f567a885
|
|
In the gdb.ada/fixed_points_function.exp testcase, we have the following
Ada code...
type FP1_Type is delta 0.1 range -1.0 .. +1.0; -- Ordinary
function Call_FP1 (F : FP1_Type) return FP1_Type is
begin
FP1_Arg := F;
return FP1_Arg;
end Call_FP1;
... used as follows:
F1 : FP1_Type := 1.0;
F1 := Call_FP1 (F1);
The testcase, among other things, verifies that "return" works
properly as follows:
| (gdb) return 1.0
| Make pck.call_fp1 return now? (y or n) y
| [...]
| 9 F1 := Call_FP1 (F1);
| (gdb) next
| (gdb) print f1
| $1 = 0.0625
The output of the last command shows that we returned the wrong
value. The value printed gives a clue about the problem, since
it is 1/16th of the value we expected, where 1/16 is FP1_Type's
scaling factor.
The problem here comes from the fact that the function handling
return values for base types (ppc64_sysv_abi_return_value_base)
writes the return value using unpack_long which, upon seeing that
the value being unpacked is a fixed-point type, applies the scaling
factor to get the integer representation of our fixed-point value
(similar to what it does with floats, for instance).
So, the fix consists in teaching ppc64_sysv_abi_return_value_base
about fixed-point types, so as to avoid the unwanted application
of the scaling factor.
Note that the "finish" function, on the other hand, does not
suffer from this issue, simply becaue the value returned by
the function is read from register without the use of a type,
thus avoiding an unwanted application of a scaling factor.
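The arithmetic behind the wrong value, as a standalone snippet
(variable names are illustrative):
  #include <cstdio>

  int main ()
  {
    const double scale = 1.0 / 16;   /* FP1_Type's scaling factor.  */

    int correct_raw = 16;   /* What should reach the register: 1.0 encoded as 16.  */
    int buggy_raw = 1;      /* What unpack_long produced: scaling applied once too often.  */

    std::printf ("correct: %g\n", correct_raw * scale);   /* 1 */
    std::printf ("buggy:   %g\n", buggy_raw * scale);     /* 0.0625 */
  }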
No test added, as this change is already tested by
gdb.ada/fixed_points_function.exp.
Co-Authored-By: Tristan Gingold <gingold@adacore.com>
|
|
This commit adds support for TYPE_CODE_FIXED_POINT types for
"finish" and "return" commands.
Consider the following Ada code...
type FP1_Type is delta 0.1 range -1.0 .. +1.0; -- Ordinary
function Call_FP1 (F : FP1_Type) return FP1_Type is
begin
FP1_Arg := F;
return FP1_Arg;
end Call_FP1;
... used as follows:
F1 : FP1_Type := 1.0;
F1 := Call_FP1 (F1);
"finish" currently behaves as follow:
| (gdb) finish
| [...]
| Value returned is $1 = 0
We expect the returned value to be "1".
Similarly, "return" makes the function return the wrong value:
| (gdb) return 1.0
| Make pck.call_fp1 return now? (y or n) y
| [...]
| 9 F1 := Call_FP1 (F1);
| (gdb) next
| (gdb) print f1
| $1 = 0.0625
(we expect it to print "1" instead).
This problem comes from the handling of integral return values
when the return value is actually a fixed-point type. Our type
here is actually a range of a fixed point type, but the same
principles should also apply to pure fixed-point types. For
the record, here is what the debugging info looks like:
<1><238>: Abbrev Number: 2 (DW_TAG_subrange_type)
<239> DW_AT_lower_bound : -16
<23a> DW_AT_upper_bound : 16
<23b> DW_AT_name : pck__fp1_type
<23f> DW_AT_type : <0x248>
<1><248>: Abbrev Number: 4 (DW_TAG_base_type)
<249> DW_AT_byte_size : 1
<24a> DW_AT_encoding : 13 (signed_fixed)
<24b> DW_AT_binary_scale: -4
<24c> DW_AT_name : pck__Tfp1_typeB
<250> DW_AT_artificial : 1
... where the scaling factor is 1/16.
Looking at the "finish" command, what happens is that riscv_arg_location
determines that our return value should be returned by parameter using
an integral convention (via builtin type long). And then,
riscv_return_value uses a cast to that builtin type long to
store the value into a buffer with the right register size.
This doesn't work in our case, because the underlying value
returned by the function is unscaled, which means it is 16,
and thus the cast is like doing:
arg_val = (FP1_Type) 16
... In other words, it is trying to create an FP1_Type entity whose
value is 16. Applying the scaling factor, that's 256, and because
the size of FP1_Type is 1 byte, we overflow and thus it ends up
being zero.
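That chain of events can be reproduced with plain integer arithmetic,
with int8_t standing in for the 1-byte FP1_Type:
  #include <cstdint>
  #include <cstdio>

  int main ()
  {
    int unscaled = 16;               /* Raw register contents: 1.0 encoded with scale 1/16.  */

    /* The cast "(FP1_Type) 16" treats 16 as a value, so re-encoding
       it multiplies by 16 again ...  */
    int re_encoded = unscaled * 16;  /* 256 */

    /* ... and 256 does not fit in FP1_Type's single byte, so the
       stored result wraps around to 0, which is what "finish" printed.  */
    int8_t stored = static_cast<int8_t> (re_encoded);
    std::printf ("%d\n", static_cast<int> (stored));   /* 0 */
  }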
The same happens with the "return" command, but the other way around.
The fix consists in handling fixed-point types separately from
integral types.
|
|
Consider the following Ada code:
type FP1_Type is delta 0.1 range -1.0 .. +1.0; -- Ordinary
FP1_Arg : FP1_Type := 0.0;
function Call_FP1 (F : FP1_Type) return FP1_Type is
begin
FP1_Arg := F;
return FP1_Arg;
end Call_FP1;
After having stopped inside function Call_FP1 as follows:
Breakpoint 1, pck.call_fp1 (f=1) at /[...]/pck.adb:5
5 FP1_Arg := F;
Returning from that function call using "finish" should show
that the function returns "1.0" (the same value as was passed
as an argument). However, this is not the case:
(gdb) finish
Run till exit from #0 pck.call_fp1 (f=1)
[...]
9 F1 := Call_FP1 (F1);
Value returned is $1 = 0
This patch enhances the extraction of the return value to know about
fixed point types.
|
|
Consider the following code:
type FP1_Type is delta 0.1 range -1.0 .. +1.0; -- Ordinary
function Call_FP1 (F : FP1_Type) return FP1_Type is
begin
return F;
end Call_FP1;
When the default in GCC is to generate proper DWARF info for
fixed-point types, printing in gdb the result of a call to call_fp1
with a decimal parameter leads to:
(gdb) p call_fp1(0.5)
$1 = 0
The displayed value is wrong, and we actually expected:
(gdb) p call_fp1(0.5)
$1 = 0.5
What happened is that our fixed-point type parameter got promoted to a
32-bit integer because we detected that the length of that object was less
than 4 bytes. The compiler does not perform this promotion and therefore
GDB should not either.
This patch fixes the behavior described above.
|
|
This adds a 'task apply' command, which is the Ada tasking analogue of
'thread apply'. Unlike 'thread apply', it doesn't offer the
'ascending' flag; but otherwise it's essentially the same.
|
|
Breakpoints in gdb can be made specific to an Ada task using the
"task" qualifier. This patch applies this same idea to watchpoints.
|
|
When introducing this code, I forgot that we had some macros for this.
Replace some "manual" pragma diagnostic with some DIAGNOSTIC_* macros,
provided by include/diagnostics.h.
In diagnostics.h:
- Add DIAGNOSTIC_ERROR, to enable a diagnostic at error level.
- Add DIAGNOSTIC_ERROR_SWITCH, to enable -Wswitch at error level, for
both gcc and clang.
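For reference, the macros boil down to something like the following;
this is a simplified sketch, the real include/diagnostics.h
stringifies the option name and handles more compilers:
  #if defined (__GNUC__) || defined (__clang__)
  # define DIAGNOSTIC_PUSH _Pragma ("GCC diagnostic push")
  # define DIAGNOSTIC_POP _Pragma ("GCC diagnostic pop")
  # define DIAGNOSTIC_ERROR_SWITCH \
      _Pragma ("GCC diagnostic error \"-Wswitch\"")
  #else
  # define DIAGNOSTIC_PUSH
  # define DIAGNOSTIC_POP
  # define DIAGNOSTIC_ERROR_SWITCH
  #endif

  /* Typical use: make -Wswitch an error just around a switch that
     must handle every enumerator.  */
  enum task_kind { TASK_UNKNOWN, TASK_LIST };

  static int classify (enum task_kind kind)
  {
    DIAGNOSTIC_PUSH
    DIAGNOSTIC_ERROR_SWITCH
    switch (kind)
      {
      case TASK_UNKNOWN: return 0;
      case TASK_LIST: return 1;
      }
    DIAGNOSTIC_POP
    return -1;
  }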
Additionally, using DIAGNOSTIC_PUSH, DIAGNOSTIC_ERROR_SWITCH and
DIAGNOSTIC_POP seems to misbehave with g++ 4.8, where we see these
errors:
CXX ada-tasks.o
/home/smarchi/src/binutils-gdb/gdb/ada-tasks.c: In function void read_known_tasks():
/home/smarchi/src/binutils-gdb/gdb/ada-tasks.c:998:10: error: enumeration value ADA_TASKS_UNKNOWN not handled in switch [-Werror=switch]
switch (data->known_tasks_kind)
^
Because of the POP, the diagnostic should go back to being disabled,
since it was disabled in the beginning, but that's not what we see
here. Versions of GCC >= 5 compile correctly.
Work around this by making DIAGNOSTIC_ERROR_SWITCH a no-op for GCC < 5.
Note that this code (already as it exists in master today) enables
-Wswitch at the error level even if --disable-werror is passed. It
shouldn't be a problem, as it's not like a new enumerator will appear
out of nowhere and cause a build error if building with future
compilers. Still, for correctness, we would ideally want to ask the
compiler to enable -Wswitch at its default level (as if the user had
passed -Wswitch on the command-line). There doesn't seem to be a way to
do this.
Change-Id: Id33ebec3de39bd449409ea0bab59831289ffe82d
|
|
The "info source" command, with a DWARF-compile program, always show
that the debug info is "DWARF 2":
(gdb) info source
Current source file is test.c
Compilation directory is /home/smarchi/build/binutils-gdb/gdb
Located in /home/smarchi/build/binutils-gdb/gdb/test.c
Contains 2 lines.
Source language is c.
Producer is GNU C17 9.3.0 -mtune=generic -march=x86-64 -g3 -gdwarf-5 -O0 -fasynchronous-unwind-tables -fstack-protector-strong -fstack-clash-protection -fcf-protection.
Compiled with DWARF 2 debugging format.
Includes preprocessor macro info.
Change it to display the actual DWARF version:
(gdb) info source
Current source file is test.c
Compilation directory is /home/smarchi/build/binutils-gdb/gdb
Located in /home/smarchi/build/binutils-gdb/gdb/test.c
Contains 2 lines.
Source language is c.
Producer is GNU C17 9.3.0 -mtune=generic -march=x86-64 -g3 -gdwarf-5 -O0 -fasynchronous-unwind-tables -fstack-protector-strong -fstack-clash-protection -fcf-protection.
Compiled with DWARF 5 debugging format.
Includes preprocessor macro info.
The comp_unit_head::version field is guaranteed to be between 2 and 5,
thanks to the check in read_comp_unit_head. So we can still use static
strings to pass to record_debugformat, and keep it efficient.
In the future, when somebody updates GDB to support DWARF 6, they'll
hit this assert and have to update this code.
Change-Id: I3270b7ebf5e9a17b4215405bd2e365662a4d6172
|
|
With gdb.multi/multi-arch-exec.exp I run into:
...
Running src/gdb/testsuite/gdb.multi/multi-arch-exec.exp ...
ERROR: tcl error sourcing src/gdb/testsuite/gdb.multi/multi-arch-exec.exp.
ERROR: wrong # args: extra words after "else" clause in "if" command
while executing
"if [istarget "powerpc64*-*-*"] {
set march "-m64"
} else if [istarget "s390*-*-*"] {
set march "-m31"
} else {
set march "-m32"
}"
...
Fix the else if -> elseif typo.
Tested on x86_64-linux.
|
|
When running test-case gdb.arch/i386-pkru.exp on a machine with "Memory
Protection Keys for Userspace" support, we run into:
...
(gdb) PASS: gdb.arch/i386-pkru.exp: probe PKRU support
print $pkru^M
$2 = 1431655764^M
(gdb) FAIL: gdb.arch/i386-pkru.exp: pkru register
...
The test-case expects the $pkru register to have the default value 0, matching
the "init state" of 0 defined by the XSAVE hardware.
Since linux kernel version v4.9 containing commit acd547b29880 ("x86/pkeys:
Default to a restrictive init PKRU"), the register is set to 0x55555554 by
default (which matches the printed decimal value above).
Fix the FAIL by accepting this value for linux.
Tested on x86_64-linux.
|
|
This commit makes use of a scoped_restore object to control the
remote_state::starting_up flag within the remote_target::start_remote
method.
Ideally I would have liked to create the scoped_restore inside
start_remote and just leave the restore in place until the end of the
scope, however, I'm worried that doing this would change the behaviour
of GDB. Specifically, in start_remote, the following code is executed
once the starting_up flag has been restored to its previous value:
if (breakpoints_should_be_inserted_now ())
insert_breakpoints ();
I think (but am not 100% sure) that calling insert_breakpoints could
end up back inside remote_target::can_download_tracepoint, which does
check the value of remote_state::starting_up. And so, I'm concerned
that leaving the scoped_restore in place until the end of start_remote
will cause a possible change in behaviour.
To avoid this, and to leave things as close to the current behaviour
as possible, I've split remote_target::start_remote in two: the main
function body moves into remote_target::start_remote_1, which uses
the scoped_restore to change the ::starting_up flag, while the old
remote_target::start_remote now just calls ::start_remote_1 and then
does the insert_breakpoints call.
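A self-contained sketch of that shape; all the names below are
stand-ins (GDB's real scoped_restore lives in
gdbsupport/scoped_restore.h and the real functions take arguments):
  #include <cstdio>

  template<typename T>
  class scoped_restore_stub
  {
  public:
    scoped_restore_stub (T *var, T value)
      : m_var (var), m_saved (*var)
    { *var = value; }

    ~scoped_restore_stub ()
    { *m_var = m_saved; }

  private:
    T *m_var;
    T m_saved;
  };

  static bool starting_up = false;

  static void insert_breakpoints_stub ()
  { std::printf ("inserting breakpoints, starting_up=%d\n", starting_up); }

  static void start_remote_1 ()
  {
    /* Flag is set for this scope only and restored even on error.  */
    scoped_restore_stub<bool> restore (&starting_up, true);
    std::printf ("starting up, starting_up=%d\n", starting_up);
    /* ... the old body of start_remote goes here ... */
  }

  static void start_remote ()
  {
    start_remote_1 ();
    /* The flag is already back to false here, matching the old behaviour.  */
    insert_breakpoints_stub ();
  }

  int main () { start_remote (); }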
There should be no user visible changes after this commit, unless
there's a situation where the ::starting_up flag could previously have
been left set; if that was the case, then that situation should no
longer be possible.
|
|
When my system isn't properly configured to generate core files in the
local directory, I see these DUPLICATEs:
DUPLICATE: gdb.base/corefile-buildid.exp: could not generate core file
Fix that by having a single with_test_prefix around that message and
what follows.
Change-Id: I4ac245fcce1c666db56e3bad3582aa17f183dcba
|
|
The expect file has a procedure append_arch_options which sets march
based on the istarget. The current if / else statement does not check for
powerpc64. The else statement is hit which sets march to -m32. This
results in compilation errors on 64-bit PowerPC.
This patch adds an if statement to check for powerpc64 and if true
sets march to -m64.
The patch was tested on a Power 10 system. No compile errors were generated.
The test completes with 1 expected pass and no failures.
|
|
When running the gdb.python/py-arch.exp tests on a GDB built
against Python 2 I ran into some errors. The problem is that this
test script exercises the gdb.Architecture.integer_type method, and
this method uses 'p' as an argument format specifier in a call to
gdb_PyArg_ParseTupleAndKeywords.
Unfortunately this specifier was only added in Python 3.3, so it will
cause an error for earlier versions of Python.
This commit switches to use the 'O' specifier to collect a PyObject,
and then uses PyObject_IsTrue to convert the object to a boolean.
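The conversion looks roughly like this when written against the plain
CPython API; parse_bool_arg and the "flag" keyword are made up for the
example, and GDB's own argument-parsing wrapper differs in the details:
  #include <Python.h>

  /* Accept one optional argument and interpret it as a boolean the
     way the 'p' format would, but without requiring Python >= 3.3.  */
  static int
  parse_bool_arg (PyObject *args, PyObject *kw, int *result)
  {
    static const char *keywords[] = { "flag", NULL };
    PyObject *obj = Py_None;

    if (!PyArg_ParseTupleAndKeywords (args, kw, "|O", (char **) keywords, &obj))
      return 0;

    /* PyObject_IsTrue accepts any object, so None, strings, ints,
       etc. all behave as they would in an 'if' statement.  */
    int is_true = PyObject_IsTrue (obj);
    if (is_true == -1)
      return 0;   /* A Python error is already set.  */

    *result = is_true;
    return 1;
  }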
An earlier version of this patch incorrectly switched from using 'p'
to use 'i', however, it was pointed out during review that this would
cause some changes in behaviour, for example both of these will work
with 'p', but not with 'i':
gdb.selected_inferior().architecture().integer_type(32, None)
gdb.selected_inferior().architecture().integer_type(32, "foo")
The new approach of using 'O' works fine with these cases. I've added
some new tests to cover both of the above.
There should be no user visible changes after this commit.
|
|
When running test-case gdb.base/style.exp with a gdb build using
stub-termcap.c, we run into:
...
(gdb) PASS: gdb.base/style.exp: all styles enabled: frame when width=20
^M<et width 30^M
(gdb) FAIL: gdb.base/style.exp: all styles enabled: set width 30
...
The problem is that we're trying to issue the command "set width 30" while
width is set to 20, which causes horizontal scrolling.
Fix this by resetting the width to 0 before issuing the "set width 30"
command.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=24582
|
|
The gdb.python/py-inferior-leak.exp test makes use of the tracemalloc
module. When running the Python tests with a GDB built against Python
2 I ran into a test failure due to the tracemalloc module not being
available.
This commit adds a new helper function to lib/gdb-python.exp that
checks if a named module is available. Using this we can then skip
the py-inferior-leak.exp test when the tracemalloc module is not
available.
|
|
After this commit:
commit 76b43c9b5c2b275cbf4f927bfc25984410cb5dd5
Date: Tue Oct 5 15:10:12 2021 +0100
gdb: improve error reporting from the disassembler
We started seeing FAILs in the gdb.base/all-architectures*.exp tests,
when running on a 32-bit ARM target, though I suspect running on any
target that compiles such that bfd_vma is 32-bits would also trigger
the failures.
The problem is that the test expects GDB's disassembler to print
an error like this:
Cannot access memory at address 0x0
However, after the above commit we see an error like:
unknown disassembler error (error = -1)
The reason for this is this code in opcodes/i386-dis.c (in the
print_insn function):
if (address_mode == mode_64bit && sizeof (bfd_vma) < 8)
{
(*info->fprintf_func) (info->stream,
_("64-bit address is disabled"));
return -1;
}
This code effectively disallows us from ever disassembling 64-bit x86
code if we compiled GDB with a 32-bit bfd_vma. Notice we return
-1 (indicating a failure to disassemble), but never call the
memory_error_func callback.
Prior to the above commit, when GDB received the -1 return value it
would assume that a memory error had occurred and just print whatever
value happened to be in the memory error address variable; the default
value of 0 just happened to be fine because the test had asked GDB to
do this: 'disassemble 0x0,+4'.
If we instead change the test to do 'disassemble 0x100,+4' then GDB
would (previously) have still reported:
Cannot access memory at address 0x0
which makes far less sense.
In this commit I propose to fix this issue by changing the test to
accept either the "Cannot access memory ..." string, or the newer
"unknown disassembler error ..." string. With this change done the
test now passes.
However, there is one weakness with this strategy; if GDB broke such
that we _always_ reported "unknown disassembler error ..." we would
never notice. This clearly would be bad. To avoid this issue I have
adjusted the all-architectures*.exp tests so that, when we disassemble
for the default architecture (the one selected by "auto") we _only_
expect to get the "Cannot access memory ..." error string.
[ Note: In an ideal world we should be able to disassemble any
architecture at all times. There's no reason why the 64-bit x86
disassembler requires a 64-bit bfd_vma, other than the code happens
to be written that way. We could rewrite the disassembler to not
have this requirement, but I don't plan to do that any time soon. ]
Further, I have changed the all-architectures*.exp test so that we now
disassemble at address 0x100; this should avoid us being able to pass
by printing a default address of 0x0. I did originally change the
address we disassembled at to 0x4, however, some architectures,
e.g. ia64, have a default instruction alignment that is greater than
4, so would still round down to 0x0. I could have just picked 0x8 as
an address, but I figured that 0x100 was likely to satisfy most
architectures alignment requirements.
|
|
This commit adds a new sub-class of gdb.TargetConnection,
gdb.RemoteTargetConnection. This sub-class is created for all
'remote' and 'extended-remote' targets.
This new sub-class has one additional method over its base class,
'send_packet'. This new method is equivalent to the 'maint
packet' CLI command; it allows a custom packet to be sent to a remote
target.
The outgoing packet can either be a bytes object, or a Unicode string,
so long as the Unicode string contains only ASCII characters.
The result of calling RemoteTargetConnection.send_packet is a bytes
object containing the reply that came from the remote.
|
|
In a later commit I will add a Python API to access the 'maint packet'
functionality, that is, sending a user specified packet to the target.
To make implementing this easier, this commit refactors how this
command is currently implemented so that the packet_command function
is now global.
The new global send_remote_packet function takes an object that is an
implementation of an abstract interface. Two functions within this
interface are then called, one just before a packet is sent to the
remote target, and one when the reply has been received from the
remote target. Using an interface object in this way has two
advantages: (1) the error checking is done before the first callback
is made, which means we only print out what packet is being sent once
we know we are actually going to send it, and (2) we don't need to
make a copy of the reply if all we want to do is print it.
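The interface has roughly the following shape; this is a simplified,
self-contained sketch, the real GDB declarations use different names
and buffer types:
  #include <cstdio>
  #include <string>

  struct packet_callbacks
  {
    virtual ~packet_callbacks () = default;

    /* Called just before the (already validated) packet goes out.  */
    virtual void sending (const std::string &packet) = 0;

    /* Called with the raw reply; implementations that only print it
       never need to copy the data.  */
    virtual void received (const std::string &reply) = 0;
  };

  /* The CLI implementation just prints both sides; a Python caller
     could instead turn the reply into a bytes object.  */
  struct cli_packet_callbacks : public packet_callbacks
  {
    void sending (const std::string &packet) override
    { std::printf ("sending: \"%s\"\n", packet.c_str ()); }

    void received (const std::string &reply) override
    { std::printf ("received: \"%s\"\n", reply.c_str ()); }
  };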
The one user visible change after this commit is to the error
messages, which I've changed to be less 'maint packet' command
focused; this will make them (I hope) better for when
send_remote_packet can be called from Python code.
So: "command can only be used with remote target"
Becomes: "packets can only be sent to a remote target"
And: "remote-packet command requires packet text as argument"
Becomes: "a remote packet must not be empty"
Additionally, in this commit, I've added support for packet replies
that contain binary data. Before this commit, the code that printed
the reply treated the reply as a C string: it assumed that the string
only contained printable characters, and had a null character only at
the end.
One way to show the problem with this is if we try to read the auxv
data from a remote target, the auxv data is binary, so, before this
commit:
(gdb) target remote :54321
...
(gdb) maint packet qXfer:auxv:read::0,1000
sending: "qXfer:auxv:read::0,1000"
received: "l!"
(gdb)
And after this commit:
(gdb) target remote :54321
...
(gdb) maint packet qXfer:auxv:read::0,1000
sending: "qXfer:auxv:read::0,1000"
received: "l!\x00\x00\x00\x00\x00\x00\x00\x00\xf0\xfc\xf7\xff\x7f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\xff\xf>
(gdb)
The binary contents of the reply are now printed as escaped hex.
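The escaping can be done along these lines (illustrative only, not
GDB's actual code):
  #include <cctype>
  #include <cstdio>
  #include <string>

  static void
  print_packet_reply (const std::string &reply)
  {
    std::fputs ("received: \"", stdout);
    for (unsigned char c : reply)
      {
        if (std::isprint (c) && c != '\\' && c != '"')
          std::putchar (c);
        else
          std::printf ("\\x%02x", c);   /* Escape binary and special bytes.  */
      }
    std::fputs ("\"\n", stdout);
  }

  int main ()
  {
    /* A reply with embedded NUL bytes, like the auxv read above.  */
    print_packet_reply (std::string ("l!\0\0\0\0", 6));
  }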
|
|
This commit adds a new object type gdb.TargetConnection. This new
type represents a connection within GDB (a connection as displayed by
'info connections').
There are three ways to find a gdb.TargetConnection. First, there's a
new 'gdb.connections()' function, which returns a list of all
currently active connections.
Or you can read the new 'connection' property on the gdb.Inferior
object type; this contains the connection for that inferior (or None
if the inferior has no connection, for example if it has exited).
Finally, there's a new gdb.events.connection_removed event registry,
this emits a new gdb.ConnectionEvent whenever a connection is removed
from GDB (this can happen when all inferiors using a connection exit,
though this is not always the case, depending on the connection type).
The gdb.ConnectionEvent has a 'connection' property, which is the
gdb.TargetConnection being removed from GDB.
The gdb.TargetConnection has an 'is_valid()' method. A connection
object becomes invalid when the underlying connection is removed from
GDB (as discussed above, this might be when all inferiors using a
connection exit, or it might be when the user explicitly replaces a
connection in GDB by issuing another 'target' command).
The gdb.TargetConnection has the following read-only properties:
'num': The number for this connection,
'type': e.g. 'native', 'remote', 'sim', etc
'description': The longer description as seen in the 'info
connections' command output.
'details': A string or None. Extra details for the connection, for
example, a remote connection's details might be
'hostname:port'.
|
|
The Rust compiler plans to change the encoding of a Rust 'char' type
to use DW_ATE_UTF. You can see the discussion here:
https://github.com/rust-lang/rust/pull/89887
However, this fails in gdb. I looked into this, and it turns out that
the handling of DW_ATE_UTF is currently fairly specific to C++. In
particular, the code here assumes the C++ type names, and it creates
an integer type.
This comes from commit 53e710acd ("GDB thinks char16_t and char32_t
are signed in C++"). The message says:
Both places need fixing. But since I couldn't tell why dwarf2read.c
needs to create a new type, I've made it use the per-arch built-in
types instead, so that the types are only created once per arch
instead of once per objfile. That seems to work fine.
... which is fine, but it seems to me that it's also correct to make a
new character type; and this approach is better because it preserves
the type name as well. This does use more memory, but first we
shouldn't be too concerned about the memory use of types coming from
debuginfo; and second, if we are, we should implement type interning
anyway.
Changing this code to use a character type revealed a couple of
oddities in the C/C++ handling of TYPE_CODE_CHAR. This patch fixes
these as well.
I filed PR rust/28637 for this issue, so that this patch can be
backported to the gdb 11 branch.
|
|
During debuginfod downloads, ctrl-c should result in the download
being cancelled and skipped. However in some cases, ctrl-c fails to
get delivered to gdb during downloading. This can result in downloads
being unskippable.
Fix this by ensuring that target_terminal::ours is in effect for the
duration of each download.
Co-authored-by: Tom de Vries <tdevries@suse.de>
https://sourceware.org/bugzilla/show_bug.cgi?id=27026#c3
|
|
PR28539 describes a segfault in lambda function search_one_symtab due to
psymbol_functions::expand_symtabs_matching calling expansion_notify with a
nullptr symtab:
...
struct compunit_symtab *symtab =
psymtab_to_symtab (objfile, ps);
if (expansion_notify != NULL)
if (!expansion_notify (symtab))
return false;
...
This happens as follows. The partial symtab ps is a dwarf2_include_psymtab
for some header file:
...
(gdb) p ps.filename
$5 = 0x64fcf80 "/usr/include/c++/11/bits/stl_construct.h"
...
The includer of ps is a shared symtab for a partial unit, whose user is:
...
(gdb) p ps.includer().user.filename
$11 = 0x64fc9f0 \
"/usr/src/debug/llvm13-13.0.0-1.2.x86_64/tools/clang/lib/AST/Decl.cpp"
...
The call to psymtab_to_symtab expands the Decl.cpp symtab (and consequently
the shared symtab), but returns nullptr because:
...
struct dwarf2_include_psymtab : public partial_symtab
{
...
compunit_symtab *get_compunit_symtab (struct objfile *objfile) const override
{
return nullptr;
}
...
Fix this by returning the Decl.cpp symtab instead, which fixes the segfault
in the PR.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28539
|
|
The proc 'lines' contains a typo:
...
string_form { set $_line_string_form $value }
...
Remove the incorrect '$' in '$_line_string_form'.
Tested on x86_64-linux.
|
|
While debugging a problem in gdb.dwarf2/dw2-lines.exp, I realized that the
test-case generates all executables and associated temporary files using the
same filenames.
Fix this by adding a new proc prefix_id in lib/gdb.exp, and using it in the
test-case.
Tested on x86_64-linux.
|
|
When running test-case gdb.dwarf2/dw2-lines.exp with target board -unix/-m32,
we run into another instance of PR28383, where the dwarf assembler generates
64-bit relocations which are not supported by the 32-bit assembler:
...
dw2-lines-dw.S: Assembler messages:^M
outputs/gdb.dwarf2/dw2-lines/dw2-lines-dw.S:76: Error: \
cannot represent relocation type BFD_RELOC_64^M
...
Fix this by using _op_offset in _line_finalize_header.
Tested on x86_64-linux.
|
|
The variable names used to restore CFLAGS and LDFLAGS here don't quite
match the names used above, resulting in losing the original CFLAGS and
LDFLAGS. Fix that.
Change-Id: I9cc2c3b48b1dc30c31a7143563c893fd6f426a0a
|
|
In commit f8080fb7a44 "[gdb/testsuite] Add gdb.base/include-main.exp" a
file gdb.base/main.c was added, which caused the following regression:
...
(gdb) list^M
<gdb.base/main.c>
(gdb) FAIL: gdb.base/list-missing-source.exp: list
...
The problem is that the test-case does not expect to find a file main.c, but
now it finds gdb.base/main.c.
Fix this by using the more specific file name list-missing-source.c.
Tested on x86_64-linux.
|
|
The test-case gdb.ada/dgopt.exp uses the -gnatD switch, in combination with
-gnatG.
This causes the source file $src/gdb/testsuite/gdb.ada/dgopt/x.adb to be
expanded into $build/gdb/testsuite/outputs/gdb.ada/dgopt/x.adb.dg, and the
debug information should refer to the x.adb.dg file.
That is the case for the .debug_line part:
...
The Directory Table is empty.
The File Name Table (offset 0x1c):
Entry Dir Time Size Name
1 0 0 0 x.adb.dg
...
but not for the .debug_info part:
...
<11> DW_AT_name : $src/gdb/testsuite/gdb.ada/dgopt/x.adb
<15> DW_AT_comp_dir : $build/gdb/testsuite/outputs/gdb.ada/dgopt
...
Filed as PR gcc/103436.
In C we can generate similar debug information, using a source file that does
not contain any code, but includes another one that does:
...
$ cat gdb/testsuite/gdb.base/include-main.c
#include "main.c"
...
such that in the .debug_line part we have:
...
The Directory Table (offset 0x1c):
1 /home/vries/gdb_versions/devel/src/gdb/testsuite/gdb.base
The File Name Table (offset 0x57):
Entry Dir Time Size Name
1 1 0 0 main.c
...
and in the .debug_info part:
...
<11> DW_AT_name : $src/gdb/testsuite/gdb.base/include-main.c
<15> DW_AT_comp_dir : $build/gdb/testsuite
...
Add a C test-case that mimics gdb.ada/dgopt.exp, that is:
- generate debug info as described above,
- issue a list of a line in include-main.c, while the corresponding
CU is not expanded yet.
Tested on x86_64-linux.
|
|
This commit adds support for RISC-V disassembler options to GDB. This
commit is based on this patch which was never committed:
https://sourceware.org/pipermail/binutils/2021-January/114944.html
All of the binutils refactoring has been moved to a separate, earlier,
commit, so this commit is pretty straight forward, just registering
the required gdbarch hooks.
Co-authored-by: Simon Cook <simon.cook@embecosm.com>
|
|
In this commit:
commit c6a6aad52d9e839d6a84ac31cabe2b7e1a2a31a0
Date: Mon Oct 25 17:25:45 2021 +0100
gdb/python: make some global variables static
building without Python was broken. The extension_language_python
global was moved from being always defined, to only being defined when
the HAVE_PYTHON macro was defined. As a consequence, building without
Python support would result in errors like:
/usr/bin/ld: extension.o:(.rodata+0x120): undefined reference to `extension_language_python'
This commit fixes the problem by moving the definition of
extension_language_python outside of the HAVE_PYTHON macro protection.
|
|
This commit introduced a test failure in gdb.server/attach-flag.exp.
I didn't spot this failure originally as the problem is fixed by this
as-yet-unpushed patch:
https://sourceware.org/pipermail/gdb-patches/2021-November/183768.html
I unfortunately didn't test each patch in the original series
independently. I'll repost this patch after the above patch has been
merged.
This reverts commit 32b1f5e8d6b8ddd3be6e471c26dd85a1dac31dda.
|
|
Basic ambiguity detection assumes that when 2 fields with the same name
have the same byte offset, it must be an unambiguous request. This is not
always correct. Consider the following code:
class empty { };
class A {
public:
[[no_unique_address]] empty e;
};
class B {
public:
int e;
};
class C: public A, public B { };
If we tried to use c.e in code, the compiler would warn of an
ambiguity. However, since A::e does not demand a unique address, it
gets the same address (and thus byte offset) as B::e, so the two
members end up sharing an address. Yet "print c.e" would fail to
report the ambiguity, and would instead print it as an empty class
(first path found).
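For reference, here is a compilable (C++20) version of the example; on
typical ABIs, such as the Itanium C++ ABI used by GCC, the two
qualified members really do end up at the same address:
  #include <cstdio>

  class empty { };
  class A { public: [[no_unique_address]] empty e; };
  class B { public: int e; };
  class C : public A, public B { };

  int main ()
  {
    C c;
    /* "c.e" would not compile: the compiler reports the ambiguity.  */
    std::printf ("A::e at %p\nB::e at %p\n",
                 static_cast<void *> (&c.A::e),
                 static_cast<void *> (&c.B::e));
  }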
The new code solves this by checking for other found_fields that have
different m_struct_path.back() (final class that the member was found
in), despite having the same byte offset.
The testcase gdb.cp/ambiguous.exp was also changed to test for this
behavior.
|
|
In a later commit I plan to add disassembler styling. In the same way
that we have a source_styling_changed observer I would need to add a
disassembler_styling_changed observer.
However, currently, these observers would only be notified from
cli-style.c:set_style_enabled, and observed in tui-winsource.c's
tui_source_window::style_changed. As a result, having two observers
seems unnecessary right now, so in this commit I rename
source_styling_changed to just styling_changed; then, in the later
commit, when disassembler styling is added, I can use the same
observer for both source styling and disassembler styling.
There should be no user visible changes after this commit.
|
|
Make a couple of global variables static in python/python.c. To do
this I had to move the definition of extension_language_python to
later in the file.
There should be no user visible changes after this commit.
|
|
While working on another patch I ended up in a situation where I had
async mode disabled (with 'maint set target-async off'), but the async
event token got marked anyway.
In this situation GDB was continually calling into
remote_target::wait, however, the async token would never become
unmarked as the unmarking is guarded by target_is_async_p.
We could just unconditionally unmark the token, but that would feel
like just ignoring a bug, so, instead, let's assert that if
!target_is_async_p, then the async token should not be marked.
This assertion would have caught my earlier mistake.
There should be no user visible changes with this commit.
|
|
This commit simplifies remote_target::is_async_p by removing the
target_async_permitted check.
In previous commits I have added additional assertions around the
target_async_permitted flag into target.c, as a result we should now
be confident that if target_can_async_p returns false, a target will
never have async mode enabled. Given this, it should not be necessary
to check target_async_permitted in remote_target::is_async_p, if this
flag is false ::is_async_p should return false anyway. There is an
assert to this effect in target_is_async_p.
There should be no user visible change after this commit.
|
|
The target_async_permitted flag allows a user to override whether a
target can act in async mode or not. In previous commits I have moved
the checking of this flag out of the various ::can_async_p methods and
into the common target.c code.
In this commit I will add some additional assertions into
target_is_async_p and target_async. The rules these assertions are
checking are:
1. A target that returns false for target_can_async_p should never
become "async enabled", and so ::is_async_p should always return
false. This is being checked in target_is_async_p.
2. GDB should never try to enable async mode for a target that
returns false for target_can_async_p, this is checked in
target_async.
There are a few places where we call the ::is_async_p method
directly; in these cases we will obviously not pass through the assert
in target_is_async_p. However, there are also plenty of places where
we do call target_is_async_p, so if GDB starts to misbehave we should
catch it quickly enough.
There should be no user visible changes after this commit.
|
|
This commit moves the target_async_permitted check out of each target's
::can_async_p method and into the target_can_async_p wrapper function.
I've left some asserts in the two ::can_async_p methods that I
changed, which will hopefully catch any direct calls to these methods
that might be added in the future.
There should be no user visible changes after this commit.
|
|
There are a few places where we call the target_ops::can_async_p
member function directly, instead of using the target_can_async_p
wrapper.
In some of these places this is because we need to ask before the
target has been pushed, and in another location (in target.c) it seems
unnecessary to go through the wrapper when we are already in target.c
code.
However, in the next commit I'd like to hoist some common checks out
of target specific code into target.c. To achieve this, in this
commit, I introduce a new overload of target_can_async_p which takes a
target_ops pointer, and calls the ::can_async_p method directly. I
then make use of the new overload where appropriate.
There should be no user visible changes after this commit.
|
|
Before commit 3b6acaee895 "Update more calls to add_prefix_cmd" we had the
following output for "show logging file":
...
$ gdb -q -batch -ex "set trace-commands on" \
-ex "set logging off" \
-ex "show logging file" \
-ex "set logging on" \
-ex "show logging file"
+set logging off
+show logging file
Future logs will be written to gdb.txt.
+set logging on
+show logging file
Currently logging to "gdb.txt".
...
After that commit we have instead:
...
+set logging off
+show logging file
The current logfile is "gdb.txt".
+set logging on
+show logging file
The current logfile is "gdb.txt".
...
Before the commit, whether logging is enabled or not could be deduced from the
output of the command. After the commit, the message is unified and it's no
longer clear whether logging is enabled or not.
Fix this by:
- adding a new command "show logging enabled"
- adding a corresponding new command "set logging enabled on/off"
- making the commands "set logging on/off" deprecated aliases of the
"set logging enabled on/off" command.
Update the docs and testsuite to use "set logging enabled". Mention the new
and deprecated commands in NEWS.
Tested on x86_64-linux.
|
|
Currently we have:
...
$ gdb -q -batch -ex "help set logging overwrite"
Set whether logging overwrites or appends to the log file.
If set, logging overrides the log file.
...
Fix overrides -> overwrites typo.
|
|
When implementing this command, I put "help doc" as a placeholder for
the help string, and forgot to update it. Change it for a real help
string.
Change-Id: Id23c2142c5073dc570bd8a706e9ec6fa8c40eb09
|
|
This reverts (part of) commit ab198279120fe7937c0970a8bb881922726678f9.
This commit changed what the test expects when catching the execve
syscall based on the behavior seen on a Linux PowerPC machine. That is,
we get an "entry" event, but no "return" event. This is not what we get
on Linux with other architectures, though, and it seems like a
PowerPC-specific bug.
Revert the part of the patch related to this, but not the other hunk.
Change-Id: I4248776e4299f10999487be96d4acd1b33639996
|
|
In commit:
commit 633cf2548bcd3dafe297e21a1dd3574240280d48
Date: Wed May 9 15:42:28 2018 -0600
Remove cleanups from mdebugread.c
the following change was made in the function parse_partial_symbols in
mdebugread.c:
- fdr_to_pst = XCNEWVEC (struct pst_map, hdr->ifdMax + 1);
- old_chain = make_cleanup (xfree, fdr_to_pst);
+ gdb::def_vector<struct pst_map> fdr_to_pst_holder (hdr->ifdMax + 1);
+ fdr_to_pst = fdr_to_pst_holder.data ();
The problem with this change is that XCNEWVEC calls xcalloc, which in
turn calls calloc, and calloc zero initializes the allocated memory.
In contrast, the new line gdb::def_vector<struct pst_map> specifically
does not initialize the underlying memory.
This is a problem because, later on in this same function, we
increment the n_globals field within 'struct pst_map' objects stored
in the vector. The incrementing is now being done from an
uninitialized starting point.
In this commit we switch from using gdb::def_vector to using
std::vector; this alone should be enough to ensure that the fields are
initialized to zero.
However, for extra clarity, I have also added initial values in the
'struct pst_map' to make it crystal clear how the struct will start
up.
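A self-contained illustration of the difference that matters here,
with pst_map_stub standing in for GDB's struct pst_map:
  #include <cassert>
  #include <vector>

  struct pst_map_stub
  {
    int n_globals = 0;   /* The explicit initializer added for clarity.  */
  };

  int main ()
  {
    /* std::vector value-initializes its elements, so even without the
       member initializer above the counters would start at zero; a
       def_vector-style buffer would leave them indeterminate.  */
    std::vector<pst_map_stub> fdr_to_pst (4);
    for (const auto &m : fdr_to_pst)
      assert (m.n_globals == 0);

    fdr_to_pst[1].n_globals++;   /* Incrementing now starts from a known value.  */
    assert (fdr_to_pst[1].n_globals == 1);
  }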
This issue was reported on the mailing list here:
https://sourceware.org/pipermail/gdb-patches/2021-November/183693.html
Co-Authored-By: Lightning <lightningth@gmail.com>
|
|
When the readline development package is missing, make fails with
"configure: error: system readline is not new enough", which
might be confusing. This patch checks for readline.h explicitly
and makes make warn about the missing package.
|
|
This fixes compile errors like
../../gdb-11.1/gdb/gnu-nat.c: In function void add_task_commands():
../../gdb-11.1/gdb/gnu-nat.c:3204:17: error: no matching function for call to add_cmd(const char [8], command_class, cmd_list_element*&, char*, cmd_list_element**)
3204 | &setlist);
| ^
In file included from ../../gdb-11.1/gdb/completer.h:21,
from ../../gdb-11.1/gdb/symtab.h:36,
from ../../gdb-11.1/gdb/infrun.h:21,
from ../../gdb-11.1/gdb/target.h:42,
from ../../gdb-11.1/gdb/inf-child.h:23,
from ../../gdb-11.1/gdb/gnu-nat.h:38,
from ../../gdb-11.1/gdb/gnu-nat.c:24:
../../gdb-11.1/gdb/command.h:160:33: note: candidate: cmd_list_element* add_cmd(const char*, command_class, void (*)(const char*, int), const char*, cmd_list_element**)
160 | extern struct cmd_list_element *add_cmd (const char *, enum command_class,
| ^~~~~~~
../../gdb-11.1/gdb/command.h:161:30: note: no known conversion for argument 3 from cmd_list_element* to void (*)(const char*, int)
161 | cmd_const_cfunc_ftype *fun,
| ~~~~~~~~~~~~~~~~~~~~~~~^~~
../../gdb-11.1/gdb/command.h:167:33: note: candidate: cmd_list_element* add_cmd(const char*, command_class, const char*, cmd_list_element**)
167 | extern struct cmd_list_element *add_cmd (const char *, enum command_class,
| ^~~~~~~
../../gdb-11.1/gdb/command.h:167:33: note: candidate expects 4 arguments, 5 provided
../../gdb-11.1/gdb/gnu-nat.c:3210:18: error: no matching function for call to add_cmd(const char [8], command_class, cmd_list_element*&, char*, cmd_list_element**)
3210 | &showlist);
| ^
Change-Id: Ie9029363d3fb40e34e8f5b1ab503745bc44bfe3f
|