This test exercises musl_link_map_to_tls_module_id() and
glibc_link_map_to_tls_module_id(), both of which are in solib-svr4.c.
Prior to writing this test, I had only written what is now named
'musl_link_map_to_tls_module_id' and it worked for both GLIBC and
MUSL. Once I wrote this new test, tls-dlobj.exp, there were a number
of tests which didn't work with GLIBC. This led me to write a
GLIBC-specific link map to module id function, i.e.,
'glibc_link_map_to_tls_module_id'.
It only has one compilation scenario, in which the pthread(s) library
is used - as noted in a comment, it became too much of a hassle to try
to KFAIL things, though it certainly could have been done in much the
same way as was done in gdb.base/multiobj.exp. It didn't seem that
important to do so, however, since I believe that the other tests
have adequate coverage for different compilation scenarios.
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
|
|
This test exercises GDB's internal TLS support when both the main
program and several shared libraries have TLS variables. It also
tests the existing (non-internal) TLS support.
It uses two compilation scenarios: "default", in which libpthread is
not linked with the program and libraries, and one which does use
libpthread.
It tests link map address to module id mapping code in GDB
in addition to the ability of GDB to traverse TLS data structures
with several libraries in play.
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
|
|
This commit introduces a new test, gdb.base/tls-nothreads.exp.
Its test case is a C file with several TLS variables in the main
program which, once compiled and linked, should end up in the .tdata
and .tbss ELF sections. The test compiles the program in a number
of different ways, making sure that each variable is accessible
regardless of how it was compiled.
Note that some of the compilation scenarios end up with a statically
linked executable. Prior to this series of commits, accessing TLS
variables from a statically linked program on Linux did not work.
For certain targets (x86_64, aarch64, s390x, riscv, and ppc64),
all on Linux, support has been added to GDB for accessing thread
local storage in statically linked executables. This test is
important for testing those build scenarios.
But it's also important to make sure that GDB's internal TLS support
works for other scenarios too. In order to accomplish that, the
tests are also run in a mode which forces the internal support to
be used.
It also adds a new file, gdb.base/tls-common.exp.tcl, which includes
some common definitions used by the three new TLS tests, including
the one added by this commit. In particular, it sets a TCL variable,
'internal_tls_linux_targets' which lists the targets mentioned earlier.
This means that as internal TLS support is added for other targets,
the target should be listed in just one file as opposed to three
(or more if other tests using tls-common.exp.tcl are added).
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
|
|
The patches later in the series add GDB-internal TLS support for
certain targets. This commit updates the "print foo" test in
gdb.server/no-thread-db.exp to accept either a TLS failure (when
libthread_db isn't available) or the correct answer, which will be
printed when GDB's internal TLS address resolution can be used.
I'm making this change prior to the commits which actually add
the GDB-internal TLS support in order to avoid tripping regression
testers.
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
|
|
This commit fixes two bugs, one of which is Bug 25807, which occurs
when target_translate_tls_address() is called from
language_defn::read_var_value in findvar.c. I found it while testing on
aarch64; it turned a KFAIL for gdb.threads/tls.exp: print a_thread_local
into a FAIL due to a GDB internal error. Now, with this commit in place,
the KFAIL/FAIL turns into a PASS.
In addition to the existing test just noted, I've also added a test to
the new test case gdb.base/tls-nothreads.exp. It'll be tested, using
different scenarios, up to 8 times:
PASS: gdb.base/tls-nothreads.exp: default: force_internal_tls=false: after exit: print tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: default: force_internal_tls=true: after exit: print tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: static: force_internal_tls=false: after exit: print tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: static: force_internal_tls=true: after exit: print tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: pthreads: force_internal_tls=false: after exit: print tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: pthreads: force_internal_tls=true: after exit: print tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: pthreads-static: force_internal_tls=false: after exit: print tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: pthreads-static: force_internal_tls=true: after exit: print tls_tbss_1
There is a related problem that occurs when target_translate_tls_address
is called from find_minsym_type_and_address() in minsyms.c. It can be
observed when debugging a program without debugging symbols when the
program is not executing. I've written a new test for this, but it's
(also) included in the new test case gdb.base/tls-nothreads.exp, found
later in this series. Depending on the target, it can run up to 8
times using different scenarios. E.g., on aarch64, I'm seeing these
PASSes, all of which test this change:
PASS: gdb.base/tls-nothreads.exp: default: force_internal_tls=false: stripped: after exit: print (int) tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: default: force_internal_tls=true: stripped: after exit: print (int) tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: static: force_internal_tls=false: stripped: after exit: print (int) tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: static: force_internal_tls=true: stripped: after exit: print (int) tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: pthreads: force_internal_tls=false: stripped: after exit: print (int) tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: pthreads: force_internal_tls=true: stripped: after exit: print (int) tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: pthreads-static: force_internal_tls=false: stripped: after exit: print (int) tls_tbss_1
PASS: gdb.base/tls-nothreads.exp: pthreads-static: force_internal_tls=true: stripped: after exit: print (int) tls_tbss_1
In an earlier version of this commit (v4), I was checking whether the
target has registers in language_defn::read_var_value in findvar.c and
in find_minsym_type_and_address in minsyms.c, printing suitable error
messages in each case. In his review of this commit for the v4
series, Tom Tromey asked whether it would be better to do this check
in target_translate_tls_address. I had considered doing that for the
v4 (and earlier) series, but I wanted to print slightly different
messages at each check. Also, read_var_value in findvar.c was already
printing a message in some cases and I had arranged for the later
check in that function to match the original message.
However, while I had added a target-has-registers check at two of the
call sites for target_translate_tls_address, I hadn't added it at the
third call site which is in dwarf_expr_context::execute_stack_op() in
dwarf2/expr.c. I believe that in most cases, this is handled by the
early check in language_defn::read_var_value...
  else if (sym_need == SYMBOL_NEEDS_REGISTERS && !target_has_registers ())
    error (_("Cannot read `%s' without registers"), var->print_name ());
...but it's entirely possible that dwarf_expr_context::execute_stack_op()
might get called in some other context. So it makes sense to do the
target-has-registers check for that case too. And rather than add yet
another check at that call site, I decided that moving the check and
error message to target_translate_tls_address makes sense.
I had to make the error messages that it prints somewhat more generic.
In particular, when called from language_defn::read_var_value, the
message printed by target_translate_tls_address no longer matches the
earlier message that could be printed (as shown above). That meant
that the test cases which check for this message, gdb.threads/tls.exp,
and gdb.base/tls-nothreads.exp had to be adjusted to account for the
new message. Also, I think it's valuable to the user to know (if
possible) the name of the variable that caused the error, so I've
added an optional parameter to target_translate_tls_address, providing
the name of the variable, if it's known. Therefore, the message
that's printed when the target-has-registers test fails is one of the
following:
When the TLS variable isn't known (due to being called from
dwarf_expr_context::execute_stack_op):
"Cannot translate TLS address without registers"
When the TLS variable is known (from either of the other two call sites
for target_translate_tls_address):
"Cannot find address of TLS symbol `%s' without registers"
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=25807
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
|
|
Completing fields inside an anonymous struct does not work. With:
struct commit_counters_hot {
  union {
    struct {
      long owner;
    };
    char padding[16];
  };
};
I get:
(gdb) complete print cc_hot.
print cc_hot.padding
After this patch, I get:
(gdb) complete print cc_hot.
print cc_hot.owner
print cc_hot.padding
Update break1.c to include an anonymous struct. The tests that complete
"z_field" inside gdb.base/completion.exp would start to fail without the
fix.
Change-Id: I46b65a95ad16b0825de58dfa241777fe57acc361
Reviewed-By: Keith Seitz <keiths@redhat.com>
|
|
Running `pre-commit run --all-files` introduces these fixes.
Change-Id: I2e363fdf988b66d83008265b3ca8d1120f84b95d
|
|
GDB's Python documentation makes it clear that keyword arguments
are supported for functions that take 2 or more arguments. The
documentation makes no promise for keyword argument support on
functions that only take a single argument.
That said, I'm a fan of keyword arguments, I think they help document
the code, and make intentions clearer, even for single argument
functions.
As I'm changing gdb.Color anyway (see previous commit), I'd like to
add keyword argument support to gdb.Color.escape_sequence, even though
this is a single argument method. This should be harmless for anyone
who doesn't want to use keywords, but adds the option for those of us
that do.
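For illustration, usage might look like the following minimal sketch;
the keyword name 'is_foreground' follows the documented parameter name
and should be treated as an assumption here:
import gdb

col = gdb.Color("red")
# Same call, with and without the keyword.
fg1 = col.escape_sequence(True)
fg2 = col.escape_sequence(is_foreground=True)
assert fg1 == fg2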
I've also removed a redundant check that the 'self' argument was a
gdb.Color object; Python already ensures this is the case.
And I have folded the check that the single argument is a bool into
the gdb_PyArg_ParseTupleAndKeywords call; this means that the error
message will now include the incorrect type name, which should make
debugging issues easier.
Tests have been extended to cover both cases -- it appears the
incorrect argument type error was not previously tested, so it is
now.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
GDB's Python API documentation is clear:
Functions and methods which have two or more optional arguments allow
them to be specified using keyword syntax.
The gdb.Color.__init__ method matches this description, but doesn't
support keyword arguments.
This commit fixes this by adding keyword argument support.
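A small sketch of what this enables; the parameter names 'value' and
'color_space', and the gdb.COLORSPACE_ANSI_8COLOR constant, are
assumptions based on the documentation:
import gdb

# Positional and keyword forms should now construct equivalent objects.
c1 = gdb.Color("red", gdb.COLORSPACE_ANSI_8COLOR)
c2 = gdb.Color(value="red", color_space=gdb.COLORSPACE_ANSI_8COLOR)
assert c1.escape_sequence(True) == c2.escape_sequence(True)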
There's a new test to cover this functionality.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I've been reviewing all uses of PyObject_IsInstance, and I believe
that the use of PyObject_IsInstance in py-unwind.c is not entirely
correct. The use of PyObject_IsInstance is in this code in
frame_unwind_python::sniff:
if (PyObject_IsInstance (pyo_unwind_info,
(PyObject *) &unwind_info_object_type) <= 0)
error (_("A Unwinder should return gdb.UnwindInfo instance."));
The problem is that PyObject_IsInstance can return -1 to indicate an
error, in which case a Python error will have been set. Now, the
above code appears to handle this case (it checks for '<= 0'); however,
frame_unwind_python::sniff has this near the start:
gdbpy_enter enter_py (gdbarch);
And looking in python.c at 'gdbpy_enter::~gdbpy_enter ()', you'll
notice that if an error is set then the error is printed, but also, we
get a warning about an unhandled Python exception. Clearly, all
exceptions should have been handled by the time the gdbpy_enter
destructor is called.
I've added a test as part of this commit that exposes this problem,
the current output is:
(gdb) backtrace
Python Exception <class 'RuntimeError'>: error in Blah.__class__
warning: internal error: Unhandled Python exception
Python Exception <class 'gdb.error'>: A Unwinder should return gdb.UnwindInfo instance.
#0 corrupt_frame_inner () at /home/andrew/projects/binutils-gdb/build.dev-g/gdb/testsuite/../../../src.dev-g/gdb/test>
(gdb)
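The unwinder registered by the test looks roughly like the sketch
below (illustrative only, not the actual test source): the returned
object's __class__ property raises, which is exactly what pushes
PyObject_IsInstance into its -1 error path:
import gdb
from gdb.unwinder import Unwinder, register_unwinder

class Blah:
    # Any attempt to read obj.__class__ raises, so an isinstance-style
    # check errors out instead of cleanly returning False.
    @property
    def __class__(self):
        raise RuntimeError("error in Blah.__class__")

class BadUnwinder(Unwinder):
    def __init__(self):
        super().__init__("bad_unwinder")

    def __call__(self, pending_frame):
        # Not a gdb.UnwindInfo at all.
        return Blah()

register_unwinder(None, BadUnwinder(), replace=True)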
An additional observation is that we use PyObject_IsInstance to check
that the return value is a gdb.UnwindInfo, or a sub-class. However,
gdb.UnwindInfo lacks the Py_TPFLAGS_BASETYPE flag, and so cannot be
sub-classed. As such, PyObject_IsInstance is not really needed; we
could use PyObject_TypeCheck instead. The PyObject_TypeCheck function
only returns 0 or 1; there is no -1 error case. Switching to
PyObject_TypeCheck, then, fixes the above problem.
There's a new test that exposes the problems that originally existed.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In python/py-registers.c we make use of PyObject_IsInstance. The
PyObject_IsInstance can return -1 for an error, 0 for false, or 1 for
true.
In py-registers.c we treat the return value from PyObject_IsInstance
as a boolean, which means both -1 and 1 will be treated as true.
If PyObject_IsInstance returns -1 for an error, this will be treated
as true, and we will then invoke undefined behaviour, as the pyo_reg_id
object will be treated as a gdb.RegisterDescriptor even though it
might not be.
I noticed that the gdb.RegisterDescriptor class does not have the
Py_TPFLAGS_BASETYPE flag, and therefore cannot be inherited from. As
such, using PyObject_IsInstance is not necessary, we can use
PyObject_TypeCheck instead. The PyObject_TypeCheck function only
returns 0 or 1, so we don't need to worry about the error case.
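For context, this check is reached when a register is looked up by
descriptor rather than by name, as in this sketch (which assumes a
live inferior):
import gdb

frame = gdb.selected_frame()
# Iterate gdb.RegisterDescriptor objects and read each register through
# its descriptor; py-registers.c type-checks the descriptor argument.
for desc in frame.architecture().registers("general"):
    print(desc.name, frame.read_register(desc))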
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Because it runs so many variations, the test
gdb.dwarf2/macro-source-path.exp takes about 2:40 minutes to run for me,
in a non-optimized build. These days I often run all tests under
gdb.dwarf2, as a sanity test for my changes, and so I often have to wait
for this test to complete.
Split the test, to allow it to complete faster when running the
testsuite in parallel. After this patch, running all the
gdb.dwarf2/macro-source-path-*.exp tests in parallel takes me about 1
minute. It's more than I would expect (I would expect the time to be
divided by nearly 5), but it's already better than what we have now.
Change-Id: I07e4e1f234cf57d9b0c1c027f08061615714a4d5
Acked-By: Tom de Vries <tdevries@suse.de>
|
|
With a gdb 16.2 based package, I ran into:
...
(gdb) PASS: gdb.base/bg-execution-repeat.exp: c 1&: input still accepted
interrupt
(gdb) PASS: gdb.base/bg-execution-repeat.exp: c 1&: interrupt
set var do_wait=0
(gdb) PASS: gdb.base/bg-execution-repeat.exp: c 1&: set var do_wait=0
continue&
Cannot execute this command while the selected thread is running.
(gdb)
Program received signal SIGINT, Interrupt.
PASS: gdb.base/bg-execution-repeat.exp: c 1&: continue&
0x00007ffff7cf1503 in clock_nanosleep@GLIBC_2.2.5 () from /lib64/libc.so.6
FAIL: gdb.base/bg-execution-repeat.exp: c 1&: breakpoint hit 2 (timeout)
...
Fix this by waiting for "Program received signal SIGINT, Interrupt" after
issuing the interrupt command.
Tested on x86_64-linux.
|
|
The gdbpy_is_color function uses PyObject_IsInstance, and converts the
return from PyObject_IsInstance to a bool.
Unfortunately, PyObject_IsInstance can return -1, 0, or 1, for error,
failure, or success respectively. When converting to a bool both -1
and 1 will convert to true.
Additionally, when PyObject_IsInstance returns -1 an error will be
set.
What this means is that, if gdbpy_is_color is called with a
non-gdb.Color object, and the PyObject_IsInstance check raises an error,
then (a) GDB will continue as if the object is a gdb.Color object,
which is likely going to invoke undefined behaviour, see
gdbpy_get_color for example, and (b) when GDB eventually returns to
the Python interpreter, due to an error being set, we'll see:
Python Exception <class 'SystemError'>: PyEval_EvalFrameEx returned a result with an error set
Error occurred in Python: PyEval_EvalFrameEx returned a result with an error set
However, after the previous commit, gdb.Color can no longer be
sub-classed; this means that fixing the above problems is easy: we can
replace the PyObject_IsInstance check with a PyObject_TypeCheck call.
The PyObject_TypeCheck function only returns 0 or 1; there's no -1
error case.
It's also worth noting that PyObject_TypeCheck is the function that is
more commonly used within GDB's Python API implementation; including
the py-color.c use, there were only 4 PyObject_IsInstance uses. Of the
remaining 3, 2 are fine, and one other (in py-disasm.c) is also
wrong. I'll address that in a separate patch.
There's also a new test included which exposes the above issue.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Remove the Py_TPFLAGS_BASETYPE flag from the gdb.Color type. This
effectively makes gdb.Color final; users can no longer create classes
that inherit from gdb.Color.
Right now I cannot think of any cases where inheritance would be
needed over composition for a simple type like gdb.Color. If I'm
wrong, then it's easy to add Py_TPFLAGS_BASETYPE back in later, this
would be an extension of the API. But it's much harder to remove the
flag later as that might break existing user code (note: there has
been no release of GDB yet that includes the gdb.Color type).
Introducing this restriction makes the next commit easier.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The PyObject_IsInstance function can return -1 for errors, 0 to
indicate false, and 1 to indicate true.
I noticed in python/py-disasm.c that we treat the result of
PyObject_IsInstance as a bool. This means that if PyObject_IsInstance
returns -1, then this will be treated as true. The consequence of
this is that we will invoke undefined behaviour by treating the result
from the _print_insn call as if it was a DisassemblerResult object,
even though PyObject_IsInstance raised an error, and the result might
not be of the required type.
I could fix this by taking the -1 result into account; however,
gdb.DisassemblerResult cannot be sub-classed, as the type doesn't have
the Py_TPFLAGS_BASETYPE flag. As such, we can switch to using
PyObject_TypeCheck instead, which only returns 0 or 1, with no error
case.
I have also taken the opportunity to improve the error message emitted
if the result has the wrong type. Better error messages make debugging
issues easier.
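For reference, a well-behaved Python disassembler returns a
gdb.disassembler.DisassemblerResult, for example by delegating to the
builtin disassembler, as in this illustrative sketch:
import gdb
from gdb.disassembler import (Disassembler, builtin_disassemble,
                              register_disassembler)

class Passthrough(Disassembler):
    def __init__(self):
        super().__init__("passthrough")

    def __call__(self, info):
        # builtin_disassemble returns a DisassemblerResult, the type the
        # result of _print_insn is checked against.
        return builtin_disassemble(info)

register_disassembler(Passthrough(), None)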
I've added a test which exposes the problem when using
PyObject_IsInstance, and I've updated the existing test for the
improved error message.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Continuing to improve GDB's ability to debug linker namespaces, this
commit adds the command "info linker-namespaces". The command is
similar to "info sharedlibrary" but focused on improved readability
when the inferior has multiple linker namespaces active. This command
can be used in 2 different ways, with or without an argument.
When called without argument, the command will print the number of
namespaces, and for each active namespace, its identifier, how many
libraries are loaded in it, and all the libraries (in a similar table to
what "info sharedlibrary" shows). As an example, this is what GDB's
output could look like:
(gdb) info linker-namespaces
There are 2 linker namespaces loaded
There are 3 libraries loaded in linker namespace [[0]]
Displaying libraries for linker namespace [[0]]:
From                To                  Syms Read   Shared Object Library
0x00007ffff7fc6000  0x00007ffff7fff000  Yes         /lib64/ld-linux-x86-64.so.2
0x00007ffff7ebc000  0x00007ffff7fa2000  Yes (*)     /lib64/libm.so.6
0x00007ffff7cc9000  0x00007ffff7ebc000  Yes (*)     /lib64/libc.so.6
(*): Shared library is missing debugging information.
There are 4 libraries loaded in linker namespace [[1]]
Displaying libraries for linker namespace [[1]]:
From                To                  Syms Read   Shared Object Library
0x00007ffff7fc6000  0x00007ffff7fff000  Yes         /lib64/ld-linux-x86-64.so.2
0x00007ffff7fb9000  0x00007ffff7fbe000  Yes         gdb.base/dlmopen-ns-ids/dlmopen-lib.so
0x00007ffff7bc4000  0x00007ffff7caa000  Yes (*)     /lib64/libm.so.6
0x00007ffff79d1000  0x00007ffff7bc4000  Yes (*)     /lib64/libc.so.6
(*): Shared library is missing debugging information.
When called with an argument, the argument must be a namespace
identifier (either with or without the square brackets decorators). In
this situation, the command will truncate the output to only show the
relevant information for the requested namespace. For example:
(gdb) info linker-namespaces 0
There are 3 libraries loaded in linker namespace [[0]]
Displaying libraries for linker namespace [[0]]:
From                To                  Syms Read   Shared Object Library
0x00007ffff7fc6000  0x00007ffff7fff000  Yes         /lib64/ld-linux-x86-64.so.2
0x00007ffff7ebc000  0x00007ffff7fa2000  Yes (*)     /lib64/libm.so.6
0x00007ffff7cc9000  0x00007ffff7ebc000  Yes (*)     /lib64/libc.so.6
(*): Shared library is missing debugging information.
The test gdb.base/dlmopen-ns-id.exp has been extended to test this new
command. Because some gcc and glibc defaults can change between
systems, we are not guaranteed to always have libc and libm loaded in a
namespace, so we can't guarantee the number of libraries, but the
result only varies within a range of 2, so we can still check for
glaring issues.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
This commit adds 2 simple built-in convenience variables to help users
debug an inferior with multiple linker namespaces. The first is
$_active_linker_namespaces, which just counts how many namespaces have SOs
loaded onto them. The second is $_current_linker_namespace, and it tracks
which namespace the current location in the inferior belongs to.
This commit also introduces a test ensuring that we track namespaces
correctly, and that a user can use the $_current_linker_namespace
variable to set a conditional breakpoint while the linespec changes
that would make this more convenient aren't finalized.
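For illustration, the variables can also be read from Python and used
in a breakpoint condition; in this sketch, 'inc' is a hypothetical
function name and the integer comparison assumes the variable holds
the namespace number:
import gdb

active = gdb.convenience_variable("_active_linker_namespaces")
current = gdb.convenience_variable("_current_linker_namespace")
print("%s active namespace(s), currently in namespace %s" % (active, current))

# Stop in 'inc' only when it is hit from namespace 1.
bp = gdb.Breakpoint("inc")
bp.condition = "$_current_linker_namespace == 1"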
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
It includes changes to the following files:
- gdb/riscv-linux-tdep.c, gdb/riscv-linux-tdep.h: adds facilities to record
syscalls.
- gdb/riscv-tdep.c, gdb/riscv-tdep.h: adds facilities to record execution of
rv64gc instructions.
- gdb/configure.tgt: adds new files for compilation.
- gdb/testsuite/lib/gdb.exp: enables testing of full record mode for RISC-V
targets.
- gdb/syscalls/riscv-canonicalize-syscall-gen.py: a script to generate the
function that canonicalizes RISC-V syscalls. This script can simplify support
for syscalls on rv32 and rv64 systems (currently only rv64 is supported). To
use this script you need to pass a path to a file with the syscall descriptions
from riscv-glibc (an example is in the help message). The script produces a
mapping from syscall names to gdb_syscall enum.
- gdb/riscv-canonicalize-syscall.c: the file generated by the previous script.
- gdb/doc/gdb.texinfo: notification that record mode is enabled in RISC-V.
- gdb/NEWS: notification of new functionality.
Approved-By: Guinevere Larsen <guinevere@redhat.com>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
Since commit 7b80401da00 ("Handle DWARF 5 separate debug sections"), test-case
gdb.debuginfod/fetch_src_and_symbols.exp fails here:
...
(gdb) file fetch_src_and_symbols_alt.o^M
Reading symbols from fetch_src_and_symbols_alt.o...^M
warning: could not find supplementary DWARF file \
(fetch_src_and_symbols_dwz.o) for fetch_src_and_symbols_alt.o^M
(gdb) FAIL: $exp: no_url: file fetch_src_and_symbols_alt.o
...
because this is expected:
...
(gdb) file fetch_src_and_symbols_alt.o^M
Reading symbols from fetch_src_and_symbols_alt.o...^M
warning: could not find '.gnu_debugaltlink' file for \
fetch_src_and_symbols_alt.o^M
(gdb) PASS: $exp: no_url: file fetch_src_and_symbols_alt.o
...
Fix this by updating the regexp.
Tested on x86_64-linux.
|
|
This adds a "-5" flag to cc-with-tweaks, mirroring dwz's "-5" flag,
and also adds a new cc-with-dwz-5 target board.
The "-5" flag tells dwz to use the DWARF 5 .debug_sup section in
multi-file mode.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32808
|
|
DWARF 5 standardized the .gnu_debugaltlink section that dwz emits in
multi-file mode. This is handled via some new forms, and a new
.debug_sup section.
This patch adds support for this to gdb. It is largely
straightforward, I think, though one oddity is that I chose not to
have this code search the system build-id directories for the
supplementary file. My feeling was that, while it makes sense for a
distro to unify the build-id concept with the hash stored in the
.debug_sup section, there's no intrinsic need to do so.
This in turn means that a few tests -- for example those that test the
index cache -- will not work in this mode.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32808
Acked-By: Simon Marchi <simon.marchi@efficios.com>
|
|
There was a comment in gdb.python/py-color.exp that was probably left
over from a copy & paste; it incorrectly described what the test
script was testing.
Fixed in this commit.
There's no change in what is tested with this commit.
|
|
Spotted a stray white space at the end of an error message. Removed,
and updated the py-breakpoint.exp test to check this case.
|
|
I noticed that this commit:
commit 6447969d0ac774b6dec0f95a0d3d27c27d158690
Date: Sat Oct 5 22:27:44 2024 +0300
Add an option with a color type.
has an unnecessary `Py_INCREF (self);` in gdb.Color.__init__. This
means that the reference count on all gdb.Color objects (that pass
through __init__) will be +1 from where they should normally be, and
this will stop the gdb.Color objects from being deallocated.
Fix by removing the Py_INCREF call.
Add a test which exposes the memory leak.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
We currently have two memory leak tests in gdb.python/ and there's a
lot of duplication between these two.
In the next commit I'd like to add yet another memory leak test, which
would mean a third set of scripts which duplicate the existing two.
And three is where I draw the line.
This commit factors out the core of the memory leak tests into a new
module gdb_leak_detector.py, which can then be imported by each
tests's Python file in order to make writing the memory leak tests
easier.
I've also added a helper function to lib/gdb-python.exp which captures
some of the common steps needed in the TCL file in order to run a
memory leak test.
Finally, I use this new infrastructure to rewrite the two existing
memory leak tests.
What I considered, but ultimately didn't do, is merge the two memory
leak tests into a single TCL script. For the existing tests this would
be possible, but future tests might require different enough setup
that this might not work for all of them, and now that we have helper
functions in a central location, each individual test is actually
pretty small, so leaving them separate seemed OK.
There should be no change in what is actually being tested after this
commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
After building gdb with -fsanitize=threads, and running test-case
gdb.cp/cplusfuncs.exp, I run into a single timeout:
...
FAIL: gdb.cp/cplusfuncs.exp: info function operator=( (timeout)
...
and the test-case takes 2m33s to finish.
This is due to expanding CUs from libstdc++.
After de-installing package libstdc++6-debuginfo, the timeout disappears and
testing time goes down to 9 seconds.
Fix this by not running to main, which brings testing time down to 3 seconds.
With a gdb built without -fsanitize=threads, testing time goes down from 11
seconds to less than 1 second.
Tested on x86_64-linux.
Reviewed-By: Keith Seitz <keiths@redhat.com>
|
|
With test-case gdb.threads/clone-attach-detach.exp I usually get:
...
(gdb) attach <pid> &^M
Attaching to program: clone-attach-detach, process <pid>^M
[New LWP <lwp>]^M
(gdb) PASS: $exp: bg attach <n>: attach
[Thread debugging using libthread_db enabled]^M
Using host libthread_db library "/lib64/libthread_db.so.1".^M
...
but sometimes I run into:
...
(gdb) attach <pid> &^M
Attaching to program: clone-attach-detach, process <pid>^M
[New LWP <lwp>]^M
(gdb) [Thread debugging using libthread_db enabled]^M
Using host libthread_db library "/lib64/libthread_db.so.1".^M
FAIL: $exp: bg attach <n>: attach (timeout)
...
I managed to reproduce this using make target check-readmore and
READMORE_SLEEP=100.
Fix this using -no-prompt-anchor.
Tested on x86_64-linux.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
With test-case gdb.base/bg-execution-repeat.exp, occasionally I run into a
timeout:
...
(gdb) c 1&
Will stop next time breakpoint 1 is reached. Continuing.
(gdb) PASS: $exp: c 1&: c 1&
Breakpoint 2, foo () at bg-execution-repeat.c:23
23 return 0; /* set break here */
PASS: $exp: c 1&: breakpoint hit 1
Will stop next time breakpoint 2 is reached. Continuing.
(gdb) PASS: $exp: c 1&: repeat bg command
print 1
$1 = 1
(gdb) PASS: $exp: c 1&: input still accepted
interrupt
(gdb) PASS: $exp: c 1&: interrupt
Program received signal SIGINT, Interrupt.
foo () at bg-execution-repeat.c:24
24 }
PASS: $exp: c 1&: interrupt received
set var do_wait=0
(gdb) PASS: $exp: c 1&: set var do_wait=0
continue&
Continuing.
(gdb) PASS: $exp: c 1&: continue&
FAIL: $exp: c 1&: breakpoint hit 2 (timeout)
...
I can reproduce it reliably by adding a "sleep (1)" before the "do_wait = 1"
in the .c file.
The timeout happens as follows:
- with the inferior stopped at main, gdb continues (command c 1&)
- the inferior hits the breakpoint at foo
- gdb continues (using the repeat command functionality)
- the inferior is interrupted
- inferior variable do_wait gets set to 0. The assumption here is that the
inferior has progressed enough that do_wait is set to 1 at that point, but
that happens not to be the case. Consequently, this has no effect.
- gdb continues
- the inferior sets do_wait to 1
- the inferior enters the wait function, and waits for do_wait to become 0,
which never happens.
Fix this by moving the "do_wait = 1" to before the first call to foo.
Tested on x86_64-linux.
Reviewed-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
|
|
On s390x-linux, with test-case gdb.ada/scalar_storage.exp we have:
...
(gdb) print V_LE^M
$1 = (value => 126, another_value => 12, color => 3)^M
(gdb) FAIL: gdb.ada/scalar_storage.exp: print V_LE
print V_BE^M
$2 = (value => 125, another_value => 9, color => green)^M
(gdb) KFAIL: $exp: print V_BE (PRMS: DW_AT_endianity on enum types)
...
The KFAIL is incorrect in the sense that gdb is behaving as expected.
The problem is incorrect debug info, so change this into an xfail.
Furthermore, extend the xfail to cover V_LE.
Tested on s390x-linux and x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR testsuite/32875
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32875
|
|
There's currently no test for unwinding the SVE registers from a signal
frame, so add one.
Tested on aarch64-linux-gnu native.
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
|
|
Before this change, gdb crashes on the following command:
(gdb) p 1 == { }
Fatal signal: Segmentation fault
After the fix in this commit, gdb shows the following message:
(gdb) p 1 == { }
size of the array element must not be zero
Add new test cases to gdb.base/printcmds.exp to test this change.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
But PR gdb/20126 highlights a case where GDB emits a large number of
warnings like:
warning: Can't open file /anon_hugepage (deleted) during file-backed mapping note processing
warning: Can't open file /dev/shm/PostgreSQL.1150234652 during file-backed mapping note processing
warning: Can't open file /dev/shm/PostgreSQL.535700290 during file-backed mapping note processing
warning: Can't open file /SYSV604b7d00 (deleted) during file-backed mapping note processing
... etc ...
when opening a core file. This commit aims to avoid at least some of
these warnings.
What we know is that, for at least some of these cases (e.g. the
'(deleted)' mappings), the content of the mapping will have been
written into the core file itself. As such, the fact that the file
isn't available ('/SYSV604b7d00' at least is a shared memory mapping)
isn't really relevant; GDB can still provide access to the mapping by
reading the content from the core file itself.
What I propose is that, when processing the file backed mappings, if
all of the mappings for a file are covered by segments within the core
file itself, then there is no need to warn the user that the file
can't be opened again. The debug experience should be unchanged, as
GDB would have read from the in-core mapping anyway.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30126
|
|
The recently included gdb.base/dlmopen-ns-ids.exp test can sometimes
fail the call to get_integer_valueof when trying to check the namespace
ID of the fourth dlopened SO, for apparently no reason.
What's happening is that the call to get_first_so_ns doesn't necessarily
consume the GDB prompt, and so get_integer_valueof will see the prompt
immediately and not find the value the test is looking for.
To fix this, the test was changed so that we consume all of the output
of the command "info sharedlibrary", but only set the namespace ID for
the first occurrence of the SO we're looking for. The command now also
gets the solib name as a parameter, to reduce the amount of output.
Co-Authored-By: Tom de Vries <tdevries@suse.de>
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
When running test-case gdb.dwarf2/fission-with-type-unit.exp with a remote
host configuration, say host board local-remote-host and target board
remote-gdbserver-on-localhost, I run into:
...
(gdb) maint expand-symtabs^M
During symbol reading: Could not find DWO CU \
fission-with-type-unit.dwo(0xf00d) referenced by CU at offset 0x2d7 \
[in module /home/remote-host/fission-with-type-unit]^M
warning: Could not find DWO CU fission-with-type-unit.dwo(0xf00d) referenced \
by CU at offset 0x2d7 [in module /home/remote-host/fission-with-type-unit]^M
(gdb) FAIL: gdb.dwarf2/fission-with-type-unit.exp: maint expand-symtabs
...
Fix this by adding the missing download of the .dwo file to the remote host.
Tested by running make-check-all.sh on x86_64-linux.
|
|
When writing the test, I copied these copyright entries from another
file but forgot to adjust the dates, do that now.
Change-Id: Ie458ad4ec81062b5ef24f78334f3d0920c99b318
|
|
With a gdb 15.2 based package and test-case
gdb.threads/infcall-from-bp-cond-simple.exp, I ran into:
...
Thread 2 "infcall-from-bp" hit Breakpoint 3, function_with_breakpoint () at \
infcall-from-bp-cond-simple.c:51
51 return 1; /* Nested breakpoint. */
Error in testing condition for breakpoint 2:
The program being debugged stopped while in a function called from GDB.
Evaluation of the expression containing the function
(function_with_breakpoint) will be abandoned.
When the function is done executing, GDB will silently stop.
[Thread 0x7ffff73fe6c0 (LWP 951822) exited]
(gdb) FAIL: $exp: target_async=on: target_non_stop=on: \
run_bp_cond_hits_breakpoint: continue
...
The test fails because it doesn't expect the "[Thread ... exited]" message.
I have tried to reproduce this test failure, both using 15.2 and current
trunk, but haven't managed.
Regardless, I think the message is harmless, so allow it to occur, both in
run_bp_cond_segfaults and run_bp_cond_hits_breakpoint.
Tested on x86_64-linux.
PR testsuite/32785
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32785
|
|
On riscv64-linux, with test-case gdb.base/vla-optimized-out.exp I ran into:
...
(gdb) p sizeof (a)^M
$2 = <optimized out>^M
(gdb) FAIL: $exp: o1: printed size of optimized out vla
...
The variable a has type 0xbf:
...
<1><bf>: Abbrev Number: 12 (DW_TAG_array_type)
<c0> DW_AT_type : <0xe3>
<c4> DW_AT_sibling : <0xdc>
<2><c8>: Abbrev Number: 13 (DW_TAG_subrange_type)
<c9> DW_AT_type : <0xdc>
<cd> DW_AT_upper_bound : 13 byte block:
a3 1 5a 23 1 8 20 24 8 20 26 31 1c
(DW_OP_entry_value: (DW_OP_reg10 (a0));
DW_OP_plus_uconst: 1; DW_OP_const1u: 32;
DW_OP_shl; DW_OP_const1u: 32; DW_OP_shra;
DW_OP_lit1; DW_OP_minus)
...
which has an upper bound using a DW_OP_entry_value, and since the
corresponding call site contains no information to resolve the value of a0 at
function entry:
...
<2><6b>: Abbrev Number: 6 (DW_TAG_call_site)
<6c> DW_AT_call_return_pc: 0x638
<74> DW_AT_call_origin : <0x85>
...
evaluating the dwarf expression fails, and we get <optimized out>.
My first thought was to try breaking at *f1 instead of f1 to see if that would
help, but actually the breakpoint resolved to the same address.
In other words, the inferior is stopped at function entry.
Fix this by resolving DW_OP_entry_value when stopped at function entry by
simply evaluating the expression.
This handles these two cases (x86_64, using reg rdi):
- DW_OP_entry_value: (DW_OP_regx: 5 (rdi))
- DW_OP_entry_value: (DW_OP_bregx: 5 (rdi) 0; DW_OP_deref_size: 4)
Tested on x86_64-linux.
Tested gdb.base/vla-optimized-out.exp on riscv64-linux.
Tested an earlier version of gdb.dwarf2/dw2-entry-value-2.exp on
riscv64-linux, but atm I'm running into trouble on that machine (cfarm92) so
I haven't tested the current version there.
|
|
Commit 71a48752660b ("gdb/dwarf: remove create_dwo_cu_reader")
introduced a regression when handling files compiled with "-gsplit-dwarf
-fdebug-types-section" (at least with clang):
$ cat test.cpp
#include <vector>
int main()
{
  std::vector<int> v;
  return v.size ();
}
$ clang++ -O0 test.cpp -g -gdwarf-5 -gsplit-dwarf -fdebug-types-section -o test
$ ./gdb -nx -q --data-directory=data-directory ./test -ex "maint expand-symtabs"
Reading symbols from ./test...
/home/smarchi/src/binutils-gdb/gdb/dwarf2/read.c:6159: internal-error: setup_type_unit_groups: Assertion `per_cu->is_debug_types' failed.
In the main file, we have a skeleton CU with a certain DWO ID:
0x00000000: Compile Unit: ..., unit_type = DW_UT_skeleton, ..., DWO_id = 0x146eaa4daf5deef2, ...
In the .dwo file, the first unit is a type unit with a certain type
signature:
0x00000000: Type Unit: ..., unit_type = DW_UT_split_type, ..., type_signature = 0xb499dcf29e2928c4, ...
and the split compile unit matching the DWO ID from the skeleton from
the main file comes later:
0x0000117f: Compile Unit: ..., unit_type = DW_UT_split_compile, ..., DWO_id = 0x146eaa4daf5deef2, ...
The problem introduced by the aforementioned commit is that when
creating a dwo_unit structure representing the type unit, we use the
signature (DWO id) from the skeleton, instead of the signature from the
type unit's header. As a result, all dwo_units get created with the
same signature (the DWO id) and only the first unit gets inserted in the
hash table. When looking up the comp unit by DWO ID later on, we
wrongly find the type unit, and try to expand a type unit as a comp
unit, hitting the assert.
Before that commit, we passed `reader.cu ()` to lookup_dwo_id, which
yields a dwarf2_cu built from parsing the type unit's header. This
dwarf2_cu contains the comp_unit_header with the correct signature. Fix
the code to use `reader.cu ()` again.
Another thing that enables this bug is the fact that since DWARF 5, type
and compile units are all in .debug_info, and therefore read by
create_cus_hash_table, so they both end up in dwo_file::cus. Type units
should end up in dwo_file::tus, otherwise they won't be found by
lookup_dwo_cutu. This bug hasn't given me trouble so far, so I'm not
fixing it right now, but it's on my todo list.
The problem can be seen with some tests, when using the
dwarf5-fission-debug-types board:
$ make check TESTS="gdb.cp/expand-sals.exp" RUNTESTFLAGS="--target_board=dwarf5-fission-debug-types CC_FOR_TARGET=clang CXX_FOR_TARGET=clang++"
Running /home/simark/src/binutils-gdb/gdb/testsuite/gdb.cp/expand-sals.exp ...
FAIL: gdb.cp/expand-sals.exp: gdb_breakpoint: set breakpoint at main (GDB internal error)
But this patch also adds a DWARF assembler-based test that triggers the
internal error.
Note that the new test does not use the build_executable_and_dwo_files
proc, because I found that it is subtly broken and doesn't work to put
multiple units in a single .dwo file. The debug abbrev offset field in
the second unit's header would be 0, when it should have been something
else. The problem is that no linking is ever done to generate the .dwo
file, so the relocation that would apply for this field is never
applied. Instead, I generate two DWARF debug infos separately and link
the .dwo file using gdb_compile, it seems to work fine.
Change-Id: I96f809c56f703e25f72b8622c32e6bb91de20d6a
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Fix what looks like a copy paste error resulting in the wrong abbrev
section name. The resulting section name in my test was
".debug_info.dwo.dwo", when it should have been ".debug_abbrev.dwo".
Change-Id: I82166d8ac6eaf3c3abc15d2d2949d00c31fe79f4
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add support to the DWARF assembler to generate DWARF 5 split compile
units. The assembler knows how to generate DWARF < 5 split compile
units (fission), DWARF 5 compile units, but not DWARF 5 split compile
units. What's missing is:
- using the right unit type in the header: skeleton for the unit in the
main file and split_compile for the unit in the DWO file
- having a way for the caller to specify the DWO ID that will end up in
the unit header
Add a dwo_id parameter to the cu proc. In addition to specifying the
DWO ID, the presence of this parameter tells the assembler to use the
skeleton or split_compile unit type.
This is used in a subsequent patch.
Change-Id: I05d9b189a0843ea6c2771b1d5e5a91762426dea9
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I'm currently fixing bugs and performance issues when GDB encounters
this particular configuration. Since split DWARF + type units makes GDB
take some code paths not taken by any other board files, I think it
deserves to be its own board file. One particularity is that the
produced .dwo files have a .debug_info.dwo section that contains some
type units, in addition to the compile unit.
Add that board to make-check-all.sh.
Change-Id: I245e6f600055a27e0c31f1a4a9af1f68292fe18c
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This updates the copyright headers to include 2025. I did this by
running gdb/copyright.py and then manually modifying a few files as
noted by the script.
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
and move it from gdb.base to gdb.arch as it's a target specific test.
Reviewed-by: Maciej W. Rozycki <macro@redhat.com>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Consider the following scenario:
...
$ cat hello
int
main (void)
{
printf ("hello\n");
return 0;
}
$ gcc -x c hello -g
$ gdb -q -iex "maint set gnu-source-highlight enabled off" a.out
Reading symbols from a.out...
(gdb) start
Temporary breakpoint 1 at 0x4005db: file hello, line 6.
Starting program: /data/vries/gdb/a.out
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Temporary breakpoint 1, main () at hello:6
6 printf ("hello\n");
...
This doesn't produce highlighting for line 6, because:
- pygments is used for highlighting instead of source-highlight, and
- pygments guesses the language for highlighting only based on the filename,
which in this case doesn't give a clue.
Fix this by:
- adding a language parameter to the extension_language_ops.colorize interface,
- passing the language as found in the debug info, and
- using it in gdb.styling.colorize to pick the pygments lexer.
The new test-case gdb.python/py-source-styling-2.exp exercises a slightly
different scenario: it compiles a c++ file with a .c extension, and checks
that c++ highlighting is done instead of c highlighting.
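The underlying pygments behaviour can be seen outside GDB with a few
lines (a standalone sketch):
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import get_lexer_by_name, get_lexer_for_filename
from pygments.util import ClassNotFound

source = 'printf ("hello\\n");\n'

# Guessing a lexer from the extension-less file name "hello" fails...
try:
    lexer = get_lexer_for_filename("hello")
except ClassNotFound:
    lexer = None

# ...whereas the language recorded in the debug info ("c") picks the
# right lexer directly.
if lexer is None:
    lexer = get_lexer_by_name("c")
print(highlight(source, lexer, TerminalFormatter()))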
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR cli/30966
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30966
|
|
GDB has had basic support for linkage namespaces for some time already,
but only in the sense of managing multiple copies of the same shared
object being loaded, and a very fragile way to find the correct copy of
a symbol (see PR shlibs/32054).
This commit is the first step in improving the user experience around
multiple namespace support. It introduces a user-friendly identifier for
namespaces, in the format [[<number>]], that will keep consistent between
dlmopen and dlclose calls. The plan is for this identifier to be usable
in expressions like `print [[1]]::var` to find a specific instance of a
symbol, and so the identifier must not be a valid C++ or Ada namespace
identifier, otherwise disambiguation becomes a problem. Support for
those expressions has not been implemented yet, it is only mentioned to
explain why the identifier looks like this.
This syntax was chosen based on the C attribute syntax, since nothing in GDB
uses a similar syntax that could confuse users. Other syntax options
that were explored were "#<number>" and "@<number>". The former was
abandoned because when printing a frame, the frame number is also
printed with #<number>, so in a lot of the contexts in which the
identifier would show up, it appears in a confusing way. The latter
clashes with the array printing syntax, and I believe that having
"@N::foo" working completely differently to "foo@2" would also lead to a
bad user experience.
The namespace identifiers are stored via a vector inside the svr4_info
object. The vector stores the address of the r_debug objects used by
glibc to identify each namespace, and the user-friendly ID is the index
of the r_debug in the vector. This commit also introduces a set storing
the indices of active namespaces. The glibc I used to develop this patch
(glibc 2.40 on Fedora 41) doesn't allow an SO to be loaded into a
deactivated namespace, and requesting a new namespace when a namespace
was previously closed will reuse that namespace. Because of how this is
implemented, this patch lets GDB easily track the exact namespace IDs
that the inferior will see.
Finally, two new solib_ops function pointers were added, find_solib_ns
and num_active_namespaces, to allow code outside of solib-svr4 to find
and use the namespace identifiers and the number of namespaces,
respectively. As a sanity check, the command `info sharedlibrary` has
been changed to display the namespace identifier when the inferior has
more than one active namespace. With this final change, a couple of tests
had to be tweaked to handle the possible new column, and a new test has
been created to make sure that the column appears and disappears as
needed, and that GDB can track the value of the LMID for namespaces.
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
In commit af2b87e649b ("[gdb/testsuite] Add xfail for PR gcc/101633"), I added
an xfail that was controlled by variable old_gcc, triggering the xfail for
gcc 7 and before, but not for gcc 8 onwards:
...
set old_gcc [expr [test_compiler_info {gcc-[0-7]-*}]]
...
In commit 1411185a57e ("Introduce and use gnat_version_compare"), this changed
to:
...
set old_gcc [gnat_version_compare <= 7]
...
which still triggered the xfail for gcc 7, because of a bug in
gnat_version_compare.
After that bug got fixed, the xfail was no longer triggered because the gnatmake
version is 7.5.0, and [version_compare {7 5 0} <= {7}] == 0.
We could have the semantics for version_compare where we clip the input
arguments to the length of the shortest, and so we'd have
[version_compare {7 5 0} <= {7}] == [version_compare {7} <= {7}] == 1.
But let's stick with the current version-sort semantics, and fix this by
using [gnat_version_compare < 8] instead.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a test-case gdb.testsuite/version-compare.exp that exercises proc
version_compare, and a note to proc version_compare that it considers
v1 < v1.0 instead of v1 == v1.0.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Compile a 32-bit x86 executable and then stop within a system call.
Change the sysroot to a non-existent directory; GDB should try (and
fail) to reload the currently loaded shared libraries. However, GDB
should retain the symbols for the vDSO library as that is not loaded
from the file system.
Check the backtrace to ensure that the __kernel_vsyscall symbol is
still in the backtrace; this indicates GDB still has the vDSO
symbols available.
This test was present in Fedora for a long time and was
originally written by Jan Kratochvil for this fix
829a902da291e72ad17e8c44fa8d9ead3db41b1f.
Co-Authored-By: Jan Kratochvil <jan.kratochvil@redhat.com>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
Tom de Vries pointed out that my earlier change to
gnat_version_compare made it actually test gcc's version -- not
gnat's.
This patch changes gnat_version_compare to examine gnatmake's version,
while preserving the nicer API.
Approved-By: Tom de Vries <tdevries@suse.de>
|