|
This patch adds support for process recording of the rdtscp instruction
on the x86 architecture.
Previously, debugging applications with "record full" failed to record, with
the error message "Process record does not support instruction 0xf01f9".
Approved-by: Guinevere Larsen <blarsen@redhat.com>
|
|
The test gdb.base/watchpoint.exp has a proc named 'test_stepping'
which claims to "Test stepping and other mundane operations with
watchpoints enabled". It sets a watchpoint on ival2, performs an
inferior function call (which is not at all mundane), and uses 'next',
'until', and, finally, does a 'step'.
However, that final 'step' command steps to (but not over/through) the
line at which the assignment to ival2 takes place. At no time while
performing these operations is a watchpoint hit.
This commit adds a test to see what happens when stepping over/through
the assignment to ival2. The watchpoint on ival2 should be triggered
during this step. I've added another 'step' to make sure that the
correct statement is reached after performing the watchpoint-hitting
step.
After running the 'test_stepping' proc, gdb.base/watchpoint.exp does
a clean_restart before doing further tests, so nothing depends upon
'test_stepping' to stop at the particular statement at which it had
been stopping.
I've examined all tests which set watchpoints and step. I haven't
been able to identify a(nother) test case which tests what happens
when stepping over/through a statement which triggers a watchpoint.
Therefore, adding these new 'step' tests is testing something which
hasn't been tested elsewhere.
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
|
|
I noticed that the DWARF assembler starts abbrevs at 2.
I think 1 should be preferred.
Co-Authored-By: Tom de Vries <tdevries@suse.de>
|
|
Changes introduced by commit 9e8915c6cee5c37637521b424d723e990e06d597
caused a regression that meant hardware watchpoint stops would not
trigger in reverse execution or replay mode. This was documented in
PR breakpoints/21969.
The problem is that record_check_stopped_by_breakpoint always overwrites
record_full_stop_reason, thus losing the TARGET_STOPPED_BY_WATCHPOINT
value which would be checked afterwards.
This commit fixes that by not overwriting the stop-reason in
record_full_stop_reason if we're not stopped at a breakpoint.
And the test for hw watchpoints in gdb.reverse/watch-reverse.exp actually
tested sw watchpoints again, since "set can-use-hw-watchpoints 1"
doesn't convert enabled watchpoints to use hardware.
This is fixed by disabling said watchpoint while enabling hw watchpoints.
The same is not done for gdb.reverse/watch-precsave.exp, since it's not
possible to use hw watchpoints in restored recordings anyway.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=21969
Approved-by: Guinevere Larsen <blarsen@redhat.com>
|
|
This reverts commit 1c04f72368c ("[gdb/symtab] Fix assert in set_length"), due
to a regression reported in PR29572, and implements a different fix for PR29453.
The fix is to not use the CU table in a .debug_names section to construct
all_units, but instead use create_all_units, and then verify the CU
table from .debug_names. This also fixes PR25969, so remove the KFAIL.
Approved-By: Tom Tromey <tom@tromey.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29572
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=25969
|
|
Commit 33ae45434d0 updated the text reported by GDB when showing the
number of worker threads. However, it neglected to update the assertions
using this text, which caused index-file.exp to fail. This commit
corrects this omission.
Tested that index-file.exp is fixed on my local machine.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In the commit:
commit 4793f551a5aa68522fd5fbbb7e8f621148f410cd
Date: Mon Nov 27 13:33:17 2023 +0000
gdb: allow use of ~ in 'save gdb-index' command
I added a test which has a directory name within the GDB command,
which then appears in the test name as I failed to give the test a
better name.
Fixed in this commit.
|
|
This commit enables the early initialization commands (92e4e97a9f5) to
modify the number of threads used by gdb's thread pool.
The motivation here is to prevent gdb from spawning a detrimental number
of threads on many-core systems under environments with restrictive
ulimits.
With gdb before this commit, the thread pool size evolves as follows:
1. Thread pool size is initialized to 0.
2. After the maintenance commands are defined, the thread pool size is
set to the number of system cores (if it has not already been set).
3. Using early initialization commands, the thread pool size can be
changed using "maint set worker-threads".
4. After the first prompt, the thread pool size can be changed as in the
previous step.
Therefore, after step 2, gdb has potentially launched hundreds of threads
on a many-core system.
After this change, steps 2 and 3 are reversed, so there is an opportunity
to set the required number of threads without needing to default to the
number of system cores first.
There does exist a configure option (added in 261b07488b9) to disable
multithreading, but this does not allow for an already deployed gdb to
be configured.
Additionally, the default number of worker threads is clamped at eight
to control the number of worker threads spawned on many-core systems.
This value was chosen as testing recorded on bugzilla issue 29959
indicates that parallel efficiency drops past this point.
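As a rough illustration of the clamp described above (a sketch only; GDB
implements this in C++, not Python):
import os
# Default worker-thread count: number of system cores, clamped to eight.
default_worker_threads = min(os.cpu_count() or 1, 8)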
GDB built with GCC 13.
No test suite regressions detected. Compilers: GCC, ACfL, Intel, Intel
LLVM, NVHPC; Platforms: x86_64, aarch64.
The scenario that interests me the most involves preventing GDB from
spawning any worker threads at all. This was tested by counting the
number of clones observed by strace:
strace -e clone,clone3 gdb/gdb -q \
--early-init-eval-command="maint set worker-threads 0" \
-ex q ./gdb/gdb |& grep --count clone
The new test relies on "gdb: install CLI uiout while processing early
init files" developed by Andrew Burgess. This patch will need pushing
prior to this change.
The clamping was tested on machines with both 16 cores and a single
core. "maint show worker-threads" correctly reported eight and one
respectively.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that after resizing to a narrow window, I got:
...
┌────────────────┐
│ │
│[ No Source Avail
able ] │
│ │
└────────────────┘
...
Fix this by adding two new functions:
- tui_win_info::display_string (int y, int x, const char *str)
- tui_win_info::display_string (const char *str)
that make sure that borders are not overwritten, giving us instead:
...
┌────────────────┐
│ │
│[ No Source Avai│
│ │
│ │
└────────────────┘
...
Tested on x86_64-linux.
|
|
When using GDB on native Linux, it can happen that, while attempting
to detach an inferior, the inferior has already exited or been killed,
yet is still in the list of lwps.  Should that happen, the
assert in x86_linux_update_debug_registers in
gdb/nat/x86-linux-dregs.c will trigger. The line in question looks
like this:
gdb_assert (lwp_is_stopped (lwp));
For this case, the lwp isn't stopped - it's dead.
The bug which brought this problem to my attention is one in which the
pwntools library uses GDB to debug a process; as the script is
shutting things down, it kills the process that GDB is debugging and
also sends GDB a SIGTERM signal, which causes GDB to detach all
inferiors prior to exiting. Here's a link to the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=2192169
The following shell command mimics part of what the pwntools
reproducer script does (with regard to shutting things down), but
reproduces the bug much less reliably. I have found it necessary to
run the command a bunch of times before seeing the bug. (I usually
see it within 5-10 repetitions.) If you choose to try this command,
make sure that you have no running "cat" or "gdb" processes first!
cat </dev/zero >/dev/null & \
(sleep 5; (kill -KILL `pgrep cat` & kill -TERM `pgrep gdb`)) & \
sleep 1 ; \
gdb -q -iex 'set debuginfod enabled off' -ex 'set height 0' \
-ex c /usr/bin/cat `pgrep cat`
So, basically, the idea here is to kill both gdb and cat at roughly
the same time. If we happen to attempt the detach before the process
lwp has been deleted from GDB's (linux native) LWP data structures,
then the assert will trigger. The relevant part of the backtrace
looks like this:
#8 0x00000000008a83ae in x86_linux_update_debug_registers (lwp=0x1873280)
at gdb/nat/x86-linux-dregs.c:146
#9 0x00000000008a862f in x86_linux_prepare_to_resume (lwp=0x1873280)
at gdb/nat/x86-linux.c:81
#10 0x000000000048ea42 in x86_linux_nat_target::low_prepare_to_resume (
this=0x121eee0 <the_amd64_linux_nat_target>, lwp=0x1873280)
at gdb/x86-linux-nat.h:70
#11 0x000000000081a452 in detach_one_lwp (lp=0x1873280, signo_p=0x7fff8ca3441c)
at gdb/linux-nat.c:1374
#12 0x000000000081a85f in linux_nat_target::detach (
this=0x121eee0 <the_amd64_linux_nat_target>, inf=0x16e8f70, from_tty=0)
at gdb/linux-nat.c:1450
#13 0x000000000083a23b in thread_db_target::detach (
this=0x1206ae0 <the_thread_db_target>, inf=0x16e8f70, from_tty=0)
at gdb/linux-thread-db.c:1385
#14 0x0000000000a66722 in target_detach (inf=0x16e8f70, from_tty=0)
at gdb/target.c:2526
#15 0x0000000000a8f0ad in kill_or_detach (inf=0x16e8f70, from_tty=0)
at gdb/top.c:1659
#16 0x0000000000a8f4fa in quit_force (exit_arg=0x0, from_tty=0)
at gdb/top.c:1762
#17 0x000000000070829c in async_sigterm_handler (arg=0x0)
at gdb/event-top.c:1141
My colleague, Andrew Burgess, has done some recent work on other
problems with detach.  Upon hearing of this problem, he came up with a test
case which reliably reproduces the problem and tests for a few other
problems as well. In addition to testing detach when the inferior has
terminated due to a signal, it also tests detach when the inferior has
exited normally. Andrew observed that the linux-native-only
"checkpoint" command would be affected too, so the test also tests
those cases when there's an active checkpoint.
For the LWP exit / termination case with no checkpoint, that's handled
via newly added checks of the waitstatus in detach_one_lwp in
linux-nat.c.
For the checkpoint detach problem, I chose to pass the lwp_info
to linux_fork_detach in linux-fork.c. With that in place, suitable
tests were added before attempting a PTRACE_DETACH operation.
I added a few asserts at the beginning of linux_fork_detach and
modified the caller code so that the newly added asserts shouldn't
trigger. (That's what the 'pid == inferior_ptid.pid' check is about
in gdb/linux-nat.c.)
Lastly, I'll note that the checkpoint code needs some work with regard
to background execution. This patch doesn't attempt to fix that
problem, but it doesn't make it any worse. It does slightly improve
the situation with detach because, due to the check noted above,
linux_fork_detach() won't be called for the wrong inferior when there
are multiple inferiors. (There are at least two other problems with
the checkpoint code when there are multiple inferiors. See:
https://sourceware.org/bugzilla/show_bug.cgi?id=31065)
This commit also adds a new test,
gdb.base/process-dies-while-detaching.exp. Andrew Burgess is the
primary author of this test case. Its design is similar to that of
gdb.threads/main-thread-exit-during-detach.exp, which was also written
by Andrew.
This test checks that GDB correctly handles several cases that can
occur when GDB attempts to detach an inferior process. The process
can exit or be terminated (e.g. via SIGKILL) prior to GDB's event
loop getting a chance to remove it from GDB's internal data
structures. To complicate things even more, detach works differently
when a checkpoint (created via GDB's "checkpoint" command) exists for
the inferior. This test checks all four possibilities: process exit
with no checkpoint, process termination with no checkpoint, process
exit with a checkpoint, and process termination with a checkpoint.
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
On Linux, threads are treated much like separate processes by the
kernel. In particular, it's possible to ptrace just a single thread.
If gdb tries to attach to a multi-threaded inferior, where a non-main
thread is already being traced (e.g., by strace), then gdb will get
into an infinite loop attempting to attach.
This patch fixes this problem by having the attach fail if ptrace
fails to attach to any thread of the inferior.
|
|
Commit a3da2e7e550c4fe79128b5e532dbb90df4d4f418 introduced regressions
when testing using the READ1 mechanism.  The reason for that
is the new failure path in proc test_gdb_complete_tab_unique, which
looks for GDB suggesting more than what the test inputted, but not the
correct answer, followed by a white space. Consider the following case:
int foo(int bar, int baz);
Sending the command "break foo<tab>" to GDB will return
break foo(int, int)
which easily fits the buffer in normal testing, so everything works, but
when reading one character at a time, the test will find the partial
"break foo(int, " and assume that there was a mistake, so we get a
spurious FAIL.
That change was added because we wanted to avoid forcing a completion
failure to fail through timeout, which it had to do because there is no
way to verify that the output is done, mostly because when I was trying
to solve a different problem I kept getting reading errors and testing
completion was frustrating.
This commit implements a better way to avoid that frustration, by first
testing gdb's complete command, and only if that passes do we test tab
completion.  The difference is that when testing with the complete
command, we can tell when the output is over when we receive the GDB
prompt again, so we don't need to rely on timeouts. With this, the
change to test_gdb_complete_tab_unique has been removed as that test
will only be run and fail in the very unlikely scenario that tab
completion is different than command completion.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This commit makes the gdb.Command.complete methods more verbose when
it comes to error handling.
Prior to this commit, if any command implemented in Python provided a
complete method, and if there were any errors encountered when calling
that complete method, then GDB would silently hide the error and
continue as if there were no completions.
This makes it difficult to debug any errors encountered when writing
completion methods, and encourages the idea that Python extensions can
be broken, and GDB will just silently work around them.
I don't think this is a good idea. GDB should encourage extensions to
be written correctly, and robustly, and one way in which GDB can (I
think) support this, is by pointing out when an extension goes wrong.
In this commit I've gone through the Python command completion code,
and added calls to gdbpy_print_stack() or gdbpy_print_stack_or_quit()
in places where we were either clearing the Python error, or, in some
cases, just not handling the error at all.
One thing I have not changed is in cmdpy_completer (py-cmd.c) where we
process the list of completions returned from the Command.complete
method; this routine includes a call to gdbpy_is_string to check a
possible completion is a string, if not the completion is ignored.
I was tempted to remove this check, attempt to complete each result to
a string, and display an error if the conversion fails. After all,
returning anything but a string is surely a mistake by the extension
author.
However, the docs clearly say that only strings within the returned
list will be considered as completions. Anything else is ignored. As
such, and to avoid (what I think is pretty unlikely) breakage of
existing code, I've retained the gdbpy_is_string check.
After the gdbpy_is_string check we call python_string_to_host_string,
if this call fails then I do now print the error, where before we
ignored the error. I think this is OK; if GDB thinks something is a
string, but still can't convert it to a string, then I think it's OK
to display the error in that case.
Another case which I was a little unsure about was in
cmdpy_completer_helper, and the call to PyObject_CallMethodObjArgs,
which is when we actually call Command.complete. Previously, if this
call resulted in an exception then we would ignore this and just
pretend there were no completions.
Of all the changes, this is possibly the one with the biggest
potential for breaking existing scripts, but also, is, I think, the
most useful change. If the user code is wrong in some way, such that
an exception is raised, then previously the user would have no obvious
feedback about this breakage. Now GDB will print the exception for
them, making it, I think, much easier to debug their extension. But,
if there is user code in the wild that relies on raising an exception
as a means to indicate there are no completions .... well, that code
is going to break after this commit.  I think we can live with this
though; the exception-means-no-completions thing was never documented
behaviour.
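As a hedged illustration (this is not code from the patch; the command
name is made up), a completer like the following used to silently yield
no completions, but will now cause GDB to print the Python traceback:
import gdb

class BrokenCompleteCommand(gdb.Command):
    """Example user command whose complete method raises an exception."""

    def __init__(self):
        super().__init__("broken-complete", gdb.COMMAND_USER)

    def invoke(self, argument, from_tty):
        pass

    def complete(self, text, word):
        # Before this commit, the exception below was swallowed and GDB
        # behaved as if there were no completions; now the traceback is
        # printed and the completion attempt is aborted.
        raise RuntimeError("completer is broken")

BrokenCompleteCommand()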
I also added a new error() call if the PyObject_CallMethodObjArgs call
raises an exception. This causes the completion mechanism within GDB
to stop. Within GDB the completion code is called twice, the first
time to compute the work break characters, and then a second time to
compute the actual completions.
If PyObject_CallMethodObjArgs raises an exception when computing the
word break character, and we print it by calling
gdbpy_print_stack_or_quit(), but then carry on as if
PyObject_CallMethodObjArgs had returned no completions, GDB will
call the Python completion code again, which results in another call
to PyObject_CallMethodObjArgs, which might raise the same exception
again. This results in the Python exception being printed twice.
By throwing a C++ exception after the failed
PyObject_CallMethodObjArgs call, the completion mechanism is aborted,
and no completions are offered. But importantly, the Python exception
is only printed once. I think this gives a much better user
experience.
I've added some tests to cover this case, as I think this is the most
likely case that a user will run into.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I spotted I made a small mistake in this commit:
commit aff250145af6c7a8ea9332bc1306c1219f4a63db
Date: Fri Nov 24 12:04:36 2023 +0000
gdb: generate gdb-index identically regardless of work thread count
In this commit I added a new proc in testsuite/lib/gdb.exp called
gdb_get_worker_threads. This proc uses gdb_test_multiple with two
possible patterns. One pattern is anchored with '^', while the other
is missing the '^' which it could use.
This commit adds the missing '^'.
|
|
DAP specifies a "process" event that is sent when a process is started
or attached to. gdb was not emitting this (several DAP clients appear
to ignore it entirely), but it looked easy and harmless to implement.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30473
|
|
The make-check-all.sh script (gdb/testsuite/make-check-all.sh) is
great: it makes it super easy to run some test(s) using all the
available board files.
This commit aims to make this script even easier to access by adding a
check-all-boards target to the GDB Makefile. This new target checks
for (and requires) a number of environment variables, so the target
should be used like this:
make check-all-boards GDB_TARGET_USERNAME=remote-target \
GDB_HOST_USERNAME=remote-host \
TESTS="gdb.base/break.exp"
Where GDB_TARGET_USERNAME and GDB_HOST_USERNAME are the user names
that should be passed to the make-check-all.sh --target-user and
--host-user command line options respectively.
My personal intention is to set these variables in my environment, so
all I'll need to do is:
make check-all-boards TESTS="gdb.base/break.exp"
The make rule always passes --keep-results to the make-check-all.sh
script, as I find that the most useful. It's super frustrating to run
the tests and realise you forgot that option and the results have been
discarded.
|
|
I have been making more use of the make-check-all.sh script to run
tests against all boards.
But one thing is pretty annoying. When a test fails on some random
board, I have to run make-check-all.sh with --verbose and --dry-run in
order to see what RUNTESTFLAGS I should be using.
I always run with --keep-results on, so, in this commit, I propose
that, when --keep-results is on, the 'make check' command will be
written out to a file within the stored results directory, like:
check-all/BOARD_NAME/make-check.sh
then, if I want to rerun a test, I can just:
sh check-all/BOARD_NAME/make-check.sh
and the test will be re-run for me.
|
|
Similar to the previous commit, this commit ensures that the dwarf-5
index files are generated identically as the number of worker-threads
changes.
Building the dwarf-5 index makes use of a closed hash table, the
bucket_hash local within debug_names::build(). Entries are added to
bucket_hash from m_name_to_value_set, which, in turn, is populated
by calls to debug_names::insert() in write_debug_names. The insert
calls are ordered based on the entries within the cooked_index, and
the ordering within cooked_index depends on the number of worker
threads that GDB is using.
My proposal is to sort each chain within the bucket_hash closed hash
table prior to using this to build the dwarf-5 index.
The buckets within bucket_hash will always have the same ordering (for
a given GDB build with a given executable), and by sorting the chains
within each bucket, we can be sure that GDB will see each entry in a
deterministic order.
I've extended the index creation test to cover this case.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
It was observed that changing the number of worker threads that GDB
uses (maintenance set worker-threads NUM) would have an impact on the
layout of the generated gdb-index.
The cause seems to be how the CUs are distributed between threads,
then symbols that appear in multiple CU can be encountered earlier or
later depending on whether a particular CU moves between threads.
I certainly found this behaviour was reproducible when generating an
index for GDB itself, like:
gdb -q -nx -nh -batch \
-eiex 'maint set worker-threads NUM' \
-ex 'save gdb-index /tmp/'
And then setting different values for NUM will change the generated
index.
Now, the question is: does this matter?
I would like to suggest that yes, this does matter. At Red Hat we
generate a gdb-index as part of the build process, and we would
ideally like to have reproducible builds: for the same source,
compiled with the same tool-chain, we should get the exact same output
binary. And we do .... except for the index.
Now we could simply force GDB to only use a single worker thread when
we build the index, but, I don't think the idea of reproducible builds
is that strange, so I think we should ensure that our generated
indexes are always reproducible.
To achieve this, I propose that we add an extra step when building the
gdb-index file. After constructing the initial symbol hash table
contents, we will pull all the symbols out of the hash, sort them,
then re-insert them in sorted order. This will ensure that the
structure of the generated hash will remain consistent (given the same
set of symbols).
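A rough sketch of that extra step, in Python pseudocode rather than
GDB's actual C++ (the hash-table API used here is invented for
illustration):
def make_index_deterministic(symbol_hash):
    # Pull every symbol out of the hash table, sort by name, and re-insert
    # in sorted order, so the final bucket layout depends only on the set
    # of symbols, not on the order the worker threads produced them.
    symbols = sorted(symbol_hash.drain(), key=lambda sym: sym.name)
    for sym in symbols:
        symbol_hash.insert(sym)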
I've extended the existing index-file test to check that the generated
index doesn't change if we adjust the number of worker threads used.
Given that this test is already rather slow, I've only made one change
to the worker-thread count. Maybe this test should be changed to use
a smaller binary, which is quicker to load, and for which we could
then try many different worker thread counts.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed in passing that our algorithm for generating the gdb-index
file is incorrect.  When building the hash table in add_index_entry we
count every incoming entry and rehash when the number of entries gets too
large. However, some of the incoming entries will be duplicates,
which don't actually result in new items being added to the hash
table.
As a result, we grow the gdb-index hash table far too often.
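The fix, shown here as a small Python sketch (GDB's real code is C++;
the class below is invented for illustration), is to let only genuinely
new entries count towards the load factor that triggers a rehash:
class IndexHashSketch:
    """Toy model of the gdb-index hash table growth policy."""

    def __init__(self):
        self.capacity = 1024      # minimum table size used by gdb-index
        self.entries = {}         # stands in for the open-addressed table

    def add_index_entry(self, name, value):
        is_new = name not in self.entries
        self.entries.setdefault(name, []).append(value)
        # Only entries that actually added a new slot count; duplicates
        # must not push the table towards the 75%-full growth threshold.
        if is_new and len(self.entries) * 4 > self.capacity * 3:
            self.capacity *= 2    # grow by doubling, as described below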
With an unmodified GDB, generating a gdb-index for GDB, I see a file
size of 90M, with a hash usage (in the generated index file) of just
2.6%.
With a patched GDB, generating a gdb-index for the _same_ GDB binary,
I now see a gdb-index file size of 30M, with a hash usage of 41.9%.
This is a 67% reduction in gdb-index file size.
Obviously, not every gdb-index file is going to see such big savings,
however, the larger a program, and the more symbols that are
duplicated between compilation units, the more GDB would over count,
and so, over-grow the index.
The gdb-index hash table we create has a minimum size of 1024, and
then we grow the hash when it is 75% full, doubling the hash table at
that time. Given this, then we expect that either:
a. The hash table is size 1024, and less than 75% full, or
b. The hash table is between 37.5% and 75% full.
I've included a test that checks some of these constraints -- I've not
bothered to check the upper limit, as an over-full hash table isn't
really a problem here, but if the fill percentage is less than 37.5%
then this indicates that we've done something wrong (obviously, I also
check for the 1024 minimum size).
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Split out the code that makes a copy of the GDB executable ready for
self testing into a new proc. A later commit in this series wants to
load the GDB executable into GDB (for creating an on-disk debug
index), but doesn't need to make use of the full do_self_tests proc.
There should be no changes in what is tested after this commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a call to gdb_tilde_expand in the save_gdb_index_command function;
this means that we can now do:
(gdb) save gdb-index ~/blah/
Previously this wouldn't work.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
We have a target board cc-with-gdb-index that uses the gdb-add-index script to
add a .gdb_index index to an exec.
There is however an alternative way of adding a .gdb_index: the index-cache.
Add a new target board cc-with-index-cache.
This is not superfluous for two reasons:
- there is functionality that gdb-add-index doesn't support, but the
index-cache does: the index-cache can add an index to an exec with a
.gnu_debugaltlink (note that when using the cc-with-gdb-index board this
case is quietly ignored), and
- using the index-cache is exercised in only a few test-cases, and having
this target board extends the test coverage to the entire test suite. This
is for instance relevant because the index-cache is written by a worker
thread in the background, so we can check more thoroughly for data races
(see PR symtab/30837).
Tested on x86_64-linux.
Shell script changes checked with shellcheck.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
While working on cancellation, I noticed that a DAP 'pause' request
would set the "do not emit the continue" flag. This meant that a
subsequent request that should provoke a 'continue' event would
instead suppress the event.
I then tried writing a more obvious test case for this, involving an
inferior call -- and discovered that gdb.events.cont does not fire for
an inferior call.
This patch installs a new event listener for gdb.events.inferior_call
and arranges for this to emit continue and stop events when
appropriate. It also fixes the original bug, by adding a check to
exec_and_expect_stop.
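For reference, the gdb.events.inferior_call registry used here is also
available to user Python code; a minimal sketch of observing it (not the
patch's own code):
import gdb

def log_inferior_call(event):
    # Fired before and after an inferior function call; the DAP code uses
    # a listener like this to decide when to emit continue/stop events.
    print("inferior_call event:", event)

gdb.events.inferior_call.connect(log_inferior_call)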
|
|
GDB's Python API documentation for gdb.Command.complete() says:
The 'complete' method can return several values:
* If the return value is a sequence, the contents of the
sequence are used as the completions. It is up to 'complete'
to ensure that the contents actually do complete the word. A
zero-length sequence is allowed, it means that there were no
completions available. Only string elements of the sequence
are used; other elements in the sequence are ignored.
* If the return value is one of the 'COMPLETE_' constants
defined below, then the corresponding GDB-internal completion
function is invoked, and its result is used.
* All other results are treated as though there were no
available completions.
So, returning a non-sequence, non-integer value from a complete method
should be fine; it should just be treated as though there are no
completions.
However, if I write a complete method that returns None, I see this
behaviour:
(gdb) complete completefilenone x
Python Exception <class 'TypeError'>: 'NoneType' object is not iterable
warning: internal error: Unhandled Python exception
(gdb)
This is caused because we currently assume that anything that is not
an integer must be iterable, and we call PyObject_GetIter on it.  When
this call fails, a Python exception is set, but instead of
clearing (and therefore ignoring) this exception as we do everywhere
else in the Python completion code, we instead just return with the
exception set.
In this commit I add a PySequence_Check call. If this call returns
false (and we've already checked the integer case) then we can assume
there are no completion results.
I've added a test which checks returning a non-sequence.
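For context, a completer along these lines (a minimal sketch; the test's
exact code may differ) is enough to exercise the problem:
import gdb

class CompleteFileNone(gdb.Command):
    """Command whose complete method returns None."""

    def __init__(self):
        super().__init__("completefilenone", gdb.COMMAND_USER)

    def invoke(self, argument, from_tty):
        pass

    def complete(self, text, word):
        # Per the documentation this should simply mean "no completions";
        # before this fix it instead produced an unhandled TypeError.
        return None

CompleteFileNone()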
Approved-By: Tom Tromey <tom@tromey.com>
|
|
On pinebook I ran into:
...
Running gdb.tui/tui-layout-asm-short-prog.exp ...
gdb compile failed, gdb.tui/tui-layout-asm-short-prog.S: Assembler messages:
gdb.tui/tui-layout-asm-short-prog.S:23: Error: \
junk at end of line, first unrecognized character is `,'
...
Fix this by using %progbits instead of @progbits for arm.
Approved-by: Luis Machado <luis.machado@arm.com>
Tested on x86_64-linux and pinebook.
|
|
I ran test-case gdb.python/tui-window-disabled.exp on a configuration without
python support, and ran into:
...
PASS: $exp: cleanup_properly=True: initial restart: set pagination off
UNSUPPORTED: $exp: cleanup_properly=True: couldn't restart GDB
PASS: $exp: cleanup_properly=False: initial restart: set pagination off
UNSUPPORTED: $exp: cleanup_properly=False: couldn't restart GDB
...
After looking into the test-case, I realized that this is a consequence of
!allow_python_tests.
Handle this instead by requiring allow_python_tests, such that we get the usual
and more clear:
...
UNSUPPORTED: $exp: require failed: allow_python_tests
...
Also fix a return without value in clean_restart_and_setup, which if triggered
would cause:
...
ERROR: expected boolean value but got ""
...
Tested on x86_64-linux.
|
|
When starting TUI in a terminal with 3 lines:
...
$ echo $LINES
3
$ gdb -q -tui
...
and then resizing the terminal to 2 lines, we run into a segfault.
The problem is that for the source window:
- the minimum height is 3 (the default), but
- the maximum height is only 2 because there are only 2 lines.
This discrepancy eventually leads to a call to newwin in make_window with:
...
(gdb) p height
$1 = 3
(gdb) p width
$2 = 56
(gdb) p y
$3 = -1
(gdb) p x
$4 = 0
...
which results in a nullptr.
This violates the assumption here in tui_apply_current_layout:
....
/* Get the new list of currently visible windows. */
std::vector<tui_win_info *> new_tui_windows;
applied_layout->get_windows (&new_tui_windows);
...
that get_windows only returns visible windows, which leads to tui_windows
holding a dangling pointer, which results in the segfault.
Fix this by:
- making sure get_windows only returns visible windows, and
- detecting the situation and dropping windows from the layout if
there's no room for them.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR tui/31044
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31044
|
|
When starting TUI in a terminal with 2 lines (likewise with 1 line):
...
$ echo $LINES
2
$ gdb -q -tui
...
we run into this assert in tui_apply_current_layout:
...
/* This should always be made visible by a layout. */
gdb_assert (TUI_CMD_WIN != nullptr);
...
The problem is that for the command window:
- the minimum height is 3 (the default), but
- the maximum height is only 2 because there are only 2 lines.
This discrepancy eventually leads to a call to newwin in make_window with:
...
(gdb) p height
$1 = 3
(gdb) p width
$2 = 66
(gdb) p y
$3 = -1
(gdb) p x
$4 = 0
(gdb)
...
which results in a nullptr, which eventually triggers the assert.
The easiest way to fix this is to change the minimum height of the command
window to 1. However, that would also change behaviour for the case that the
screen size is 3 lines or more. For instance, in gdb.tui/winheight.exp the
number of lines in the terminal is 24, and the test-case checks that the user
cannot increase the source window height to the point that the command window
height would be less than 3.
Fix this by calculating the minimum height of the command window as follows:
- the default (3) if max_height () allows it, and
- max_height () otherwise.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR tui/31044
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31044
|
|
The C++ type-printing code had its own variant of the accessibility
enum. This patch removes this and changes the code to use the new one
from gdbtypes.h.
This patch also changes the C++ code to recognize the default
accessibility of a class. This makes ptype a bit more C++-like, and
lets us remove a chunk of questionable code.
Acked-By: Simon Marchi <simon.marchi@efficios.com>
Reviewed-by: Keith Seitz <keiths@redhat.com>
|
|
Commit 59a561480d5 ("Fix spurious FAILs with examine-backward.exp") describes
the problem that:
...
The test case examine-backward.exp issues the command "x/-s" after the end
of the first string in TestStrings, but without making sure that this
string is preceded by a string terminator. Thus GDB may spuriously print
some random characters from before that string, and then the test fails.
...
The commit fixes the problem by adding a Barrier variable before the TestStrings
variable:
...
+const char Barrier[] = { 0x0 };
const char TestStrings[] = {
...
There is however no guarantee that Barrier is placed immediately before
TestStrings.
Before recent commit 169fe7ab54b ("Change gdb.base/examine-backwards.exp for
AIX.") on x86_64-linux, I see:
...
0000000000400660 R Barrier
0000000000400680 R TestStrings
...
So while the Barrier variable is the nearest symbol before the
TestStrings variable, it does not immediately precede TestStrings.
After commit 169fe7ab54b:
...
0000000000402259 B Barrier
0000000000402020 D TestStrings
...
they're not even in the same section anymore.
Fix this reliably by adding the zero in the array itself:
...
char TestStringsBase[] = {
0x0,
...
};
char *TestStrings = &TestStringsBase[1];
...
and do likewise for TestStringsH and TestStringsW.
Tested on x86_64-linux.
PR testsuite/31064
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31064
|
|
Function Create_large returns a large data structure. On PowerPC, register
r3 contains the address where the data structure to be returned is to
be stored.  However, on exit the ABI does not guarantee that r3 has not
been changed. The GDB finish command prints the return value of the
function at the end of the function. GDB needs to use the
DW_TAG_call_site information to determine the value of r3 on entry to
the function to correctly print the return value at the end of the
function. The test must be compiled with -fvar-tracking for the
DW_TAG_call_site information to be included in the executable file.
This patch adds the -fvar-tracking option to the compile line if the
option is supported.
The patch fixes the one regression error for the test on PowerPC.
The patch has been tested on Power 10 and X86-64 with no regressions.
|
|
Following on from this commit:
commit f2c4f78c813a9cef38b7e9c9ad18822fb9e19345
Date: Thu Sep 21 16:35:30 2023 +0100
gdb: fix reread_symbols when an objfile has target: prefix
In this commit I update reopen_exec_file to correctly handle
executables with a target: prefix. Before this commit we used the
system 'stat' call, which obviously isn't going to work for files with
a target: prefix (files located on a possibly remote target machine).
By switching to bfd_stat we will use remote fileio to stat the remote
files, which means we should now correctly detect changes in a remote
executable.
The program_space::ebfd_mtime variable, with which we compare the
result of bfd_stat is set with a call to bfd_get_mtime, which in turn
calls bfd_stat, so comparing to the result of calling bfd_stat makes
sense (I think).
As I discussed in commit f2c4f78c813a, if a BFD is an in-memory
BFD, then calling bfd_stat will always return 0, while bfd_get_mtime
will always return the time at which the BFD was created.  As a result,
comparing the results will always show the file as having changed.
I don't believe that GDB can set the main executable to an in-memory
BFD object, so, in this commit, I simply assert that the executable is
not in-memory. If this ever changes then we would need to decide how
to handle this case -- always reload, or never reload. The assert
doesn't appear to trigger for our current test suite.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Currently, when GDB is reverse stepping out of a function into the same
function due to a recursive call, it doesn't print frame information, as
reported by PR record/29178. This happens because when the inferior
leaves the current frame, GDB decides to refresh the step information,
clobbering the original step_frame_id, making it impossible to figure
out later on that the frame has been changed.
This commit changes GDB so that, if we notice we're in this exact
situation, we won't refresh the step information.
Because of implementation details, this change can cause some debug
information to be read when it normally wouldn't before, which showed up
as a regression on gdb.dwarf2/dw2-out-of-range-end-of-seq. Since that
isn't a problem, the test was changed to allow for the new output.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29178
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Hannes' patch to show local variables in the TUI pointed out that
NoOpStructPrinter should ignore static members. This patch implements
this.
|
|
DAP specifies that a request can fail with the "notStopped" message if
the inferior is running but the request requires that it first be
stopped.
This patch implements this for gdb. Most requests are assumed to
require a stopped inferior, and the exceptions are noted by a new
'request' parameter.
You may notice that the implementation is a bit racy. I think this is
inherent -- unless the client waits for a stop event before sending a
request, the request may be processed at any time relative to a stop.
https://sourceware.org/bugzilla/show_bug.cgi?id=31037
Reviewed-by: Kévin Le Gouguec <legouguec@adacore.com>
|
|
DAP specifies a StackFrameFormat object that can be used to change how
the "name" part of a stack frame is constructed. While this output
can already be produced in a nicer way (one that also lets the client
choose the formatting), it is nevertheless in the spec, so I figured I'd
implement it.
While implementing this, I discovered that the current code does not
correctly preserve frame IDs across requests. I rewrote frame
iteration to preserve this, and it turned out to be simpler to combine
these patches.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30475
|
|
compile.exp generally does not work for me on Fedora 38. However, I
sent a GCC patch to fix the plugin crash. With that patch, I get this
error from one test in compile.exp:
gdb command line:1:22: warning: initialization of 'int (*)(int)' from incompatible pointer type 'int (*)()' [-Wincompatible-pointer-types]
This patch adds a cast to compile.exp. This makes the test pass.
Reviewed-by: Keith Seitz <keiths@redhat.com>
|
|
Simon noticed that gdb.threads/threads-after-exec.exp was racy. You
can consistently reproduce it (at git hash
319b460545dc79280e2904dcc280057cf71fb753), with:
$ taskset -c 0 make check TESTS="gdb.threads/threads-after-exec.exp"
gdb.log shows:
(...)
Thread 3 "threads-after-e" hit Catchpoint 2 (exec'd .../gdb.threads/threads-after-exec/threads-after-exec), 0x00007ffff7fe3290
in _start () from /lib64/ld-linux-x86-64.so.2
(gdb) PASS: gdb.threads/threads-after-exec.exp: continue until exec
info threads
Id Target Id Frame
* 3 process 1443269 "threads-after-e" 0x00007ffff7fe3290 in _start () from /lib64/ld-linux-x86-64.so.2
(gdb) FAIL: gdb.threads/threads-after-exec.exp: info threads
(...)
maint info linux-lwps
LWP Ptid Thread ID
1443269.1443269.0 1.3
(gdb) FAIL: gdb.threads/threads-after-exec.exp: maint info linux-lwps
The FAILs happen because the .exp file expects that after the exec,
the only thread has GDB thread number 1, but it has instead 3.
This is yet another case of zombie leader detection making things a
bit fuzzy.
In the passing case, we have:
continue
Continuing.
[New Thread 0x7ffff7bff640 (LWP 603183)]
[Thread 0x7ffff7bff640 (LWP 603183) exited]
process 603180 is executing new program: .../gdb.threads/threads-after-exec/threads-after-exec
While in the failing case, we have (note remarks on the rhs):
continue
Continuing.
[New Thread 0x7ffff7bff640 (LWP 600205)]
[Thread 0x7ffff7f95740 (LWP 600202) exited] <<< gdb deletes leader thread, thread 1.
[New LWP 600202] <<< gdb adds it back -- this is now thread 3.
[Thread 0x7ffff7bff640 (LWP 600205) exited]
process 600202 is executing new program: .../threads-after-exec/threads-after-exec
The testcase only has two threads, yet GDB presented the exec for
thread 3. This is GDB deleting the leader (the backend detected it
was zombie, due to the exec), and then adding the leader back when it
saw the exec event.
I've recorded some thoughts about this in PR gdb/31069.
For now, this commit just makes the testcase cope with the non-one
thread number, as the number is not important for what this test is
exercising.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31069
Change-Id: Id80b5c73f09c9e0005efeb494cca5d066ac3bbae
|
|
This changes ada-nested.exp to fix a test name (the test expects three
variables but is named "two"), and to iterate over all the variables
that are found. It also adds a workaround to a problem Tom de Vries
found with an older version of GNAT -- it emits a duplicate "x".
|
|
'runtest' complains about a path in a test name, from the new test
case py-missing-debug.exp.
This patch fixes the problem by providing an explicit test name to
gdb_test. I chose something very basic because the block in question
is already wrapped in with_test_prefix.
|
|
A co-worker requested that the DAP scope for a nested function's frame
also show the variables from outer frames. DAP doesn't directly
support this notion, so this patch arranges to put these variables
into the inner frames "Locals" scope.
I chose to do this only for DAP. For CLI and MI, gdb currently does
not do this, so this preserves the behavior.
Note that an earlier patch (see commit 4a1311ba) removed some code
that seemed to do something similar. However, that code did not
actually work.
|
|
I ran into the following FAIL:
...
(gdb) PASS: gdb.threads/stepi-over-clone.exp: catch process syscalls
continue^M
Continuing.^M
^M
Catchpoint 2 (call to syscall clone), clone () at \
../sysdeps/unix/sysv/linux/x86_64/clone.S:78^M
warning: 78 ../sysdeps/unix/sysv/linux/x86_64/clone.S: \
No such file or directory^M
(gdb) FAIL: gdb.threads/stepi-over-clone.exp: continue
...
All but one of the regexps in the .exp file use "clone\[23\]?" with "?" to
also accept "clone"; the failing case is the exception.  This commit fixes
that case to also use "?".
Furthermore, there are FAILs like this:
...
(gdb) PASS: gdb.threads/stepi-over-clone.exp: third_thread=false: \
non-stop=on: displaced=off: i=0: continue
stepi^M
[New Thread 0x7ffff7ff8700 (LWP 15301)]^M
Hello from the first thread.^M
78 in ../sysdeps/unix/sysv/linux/x86_64/clone.S^M
(gdb) XXX: Consume the initial command
XXX: Consume new thread line
XXX: Consume first worker thread message
FAIL: gdb.threads/stepi-over-clone.exp: third_thread=false: non-stop=on: \
displaced=off: i=0: stepi
...
because this output is expected instead:
...
Hello from the first thread.^M
0x00000000004212cd in clone3 ()^M
...
The root cause for the difference is the presence of .debug_line info for
clone.
Fix this by updating the relevant regexps.
Tested on x86_64-linux, specifically:
- openSUSE Leap 15.4 (where the FAILs were observed), and
- openSUSE Tumbleweed (where the FAILs were not observed).
Co-Authored-By: Pedro Alves <pedro@palves.net>
Approved-By: Pedro Alves <pedro@palves.net>
Change-Id: I74ca9e7d4cfe6af294fd50e8c509fcbad289b78c
|
|
This commit builds on the previous commit, and implements the
extension_language_ops::handle_missing_debuginfo function for Python.
This hook will give user supplied Python code a chance to help find
missing debug information.
The implementation of the new hook is pretty minimal within GDB's C++
code; most of the work is out-sourced to a Python implementation which
is modelled heavily on how GDB's Python frame unwinders are
implemented.
The following new commands are added as commands implemented in
Python, this is similar to how the Python unwinder commands are
implemented:
info missing-debug-handlers
enable missing-debug-handler LOCUS HANDLER
disable missing-debug-handler LOCUS HANDLER
To make use of this extension hook a user will create missing debug
information handler objects, and register these handlers with GDB.
When GDB encounters an objfile that is missing debug information, each
handler is called in turn until one is able to help. Here is a
minimal handler that does nothing useful:
import gdb
import gdb.missing_debug
class MyFirstHandler(gdb.missing_debug.MissingDebugHandler):
    def __init__(self):
        super().__init__("my_first_handler")

    def __call__(self, objfile):
        # This handler does nothing useful.
        return None

gdb.missing_debug.register_handler(None, MyFirstHandler())
Returning None from the __call__ method tells GDB that this handler
was unable to find the missing debug information, and GDB should ask
any other registered handlers.
By extending the __call__ method it is possible for the Python
extension to locate the debug information for objfile and return a
value that tells GDB how to use the information that has been located.
Possible return values from a handler:
- None: This means the handler couldn't help. GDB will call other
registered handlers to see if they can help instead.
- False: The handler has done all it can, but the debug information
for the objfile still couldn't be found. GDB will not call
any other handlers, and will continue without the debug
information for objfile.
- True: The handler has installed the debug information into a
location where GDB would normally expect to find it. GDB
should look again for the debug information.
- A string: The handler can return a filename, which is the file
containing the missing debug information. GDB will load
this file.
When a handler returns True, GDB will look again for the debug
information, but only using the standard built-in build-id and
.gnu_debuglink based lookup strategies. It is not possible for an
extension to trigger another debuginfod lookup; the assumption is that
the debuginfod server is remote, and out of the control of extensions
running within GDB.
Handlers can be registered globally, or per program space. GDB checks
the handlers for the current program space first, and then all of the
global handlers.  The first handler that returns a value that is not
None has "handled" the objfile, at which point GDB continues.
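As a further illustration (the cache directory below is hypothetical,
not part of this patch), a handler that locates the debug file itself
can return its filename and GDB will load it:
import os

import gdb
import gdb.missing_debug

# Hypothetical local directory where separate debug files are kept.
CACHE_DIR = "/var/cache/example-debug-files"

class CacheHandler(gdb.missing_debug.MissingDebugHandler):
    def __init__(self):
        super().__init__("cache_handler")

    def __call__(self, objfile):
        candidate = os.path.join(
            CACHE_DIR, os.path.basename(objfile.filename) + ".debug")
        if os.path.exists(candidate):
            # A string return value names the file containing the missing
            # debug information; GDB will load it.
            return candidate
        # None: this handler couldn't help, let other handlers try.
        return None

gdb.missing_debug.register_handler(None, CacheHandler())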
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The original intention of the test appears to be to check that setting
a breakpoint in an inlined function didn't set multiple breakpoints
where one of them was at address 0.
The gdb.ada/inline-section-gc.exp test may pass or fail depending on the
version of gnat. Per the discussion on IRC, the ada inlining appears to
have some target dependencies. In this test there are two functions,
callee and caller.  Function callee is inlined into caller.  The test sets
a breakpoint in function callee. The reported location where the
breakpoint is set may be at the requested location in callee or the
location in caller after callee has been inlined. The test needs to
accept either location as correct provided the breakpoint address is not
zero.
This patch checks to see if the reported breakpoint is in function callee
or function caller and fails if the breakpoint address is 0x0. The line
number where the breakpoint is set will match the requested line if the
breakpoint location is reported in callee.adb.  If the breakpoint is
reported in caller.adb, the line number in caller is the breakpoint
location in callee where it is inlined into caller.
This patch fixes the single regression failure for the test on PowerPC.
It does not introduce any failures on X86-64.
|
|
If your target has no support for TARGET_WAITKIND_NO_RESUMED events
(and no way to support them, such as the yet-unsubmitted AMDGPU
target), and you step over thread exit with scheduler-locking on, this
is what you get:
(gdb) n
[Thread ... exited]
*hang*
Getting back the prompt by typing Ctrl-C may not even work, since no
inferior thread is running to receive the SIGINT. Even if it works,
it seems unnecessarily harsh. If you started an execution command for
which there's a clear thread of interest (step, next, until, etc.),
and that thread disappears, then I think it's more user friendly if
GDB just detects the situation and aborts the command, giving back the
prompt.
That is what this commit implements. It does this by explicitly
requesting the target to report thread exit events whenever the main
resumed thread has a thread_fsm. Note that unlike stepping over a
breakpoint, we don't need to enable clone events in this case.
With this patch, we get:
(gdb) n
[Thread 0x7ffff7d89700 (LWP 3961883) exited]
Command aborted, thread exited.
(gdb)
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: I901ab64c91d10830590b2dac217b5264635a2b95
|
|
Add new gdb.threads/step-over-thread-exit.exp and
gdb.threads/step-over-thread-exit-while-stop-all-threads.exp
testcases, exercising stepping over thread exit syscall. These make
use of lib/my-syscalls.S to define the exit syscall.
Co-authored-by: Pedro Alves <pedro@palves.net>
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=27338
Change-Id: Ie8b2c5747db99b7023463a897a8390d9e814a9c9
|
|
Refactor the syscall assembly code in gdb/testsuite/lib/my-syscalls.S
behind a SYSCALL macro so that it's easy to add new syscalls without
duplicating code.
Note that the way the macro is implemented, it only works correctly
for syscalls with up to 3 arguments, and only if the syscall doesn't
return (the macro doesn't bother to save/restore callee-saved
registers).
The following patch will want to use the macro to define a wrapper for
the "exit" syscall, so the limitations continue to be sufficient.
Change-Id: I8acf1463b11a084d6b4579aaffb49b5d0dea3bba
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
|
|
If scheduler-locking is in effect, e.g., with "set scheduler-locking
on", and you step over a function that spawns a new thread, the new
thread is allowed to run free, at least until some event is hit, at
which point, whether the new thread is re-resumed depends on a number
of seemingly random factors. E.g., if the target is all-stop, and the
parent thread hits a breakpoint, and GDB decides the breakpoint isn't
interesting to report to the user, then the parent thread is resumed,
but the new thread is left stopped.
I think that letting the new threads run with scheduler-locking
enabled is a defect. This commit fixes that, making use of the new
clone events on Linux, and of target_thread_events() on targets where
new threads have no connection to the thread that spawned them.
Testcase and documentation changes included.
Approved-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: Ie12140138b37534b7fc1d904da34f0f174aa11ce
|
|
Now that gdb/19675 is fixed for both native and gdbserver GNU/Linux,
remove the gdb/19675 kfails.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=19675
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: I95c1c38ca370100675d303cd3c8995860bef465d
|