I noticed yesterday that if gdb output is redirected to a file, the
pager will still be active. This is irritating, because the output
isn't actually visible -- just the pager prompt. Looking in bugzilla,
I found that this had been filed 17 years ago, as PR cli/8798.
This patch fixes the bug. It changes the pagination code to query the
particular ui-file to see if paging is allowable. The ui-file
implementations are changed so that only the stdout implementation and
a tee (where one sub-file is stdout) can page.
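As a sketch of the idea (the class and method names here are
illustrative, not necessarily GDB's actual ui-file API), each ui-file
implementation answers whether paging its output makes sense, and the
pagination code asks gdb_stdout before prompting:

  #include <cstdio>

  struct ui_file
  {
    virtual ~ui_file () = default;

    /* Can output written to this ui_file usefully be paged?  */
    virtual bool can_page () const
    { return false; }  /* Plain files, pipes, string buffers, ...  */
  };

  struct stdio_file : public ui_file
  {
    explicit stdio_file (FILE *f) : m_file (f) {}

    bool can_page () const override
    { return m_file == stdout; }  /* Only the real stdout pages.  */

    FILE *m_file;
  };

  struct tee_file : public ui_file
  {
    tee_file (ui_file *one, ui_file *two) : m_one (one), m_two (two) {}

    bool can_page () const override
    { return m_one->can_page () || m_two->can_page (); }

    ui_file *m_one, *m_two;
  };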
Regression tested on x86-64 Fedora 34.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=8798
|
|
This commit ensures that the following settings are cloned from one
inferior to the new one when processing the clone-inferior command:
- inferior-tty
- environment variables
- cwd
- args
Some of those parameters can be passed as command-line arguments to GDB
(-args and -tty), so one could expect the clone-inferior command to
respect those flags. The following debugging session illustrates that:
gdb -nx -quiet -batch \
-ex "show args" \
-ex "show inferior-tty" \
-ex "clone-inferior" \
-ex "inferior 2" \
-ex "show args" \
-ex "show inferior-tty" \
-tty=/some/tty \
-args echo foo bar
Argument list to give program being debugged when it is started is "foo bar".
Terminal for future runs of program being debugged is "/some/tty".
[New inferior 2]
Added inferior 2.
[Switching to inferior 2 [<null>] (/bin/echo)]
Argument list to give program being debugged when it is started is "".
Terminal for future runs of program being debugged is "".
The other properties this commit copies on clone (i.e. CWD and the
environment variables) are included since they are related (in the sense
that they influence the runtime behavior of the program) even if they
cannot be directly set using command line switches.
There is a chance that this patch changes existing user workflow. I
think that this change is mostly harmless. If users want to start a new
inferior based on an existing one, they probably already propagate those
settings to the new inferior in some way.
Tested on x86_64-linux.
Change-Id: I3b1f28b662f246228b37bb24c2ea1481567b363d
|
|
Set of fixes to resolve some duplicate test names in the gdb.mi/
directory. There should be no real test changes after this set of
fixes; they are all either:
- Adding with_test_prefix type constructs to make test names unique,
or
- Changing the test name to be more descriptive, or better reflect
the test being run.
|
|
PR gdb/28405 reports a regression when using attach with an
extended-remote target. In this case the target is not including a
thread-id in the stop packet it sends back after the attach.
The regression was introduced with this commit:
commit 8f66807b98f7634c43149ea62e454ea8f877691d
Date: Wed Jan 13 20:26:58 2021 -0500
gdb: better handling of 'S' packets
The problem is that when GDB processes the stop packet, it sees that
there is no thread-id and so has to "guess" which thread the stop
should apply to.
In this case the target only has one thread, so really, there's no
guessing needed, but GDB still runs through the same process; this
shouldn't cause us any problems.
However, after the above commit, GDB now expects itself to be more
internally consistent; specifically, only a thread that GDB thinks is
resumed can be a candidate for having stopped.
It turns out that, when GDB attaches to a process through an
extended-remote target, the threads of the process being attached to
are not, initially, marked as resumed.
And so, when GDB tries to figure out which thread the stop might apply
to, it finds no resumed threads in the process, and so an assert
triggers.
In extended_remote_target::attach we create a new thread with a call
to add_thread_silent, rather than remote_target::remote_add_thread;
the reason is that calling the latter will result in a call to
'add_thread' rather than 'add_thread_silent'. However,
remote_target::remote_add_thread includes additional
actions (i.e. calling remote_thread_info::set_resumed and set_running)
which are missing from extended_remote_target::attach. These missing
calls are what would serve to mark the new thread as resumed.
In this commit I propose that we add an extra parameter to
remote_target::remote_add_thread. This new parameter will force the
new thread to be added with a call to add_thread_silent. We can now
call remote_add_thread from the ::attach method; the extra
actions (listed above) will now be performed, and the thread will be
left in the correct state.
Additionally, in PR gdb/28405, a segfault is reported. This segfault
triggers when 'set debug remote 1' is used before trying to reproduce
the original assertion failure. The cause of this is in
remote_target::select_thread_for_ambiguous_stop_reply, where we do
this:
  remote_debug_printf ("first resumed thread is %s",
                       pid_to_str (first_resumed_thread->ptid).c_str ());
  remote_debug_printf ("is this guess ambiguous? = %d", ambiguous);
  gdb_assert (first_resumed_thread != nullptr);
Notice that when debug printing is on we dereference
first_resumed_thread before we assert that the pointer is not
nullptr. This is the cause of the segfault, and is resolved by moving
the assert before the debug printing code.
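With the fix, that quoted code is simply reordered so the assert runs
before anything dereferences the pointer:

  gdb_assert (first_resumed_thread != nullptr);

  remote_debug_printf ("first resumed thread is %s",
                       pid_to_str (first_resumed_thread->ptid).c_str ());
  remote_debug_printf ("is this guess ambiguous? = %d", ambiguous);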
I've extended an existing test, ext-attach.exp, so that the original
test is run multiple times; we run in the original mode, as normal,
but also, we now run with different packets disabled in gdbserver. In
particular, disabling Tthread would trigger the assertion as it was
reported in the original bug. I also run the test in all-stop and
non-stop modes now for extra coverage; we also run the tests with
target-async enabled and disabled.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28405
|
|
Fixes PR gdb/28681. It was observed that after using the `finish`
command an incorrect value was displayed in some cases. Specifically,
this behaviour was observed on an x86-64 target.
Consider this test program:
struct A
{
  int i;

  A ()
  { this->i = 0; }

  A (const A& a)
  { this->i = a.i; }
};

A
func (int i)
{
  A a;
  a.i = i;
  return a;
}

int
main ()
{
  A a = func (3);
  return a.i;
}
And this GDB session:
$ gdb -q ex.x
Reading symbols from ex.x...
(gdb) b func
Breakpoint 1 at 0x401115: file ex.cc, line 14.
(gdb) r
Starting program: /home/andrew/tmp/ex.x
Breakpoint 1, func (i=3) at ex.cc:14
14 A a;
(gdb) finish
Run till exit from #0 func (i=3) at ex.cc:14
main () at ex.cc:23
23 return a.i;
Value returned is $1 = {
i = -19044
}
(gdb) p a
$2 = {
i = 3
}
(gdb)
Notice how after the `finish` the contents of $1 are junk, but, when I
immediately ask for the value of `a`, I get back the correct value.
The problem here is that after the finish command GDB calls the
function amd64_return_value to figure out where the return value can
be found (on x86-64 targets anyway).
This function makes the wrong choice for the struct A in our case: as
sizeof(A) <= 8, amd64_return_value decides that A will be returned in a
register. GDB then reads the return value register and interprets the
contents as an instance of A.
Unfortunately, A is not trivially copyable (due to its copy
constructor), and the System V ABI specification for argument and return
value passing says that any non-trivial C++ object should have space
allocated for it by the caller, and the address of this space is
passed to the callee as a hidden first argument. The callee should
then return the address of this space as the return value.
And so, the register that GDB is treating as containing an instance of
A, actually contains the address of an instance of A (in this case on
the stack); this is why GDB shows the incorrect result.
The call stack within GDB for where we actually go wrong is this:
amd64_return_value
amd64_classify
amd64_classify_aggregate
And it is in amd64_classify_aggregate that we should be classifying
the type as AMD64_MEMORY, instead of as AMD64_INTEGER as we currently
do (via a call to amd64_classify_aggregate_field).
At the top of amd64_classify_aggregate we already have this logic:
  if (TYPE_LENGTH (type) > 16 || amd64_has_unaligned_fields (type))
    {
      theclass[0] = theclass[1] = AMD64_MEMORY;
      return;
    }
Which handles some easy cases where we know a struct will be placed
into memory, that is (a) the struct is more than 16-bytes in size,
or (b) the struct has any unaligned fields.
All we need, then, is to add a check here to see if the struct is
trivially copyable. If it is not then we know the struct will be
passed in memory.
I originally structured the code like this:
  if (TYPE_LENGTH (type) > 16
      || amd64_has_unaligned_fields (type)
      || !language_pass_by_reference (type).trivially_copyable)
    {
      theclass[0] = theclass[1] = AMD64_MEMORY;
      return;
    }
This solved the example from the bug, and my small example above. So
then I started adding some more extensive tests to the GDB testsuite,
and I ran into a problem. I hit this error:
gdbtypes.h:676: internal-error: loc_bitpos: Assertion `m_loc_kind == FIELD_LOC_KIND_BITPOS' failed.
This problem is triggered from:
amd64_classify_aggregate
amd64_has_unaligned_fields
field::loc_bitpos
Inside the unaligned field check we try to get the bit position of
each field. Unfortunately, in some cases the field location is not
FIELD_LOC_KIND_BITPOS, but is FIELD_LOC_KIND_DWARF_BLOCK.
An example that shows this bug is:
struct B
{
  short j;
};

struct A : virtual public B
{
  short i;

  A ()
  { this->i = 0; }

  A (const A& a)
  { this->i = a.i; }
};

A
func (int i)
{
  A a;
  a.i = i;
  return a;
}

int
main ()
{
  A a = func (3);
  return a.i;
}
It is the virtual base class, B, that causes the problem. The base
class is represented, within GDB, as a field within A. However, the
location type for this field is a DWARF_BLOCK.
I spent a little time trying to figure out how to convert the
DWARF_BLOCK to a BITPOS; however, I realised that, in this case at
least, conversion is not needed.
The C++ standard says that a class is not trivially copyable if it has
any virtual base classes. And so, in this case, even if I could
figure out the BITPOS for the virtual base class fields, I know for
sure that I would immediately fail the trivially_copyable check. So,
let's just reorder the checks in amd64_classify_aggregate to:
  if (TYPE_LENGTH (type) > 16
      || !language_pass_by_reference (type).trivially_copyable
      || amd64_has_unaligned_fields (type))
    {
      theclass[0] = theclass[1] = AMD64_MEMORY;
      return;
    }
Now, if we have a class with virtual bases we will fail more quickly, and
avoid the unaligned fields check completely.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28681
|
|
PR26056 reports that when GDB is connected to non-TTY stdin/stdout, it
crashes when it receives a SIGWINCH signal.
This can be reproduced as follows:
$ gdb/gdb -nx -batch -ex 'run' --args sleep 60 </dev/null 2>&1 | cat
# from another terminal:
$ kill -WINCH $(pidof gdb)
When doing so, the process crashes in a call to rl_resize_terminal:
void
rl_resize_terminal (void)
{
  _rl_get_screen_size (fileno (rl_instream), 1);
  ...
}
The problem is that at this point rl_instream has the value NULL.
The rl_instream variable is supposed to be initialized during a call to
readline_initialize_everything, which in a normal startup sequence is
called under this call chain:
tui_interp::init
tui_ensure_readline_initialized
rl_initialize
readline_initialize_everything
In tui_interp::init, we have the following sequence:
  tui_initialize_io ();
  tui_initialize_win (); // <- Installs SIGWINCH
  if (gdb_stdout->isatty ())
    tui_ensure_readline_initialized (); // <- Initializes rl_instream
This function unconditionally installs the SIGWINCH signal handler (this
is done by tui_initialize_win), and then if gdb_stdout is a TTY it
initializes readline. Therefore, if stdout is not a TTY, SIGWINCH is
installed but readline is not initialized. In such a situation
rl_instream stays NULL, and when GDB receives a SIGWINCH it calls its
handler, which ultimately tries to access rl_instream, leading to the crash.
This patch proposes to fix this issue by installing the SIGWINCH signal
handler only if GDB is connected to a TTY. Given that this
initialization is the only task of tui_initialize_win, this patch moves
tui_initialize_win just after the call to
tui_ensure_readline_initialized.
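In other words, the initialization sequence in tui_interp::init becomes,
roughly:

  tui_initialize_io ();
  if (gdb_stdout->isatty ())
    {
      tui_ensure_readline_initialized (); // <- Initializes rl_instream
      tui_initialize_win ();              // <- Installs SIGWINCH, TTY only
    }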
Tested on x86_64-linux.
Co-authored-by: Pedro Alves <pedro@palves.net>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=26056
Change-Id: I6458acef7b0d9beda2a10715d0345f02361076d9
|
|
Run black 21.9b0 on gdb/ (this is the version currently mentioned on the
wiki [1], the subsequent commit will bump that version).
[1] https://sourceware.org/gdb/wiki/Internals%20GDB-Python-Coding-Standards
Change-Id: I5ceaab42c42428e053e2572df172aa42a88f0f86
|
|
Powerpc is not reporting the
Catchpoint 1 (returned from syscall execve), ....
as expected. The issue appears to be with the kernel not returning the
expected result. This patch marks the test failure as an xfail.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28623
|
|
While working on a Python script, which was interacting with a remote
target, I noticed some weird slowness in GDB. In my program I had a
structure something like this:
struct foo_t
{
  int array[5];
};
struct foo_t global_foo;
Then in the Python script I was fetching a complete copy of
global_foo, like:
val = gdb.parse_and_eval('global_foo')
val.fetch_lazy()
Then I would work with items in foo_t.array, like:
print(val['array'][1])
I called the fetch_lazy method specifically because I knew I was going
to end up accessing almost all of the contents of val, and so I wanted
GDB to do a single remote protocol call to fetch all the contents in
one go, rather than trying to do lazy fetches for a couple of bytes at
a time.
What I observed was that, after the fetch_lazy call, GDB does,
correctly, fetch the entire contents of global_foo, including all of
the contents of array; however, when I access val.array[1], GDB still
goes and fetches the value of this element from the remote target.
What's going on is that in valarith.c, in value_subscript, for C-like
languages, we always end up treating the array value as a pointer and
then doing value_ptradd and value_ind; the second of these calls
always returns a lazy value.
My guess is that this approach allows us to handle indexing off the
end of an array, when working with zero-element arrays, or when
indexing a raw pointer as an array. And I agree that in these cases,
where even a non-lazy original value will not have the array contents
loaded, we should be using the value_ind approach.
However, for cases where we do have the array contents loaded, and we
do know the bounds of the array, I think we should be using
value_subscripted_rvalue, which is what we use for non-C-like
languages.
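The shape of the change in value_subscript is roughly the following
sketch (index_in_bounds stands in for the real bound check against the
array type; the exact calls in the patch may differ):

  if (!value_lazy (array) && index_in_bounds (array, index))
    /* Contents already fetched and index known to be in range: index
       directly into the existing buffer, as non-C languages do.  */
    return value_subscripted_rvalue (array, index, lowerbound);

  /* Otherwise keep the existing pointer-arithmetic path (value_ptradd
     followed by value_ind), which always yields a lazy value.  */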
One problem I did run into, exposed by gdb.base/charset.exp, was that
value_subscripted_rvalue stripped typedefs from the element type of
the array, which means the value returned will not have the same type
as an element of the array, but would be the raw, non-typedefed,
type. In charset.exp we got back an 'int' instead of a
'wchar_t' (which is a typedef of 'int'), and this impacts how we print
the value. Removing typedefs from the resulting value just seems
wrong, so I got rid of that, and I don't see any test regressions.
With this change in place, my original Python script is now doing no
additional memory accesses, and its performance increases about 10x!
|
|
This commit updates uses of 'loc' and 'loc_kind' to 'm_loc' and
'm_loc_kind' respectively, in gdb-gdb.py.in, which is required after
this commit:
commit cd3f655cc7a55437a05aa8e7b1fcc9051b5fe404
Date: Thu Sep 30 22:38:29 2021 -0400
gdb: add accessors for field (and call site) location
I have also incorporated this change:
https://sourceware.org/pipermail/gdb-patches/2021-September/182171.html
Which means we print 'm_name' instead of 'name' when displaying the
'm_name' member variable.
Finally, I have also added support for the new TYPE_SPECIFIC_INT
fields, which were added with this commit:
commit 20a5fcbd5b28cca88511ac5a9ad5e54251e8fa6d
Date: Wed Sep 23 09:39:24 2020 -0600
Handle bit offset and bit size in base types
I updated the gdb.gdb/python-helper.exp test to cover all of these
changes.
|
|
The comment on top of gdb/testsuite/boards/remote-stdio-gdbserver.exp says
that the user can specify the path to gdbserver on the remote system by
setting the GDBSERVER variable. However, this variable was ignored and
/usr/bin/gdbserver was used unconditionally.
This commit updates the code to respect GDBSERVER if set and fall back to
/usr/bin/gdbserver if not.
|
|
The documented behavior of proc runto is to not emit a PASS when
succeeding to run to the specified location, but emit a FAIL when
failing to do so. I suppose the intent is that it won't pollute the
results of normally passing tests (although I don't see why we would
care), but make visible any problems.
However, it seems like the implementation makes it default to never
print anything. "no-message" is prepended to "args", so if "message"
is not passed, we will always take the path that sets print_fail to 0,
which will silence any failure.
This unfortunately means that tests relying on runto_main won't emit a
FAIL if failing to run to main. And since commit 4dfef5be6812
("gdb/testsuite: make runto_main not pass no-message to runto"), tests
don't emit a FAIL themselves when failing to run to main. This means
that tests failing to run to main just fail silently, and that's bad.
This can be reproduced by hacking gdb.base/template.exp like so:
diff --git a/gdb/testsuite/gdb.base/template.c b/gdb/testsuite/gdb.base/template.c
index bcf39c377d92..052be5b79d73 100644
--- a/gdb/testsuite/gdb.base/template.c
+++ b/gdb/testsuite/gdb.base/template.c
@@ -15,6 +15,14 @@
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>. */
+#include <stdlib.h>
+
+__attribute__((constructor))
+static void c (void)
+{
+ exit (1);
+}
+
int
main (void)
{
Running the modified gdb.base/template.exp shows that it exits without
printing any result.
Remove the line that prepends no-message to args; that should make
runto's behavior match its documentation.
This patch will appear to add many failures, but in reality they already
existed; they were just silenced.
Change-Id: I2a730d5bc72b6ef0698cd6aad962d9902aa7c3d6
|
|
On ELFv1, the _start symbol must point to the *function descriptor* (in
the .opd section), not to the function code (in the .text section) like
with ELFv2 and other architectures.
|
|
With test-case gdb.base/maint.exp and target board -readnow, I run into:
...
FAIL: gdb.base/maint.exp: maint info line-table w/o a file name
...
The problem is that this and other regexps anchored using '^':
...
-re "^$gdb_prompt $" {
...
don't trigger because other regexps don't consume the entire preceding line.
This is partly due to the addition of the IS-STMT column.
Fix this by making the regexps consume entire lines.
Tested on x86_64-linux with native and target board readnow, as well as
check-read1 and check-readmore.
|
|
With test-case gdb.base/include-main.exp and target board readnow, I run into:
...
FAIL: gdb.base/include-main.exp: maint info symtab
...
The corresponding check in gdb.base/include-main.exp:
...
gdb_test_no_output "maint info symtab"
...
checks that no CU was expanded, while -readnow ensures that all CUs are
expanded.
Fix this by skipping the check with -readnow.
Tested on x86_64-linux, with native and target board readnow.
|
|
While working with pending fork events, I wondered what would happen if
the user detached an inferior while a thread of that inferior had a
pending fork event. What happens with the fork child, which is
ptrace-attached by the GDB process (or by GDBserver), but not known to
the core? Sure enough, neither the core of GDB or the target detach the
child process, so GDB (or GDBserver) just stays ptrace-attached to the
process. The result is that the fork child process is stuck, while you
would expect it to be detached and run.
Make GDBserver detach fork children it knows about. That is done in
the generic handle_detach function. Since a process_info already exists
for the child, we can simply call detach_inferior on it.
GDB-side, make the linux-nat and remote targets detach fork children
known because of pending fork events. These pending fork events can be
stored in:
- thread_info::pending_waitstatus, if the core has consumed the event
but then saved it for later (for example, because it got the event
while stopping all threads, to present an all-stop stop on top of a
non-stop target)
- thread_info::pending_follow: if we ran to a "catch fork" and we
detach at that moment
Additionally, pending fork events can be in target-specific fields:
- For linux-nat, they can be in lwp_info::status and
lwp_info::waitstatus.
- For the remote target, they could be stored as pending stop replies,
saved in `remote_state::notif_state::pending_event`, if not
acknowledged yet, or in `remote_state::stop_reply_queue`, if
acknowledged. I followed the model of remove_new_fork_children for
this: call remote_notif_get_pending_events to process /
acknowledge any unacknowledged notification, then look through
stop_reply_queue.
Update the gdb.threads/pending-fork-event.exp test (and rename it to
gdb.threads/pending-fork-event-detach.exp) to try to detach the process
while it is stopped with a pending fork event. In order to verify that
the fork child process is correctly detached and resumes execution
outside of GDB's control, make that process create a file in the test
output directory, and make the test wait $timeout seconds for that file
to appear (it happens instantly if everything goes well).
This test catches a bug in linux-nat.c, also reported as PR 28512
("waitstatus.h:300: internal-error: gdb_signal target_waitstatus::sig()
const: Assertion `m_kind == TARGET_WAITKIND_STOPPED || m_kind ==
TARGET_WAITKIND_SIGNALLED' failed."). When detaching a thread with a
pending event, get_detach_signal unconditionally fetches the signal
stored in the waitstatus (`tp->pending_waitstatus ().sig ()`). However,
that is only valid if the pending event is of type
TARGET_WAITKIND_STOPPED, and this is now enforced using assertions (it
would also be valid for TARGET_WAITKIND_SIGNALLED, but that would mean
the thread does not exist anymore, so we wouldn't be detaching it). Add
a condition in get_detach_signal to access the signal number only if the
wait status is of kind TARGET_WAITKIND_STOPPED, and use GDB_SIGNAL_0
instead (since the thread was not stopped with a signal to begin with).
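The guard added to get_detach_signal is, in essence (accessor spellings
approximate):

  if (tp->pending_waitstatus ().kind () == TARGET_WAITKIND_STOPPED)
    signo = tp->pending_waitstatus ().sig ();
  else
    signo = GDB_SIGNAL_0;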
Add another test, gdb.threads/pending-fork-event-ns.exp, specifically to
verify that we consider events in pending stop replies in the remote
target. This test has many threads constantly forking, and we detach
from the program while the program is executing. That gives us some
chance that we detach while a fork stop reply is stored in the remote
target. To verify that we correctly detach all fork children, we ask
the parent to exit by sending it a SIGUSR1 signal and have it write a
file to the filesystem before exiting. Because the parent's main thread
joins the forking threads, and the forking threads wait for their fork
children to exit, if some fork child is not detached by GDB, the parent
will not write the file, and the test will time out. If I remove the
new remote_detach_pid calls in remote.c, the test fails eventually if I
run it in a loop.
There is a known limitation: we don't remove breakpoints from the
children before detaching them. So the children could hit a trap
instruction after being detached and crash. I know this is wrong, and
it should be fixed, but I would like to handle that later. The current
patch doesn't fix everything, but it's a step in the right direction.
Change-Id: I6d811a56f520e3cb92d5ea563ad38976f92e93dd
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28512
|
|
This patch aims at fixing a bug where an inferior is unexpectedly
created when a fork happens at the same time as another event, and that
other event is reported to GDB first (and the fork event stays pending
in GDBserver). This happens for example when we step a thread and
another thread forks at the same time. The bug looks like (if I
reproduce the included test by hand):
(gdb) show detach-on-fork
Whether gdb will detach the child of a fork is on.
(gdb) show follow-fork-mode
Debugger response to a program call of fork or vfork is "parent".
(gdb) si
[New inferior 2]
Reading /home/simark/build/binutils-gdb/gdb/testsuite/outputs/gdb.threads/step-while-fork-in-other-thread/step-while-fork-in-other-thread from remote target...
Reading /home/simark/build/binutils-gdb/gdb/testsuite/outputs/gdb.threads/step-while-fork-in-other-thread/step-while-fork-in-other-thread from remote target...
Reading symbols from target:/home/simark/build/binutils-gdb/gdb/testsuite/outputs/gdb.threads/step-while-fork-in-other-thread/step-while-fork-in-other-thread...
[New Thread 965190.965190]
[Switching to Thread 965190.965190]
Remote 'g' packet reply is too long (expected 560 bytes, got 816 bytes): ... <long series of bytes>
The sequence of events leading to the problem is:
- We are using the all-stop user-visible mode as well as the
synchronous / all-stop variant of the remote protocol
- We have two threads, thread A that we single-step and thread B that
calls fork at the same time
- GDBserver's linux_process_target::wait pulls the "single step
complete SIGTRAP" and the "fork" events from the kernel. It
arbitrarily chooses one event to report; here it happens to be the
single-step SIGTRAP. The fork stays pending in the thread_info.
- GDBserver sends that SIGTRAP as a stop reply to GDB
- While in stop_all_threads, GDB calls update_thread_list, which ends
up querying the remote thread list using qXfer:threads:read.
- In the reply, GDBserver includes the fork child created as a result
of thread B's fork.
- GDB-side, the remote target sees the new PID, calls
remote_notice_new_inferior, which ends up unexpectedly creating a new
inferior, and things go downhill from there.
The problem here is that as long as GDB did not process the fork event,
it should pretend the fork child does not exist. Ultimately, this event
will be reported, we'll go through follow_fork, and that process will be
detached.
The remote target (GDB-side) has some code to remove from the reported
thread list the threads that are the result of forks not processed by
GDB yet. But that only works for fork events that have made their way
to the remote target (GDB-side), but haven't been consumed by the core
yet, so are still lingering as pending stop replies in the remote target
(see remove_new_fork_children in remote.c). But in our case, the fork
event hasn't made its way to the GDB-side remote target. We need to
implement the same kind of logic GDBserver-side: if there exists a
thread / inferior that is the result of a fork event GDBserver hasn't
reported yet, it should exclude that thread / inferior from the reported
thread list.
This was actually discussed a while ago, but not implemented AFAIK:
https://pi.simark.ca/gdb-patches/1ad9f5a8-d00e-9a26-b0c9-3f4066af5142@redhat.com/#t
https://sourceware.org/pipermail/gdb-patches/2016-June/133906.html
Implementation details-wise, the fix for this is all in GDBserver. The
Linux layer of GDBserver already tracks unreported fork parent / child
relationships using the lwp_info::fork_relative field, in order to avoid
wildcard actions resuming fork children unknown to GDB. This information
needs to be made available to the handle_qxfer_threads_worker function,
so it can filter the reported threads. Add a new thread_pending_parent
target function that allows the Linux target to return the parent of a
thread that is a not-yet-reported fork child, if any.
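The filtering in handle_qxfer_threads_worker then boils down to
something like the following sketch (the exact way the pending fork
parent is queried may differ):

  static void
  handle_qxfer_threads_worker (thread_info *thread, struct buffer *buffer)
  {
    /* If this thread is the child of a fork that GDBserver has not
       reported to GDB yet, do not list it; GDB must not learn about it
       before the fork event itself is reported.  */
    if (target_thread_pending_parent (thread) != nullptr)
      return;

    /* ... emit the <thread> XML element for THREAD as before ...  */
  }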
Testing-wise, the test replicates pretty-much the sequence of events
shown above. The setup of the test makes it such that the main thread
is about to fork. We stepi the other thread, so that the step completes
very quickly, in a single event. Meanwhile, the main thread is resumed,
so very likely has time to call fork. This means that the bug may not
reproduce every time (if the main thread does not have time to call
fork), but it will reproduce more often than not. The test fails
without the fix applied on the native-gdbserver and
native-extended-gdbserver boards.
At some point I suspected that which thread called fork and which thread
did the step influenced the order in which the events were reported, and
therefore the reproducibility of the bug. So I made the test try both
combinations: main thread forks while other thread steps, and vice
versa. I'm not sure this is still necessary, but I left it there
anyway. It doesn't hurt to test a few more combinations.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28288
Change-Id: I2158d5732fc7d7ca06b0eb01f88cf27bf527b990
|
|
The documentation suggests that we implement gdb.Value.__init__;
however, this is not currently true: we really implement
gdb.Value.__new__. This will cause confusion if a user tries to
sub-class gdb.Value. They might write:
class MyVal (gdb.Value):
    def __init__ (self, val):
        gdb.Value.__init__(self, val)

obj = MyVal(123)
print ("Got: %s" % obj)
But, when they source this code they'll see:
(gdb) source ~/tmp/value-test.py
Traceback (most recent call last):
File "/home/andrew/tmp/value-test.py", line 7, in <module>
obj = MyVal(123)
File "/home/andrew/tmp/value-test.py", line 5, in __init__
gdb.Value.__init__(self, val)
TypeError: object.__init__() takes exactly one argument (the instance to initialize)
(gdb)
The reason for this is that, as we don't implement __init__ for
gdb.Value, Python ends up calling object.__init__ instead, which
doesn't expect any arguments.
The Python docs suggest that the reason why we might take this
approach is because we want gdb.Value to be immutable:
https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_new
But I don't see any reason why we should require gdb.Value to be
immutable when other types defined in GDB are not. This current
immutability can be seen in this code:
obj = gdb.Value(1234)
print("Got: %s" % obj)
obj.__init__ (5678)
print("Got: %s" % obj)
Which currently runs without error, but prints:
Got: 1234
Got: 1234
In this commit I propose that we switch to using __init__ to
initialize gdb.Value objects.
This does introduce some additional complexity, during the __init__
call a gdb.Value might already be associated with a gdb value object,
in which case we need to cleanly break that association before
installing the new gdb value object. However, the cost of doing this
is not great, and the benefit (being able to easily sub-class
gdb.Value) seems worth it.
After this commit the first example above works without error, while
the second example now prints:
Got: 1234
Got: 5678
In order to make it easier to override the gdb.Value.__init__ method,
I have tweaked the definition of gdb.Value.__init__. The second,
optional argument to __init__ is a gdb.Type; if this argument is not
present then GDB figures out a suitable type.
However, if we want to override the __init__ method in a sub-class,
and still support the default argument, it is easier to write:
class MyVal (gdb.Value):
    def __init__ (self, val, type=None):
        gdb.Value.__init__(self, val, type)
Currently, passing None for the Type will result in an error:
TypeError: type argument must be a gdb.Type.
After this commit I now allow the type argument to be None, in which
case GDB figures out a suitable type just as if the type had not been
passed at all.
Unless a user is trying to reinitialize a value, or create sub-classes
of gdb.Value, there should be no user visible changes after this
commit.
|
|
Permanent program breakpoints (ones inserted into the code) other than
the one GDB uses for POWER (0x7fe00008) did not result in a stop but
caused GDB to loop infinitely.
This was because GDB did not recognize trap instructions other than
"trap". For example, "tw 12, 4, 4" was not recognized, causing GDB
to loop forever.
This commit fixes this by providing a POWER-specific hook
(gdbarch_program_breakpoint_here_p) recognizing all tw, twi, td and tdi
instructions.
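A sketch of the kind of check the hook performs (assuming the usual
Power ISA opcode assignments: twi = 3, tdi = 2, tw = 31/4, td = 31/68;
the actual rs6000-tdep.c change may be structured differently):

  #include <cstdint>

  /* Return true if INSN is one of the tw, twi, td or tdi trap
     instructions.  */
  static bool
  insn_is_trap (uint32_t insn)
  {
    unsigned op = insn >> 26;            /* Primary opcode.  */
    unsigned xop = (insn >> 1) & 0x3ff;  /* Extended opcode (X-form).  */

    return op == 3 || op == 2                         /* twi, tdi */
           || (op == 31 && (xop == 4 || xop == 68));  /* tw, td */
  }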
Tested on Linux on PowerPC e500 and on QEMU PPC64le.
|
|
In commit 80ad340c902 ("[gdb/testsuite] use -Ttext-segment for jit-elf tests")
the following change was made:
...
proc compile_jit_elf_main_as_so {main_solib_srcfile main_solib_binfile options} {
- set options [concat $options debug]
+ global jit_load_address jit_load_increment
+
+ set options [list \
+ additional_flags="-DMAIN=jit_dl_main" \
+ additional_flags=-DLOAD_ADDRESS=$jit_load_address \
+ additional_flags=-DLOAD_INCREMENT=$jit_load_increment \
+ debug]
...
Before the change, the options argument was used; after the change it no
longer is.
Fix this by reverting back to using "set options [concat $options ...]".
Fixing this gets us the -DMAIN=jit_dl_main bit twice: once from a caller, and
once from compile_jit_elf_main_as_so. Fix this by removing the bit from
compile_jit_elf_main_as_so, which makes the code similar to compile_jit_main.
Tested on x86_64-linux.
|
|
On openSUSE Leap 15.2 aarch64 I ran into:
...
FAIL: gdb.tui/basic.exp: check main is where we expect on the screen
...
while this is passing on x86_64.
On x86_64-linux we have at the initial screen dump for "list -q main":
...
0 +-/home/vries/gdb_versions/devel/src/gdb/testsuite/gdb.tui/tui-layout.c--+
1 | 15 You should have received a copy of the GNU General Public |
2 | 16 along with this program. If not, see <http://www.gnu.org/|
3 | 17 |
4 | 18 int |
5 | 19 main () |
6 | 20 { |
7 | 21 return 0; |
8 | 22 } |
9 | 23 |
...
but on aarch64:
...
0 +-/home/tdevries/gdb/src/gdb/testsuite/gdb.tui/tui-layout.c--------------+
1 | 16 along with this program. If not, see <http://www.gnu.org/|
2 | 17 |
3 | 18 int |
4 | 19 main () |
5 | 20 { |
6 | 21 return 0; |
7 | 22 } |
8 | 23 |
9 | 24 |
...
The cause of the different placement is that the line number we get for main
on x86_64 is:
...
$ gdb -q -batch outputs/gdb.tui/basic/basic -ex "info line main"
Line 20 of "tui-layout.c" starts at address 0x4004a7 <main> \
and ends at 0x4004ab <main+4>.
...
and on aarch64 instead:
...
$ gdb -q -batch outputs/gdb.tui/basic/basic -ex "info line main"
Line 21 of "tui-layout.c" starts at address 0x4005f4 <main> \
and ends at 0x4005f8 <main+4>.
...
Fix this by using a new source file main-one-line.c, that implements the
entire main function on a single line, in order to force the compiler to use
that line number.
Also try to do less hard-coding in the test-case.
Tested on x86_64-linux and aarch64-linux.
|
|
When running test-case gdb.base/cached-source-file.exp with target board
readnow, we run into:
...
FAIL: gdb.base/cached-source-file.exp: rerun program (the program exited)
...
The problem is that when rereading, the readnow is ignored.
Fix this by copying the readnow handling code from symbol_file_add_with_addrs
to reread_symbols.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=26800
|
|
This patch adds an #elif defined for PowerPC to set up the exit_0 macro.
It adds the needed macro definition and logic to handle both ELFv1
and ELFv2.
The patch has been successfully tested on PowerPC BE, PowerPC LE and
x86_64 with no regressions.
|
|
Test-cases gdb.arch/i386-{avx,sse}.exp use assembly instructions that require
the memory operands to be aligned to a certain boundary, and the test-cases
use C11's _Alignas to make that happen.
The drawback of using _Alignas is that while it does enforce a minimum
alignment, the actual alignment may be bigger, which makes the following
scenario possible:
- copy say, gdb.arch/i386-avx.c as basis for a new test-case
- run the test-case and observe a PASS
- commit the new test-case in the supposition that the test-case is correct
and well-tested
- run later into a failure on a different test setup (which may be a setup
where reproduction and investigation is more difficult and time-consuming),
and find out that the specified alignment was incorrect and should have been
updated to say, 64 bytes. The initial PASS occurred only because the actual
alignment happened to be greater than required.
The idea of having precise alignment as a means of having more predictable
execution, which allows flushing out bugs earlier, has been filed as PR
gcc/103095.
Add a new file lib/precise-aligned-alloc.c with functions
precise_aligned_alloc and precise_aligned_dup, to support precise alignment.
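The core trick can be sketched as follows (illustrative only; the actual
helper in lib/precise-aligned-alloc.c may have a different signature):
return a pointer aligned to ALIGN but deliberately not aligned to
2 * ALIGN, so that an insufficient alignment request in a test-case fails
immediately instead of passing by accident:

  #include <stdlib.h>
  #include <stdint.h>

  /* Allocate SIZE bytes aligned to exactly ALIGN (a power of two):
     aligned to ALIGN, but not to 2 * ALIGN.  *TO_FREE receives the
     pointer to pass to free later.  */
  static void *
  precise_aligned_alloc (size_t align, size_t size, void **to_free)
  {
    void *p = malloc (size + 3 * align);
    *to_free = p;

    uintptr_t v = (uintptr_t) p;
    v = (v + align - 1) & ~(uintptr_t) (align - 1);  /* Round up to ALIGN.  */
    if (v % (2 * align) == 0)
      v += align;  /* Spoil any accidental over-alignment.  */
    return (void *) v;
  }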
Use precise_aligned_dup in the aforementioned test-cases to:
- verify that the specified alignment is indeed sufficient, rather
than too little but accidentally over-aligned.
- prevent the same type of problem in any new test-cases based on these.
Tested on x86_64-linux, with both gcc and clang.
|
|
When running test-case gdb.arch/i386-avx.exp with clang I ran into:
...
(gdb) PASS: gdb.arch/i386-avx.exp: set first breakpoint in main
continue^M
Continuing.^M
^M
Program received signal SIGSEGV, Segmentation fault.^M
0x000000000040052b in main (argc=1, argv=0x7fffffffd3c8) at i386-avx.c:54^M
54 asm ("vmovaps 0(%0), %%ymm0\n\t"^M
(gdb) FAIL: gdb.arch/i386-avx.exp: continue to breakpoint: \
continue to first breakpoint in main
...
The problem is that the vmovaps insn requires a 256-bit (or 32-byte) aligned
address, and it's only 16-byte aligned:
...
(gdb) p /x $rax
$1 = 0x601030
...
Fix this by using a sufficiently aligned address, using _Alignas.
Compile using -std=gnu11 to support _Alignas.
Likewise in gdb.arch/i386-sse.exp.
Tested on x86_64-linux, with both gcc and clang.
|
|
I don't think it's very useful to show deprecated aliases to the
user. It encourages the user to use them, when the goal is the
opposite.
For example, before:
(gdb) help set index-cache enabled
set index-cache enabled, set index-cache off, set index-cache on
alias set index-cache off = set index-cache enabled off
alias set index-cache on = set index-cache enabled on
Enable the index cache.
When on, enable the use of the index cache.
(gdb) help set index-cache on
Warning: 'set index-cache on', an alias for the command 'set index-cache enabled', is deprecated.
Use 'set index-cache enabled on'.
set index-cache enabled, set index-cache off, set index-cache on
alias set index-cache off = set index-cache enabled off
alias set index-cache on = set index-cache enabled on
Enable the index cache.
When on, enable the use of the index cache.
After:
(gdb) help set index-cache enabled
Enable the index cache.
When on, enable the use of the index cache.
(gdb) help set index-cache on
Warning: 'set index-cache on', an alias for the command 'set index-cache enabled', is deprecated.
Use 'set index-cache enabled on'.
Enable the index cache.
When on, enable the use of the index cache.
Change-Id: I989b618a5ad96ba975367e9d16db95523cd57a4c
|
|
Commit 92228a334ba2 ("gdb: small "maintenance info line-table"
readability improvements") changed the output format of "maint info
line-table" slightly, adding some empty lines between each
line-table. This causes two tests to start failing, update them to
account for those empty lines.
Change-Id: I9d33a58fce3e860ba0554b25f5582e8066a5c519
|
|
A test in gdb.python/py-send-packet.exp added in this commit:
commit 24b2de7b776f8f23788d855b1eec290c6e208821
Date: Tue Aug 31 14:04:36 2021 +0100
gdb/python: add gdb.RemoteTargetConnection.send_packet
included a large amount of binary data in the command sent to GDB. As
this test didn't have a real test name, the binary data was included in
the gdb.sum file. The contents of the binary data could change
between different runs of GDB, and this makes comparing results
harder.
This commit gives the test a real test name.
|
|
Commit ab557072b8ec ("gdb: use actual DWARF version in compunit's
debugformat field") changed the debug format string in "info source" to
show the actual DWARF version, rather than always showing "DWARF 2".
However, it failed to consider that some tests checked for the "DWARF 2"
string to see if the test program is compiled with DWARF debug
information. Since everything is compiled with DWARF 4 or 5 nowadays,
that changed the behavior of those tests. Notably, it prevented the
tests using skip_inline_var_tests from running.
Grep through the testsuite for "DWARF 2" and change all occurrences I
could find to use "DWARF [0-9]" instead (that string is passed to TCL's
string match).
Change-Id: Ic7fb0217fb9623880c6f155da6becba0f567a885
|
|
Consider the following code:
type FP1_Type is delta 0.1 range -1.0 .. +1.0; -- Ordinary
function Call_FP1 (F : FP1_Type) return FP1_Type is
begin
return F;
end Call_FP1;
When the default in GCC is to generate proper DWARF info for fixed-point
types, then in gdb, printing the result of a call to call_fp1 with a
decimal parameter leads to:
(gdb) p call_fp1(0.5)
$1 = 0
The displayed value is wrong, and we actually expected:
(gdb) p call_fp1(0.5)
$1 = 0.5
What happened is that our fixed-point type parameter got promoted to a
32-bit integer because we detected that the length of that object was less
than 4 bytes. The compiler does not perform this promotion and therefore
GDB should not either.
This patch fixes the behavior described above.
|
|
This adds a 'task apply' command, which is the Ada tasking analogue of
'thread apply'. Unlike 'thread apply', it doesn't offer the
'ascending' flag; but otherwise it's essentially the same.
|
|
Breakpoints in gdb can be made specific to an Ada task using the
"task" qualifier. This patch applies this same idea to watchpoints.
|
|
With gdb.multi/multi-arch-exec.exp I run into:
...
Running src/gdb/testsuite/gdb.multi/multi-arch-exec.exp ...
ERROR: tcl error sourcing src/gdb/testsuite/gdb.multi/multi-arch-exec.exp.
ERROR: wrong # args: extra words after "else" clause in "if" command
while executing
"if [istarget "powerpc64*-*-*"] {
set march "-m64"
} else if [istarget "s390*-*-*"] {
set march "-m31"
} else {
set march "-m32"
}"
...
Fix the else if -> elseif typo.
Tested on x86_64-linux.
|
|
When running test-case gdb.arch/i386-pkru.exp on a machine with "Memory
Protection Keys for Userspace" support, we run into:
...
(gdb) PASS: gdb.arch/i386-pkru.exp: probe PKRU support
print $pkru^M
$2 = 1431655764^M
(gdb) FAIL: gdb.arch/i386-pkru.exp: pkru register
...
The test-case expects the $pkru register to have the default value 0, matching
the "init state" of 0 defined by the XSAVE hardware.
Since linux kernel version v4.9 containing commit acd547b29880 ("x86/pkeys:
Default to a restrictive init PKRU"), the register is set to 0x55555554 by
default (which matches the printed decimal value above).
Fix the FAIL by accepting this value for linux.
Tested on x86_64-linux.
|
|
When my system isn't properly configured to generate core files in the
local directory, I see these DUPLICATEs:
DUPLICATE: gdb.base/corefile-buildid.exp: could not generate core file
Fix that by having a single with_test_prefix around that message and
what follows.
Change-Id: I4ac245fcce1c666db56e3bad3582aa17f183dcba
|
|
The expect file has a procedure append_arch_options which sets march based
on the istarget. The current if / else statement does not check for
powerpc64. The else statement is hit which sets march to -m32. This
results in compilation errors on 64-bit PowerPC.
This patch adds an if statement to check for powerpc64 and, if true, sets
march to -m64.
The patch was tested on a Power 10 system. No compile errors were generated.
The test completes with 1 expected pass and no failures.
|
|
When running the gdb.python/py-arch.exp tests on a GDB built
against Python 2 I ran into some errors. The problem is that this
test script exercises the gdb.Architecture.integer_type method, and
this method uses 'p' as an argument format specifier in a call to
gdb_PyArg_ParseTupleAndKeywords.
Unfortunately this specifier was only added in Python 3.3, so it will
cause an error for earlier versions of Python.
This commit switches to use the 'O' specifier to collect a PyObject,
and then uses PyObject_IsTrue to convert the object to a boolean.
An earlier version of this patch incorrectly switched from using 'p'
to use 'i', however, it was pointed out during review that this would
cause some changes in behaviour, for example both of these will work
with 'p', but not with 'i':
gdb.selected_inferior().architecture().integer_type(32, None)
gdb.selected_inferior().architecture().integer_type(32, "foo")
The new approach of using 'O' works fine with these cases. I've added
some new tests to cover both of the above.
There should be no user visible changes after this commit.
|
|
When running test-case gdb.base/style.exp with a gdb build using
stub-termcap.c, we run into:
...
(gdb) PASS: gdb.base/style.exp: all styles enabled: frame when width=20
^M<et width 30^M
(gdb) FAIL: gdb.base/style.exp: all styles enabled: set width 30
...
The problem is that we're trying to issue the command "set width 30" while
width is set to 20, which causes horizontal scrolling.
Fix this by resetting the width to 0 before issuing the "set width 30"
command.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=24582
|
|
The gdb.python/py-inferior-leak.exp test makes use of the tracemalloc
module. When running the Python tests with a GDB built against Python
2 I ran into a test failure due to the tracemalloc module not being
available.
This commit adds a new helper function to lib/gdb-python.exp that
checks if a named module is available. Using this we can then skip
the py-inferior-leak.exp test when the tracemalloc module is not
available.
|
|
After this commit:
commit 76b43c9b5c2b275cbf4f927bfc25984410cb5dd5
Date: Tue Oct 5 15:10:12 2021 +0100
gdb: improve error reporting from the disassembler
We started seeing FAILs in the gdb.base/all-architectures*.exp tests,
when running on a 32-bit ARM target, though I suspect running on any
target that compiles such that bfd_vma is 32-bits would also trigger
the failures.
The problem is that the test expects GDB's disassembler to print
an error like this:
Cannot access memory at address 0x0
However, after the above commit we see an error like:
unknown disassembler error (error = -1)
The reason for this is this code in opcodes/i386-dis.c (in the
print_insn function):
  if (address_mode == mode_64bit && sizeof (bfd_vma) < 8)
    {
      (*info->fprintf_func) (info->stream,
                             _("64-bit address is disabled"));
      return -1;
    }
This code effectively disallows us from ever disassembling 64-bit x86
code if we compiled GDB with a 32-bit bfd_vma. Notice we return
-1 (indicating a failure to disassemble), but never call the
memory_error_func callback.
Prior to the above commit, when GDB received the -1 return value it
would assume that a memory error had occurred and just print whatever
value happened to be in the memory error address variable; the default
value of 0 just happened to be fine because the test had asked GDB to
do this: 'disassemble 0x0,+4'.
If we instead change the test to do 'disassemble 0x100,+4' then GDB
would (previously) have still reported:
Cannot access memory at address 0x0
which makes far less sense.
In this commit I propose to fix this issue by changing the test to
accept either the "Cannot access memory ..." string, or the newer
"unknown disassembler error ..." string. With this change done the
test now passes.
However, there is one weakness with this strategy; if GDB broke such
that we _always_ reported "unknown disassembler error ..." we would
never notice. This clearly would be bad. To avoid this issue I have
adjusted the all-architectures*.exp tests so that, when we disassemble
for the default architecture (the one selected by "auto") we _only_
expect to get the "Cannot access memory ..." error string.
[ Note: In an ideal world we should be able to disassemble any
architecture at all times. There's no reason why the 64-bit x86
disassembler requires a 64-bit bfd_vma, other than the code happens
to be written that way. We could rewrite the disassembler to not
have this requirement, but, I don't plan to do that any time soon. ]
Further, I have changed the all-architectures*.exp test so that we now
disassemble at address 0x100; this should avoid us being able to pass
by printing a default address of 0x0. I did originally change the
address we disassembled at to 0x4; however, some architectures,
e.g. ia64, have a default instruction alignment that is greater than
4, so would still round down to 0x0. I could have just picked 0x8 as
an address, but I figured that 0x100 was likely to satisfy most
architectures alignment requirements.
|
|
This commits adds a new sub-class of gdb.TargetConnection,
gdb.RemoteTargetConnection. This sub-class is created for all
'remote' and 'extended-remote' targets.
This new sub-class has one additional method over its base class,
'send_packet'. This new method is equivalent to the 'maint
packet' CLI command; it allows a custom packet to be sent to a remote
target.
The outgoing packet can either be a bytes object, or a Unicode string,
so long as the Unicode string contains only ASCII characters.
The result of calling RemoteTargetConnection.send_packet is a bytes
object containing the reply that came from the remote.
|
|
This commit adds a new object type gdb.TargetConnection. This new
type represents a connection within GDB (a connection as displayed by
'info connections').
There are three ways to find a gdb.TargetConnection. First, there's a new
'gdb.connections()' function, which returns a list of all currently
active connections.
Second, you can read the new 'connection' property on the gdb.Inferior
object type; this contains the connection for that inferior (or None
if the inferior has no connection, for example, if it has exited).
Finally, there's a new gdb.events.connection_removed event registry;
this emits a new gdb.ConnectionEvent whenever a connection is removed
from GDB (this can happen when all inferiors using a connection exit,
though this is not always the case, depending on the connection type).
The gdb.ConnectionEvent has a 'connection' property, which is the
gdb.TargetConnection being removed from GDB.
The gdb.TargetConnection has an 'is_valid()' method. A connection
object becomes invalid when the underlying connection is removed from
GDB (as discussed above, this might be when all inferiors using a
connection exit, or it might be when the user explicitly replaces a
connection in GDB by issuing another 'target' command).
The gdb.TargetConnection has the following read-only properties:
'num': The number for this connection,
'type': e.g. 'native', 'remote', 'sim', etc
'description': The longer description as seen in the 'info
connections' command output.
'details': A string or None. Extra details for the connection, for
example, a remote connection's details might be
'hostname:port'.
|
|
The Rust compiler plans to change the encoding of a Rust 'char' type
to use DW_ATE_UTF. You can see the discussion here:
https://github.com/rust-lang/rust/pull/89887
However, this fails in gdb. I looked into this, and it turns out that
the handling of DW_ATE_UTF is currently fairly specific to C++. In
particular, the code here assumes the C++ type names, and it creates
an integer type.
This comes from commit 53e710acd ("GDB thinks char16_t and char32_t
are signed in C++"). The message says:
Both places need fixing. But since I couldn't tell why dwarf2read.c
needs to create a new type, I've made it use the per-arch built-in
types instead, so that the types are only created once per arch
instead of once per objfile. That seems to work fine.
... which is fine, but it seems to me that it's also correct to make a
new character type; and this approach is better because it preserves
the type name as well. This does use more memory, but first we
shouldn't be too concerned about the memory use of types coming from
debuginfo; and second, if we are, we should implement type interning
anyway.
Changing this code to use a character type revealed a couple of
oddities in the C/C++ handling of TYPE_CODE_CHAR. This patch fixes
these as well.
I filed PR rust/28637 for this issue, so that this patch can be
backported to the gdb 11 branch.
|
|
PR28539 describes a segfault in lambda function search_one_symtab due to
psymbol_functions::expand_symtabs_matching calling expansion_notify with a
nullptr symtab:
...
  struct compunit_symtab *symtab =
    psymtab_to_symtab (objfile, ps);
  if (expansion_notify != NULL)
    if (!expansion_notify (symtab))
      return false;
...
This happens as follows. The partial symtab ps is a dwarf2_include_psymtab
for some header file:
...
(gdb) p ps.filename
$5 = 0x64fcf80 "/usr/include/c++/11/bits/stl_construct.h"
...
The includer of ps is a shared symtab for a partial unit, whose user is:
...
(gdb) p ps.includer().user.filename
$11 = 0x64fc9f0 \
"/usr/src/debug/llvm13-13.0.0-1.2.x86_64/tools/clang/lib/AST/Decl.cpp"
...
The call to psymtab_to_symtab expands the Decl.cpp symtab (and consequently
the shared symtab), but returns nullptr because:
...
struct dwarf2_include_psymtab : public partial_symtab
{
  ...
  compunit_symtab *get_compunit_symtab (struct objfile *objfile) const override
  {
    return nullptr;
  }
  ...
Fix this by returning the Decl.cpp symtab instead, which fixes the segfault
in the PR.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28539
|
|
Proc lines contains a typo:
...
string_form { set $_line_string_form $value }
...
Remove the incorrect '$' in '$_line_string_form'.
Tested on x86_64-linux.
|
|
While debugging a problem in gdb.dwarf2/dw2-lines.exp, I realized that the
test-case generates all executables and associated temporary files using the
same filenames.
Fix this by adding a new proc prefix_id in lib/gdb.exp, and using it in the
test-case.
Tested on x86_64-linux.
|
|
When running test-case gdb.dwarf2/dw2-lines.exp with target board -unix/-m32,
we run into another instance of PR28383, where the dwarf assembler generates
64-bit relocations which are not supported by the 32-bit assembler:
...
dw2-lines-dw.S: Assembler messages:^M
outputs/gdb.dwarf2/dw2-lines/dw2-lines-dw.S:76: Error: \
cannot represent relocation type BFD_RELOC_64^M
...
Fix this by using _op_offset in _line_finalize_header.
Tested on x86_64-linux.
|
|
In commit f8080fb7a44 "[gdb/testsuite] Add gdb.base/include-main.exp" a
file gdb.base/main.c was added, which caused the following regression:
...
(gdb) list^M
<gdb.base/main.c>
(gdb) FAIL: gdb.base/list-missing-source.exp: list
...
The problem is that the test-case does not expect to find a file main.c, but
now it finds gdb.base/main.c.
Fix this by using the more specific file name list-missing-source.c.
Tested on x86_64-linux.
|
|
The test-case gdb.ada/dgopt.exp uses the -gnatD switch, in combination with
-gnatG.
This causes the source file $src/gdb/testsuite/gdb.ada/dgopt/x.adb to be
expanded into $build/gdb/testsuite/outputs/gdb.ada/dgopt/x.adb.dg, and the
debug information should refer to the x.adb.dg file.
That is the case for the .debug_line part:
...
The Directory Table is empty.
The File Name Table (offset 0x1c):
Entry Dir Time Size Name
1 0 0 0 x.adb.dg
...
but not for the .debug_info part:
...
<11> DW_AT_name : $src/gdb/testsuite/gdb.ada/dgopt/x.adb
<15> DW_AT_comp_dir : $build/gdb/testsuite/outputs/gdb.ada/dgopt
...
Filed as PR gcc/103436.
In C we can generate similar debug information, using a source file that does
not contain any code, but includes another one that does:
...
$ cat gdb/testsuite/gdb.base/include-main.c
#include "main.c"
...
such that in the .debug_line part we have:
...
The Directory Table (offset 0x1c):
1 /home/vries/gdb_versions/devel/src/gdb/testsuite/gdb.base
The File Name Table (offset 0x57):
Entry Dir Time Size Name
1 1 0 0 main.c
...
and in the .debug_info part:
...
<11> DW_AT_name : $src/gdb/testsuite/gdb.base/include-main.c
<15> DW_AT_comp_dir : $build/gdb/testsuite
...
Add a C test-case that mimics gdb.ada/dgopt.exp, that is:
- generate debug info as described above,
- issue a list of a line in include-main.c, while the corresponding
CU is not expanded yet.
Tested on x86_64-linux.
|
|
Basic ambiguity detection assumes that when 2 fields with the same name
have the same byte offset, it must be an unambiguous request. This is not
always correct. Consider the following code:
class empty { };

class A {
public:
  [[no_unique_address]] empty e;
};

class B {
public:
  int e;
};

class C: public A, public B { };
If we tried to use c.e in code, the compiler would warn of an ambiguity;
however, since A::e does not demand a unique address, it gets the same
address (and thus byte offset) as B::e, making A::e and B::e have the
same address. Despite that, "print c.e" would fail to report the ambiguity,
and would instead print it as an empty class (first path found).
The new code solves this by checking for other found_fields that have
different m_struct_path.back() (final class that the member was found
in), despite having the same byte offset.
The testcase gdb.cp/ambiguous.exp was also changed to test for this
behavior.
|