|
When running test-case gdb.mi/mi-dprintf.exp with check-read1, I run into:
...
(gdb) ^M
PASS: gdb.mi/mi-dprintf.exp: gdb: mi 2nd dprintf stop
-data-evaluate-expression stderr^M
^done,value="0x7ffff7e4a420 <_IO_2_1_stderr_>"^M
(gdb) FAIL: gdb.mi/mi-dprintf.exp: stderr symbol check
...
The problem is in proc mi_gdb_is_stderr_available:
...
proc mi_gdb_is_stderr_available {} {
set has_stderr_symbol false
gdb_test_multiple "-data-evaluate-expression stderr" "stderr symbol check" {
-re "\\^error,msg=\"'stderr' has unknown type; cast it to its declared type\"\r\n$::mi_gdb_prompt$" {
}
-re "$::mi_gdb_prompt$" {
set has_stderr_symbol true
}
}
...
which uses a gdb_test_multiple that is supposed to use the mi prompt, but
doesn't use -prompt to indicate this. Consequently, the default patterns use
the regular gdb prompt and trigger earlier than the two custom patterns that
use "$::mi_gdb_prompt$".
Fix this by adding the missing -prompt "$::mi_gdb_prompt$" arguments.
While we're at it, make the gdb_test_multiple call a bit more readable by
using variables, and by using -wrap.
Tested on x86_64-linux, with:
- gcc and clang (to trigger both the has_stderr_symbol true and false cases)
- make check and make check-read1.
|
|
With check-read1 we run into:
...
(gdb) break DNE>::DNE^M
Function "DNE>::DNE" not defined.^M
Make breakpoint pending on future shared library load? (y or [n]) y^M
Breakpoint 9 (DNE>::DNE) pending.^M
n^M
(gdb) FAIL: gdb.cp/namespace.exp: br malformed '>' (got interactive prompt)
n^M
...
The question is supposed to be handled by the question and response arguments
to this gdb_test call:
...
gdb_test "break DNE>::DNE" "" "br malformed \'>\'" \
"Make breakpoint pending on future shared library load?.*" "y"
...
but both this and the builtin handling in gdb_test_multiple trigger.
The cause of this is that the question argument regexp is incomplete.
Fix this by making sure that the entire question is matched in the regexp:
...
set yn_re [string_to_regexp {(y or [n])}]
...
"Make breakpoint pending on future shared library load\\? $yn_re " "Y"
...
Tested on x86_64-linux.
|
|
Per the RISC-V spec, the lr/sc sequence can consist of up to 16 instructions,
and we cannot insert breakpoints in the middle of this sequence. Before this
patch, we only detected a specific pattern (the most common one). This patch
improves this part and now supports detection of more complex patterns.
Signed-off-by: Yang Liu <liuyang22@iscas.ac.cn>
Approved-By: Andrew Burgess <aburgess@redhat.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
|
|
|
|
After this commit:
commit 1925bba80edd37c2ef90ef1d2c599dfc2fc17f72
Date: Thu Jan 4 10:01:24 2024 +0000
gdb/python: add gdb.InferiorThread.__repr__() method
failures were reported for gdb.python/py-inferior.exp.
The test grabs a gdb.InferiorThread object representing an inferior
thread, and then, later in the test, expects this Python object to
become invalid when the inferior thread has exited.
The gdb.InferiorThread object was obtained from the list returned by
calling gdb.Inferior.threads().
The mistake I made in the original commit was to assume that the order
of the threads returned from gdb.Inferior.threads() somehow reflected
the thread creation order. Specifically, I was expecting the main
thread to be first in the list, and "other" threads to appear ... not
first.
However, the gdb.Inferior.threads() function creates a list and
populates it from a map. The order of the threads in the returned
list has no obvious relationship to the thread creation order, and can
vary from host to host.
On my machine the ordering was as I expected, so the test passed for
me. For others the ordering was not as expected, and it just happened
that we ended up recording the gdb.InferiorThread for the main thread.
As the main thread doesn't exit (until the test is over), the
gdb.InferiorThread object never became invalid, and the test failed.
Fixed in this commit by taking more care to correctly find a non-main
thread. I do this by recording the main thread early on (when there
is only one inferior thread), and then finding any thread that is not
this main thread.
Then, once all of the secondary threads have exited, I know that the
second InferiorThread object I found should now be invalid.
The test still passes for me, and I believe this should fix the issue
for everyone else too.
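A minimal Python sketch of that approach (not the literal test code; the
variable names here are mine):
...
import gdb

# Early on, before any worker threads exist, the only thread is main.
main_thread = gdb.selected_thread()

# Later, once the inferior has spawned its worker threads, pick any thread
# that is not the recorded main thread; the order of threads() no longer
# matters.
other_thread = next(t for t in gdb.selected_inferior().threads()
                    if t.global_num != main_thread.global_num)

# After every worker thread has exited, this object should report invalid.
print(other_thread.is_valid())
...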
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31238
|
|
This commit is the result of the following actions:
- Running gdb/copyright.py to update all of the copyright headers to
include 2024,
- Manually updating a few files the copyright.py script told me to
update, these files had copyright headers embedded within the
file,
- Regenerating gdbsupport/Makefile.in to refresh its copyright
date,
- Using grep to find other files that still mentioned 2023. If
these files were updated last year from 2022 to 2023 then I've
updated them this year to 2024.
I'm sure I've probably missed some dates. Feel free to fix them up as
you spot them.
|
|
This commit updates the Python example code in the gdb.Progspace and
gdb.Objfile sections of the docs. Changes made:
1. Use @value{GDBP} for the GDB prompt rather than
hard-coding (gdb),
2. Use @group...@end group to split the example code into
unbreakable chunks, and
3. Add parentheses to the Python print() calls in the examples. In
Python 2 it was OK to drop the parentheses, but now that GDB is Python 3
only, example code should include them.
Approved-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In previous commits I've added Object.__dict__ support to gdb.Inferior
and gdb.InferiorThread; this is similar to the existing support for
gdb.Objfile and gdb.Progspace.
This commit extends the documentation to offer the user some guidance
on selecting good names for their custom attributes so they
can (hopefully) avoid conflicting with any future attributes that GDB
might add.
The rules I've proposed are:
1. Don't start user attributes with a lower case letter; all the
current GDB attributes start with a lower case letter, and I suspect
all future attributes will also start with a lower case letter, and
2. Don't start user attributes with a double underscore; this risks
conflicting with Python built-in attributes (e.g. __dict__) - though
clearly a conflicting user attribute would need to both start and end
with a double underscore, it seemed easier just to say no double
underscores (see the sketch below).
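A hypothetical sketch of names that follow (and break) these rules; the
attribute names here are made up for illustration:
...
import gdb

inferior = gdb.selected_inferior()

inferior.Symbol_cache = {}     # OK: does not start with a lower case letter
inferior._symbol_cache = {}    # OK: leading single underscore
# inferior.symbol_cache        # risky: a future GDB attribute could use this
# inferior.__symbol_cache      # avoid: leading double underscore, too close
#                              # to Python built-ins such as __dict__
...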
I'm doing this as a separate commit as I've updated the docs for the
existing gdb.Objfile and gdb.Progspace so they all reference a single
paragraph on selecting attribute names.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The gdb.Objfile, gdb.Progspace, gdb.Type, and gdb.Inferior Python
types already have a __dict__ attribute, which allows users to create
user defined attributes within the objects. This is useful if the
user wants to cache information within an object.
This commit adds the same functionality to the gdb.InferiorThread
type.
After this commit there is a new gdb.InferiorThread.__dict__
attribute, which is a dictionary. A user can, for example, do this:
(gdb) pi
>>> t = gdb.selected_thread()
>>> t._user_attribute = 123
>>> t._user_attribute
123
>>>
There's a new test included.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The gdb.Objfile, gdb.Progspace, and gdb.Type Python types already have
a __dict__ attribute, which allows users to create user defined
attributes within the objects. This is useful if the user wants to
cache information within an object.
This commit adds the same functionality to the gdb.Inferior type.
After this commit there is a new gdb.Inferior.__dict__ attribute,
which is a dictionary. A user can, for example, do this:
(gdb) pi
>>> i = gdb.selected_inferior()
>>> i._user_attribute = 123
>>> i._user_attribute
123
>>>
There's a new test included.
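A minimal sketch of the caching use-case mentioned above (the attribute name
is made up for illustration):
...
import gdb

inferior = gdb.selected_inferior()

# Cache a value on the inferior itself so later calls can reuse it instead
# of recomputing it.
if not hasattr(inferior, "_thread_name_cache"):
    inferior._thread_name_cache = {t.global_num: t.name
                                   for t in inferior.threads()}
print(inferior._thread_name_cache)
...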
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that it is possible for the user to create a new
gdb.Progspace object, like this:
(gdb) pi
>>> p = gdb.Progspace()
>>> p
<gdb.Progspace object at 0x7ffad4219c10>
>>> p.is_valid()
False
Because the new gdb.Progspace object is not associated with an actual
C++ program_space object within GDB's core, the new gdb.Progspace is
created invalid, and there is no way for the new object ever to
become valid.
Nor do I believe there's anywhere in the Python API where it makes
sense to consume an invalid gdb.Progspace created in this way. For
example, the gdb.Progspace could be passed as the locus to
register_type_printer, but all that would happen is that the
registered printer would never be used.
In this commit I propose to remove the ability to create new
gdb.Progspace objects. Attempting to do so now gives an error, like
this:
(gdb) pi
>>> gdb.Progspace()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot create 'gdb.Progspace' instances
Of course, there is a small risk here that some existing user code
might break ... but if that happens I don't believe the user code can
have been doing anything useful, so I see this as a small risk.
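For reference, a short sketch of how valid gdb.Progspace objects are still
obtained after this change (from GDB itself rather than by direct
construction):
...
import gdb

pspace = gdb.current_progspace()   # the program space GDB is focused on
print(pspace.is_valid())           # True: backed by a real program_space

for ps in gdb.progspaces():        # every program space GDB knows about
    print(ps.filename, ps.is_valid())
...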
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a gdb.Frame.__repr__() method. Before this patch we would see
output like this:
(gdb) pi
>>> gdb.selected_frame()
<gdb.Frame object at 0x7fa8cc2df270>
After this patch, we now see:
(gdb) pi
>>> gdb.selected_frame()
<gdb.Frame level=0 frame-id={stack=0x7ffff7da0ed0,code=0x000000000040115d,!special}>
More verbose, but, I hope, more useful.
If the gdb.Frame becomes invalid, then we will see:
(gdb) pi
>>> invalid_frame_variable
<gdb.Frame (invalid)>
which is in line with how other invalid objects are displayed.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Add a gdb.InferiorThread.__repr__() method. Before this patch we
would see output like this:
(gdb) pi
>>> gdb.selected_thread()
<gdb.InferiorThread object at 0x7f4dcc49b970>
After this patch, we now see:
(gdb) pi
>>> gdb.selected_thread()
<gdb.InferiorThread id=1.2 target-id="Thread 0x7ffff7da1700 (LWP 458134)">
More verbose, but, I hope, more useful.
If the gdb.InferiorThread becomes invalid, then we will see:
(gdb) pi
>>> invalid_thread_variable
<gdb.InferiorThread (invalid)>
which is in line with how other invalid objects are displayed.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Many object types now have a __repr__() function implementation. A
common pattern is that, if an object is invalid, we print its
representation as: <TYPENAME (invalid)>.
I thought it might be a good idea to move the formatting of this
specific representation into a utility function, and then update all
of our existing code to call the new function.
The only place where I haven't made use of the new function is in
unwind_infopy_repr, where we currently return a different string.
This case is a little different as the UnwindInfo is invalid because
it references a frame, and it is the frame itself which is invalid.
That said, I think it would be fine to switch to using the standard
format; if the UnwindInfo references an invalid frame, then the
UnwindInfo is itself invalid. But changing this would be an actual
change in behaviour, while all the other changes in this commit are
just refactoring.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This patch contains work pulled from this previously proposed patch:
https://inbox.sourceware.org/gdb-patches/20210213220752.32581-2-lsix@lancelotsix.com/
It has been modified by me. Credit for the original idea and
implementation goes to Lancelot; any bugs in this new iteration belong
to me.
Consider the executable `/tmp/foo/my_exec'. If we assume `/tmp' is
empty other than the `foo' sub-directory, then currently within GDB,
if I type:
(gdb) file /tmp/f
and then hit TAB, GDB completes this to:
(gdb) file /tmp/foo/
notice that not only did GDB fill in the whole of `foo', but GDB also
added a trailing '/' character. This is done within readline when the
path that was just completed is a directory. However, if I instead
do:
(gdb) complete file /tmp/f
file /tmp/foo
I now see the completed directory name, but the trailing '/' is
missing. The reason is that, in this case, the completions are not
offered via readline, but are handled entirely within GDB, and so
readline never gets the chance to add the trailing '/' character.
The above patch added filename option support to GDB, which included
completion of the filename options. This initially suffered from the
same problem that I've outlined above; the patch proposed a
solution to this problem, but that solution only applied to filename
options (which have still not been added to GDB), and was mixed in
with the complete filename options support.
This patch pulls out just the fix for the trailing "/" problem, and
applies it to GDB's general filename completion. This patch does not
add filename options to GDB, that can always be done later, but I
think this small part is itself a useful fix.
One of the biggest changes I made in this version is that I got rid of
the set_from_readline member function; instead, I now pass the value
of m_from_readline into the completion_tracker constructor.
I then moved the addition of the trailing '/' into filename_completer
so that it is applied in the general filename completion case. I also
added a call to gdb_tilde_expand which was missing from the original
patch. I haven't tested this, but I suspect the omission meant that the
original patch would not add the trailing '/' if the user entered a
path starting with a tilde character.
When writing the test for this patch I ran into two problems.
The first was that the procedures in lib/completion-support.exp relied
on the command being completed for the test name. This is fine for
many commands, but not when completing a filename: if we use the
command in this case, the test name will (potentially) include the name
of the directory in which the test is being run, which means we can't
compare results between two runs of GDB from different directories.
So in this commit I've gone through completion-support.exp and added a
new (optional) testname argument to many of the procedures, this
allows me to give a unique test name, that doesn't include the path
for my new tests.
The second issue was in the procedure make_tab_completion_list_re,
which builds the completion list that is displayed after a double tab
when there are multiple possible completions.
The procedure added the regexp ' +' after each completion, and then
added another ' +' at the very end of the expected output. So, if we
expected to match the names of two functions 'f1' and 'f2', the
generated regexp would be: 'f1 +f2 + +'. This would match just fine;
the actual output would be: 'f1  f2  ', and notice that we get two
spaces after each function name.
However, if we complete two directory names 'd1' and 'd2' then the
output will be 'd1/ d2/ '. Notice that now we only have a single
space between each match; however, we do get the '/' added instead.
What happens is that when presenting the matches, readline always adds
the appropriate trailing character; if we performed tab completion of
'break f1' then, as 'f1' is a unique match, we'd get 'break f1 ' with
a trailing space added. However, if we complete 'file d1' then we get
'file d1/'. Then readline is adding a single space after each
possible match, including the last one, which accounts for the
trailing space character.
To resolve this I've simply removed the addition of the second ' +'
within make_tab_completion_list_re. For the function completion
example I gave above the expected pattern is now 'f1 +f2 +', while for
the directory case we expect 'd1/ +d2/ +', both of which work just
fine.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=16265
Co-Authored-By: Lancelot SIX <lsix@lancelotsix.com>
Approved-By: Tom Tromey <tom@tromey.com>
Reviewed-By: John Baldwin <jhb@FreeBSD.org>
|
|
This commit adds a new InferiorThread.ptid_string attribute. This
read-only attribute contains the string returned by target_pid_to_str,
which actually converts a ptid (not pid) to a string.
This is the string that appears (at least in part) in the output of
'info threads' in the 'Target Id' column, but also in the thread
exited message that GDB prints.
Having access to this string from Python is useful for allowing
extensions to identify threads in a similar way to how GDB's core
would identify the thread.
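A small sketch of how an extension might use the new attribute (the loop is
just illustration):
...
import gdb

# Label each thread the same way GDB's own messages would.
for thread in gdb.selected_inferior().threads():
    print("{} ({})".format(thread.num, thread.ptid_string))
...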
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In test-case gdb.dwarf2/assign-variable-value-to-register.exp, a return is
missing after this unsupported call:
...
if { ![is_x86_64_m64_target] } {
unsupported "unsupported architecture"
}
...
and consequently on aarch64-linux I ran into an UNSUPPORTED followed by 3
FAILs.
Fix this by simply using require:
...
require is_x86_64_m64_target
...
Tested on x86_64-linux and aarch64-linux.
|
|
Commit 78f2fd84e83 ("gdb: remove VALUE_REGNUM, add value::regnum")
introduced an unexpected change in value_assign. It replaced
`get_prev_frame_always (next_frame)` with `next_frame` in the call to
gdbarch_value_to_register.
This is the result of a merge error, since I previously had a patch to
change gdbarch_value_to_register to take the next frame, and later
decided to drop it. Revert that change.
Add a test based on the DWARF assembler to expose the problem and test
the fix. I also tested on ppc64le to make sure the problem reported in
PR 31231 was fixed.
Change-Id: Ib8b851287ac27a4b2e386f7b680cf65865e6aee6
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31231
|
|
On ppc64le-linux, I run into:
...
(gdb) bt^M
#0 0x00000000100006dc in foobar (J=2)^M
#1 0x000000001000070c in prog ()^M
(gdb) FAIL: gdb.dwarf2/dw2-entry-points.exp: bt foo
...
The test-case attempts to emulate additional entry points of a function, with
function bar having entry points foo and foobar:
...
(gdb) p bar
$1 = {void (int, int)} 0x1000064c <bar>
(gdb) p foo
$2 = {void (int, int)} 0x10000698 <foo>
(gdb) p foobar
$3 = {void (int)} 0x100006d0 <foobar>
...
However, when setting a breakpoint on the entry point foo:
...
(gdb) b foo
Breakpoint 1 at 0x100006dc
...
it ends up in foobar instead of in foo, due to prologue skipping, and
consequently the backtrace shows foobar instead of foo.
The problem is that the test-case does not emulate an actual prologue at each
entry point.
Fix this by disabling the prologue skipping when setting a breakpoint, using
"break *foo".
Tested on ppc64le-linux and x86_64-linux.
Tested-By: Guinevere Larsen <blarsen@redhat.com>
Approved-By: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
PR testsuite/31232
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31232
|
|
I ran into the following FAIL:
...
(gdb) python kill_and_detach()^M
Traceback (most recent call last):^M
File "<string>", line 1, in <module>^M
File "<string>", line 7, in kill_and_detach^M
gdb.error: Selected thread is running.^M
Error while executing Python code.^M
(gdb) FAIL: gdb.base/kill-during-detach.exp: exit_p=true: checkpoint_p=true: \
python kill_and_detach()
...
The FAIL happens as follows:
- gdb is debugging a process A
- a checkpoint is created, in other words, fork is called in the inferior,
after which we have:
- checkpoint 0 (the fork parent, process A), and
- checkpoint 1 (the fork child, process B).
- during checkpoint creation, lseek is called in the inferior (process A) for
all file descriptors, and it returns != -1 for at least one file descriptor.
- the process A continues in the background
- gdb detaches, from process A
- gdb switches to process B, in other words, it restarts checkpoint 1
- while restarting checkpoint 1, gdb tries to call lseek in the inferior
(process B), but this fails because gdb incorrectly thinks that inferior B
is running.
This happens because linux_nat_switch_fork patches the pid of process B into
the current inferior and current thread, which were originally representing
process A. So, because process A was running in the background, the
thread_info fields executing and resumed are set accordingly, but they are not
correct for process B.
There's a line in fork_load_infrun_state that fixes up the thread_info field
stop_pc, so fix this by adding similar fixups for the executing and resumed
fields alongside.
The FAIL did not always reproduce, so extend the test-case to reliably
trigger this scenario.
Tested on x86_64-linux.
Approved-By: Kevin Buettner <kevinb@redhat.com>
PR gdb/31203
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31203
|
|
No functional change here, just touch up generated output slightly.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Only check these decls once in case other m4 macros also look for them.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This is used by gdb, gdbsupport, and gdbserver. We want to use it
in the sim tree too. Move it to gdbsupport, which is meant as the
common sharing space for these projects.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
On AIX we were missing some hooks needed to catch a fork () event in
rs6000-aix-nat.c. Due to their absence we were returning 1 while inserting
the breakpoint/catchpoint location. This patch fixes that.
|
|
When doing "checkpoint delete 0" we run into an assertion failure:
...
+delete checkpoint 0
inferior.c:406: internal-error: find_inferior_pid: Assertion `pid != 0' failed.
...
Fix this by handling the "pptid == null_ptid" case in
delete_checkpoint_command.
Tested on x86_64-linux.
Approved-By: Kevin Buettner <kevinb@redhat.com>
PR gdb/31209
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31209
|
|
Consider test-case gdb.base/checkpoint.exp. At some point, it issues an info
checkpoints command:
...
(gdb) info checkpoints^M
* 0 process 30570 (main process) at 0x0^M
1 process 30573 at 0x4008bb, file checkpoint.c, line 49^M
2 process 30574 at 0x4008bb, file checkpoint.c, line 49^M
3 process 30575 at 0x4008bb, file checkpoint.c, line 49^M
4 process 30576 at 0x4008bb, file checkpoint.c, line 49^M
5 process 30577 at 0x4008bb, file checkpoint.c, line 49^M
6 process 30578 at 0x4008bb, file checkpoint.c, line 49^M
7 process 30579 at 0x4008bb, file checkpoint.c, line 49^M
8 process 30580 at 0x4008bb, file checkpoint.c, line 49^M
9 process 30582 at 0x4008bb, file checkpoint.c, line 49^M
10 process 30583 at 0x4008bb, file checkpoint.c, line 49^M
...
According to the docs, each of these (0-10) is a checkpoint.
But the pc address (as well as the file name and line number) is missing for
checkpoint 0.
Fix this by sampling the pc value for the current process in
info_checkpoints_command, such that we have instead:
...
* 0 process 30570 (main process) at 0x4008bb, file checkpoint.c, line 49^M
...
Tested on x86_64-linux.
Approved-By: Kevin Buettner <kevinb@redhat.com>
PR gdb/31211
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31211
|
|
While reading info_checkpoints_command, I noticed that the variable printed:
...
const fork_info *printed = NULL;
...
for (const fork_info &fi : fork_list)
{
if (requested > 0 && fi.num != requested)
continue;
printed = &fi;
...
}
if (printed == NULL)
...
has pointer type, but is just used as a bool.
Make this explicit by changing the variable type to bool.
Tested on x86_64-linux.
Approved-By: Kevin Buettner <kevinb@redhat.com>
|
|
Currently cooked_index entry creation is either:
- done immediately if the parent_entry is known, or
- deferred if the parent_entry is not yet known, and done later while
resolving the deferred entries.
Instead, create all cooked_index entries immediately, and keep track of which
entries have a parent_entry that needs resolving later using the new
IS_PARENT_DEFERRED flag.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Make cooked_index_entry::parent_entry private, and add member functions to
access it.
Tested on x86_64-linux and ppc64le-linux.
Tested-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Make cooked_index_storage::add and cooked_index_entry::add return a
"cooked_index_entry *" instead of a "const cooked_index_entry *".
Tested on x86_64-linux and ppc64le-linux.
Tested-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Simon pointed out that my recent change to the DWO code caused a
failure in ASAN testing.
The bug here was that I updated the code to use a different search type
in the hash table, but then did not change the search code to use
htab_find_slot_with_hash.
Note that this bug would not be possible with my type-safe hash table
series, hint, hint.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
A user pointed out that the recent background DWARF reader series
broke the build when --disable-threading is in use. This patch fixes
the problem. I am checking it in.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31223
|
|
dwarf2_base_index_functions::find_per_cu is documented as using an
unrelocated address. This patch changes the interface to use the
unrelocated_addr type, just to be a bit more type-safe.
Regression tested on x86-64 Fedora 38.
|
|
Now that the DWARF reader does not use parallel_for_each, we can
remove some of the features that were added just for it: return values
and task sizing.
The thread_pool typed tasks feature could also be removed, but I
haven't done so here. This one seemed less intrusive and perhaps more
likely to be needed at some point.
|
|
The previous patches are nearly enough to enable background DWARF
reading. However, this hack in language_defn::get_symbol_name_matcher
causes an early computation of current_language:
/* If currently in Ada mode, and the lookup name is wrapped in
'<...>', hijack all symbol name comparisons using the Ada
matcher, which handles the verbatim matching. */
if (current_language->la_language == language_ada
&& lookup_name.ada ().verbatim_p ())
return current_language->get_symbol_name_matcher_inner (lookup_name);
I considered various options here -- reversing the order of the
checks, or promoting the verbatim mode to not be a purely Ada feature
-- but in the end found that the few calls to this during startup
could be handled more directly.
In the JIT code, and in create_exception_master_breakpoint_hook, gdb
is really looking for a certain kind of symbol (text or data) using a
linkage name. Changing the lookup here is clearer and probably more
efficient as well.
In create_std_terminate_master_breakpoint, the lookup can't really be
done by linkage name (it would require relying on a certain mangling
scheme, and also may trip over versioned symbols) -- but we know that
this spot is C++-specific, and so the language ought to be temporarily
set to C++ here.
After this patch, the "file" case is much faster:
(gdb) file /tmp/gdb
2023-10-23 13:16:54.456 - command started
Reading symbols from /tmp/gdb...
2023-10-23 13:16:54.520 - command finished
Command execution time: 0.225906 (cpu), 0.064313 (wall)
|
|
lookup_minimal_symbol_text always loops over all objfiles, even when
an objfile is passed in as an argument. This patch changes the
function to loop over the minimal number of objfiles in the latter
situation.
|
|
When gdb starts up with a symbol file, it uses the program's "main" to
decide the "static context" and the initial language. With background
DWARF reading, this means that gdb has to wait for a significant
amount of DWARF to be read synchronously.
This patch introduces lazy language setting. The idea here is that in
many cases, the prompt can show up early, making gdb feel more
responsive.
|
|
This changes the 'current_language' global to be a macro that wraps a
function call. This change will let a subsequent patch introduce lazy
language setting.
|
|
quick_symbol_functions::read_partial_symbols is no longer implemented,
so both it and quick_symbol_functions::can_lazily_read_symbols can be
removed. This allows for other functions to be removed as well.
Note that SYMFILE_NO_READ is now pretty much dead. I haven't removed
it here -- but could if that's desirable. I tend to think that this
functionality would be better implemented in the core; but whenever I
dive into the non-DWARF readers it is pretty depressing.
|
|
dwarf2_has_info and dwarf2_initialize_objfile are only separate
because the DWARF reader implemented lazy psymtab reading. However,
now that this is gone, we can simplify the public DWARF API again.
|
|
This patch rearranges the DWARF reader so that more work is done in
the background. This is PR symtab/29942.
The idea here is that there is only a small amount of work that must
be done on the main thread when scanning DWARF -- before the main
scan, the only part is mapping the section data.
Currently, the DWARF reader uses the quick_symbol_functions "lazy"
functionality to defer even starting to read. This patch instead
changes the reader to start reading immediately, but to do more of the
work in worker tasks.
Before this patch, "file" on my machine:
(gdb) file /tmp/gdb
2023-10-23 12:29:56.885 - command started
Reading symbols from /tmp/gdb...
2023-10-23 12:29:58.047 - command finished
Command execution time: 5.867228 (cpu), 1.162444 (wall)
After the patch, more work is done in the background and so this takes
a bit less time:
(gdb) file /tmp/gdb
2023-10-23 13:25:51.391 - command started
Reading symbols from /tmp/gdb...
2023-10-23 13:25:51.712 - command finished
Command execution time: 1.894500 (cpu), 0.320306 (wall)
I think this could be further sped up by using the shared library load
map to avoid objfile loops like the one in expand_symtab_containing_pc
-- it seems like the correct objfile could be chosen more directly.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29942
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30174
|
|
This changes the cooked index code to wait for threads in its
public-facing API. That is, the waits are done in cooked_index now,
and never in the cooked_index_shard. Centralizing this decision makes
it easier to wait for other events here as well.
|
|
For testing, it's sometimes convenient to be able to request that
DWARF reading be done synchronously. This patch adds a new "maint"
setting for this purpose.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This moves cooked_index_storage to cooked-index.h. This is needed by
a subsequent patch.
|
|
This adds a new compute_main_name method to quick_symbol_functions.
Currently there are no implementations of this, but a subsequent patch
will add one.
|
|
When DWARF reading is done in the background,
read_addrmap_from_aranges will be called from a worker thread.
Because warnings can't be emitted from these threads, this patch adds
a new deferred_warnings parameter to the function, letting the caller
control exactly how the warnings are emitted.
|
|
This patch changes the way complaint works in a background thread.
The new approach requires installing a complaint interceptor in each
worker, and then the resulting complaints are treated as one of the
results of the computation. This change is needed for a subsequent
patch, where installing a complaint interceptor around a parallel-for
is no longer a viable approach.
|
|
This changes gdb to ensure that gdb's BFD cache is guarded by a lock.
This avoids any races when multiple threads might open a BFD (and thus
use the BFD cache) at the same time.
Currently, this change is not needed because the main thread waits
for some DWARF scanning to be completed before returning. The only
locking that's required is when opening DWO files, and there's a local
lock to this end in dwarf2/read.c.
However, in the coming patches, the DWARF reader will begin its work
earlier, in the background. This means there is the potential for the
DWARF reader and other code on the main thread to both attempt to open
BFDs at the same time.
|
|
This adds a couple of calls to bfd_cache_close at points where a BFD
isn't actively needed by gdb. Normally at these points, all the
needed section data is already mapped, so we can simply close the file
descriptor. This is harmless at worst, because if this is needed
after all, the BFD file descriptor cache will reopen it.
|
|
This changes the DWZ code to pre-read the section data and somewhat
simplify the DWZ API. This makes it easier to add the bfd_cache_close
call to the new dwarf2_read_dwz_file function -- after this is done,
there shouldn't be a reason to keep the BFD's file descriptor open.
|