Age | Commit message (Collapse) | Author | Files | Lines |
|
While testing DAP, we found a situation where a compiler-generated
variable caused the "variables" request to fail -- the variable in
question being an apparent 67-megabyte string.
It seems to me that artificial variables like this aren't interesting
to DAP users, and the gdb CLI omits these as well.
This patch changes DAP to omit these variables, adding a new
gdb.Symbol.is_artificial attribute to make this possible.
|
|
PR dap/32090 points out that gdb's DAP "launch" sequencing is
incorrect. The current approach (which is itself a 2nd
implementation...) was based on a misreading of the spec. The spec
has since been clarified here:
https://github.com/microsoft/debug-adapter-protocol/issues/497
The clarification here is that a client is free to send the "launch"
(or "attach") request at any point after the "initialized" event has
been sent by gdb. However, the "launch" does not cause any action to
be taken -- and does not send a response -- until after
"configurationDone" has been seen.
This patch implements this by arranging for the launch and attach
commands to return a DeferredRequest object.
All the tests needed updates. I've also added a new test that checks
that the deferred "launch" request can be cancelled. (Note that the
cancellation is lazy -- it also waits until configurationDone is seen.
This could be fixed, but I was not sure whether it is important to do
so.)
Finally, the "launch" command has a somewhat funny sequencing now.
Simply sending the command and waiting for a response yielded strange
results if the inferior did not stop -- in this case, the response was
never sent. So now, the command is split into two parts, with some
setup being done synchronously (for better error propagation) and the
actual "run" being done async.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32090
Reviewed-by: Kévin Le Gouguec <legouguec@adacore.com>
|
|
After the commit:
commit 03ad29c86c232484f9090582bbe6f221bc87c323
Date: Wed Jun 19 11:14:08 2024 +0100
gdb: 'target ...' commands now expect quoted/escaped filenames
it was no longer possible to pass GDB the name of a core file
containing any special characters (white space or quote characters) on
the command line. For example:
$ gdb -c /tmp/core\ file.core
Junk after filename "/tmp/core": file.core
(gdb)
The problem is that the above commit changed the 'target core' command
to expect quoted filenames, so before the above commit a user could
write:
(gdb) target core /tmp/core file.core
[New LWP 2345783]
Core was generated by `./mkcore'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0000000000401111 in ?? ()
(gdb)
But after the above commit the user must write:
(gdb) target core /tmp/core\ file.core
or
(gdb) target core "/tmp/core file.core"
This is part of a move to make GDB's filename argument handling
consistent.
Anyway, the problem with the '-c' command line flag is that it
forwards the filename unmodified through to the 'core-file' command,
which in turn forwards to the 'target core' command.
So when the user, at a shell writes:
$ gdb -c "core file.core"
this arrives in GDB as the unquoted string 'core file.core' (without
the single quotes). GDB then forwards this to the 'core-file'
command as if the user had written this at a GDB prompt:
(gdb) core-file core file.core
Which then fails to parse due to the unquoted white space between
'core' and 'file.core'.
The solution I propose is to escape any special characters in the core
file name passed from the command line before calling the 'core-file'
command from main.c.
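The escaping itself is simple. A rough sketch of the idea (the helper
name and the exact set of escaped characters are my assumptions, not
gdb's actual code):
  #include <string>

  /* Prepend a backslash to any character that the CLI filename parser
     would treat specially.  */
  static std::string
  escape_core_filename (const std::string &name)
  {
    std::string result;
    for (char c : name)
      {
        if (c == ' ' || c == '\t' || c == '"' || c == '\'' || c == '\\')
          result += '\\';
        result += c;
      }
    return result;
  }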
I've updated the corefile.exp test to include a test for passing a
core file name containing a white space character. While I was at it I've
modernised the part of corefile.exp that I was touching.
|
|
The recent commit <HASH> moved an initialization of an objfile_holder in
syms_from_objfile_1 much earlier in the function, to better deal with the case
when GDB is unable to read the objfile format.
However, there is an early exit from syms_from_objfile_1 when the
objfile can be understood, but has no symbols. That was not releasing
the objfile_holder, so the objfile was being unlinked from the program
space, but the process of reading the objfile was being continued,
leading to use-after-frees flagged by the Address Sanitizer.
This commit fixes that UAF by making the objfile_holder release the
objfile right before the early exit.
This commit also changes the test gdb.base/dump.exp since that was the
original test that flagged the UAF, but at the end of the test the
generated files were being deleted, meaning we couldn't redo the test
manually after the fact. That final deletion has now been removed.
Reported-by: Simon Marchi <simark@simark.ca>
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
After the commit:
commit b9de07a5ff74663ff39bf03632d1b2ea417bf8d5
Date: Thu Oct 10 11:37:34 2024 +0100
gdb: fix handling of DW_AT_entry_pc of inlined subroutines
GDB's buildbot CI testing highlighted this assertion failure:
(gdb) c
Continuing.
../../binutils-gdb/gdb/block.h:203: internal-error: set_entry_pc: Assertion `start >= this->start () && start < this->end ()' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
----- Backtrace -----
FAIL: gdb.base/break-probes.exp: run til our library loads (GDB internal error)
This assertion was in the new function set_entry_pc and is asserting
that the default_entry_pc() value is within the block's start/end
range.
The default_entry_pc() is the value GDB will use as the entry-pc if
the DWARF doesn't specifically override the entry-pc. This value is
calculated as:
1. The start address of the first sub-range within the block, if the
block has more than 1 range, or
2. The low address (from DW_AT_low_pc) for the block.
If the block only has a single range then this means the block was
defined with low/high pc attributes (case #2 above). These low/high
pc values are what block::start() and block::end() return. This means
that, by definition, if the block is contiguous, the above assert
cannot trigger, as 'start' (the default_entry_pc() value) would be
equivalent to block::start().
This means that, for the assert to trigger, the block must have
multiple ranges, and the first address of the first range must not be
within the block's low/high address range. This seems wrong.
I inspected the state at the time the assert triggered and discovered
the block's start() address. Then I removed the assert and restarted
GDB. I was now able to inspect the blocks at the offending address:
(gdb) maintenance info blocks 0x7ffff7dddaa4
Blocks at 0x7ffff7dddaa4:
from objfile: [(objfile *) 0x44a37f0] /lib64/ld-linux-x86-64.so.2
[(block *) 0x46b30c0] 0x7ffff7ddd5a0..0x7ffff7dde8a6
entry pc: 0x7ffff7ddd5a0
is global block
symbol count: 4
is contiguous
[(block *) 0x46b3020] 0x7ffff7ddd5a0..0x7ffff7dde8a6
entry pc: 0x7ffff7ddd5a0
is static block
symbol count: 9
is contiguous
[(block *) 0x46b2f70] 0x7ffff7ddda00..0x7ffff7dddac3
entry pc: 0x7ffff7ddda00
function: __GI__dl_find_dso_for_object
symbol count: 4
is contiguous
[(block *) 0x46b2e10] 0x7ffff7dddaa4..0x7ffff7dddac3
entry pc: 0x7ffff7dddaa4
inline function: __GI__dl_find_dso_for_object
symbol count: 5
is contiguous
[(block *) 0x46b2a40] 0x7ffff7dddaa4..0x7ffff7dddac3
entry pc: 0x7ffff7dddaa4
symbol count: 1
is contiguous
[(block *) 0x46b2970] 0x7ffff7dddaa4..0x7ffff7dddac3
entry pc: 0x7ffff7dddaa4
symbol count: 2
address ranges:
0x7ffff7ddda0e..0x7ffff7ddda77
0x7ffff7ddda90..0x7ffff7ddda96
I've left everything in for context, but the only really interesting
bit is the very last block; its low/high range is:
0x7ffff7dddaa4..0x7ffff7dddac3
but it has separate ranges:
0x7ffff7ddda0e..0x7ffff7ddda77
0x7ffff7ddda90..0x7ffff7ddda96
which are all outside the low/high range. This is what triggers the
assert. But why does that block exist at all?
What I believe is happening is that we're running into a bug in older
versions of GCC. The buildbot failure was with an 8.5 gcc, and Tom de
Vries also reported seeing failures when using version 7 and 8 gcc,
but not with gcc 9 and onward.
Looking at the DWARF I can see that the problematic block is created
from this DIE:
<4><15efb>: Abbrev Number: 83 (DW_TAG_lexical_block)
<15efc> DW_AT_abstract_origin: <0x15e9f>
<15efe> DW_AT_low_pc : 0x7ffff7dddaa4
<15f06> DW_AT_high_pc : 31
which links via DW_AT_abstract_origin to:
<2><15e9f>: Abbrev Number: 80 (DW_TAG_lexical_block)
<15ea0> DW_AT_ranges : 0x38e0
<15ea4> DW_AT_sibling : <0x15eca>
And so we can see that <15efb> has got both low/high pc attributes and
a ranges attribute.
If I widen my checking to parents of DIE <15efb> then I see that they
also have DW_AT_abstract_origin; however, there is something
interesting going on: the parent DIEs are linking to a different DIE
tree than <15efb>.
What I believe is happening is this: we have an abstract instance
tree, rooted at a DW_TAG_subprogram, which contains all the
blocks, variables, parameters, etc., that you would expect. As this is
an abstract instance, there are no low/high pc attributes, and no
ranges attributes in this tree. This makes sense.
Now elsewhere we have a DW_TAG_subprogram (not
DW_TAG_inlined_subroutine) which links via
DW_AT_abstract_origin to the abstract DW_TAG_subprogram. This case is
documented in the DWARF 5 spec in section 3.3.8.3, and describes an
Out-of-Line Instance of an Inlined Subroutine. Within this out-of-line
instance many of the DIEs correctly link back, using
DW_AT_abstract_origin, to the abstract instance tree. This tree also
includes the DIE <15e9f>, which is what our problem DIE references.
Now, to really confuse things, within this out-of-line instance we
have a DW_TAG_inlined_subroutine, which is another instance of the
same abstract instance tree! This would seem to indicate a recursive
call to the inline function, and the compiler, for some reason, needed
to instantiate an out of line instance of this function.
And it is within this nested, inlined subroutine, that the problem DIE
exists. The problem DIE is referencing the corresponding DIE within
the out of line instance tree, but I am convinced this must be a (long
fixed) GCC bug, and that the problem DIE should be referencing the DIE
within the abstract instance tree.
I'm aware that the above is pretty confusing. The actual DWARF would
be around 200 lines long, so I'd like to avoid dumping it in here.
But here's my attempt at representing what's going on in a minimal
example. The numbers down the side represent the section offset, not
the nesting level, and I've removed any attributes that are not
relevant:
<1> DW_TAG_subprogram
<2> DW_TAG_lexical_block
<3> DW_TAG_subprogram
DW_AT_abstract_origin <1>
<4> DW_TAG_lexical_block
DW_AT_ranges ...
<5> DW_TAG_inlined_subroutine
DW_AT_abstract_origin <1>
<6> DW_TAG_lexical_block
DW_AT_abstract_origin <4>
DW_AT_low_pc ...
DW_AT_high_pc ...
The lexical block at <6> is linking to <4> when it should be linking
to <2>.
There is one additional thing that we might wonder about, which is,
when calculating the low/high pc range for a block, why does GDB not
make use of the range information and expand the range beyond the
defined low/high values?
The answer to this is in dwarf_get_pc_bounds_ranges_or_highlow_pc in
dwarf2/read.c. This is where the low/high bounds are calculated. What
we see is that GDB first checks for a low/high attribute pair, and if
that is present, this defines the address range for the block. Only
if there is no DW_AT_low_pc do we check for the DW_AT_ranges, and use
that to define the extent of the block. And this makes sense, section
3.5 of the DWARF-5 spec says:
The lexical block entry may have either a DW_AT_low_pc and DW_AT_high_pc
pair of attributes or a DW_AT_ranges attribute whose values encode the
contiguous or non-contiguous address ranges, respectively, of the machine
instructions generated for the lexical block...
Section 3.5 is specifically about lexical blocks, but the same
wording, about it being either low/high OR ranges, is repeated for
other DW_TAG types.
So this explains why GDB doesn't use the ranges to expand the problem
block's ranges; as the first DIE has low/high addresses, these are
used, and the ranges are not consulted.
It is only later, in dwarf2_record_block_ranges, that we create a range
based off the low/high pc, and then also process the ranges data; this
allows the problem block to exist with ranges that are outside the
low/high range.
To solve this I considered a number of options:
1. Prevent loading certain attributes from an abstract instance.
Section 3.3.8.1 of the DWARF-5 spec talks about which attributes are
appropriate to place in an abstract instance. Any attribute that
might vary between instances should not appear in an abstract
instance. DW_AT_ranges is included as an example in the
non-exhaustive list of attributes that should not appear in an
abstract instance.
Currently in dwarf2_attr (dwarf2/read.c), when we see a
DW_AT_abstract_origin attribute, we always follow this to try and find
the attribute we are looking for. But we could change this function
so that we prevent this following for attributes that we know should
not be looked up in an abstract instance. This would solve the
problem in this case by preventing us finding the DW_AT_ranges in the
incorrect abstract instance.
2. Filter the ranges.
Having established a block's low/high address range in
dwarf_get_pc_bounds_ranges_or_highlow_pc, we could allow
dwarf2_record_block_ranges to parse the ranges, but we could reject
any range that extends outside the block's defined start and end
addresses.
For well-behaved DWARF, where we have either low/high or ranges, the
block's start/end are defined from the range data, and so, by
definition, every range would be acceptable.
But in our problem case we would reject all of the invalid ranges.
This is my least favourite solution as it feels like rejecting the
ranges is tackling the problem too late on.
3. Don't try to parse ranges when we have low/high attributes.
This option involves updating dwarf2_record_block_ranges to match the
behaviour of dwarf_get_pc_bounds_ranges_or_highlow_pc, and, I believe,
to match the DWARF spec: don't try to read range data from
DW_AT_ranges if we have low/high pc attributes.
In our case this solves the issue because the problematic DIE has the
low/high attributes, and it then links to the wrong DIE which happens
to have DW_AT_ranges. With this change in place we don't even look
for the DW_AT_ranges.
If the problem were reversed, and the initial DIE had DW_AT_ranges,
but the incorrectly referenced DIE had the low/high pc attributes,
we would pick up the wrong addresses, but this wouldn't trigger any
asserts. The reason is that dwarf_get_pc_bounds_ranges_or_highlow_pc
would also find the low/high addresses from the incorrectly referenced
DIE, and so we would just end up with a block which had the wrong
address ranges, but the block would be self consistent, which is
different to the problem we hit here.
In the end, in this commit I went with solution #3; having
dwarf_get_pc_bounds_ranges_or_highlow_pc and
dwarf2_record_block_ranges be consistent seems sensible. However, I
do wonder if in the future we might want to explore solution #1 as an
additional safety feature.
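As a rough illustration of solution #3 (simplified types, not the
actual dwarf2/read.c code), the low/high pair, when present, is the
sole source of the block's ranges and DW_AT_ranges is never consulted:
  #include <cstdint>
  #include <optional>
  #include <utility>
  #include <vector>

  typedef std::pair<uint64_t, uint64_t> addr_range;

  static std::vector<addr_range>
  ranges_for_block (const std::optional<addr_range> &low_high,
                    const std::vector<addr_range> &dw_at_ranges)
  {
    /* A low/high pc pair alone defines the block's ranges.  */
    if (low_high.has_value ())
      return { *low_high };
    /* Only without low/high do we fall back to DW_AT_ranges.  */
    return dw_at_ranges;
  }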
With this patch in place I'm able to run the gdb.base/break-probes.exp test
without seeing the assert that CI testing highlighted. I see no
regressions when testing on x86-64 GNU/Linux with gcc 9.3.1.
Note: the diff in this commit looks big, but it's really just me
indenting the code.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
A failure of 'runto_main' in 'start_structs_test' results in a TCL
error. The return value of the 'start_structs_test' function is evaluated
inside an if conditional clause, which expects a boolean value. Return
'-1' on failure to avoid the error.
Reviewed-By: Keith Seitz <keiths@redhat.com>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In commit 922ab963e1c ("[gdb/python] Handle empty PYTHONDONTWRITEBYTECODE") I
added a test in gdb.python/py-startup-opt.exp that checks the
"show python dont-write-bytecode" output.
Then in commit 348290c7ef4 ("[gdb/python] Warn and ignore ineffective python
settings") I changed the output of "show python dont-write-bytecode" after
python initialization.
I tested these changes individually and found no problems, but after
committing both, the test started failing, which the Linaro CI reported.
Fix this by updating the expected output.
While we're at it, make the test a bit more generic by testing
"show python $setting" in all cases.
Tested on x86_64-linux, using:
- PYTHONDONTWRITEBYTECODE=
- PYTHONDONTWRITEBYTECODE=1
- unset PYTHONDONTWRITEBYTECODE
|
|
This changes the "maint print reggroups" command to use a ui-out table
rather than printf.
It also fixes a typo I noticed in a related test case name; and lets
us finally remove the leading \s from the regexp in completion.exp.
Reviewed-by: Christina Schimpe <christina.schimpe@intel.com>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This changes various "maint print" register commands to use ui-out
tables rather than the current printf approach.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
With test-case gdb.arch/pr25124.exp, I run into:
...
PASS: gdb.arch/pr25124.exp: disassemble thumb instruction (1st try)
PASS: gdb.arch/pr25124.exp: disassemble thumb instruction (2nd try)
DUPLICATE: gdb.arch/pr25124.exp: disassemble thumb instruction (2nd try)
...
Fix this by using a comma instead of parentheses.
Tested on arm-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
When using PYTHONDONTWRITEBYTECODE with an empty string we get:
...
$ PYTHONDONTWRITEBYTECODE= gdb -q -batch -ex "show python dont-write-bytecode"
Python's dont-write-bytecode setting is auto (currently on).
...
This is incorrect; it should be off.
The actual setting is correct, that was already fixed in commit 24d2cbc42cc
("set/show python dont-write-bytecode fixes"), in function
python_write_bytecode.
Fix this by:
- factoring out new function env_python_dont_write_bytecode out of
python_write_bytecode, and
- using it in show_python_dont_write_bytecode.
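A sketch of what the factored-out check boils down to (the function
name comes from the commit, the body is my assumption):
  #include <cstdlib>

  /* PYTHONDONTWRITEBYTECODE only has an effect when it is set to a
     non-empty string.  */
  static bool
  env_python_dont_write_bytecode ()
  {
    const char *env = getenv ("PYTHONDONTWRITEBYTECODE");
    return env != nullptr && *env != '\0';
  }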
Tested on x86_64-linux, using test-case gdb.python/py-startup-opt.exp and:
- PYTHONDONTWRITEBYTECODE=
- PYTHONDONTWRITEBYTECODE=1
- unset PYTHONDONTWRITEBYTECODE
Approved-By: Tom Tromey <tom@tromey.com>
PR python/32389
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32389
|
|
PYTHONDONTWRITEBYTECODE
When running test-case gdb.python/py-startup-opt.exp with empty
PYTHONDONTWRITEBYTECODE:
...
$ cd build/gdb/testsuite
$ PYTHONDONTWRITEBYTECODE= make check \
RUNTESTFLAGS=gdb.python/py-startup-opt.exp
...
I get:
...
end^M
dont_write_bytecode is off^M
(gdb) FAIL: $exp: attr=dont_write_bytecode: testname: input 6: end
...
The problem is that the test-case expects dont_write_bytecode to be
on, which is incorrect because PYTHONDONTWRITEBYTECODE only has effect if set
to a non-empty string [1].
Fix this by correctly setting expectations in the test-case.
Tested on x86_64-linux, with:
- PYTHONDONTWRITEBYTECODE=
- PYTHONDONTWRITEBYTECODE=1
- unset PYTHONDONTWRITEBYTECODE
Approved-By: Tom Tromey <tom@tromey.com>
[1] https://docs.python.org/3/using/cmdline.html#envvar-PYTHONDONTWRITEBYTECODE
|
|
The test gdb.reverse/i386-avx-reverse.exp was assuming that if the CPU
was like x86, it would have AVX instructions because I didn't know how
to check for AVX instruction support explicitly. This commit updates
that to use the pre-existing TCL proc have_avx.
Also update the comment at the top of the test, since it was a copy of a
different test.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
When building gdb with --with-expat=no and running test-case
gdb.base/reset-catchpoint-cond.exp we get:
...
(gdb) catch syscall write^M
warning: Can not parse XML syscalls information; \
XML support was disabled at compile time.^M
Unknown syscall name 'write'.^M
(gdb) FAIL: $exp: mode=syscall: catch syscall write
...
Fix this by skipping the test for --with-expat=no.
Tested on x86_64-linux.
|
|
When building gdb with --disable-tui, we run into:
...
(gdb) python print(type(gdb.TuiWindow))^M
Python Exception <class 'AttributeError'>: \
module 'gdb' has no attribute 'TuiWindow'^M
Error occurred in Python: module 'gdb' has no attribute 'TuiWindow'^M
(gdb) FAIL: gdb.python/python.exp: gdb.TuiWindow is registered
...
Fix this by skipping the test for --disable-tui.
Tested on x86_64-linux.
|
|
The test gdb.cp/step-and-next-inline.exp creates a test binary called
step-and-next-inline-no-header. This test includes a function
`tree_check` which is inlined 3 times.
When testing with some older versions of gcc (I've tried 8.4.0, 9.3.1)
we see the following DWARF representing one of the inline instances of
tree_check:
<2><8d9>: Abbrev Number: 38 (DW_TAG_inlined_subroutine)
<8da> DW_AT_abstract_origin: <0x9ee>
<8de> DW_AT_entry_pc : 0x401165
<8e6> DW_AT_GNU_entry_view: 0
<8e7> DW_AT_ranges : 0x30
<8eb> DW_AT_call_file : 1
<8ec> DW_AT_call_line : 52
<8ed> DW_AT_call_column : 10
<8ee> DW_AT_sibling : <0x92d>
...
<1><9ee>: Abbrev Number: 46 (DW_TAG_subprogram)
<9ef> DW_AT_external : 1
<9ef> DW_AT_name : (indirect string, offset: 0xe8): tree_check
<9f3> DW_AT_decl_file : 1
<9f4> DW_AT_decl_line : 38
<9f5> DW_AT_decl_column : 1
<9f6> DW_AT_linkage_name: (indirect string, offset: 0x2f2): _Z10tree_checkP4treei
<9fa> DW_AT_type : <0x9e8>
<9fe> DW_AT_inline : 3 (declared as inline and inlined)
<9ff> DW_AT_sibling : <0xa22>
...
Contents of the .debug_ranges section:
Offset Begin End
...
00000030 0000000000401165 0000000000401165 (start == end)
00000030 0000000000401169 0000000000401173
00000030 0000000000401040 0000000000401045
00000030 <End of list>
...
Notice that one of the sub-ranges of tree_check is empty; this is the
line marked 'start == end'. As the end address is the first address
after the range, this range covers absolutely no code.
But notice too that the DW_AT_entry_pc for the inline instance points
at this empty range.
Further, notice that despite the ordering of the sub-ranges, the empty
range is actually in the middle of the region defined by the lowest
address to the highest address. The ordering is not a problem; the
DWARF spec doesn't require that ranges be in any particular order.
However, this empty range is causing issues with GDB's newly acquired
DW_AT_entry_pc support.
GDB already rejects, and has done for a long time, empty sub-ranges;
after all, the DWARF spec is clear that such a range covers no code.
The recent DW_AT_entry_pc patch also had GDB reject an entry-pc which
was outside of the low/high bounds of a block.
But in this case, the entry-pc value is within the bounds of a block,
it's just not within any useful sub-range. As a consequence, GDB is
storing the entry-pc value, and making use of it, but when GDB stops,
and tries to work out which block the inferior is in, it fails to spot
that the inferior is within tree_check, and instead reports the
function into which tree_check was inlined.
I've tested with newer versions of gcc (12.2.0 and 14.2.0) and with
these versions gcc is still generating the empty sub-range, but now
this empty sub-range is no longer the entry point. Here's the
corresponding ranges table from gcc 14.2.0:
Contents of the .debug_rnglists section:
Table at Offset: 0:
Length: 0x56
DWARF version: 5
Address size: 8
Segment size: 0
Offset entries: 0
Offset Begin End
...
00000021 0000000000401165 000000000040116f
0000002b 0000000000401040 (base address)
00000034 0000000000401040 0000000000401040 (start == end)
00000037 0000000000401041 0000000000401046
0000003a <End of list>
...
The DW_AT_entry_pc is 0x401165, but this is not the empty sub-range;
as a result, when GDB stops at the entry-pc, GDB will correctly spot
that the inferior is in the tree_check function.
The fix I propose here is, instead of rejecting entry-pc values that
are outside the block's low/high range, to reject entry-pc values
that are not inside any of the block's sub-ranges.
Now, GDB will ignore the prescribed entry-pc, and will instead select
a suitable default entry-pc based on either the block's low-pc value,
or the first address of the first range.
I have extended the gdb.cp/step-and-next-inline.exp test to check this
case, but this does depend on the compiler version being used (newer
compilers will always pass, even without the fix).
So I have also added a DWARF assembler test to cover this case.
Reviewed-By: Kevin Buettner <kevinb@redhat.com>
|
|
Add missing return statements in
* gdb.threads/process-exit-status-is-leader-exit-status.c
* gdb.threads/next-fork-exec-other-thread.c
to fix 'no return statement' compiler warnings, e.g.:
process-exit-status-is-leader-exit-status.c: In function ‘start’:
process-exit-status-is-leader-exit-status.c:46:1: warning: no return
statement in function returning non-void [-Wreturn-type]
46 | }
| ^
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
Since 2020 it has been reported to clang[1] that the debug information
around OpenMP is insufficient. The OpenMP section is not declared
within the correct scope; instead clang marks it as if the section were
a function in the global scope. This causes several failures in the
test gdb.threads/omp-par-scope.exp when using clang to test GDB.
Since this isn't a true failure of GDB, and there is little expectation
that clang will be able to fix this soon, this commit disables the
aforementioned test when clang is being used.
[1] https://github.com/llvm/llvm-project/issues/44236
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
Add a regression test for PR symtab/32225.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32225
|
|
Intel has EOL'ed the Nios II architecture, and it's time to remove support
from all toolchain components before it gets any more bit-rotten from
lack of maintenance or regular testing.
|
|
This converts gdb_bfd.c to use the new hash table for all_bfds.
This patch slightly changes the htab_t pretty-printer test, which was
relying on all_bfds. Note that with the new hash table, gdb-specific
printers aren't needed; the libstdc++ printers suffice -- in fact,
they are better, because the true types of the contents are available.
Change-Id: I48b7bd142085287b34bdef8b6db5587581f94280
Co-Authored-By: Tom Tromey <tom@tromey.com>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This converts the typedef hash to use the new hash table.
This patch found a latent bug in the typedef code. Previously, the
hash function looked at the type name, but the hash equality function
used types_equal -- but that strips typedefs, meaning that equality of
types did not imply equality of hashes. This patch fixes the problem
and updates the relevant test.
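For illustration, one way to keep the two consistent (simplified types,
not gdb's actual code): whatever the equality function strips, the hash
function must strip as well, so that equal types always hash equally:
  #include <cstddef>
  #include <functional>
  #include <string>

  struct type_sketch
  {
    std::string name;
    const type_sketch *typedef_target = nullptr;  /* non-null for typedefs */
  };

  static const type_sketch *
  strip_typedefs (const type_sketch *t)
  {
    while (t->typedef_target != nullptr)
      t = t->typedef_target;
    return t;
  }

  /* Equality looks through typedefs...  */
  static bool
  types_equal (const type_sketch *a, const type_sketch *b)
  {
    return strip_typedefs (a) == strip_typedefs (b);
  }

  /* ...so the hash must look through them too.  */
  static std::size_t
  type_hash (const type_sketch *t)
  {
    return std::hash<std::string> () (strip_typedefs (t)->name);
  }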
Change-Id: I0d10236b01e74bac79621244a1c0c56f90d65594
Co-Authored-By: Tom Tromey <tom@tromey.com>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Some of the filename completion tests perform mid-line completion.
That is, we enter a partial line, then move the cursor back to the
middle of the line and perform completion.
The problem is that emitting characters into the middle of a terminal
line relies on first emitting some control characters. And which
control characters are emitted will depend on the current TERM
setting.
When I initially added the mid-line completion tests I set up two
regexps that covered two different terminal types, but PR gdb/32338
identifies additional terminal types that emit different sequences of
control characters.
Rather than trying to handle all possible terminal types, let's just
force the TERM variable to something simple (i.e. "dumb") and then
just support that one case. The thing being tested for here was that
GDB would complete a filename in the middle of a line; the specific
terminal type was not really important.
I've simplified the regexp used to match the completion in two places,
and I now force TERM to be "dumb" for the mid-line completion tests.
I've tested this by setting my global environment TERM to 'ansi',
'xterm', 'xterm-mono', and 'dumb', and I see no failures in any mode
now.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32338
Tested-By: Tom de Vries <tdevries@suse.de>
|
|
GDB provides a special process record and replay target that can
record a log of the process execution, and replay it later with
both forward and reverse execution commands. This patch adds the
basic support of process record and replay on LoongArch, it allows
users to debug basic LoongArch instructions and provides reverse
debugging support.
Here is a simple example on LoongArch:
$ cat test.c
int a = 0;
int main()
{
a = 1;
a = 2;
return 0;
}
$ gdb test
...
(gdb) start
...
Temporary breakpoint 1, main () at test.c:4
4 a = 1;
(gdb) record
(gdb) p a
$1 = 0
(gdb) n
5 a = 2;
(gdb) n
6 return 0;
(gdb) p a
$2 = 2
(gdb) rn
5 a = 2;
(gdb) rn
Reached end of recorded history; stopping.
Backward execution from here not possible.
main () at test.c:4
4 a = 1;
(gdb) p a
$3 = 0
(gdb) record stop
Process record is stopped and all execution logs are deleted.
(gdb) c
Continuing.
[Inferior 1 (process 129178) exited normally]
Signed-off-by: Hui Li <lihui@loongson.cn>
Approved-By: Guinevere Larsen <guinevere@redhat.com> (record-full)
Approved-By: Tom Tromey <tom@tromey.com>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
|
|
Eli mentioned [1] that given that we use US English spelling in our
documentation, we should use "behavior" instead of "behaviour".
In wikipedia-common-misspellings.txt there's a rule:
...
behavour->behavior, behaviour
...
which leaves this as a choice.
Add an overriding rule to hardcode the choice to common-misspellings.txt:
...
behavour->behavior
...
and add a rule to rewrite behaviour into behavior:
...
behaviour->behavior
...
and re-run spellcheck.sh on gdb*.
Tested on x86_64-linux.
[1] https://sourceware.org/pipermail/gdb-patches/2024-November/213371.html
|
|
This commit adds recording support for the AVX instruction vpor, and the
AVX2 extension. Since the encoding of vpor and vpxor is the same, and
their semantics are basically the same modulo the mathematical
operation, they are handled by the same switch case block.
This also updates the vpxor function, to test vpor and vpxor, and
updates the name to vpor_xor_test to better reflect what it does.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit adds support for recording the AVX instruction vpmovmskb,
and tests to the relevant file. The test didn't really support checking
general purpose registers, so this commit also adds a proc to
gdb.reverse/i386-avx-reverse.exp, which can be used to test them.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit adds support for recording instructions of the form
VPCMPEQ[B|W|D]. They are all encoded in the same way and only
differentiated by the opcode, so they are all processed together. This
commit also updates the test to (quite exhaustively) test the new
instruction.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This commit adds support for recording the instruction vpxor,
introduced in the AVX extension, and extended in AVX2 to use 256 bit
registers. The test gdb.reverse/i386-avx-reverse.exp has been extended
to test this instruction as well.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I tried out making python initialization fail by passing an incorrect
PYTHONHOME, and got:
...
$ PYTHONHOME=foo ./gdb.sh -q
Python path configuration:
PYTHONHOME = 'foo'
...
Python initialization failed: \
failed to get the Python codec of the filesystem encoding
Python not initialized
$
...
The relevant part of the code is:
...
static void
gdbpy_initialize (const struct extension_language_defn *extlang)
{
if (!do_start_initialization () && py_isinitialized && PyErr_Occurred ())
gdbpy_print_stack ();
gdbpy_enter enter_py;
...
What happens is:
- gdbpy_enter::gdbpy_enter () is called, where we run into:
'if (!gdb_python_initialized) error (_("Python not initialized"));'
- the error propagates to gdb's toplevel
- gdb prints the error and exits.
It seems unnecessary that we exit gdb. We could continue the
session without python support.
Fix this by:
- bailing out of gdbpy_initialize if !do_start_initialization
- bailing out of finalize_python if !gdb_python_initialized
This gets us instead:
...
$ PYTHONHOME=foo gdb -q
Python path configuration:
PYTHONHOME = 'foo'
...
Python initialization failed: \
failed to get the Python codec of the filesystem encoding
(gdb) python print (1)
Python not initialized
(gdb)
...
Tested on aarch64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
When disassembling function symbols in C++ code, if GDB is asked to
disassemble a function by name then the function name in the header
line can be demangled by turning on `set print asm-demangle on`, e.g.:
(gdb) disassemble foo_type::some_function
Dump of assembler code for function _ZN8foo_type13some_functionE7my_type:
0x0000000000401142 <+0>: push %rbp
... etc ...
End of assembler dump.
(gdb) set print asm-demangle on
(gdb) disassemble foo_type::some_function
Dump of assembler code for function foo_type::some_function(my_type):
0x0000000000401142 <+0>: push %rbp
... etc ...
End of assembler dump.
However, if GDB is disassembling the current function, then this
demangling doesn't work, e.g.:
(gdb) break foo_type::some_function
Breakpoint 1 at 0x401152: file mangle.cc, line 16.
(gdb) run
Starting program: /tmp/mangle
Breakpoint 1, foo_type::some_function (this=0x7fffffffa597, obj=...) at mangle.cc:16
16 obj.update ();
(gdb) disassemble
Dump of assembler code for function _ZN8foo_type13some_functionE7my_type:
0x0000000000401142 <+0>: push %rbp
... etc ...
End of assembler dump.
(gdb) set print asm-demangle on
(gdb) disassemble
Dump of assembler code for function _ZN8foo_type13some_functionE7my_type:
0x0000000000401142 <+0>: push %rbp
... etc ...
End of assembler dump.
This commit fixes this issue, and extends gdb.cp/disasm-func-name.exp,
which was already testing the first case (disassemble by name) to also
cover disassembling the current function.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that gdb.base/bg-exec-sigint-bp-cond.exp fails for remote host
(concretely, host board local-remote-host and target board
remote-gdbserver-on-localhost):
...
(gdb) c&^M
Continuing.^M
(gdb) bash: line 0: kill: (23834) - Operation not permitted^M
^M
Breakpoint 2, foo () at bg-exec-sigint-bp-cond.c:23^M
23 return 0;^M
...
due to getting gdb's pid like this:
...
set gdb_pid [exp_pid -i [board_info host fileid]]
...
For a remote host using ssh, this returns the pid of the ssh session on the build machine.
Fix this by requiring local host.
Tested on x86_64-linux.
|
|
The test-case gdb.base/bg-exec-sigint-bp-cond.exp sends 10 SIGINTS to gdb, and
counts whether 10 have arrived.
Occasionally, fewer than 10 arrive due to signal merging [1].
This is more likely to happen when building gdb with -fsanitize=thread.
Fix this by instead sending one SIGINT at a time and waiting for it to
arrive.
The test-case still fails if the fix for the PR that the
test-case was introduced for is reverted.
Tested on aarch64-linux and x86_64-linux.
PR testsuite/32329
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32329
[1] https://www.gnu.org/software/libc/manual/html_node/Merged-Signals.html
|
|
The 80-column-help-string self-test can fail if gdb's install
directory is too long, because the help for "jit-reader-load" includes
JIT_READER_DIR.
This help text is actually somewhat misleading, though.
JIT_READER_DIR is not actually used directly -- instead the relocated
variant is used.
This patch adds a new "show jit-reader-directory" command and changes
the help text to refer to this instead. I considered adding a "set"
command as well, but since absolute paths are acceptable here, and
since this is a very niche command anyway, I figured there was no need
to bother.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32357
Reviewed-By: Kévin Le Gouguec <legouguec@adacore.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
While looking at build_id_to_bfd_suffix (in gdb/build-id.c) I realised
that GDB would likely not do what we wanted if a build-id was ever a
single byte.
Right now, build-ids generated by the GNU linker are 32 bytes, but
there's nothing that forces this to be the case, it's pretty easy to
create a fake, single byte, build-id. Given that the build-id is an
external input (read from the objfile), GDB should protect itself
against these edge cases.
The problem with build_id_to_bfd_suffix is that this function creates
the path used to look up either the debug information or an
executable, based on its build-id. So a 3-byte build-id 0xaabbcc will
look in the path: `$DEBUG_FILE_DIRECTORY/.build-id/aa/bbcc.debug`.
However, a single byte build-id 0xaa, will look in the file:
`$DEBUG_FILE_DIRECTORY/.build-id/aa/.debug` which doesn't seem right.
Worse, when looking for an objfile given a build-id GDB will look for
a file called `$DEBUG_FILE_DIRECTORY/.build-id/aa/` with a trailing
'/' character.
I propose that, in build_id_to_bfd_suffix, we just return early if the
build-id is 1 byte (or less), with a return value that indicates no
separate file was found.
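For illustration, here is roughly how the lookup path is formed (a
hypothetical helper, not the actual build-id.c code); with a one-byte
id the part after the '/' is empty, which is the degenerate case
described above:
  #include <cstddef>
  #include <cstdint>
  #include <cstdio>
  #include <string>

  static std::string
  build_id_path (const uint8_t *id, size_t len, const char *suffix)
  {
    std::string path = ".build-id/";
    for (size_t i = 0; i < len; ++i)
      {
        char buf[3];
        snprintf (buf, sizeof buf, "%02x", id[i]);
        path += buf;
        /* The first byte becomes the directory name.  */
        if (i == 0)
          path += '/';
      }
    return path + suffix;
  }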
For testing I made use of the DWARF assembler. I needed to update the
build-id creation proc; the existing code assumes that the build-id is
a multiple of 4 bytes, so I added some additional padding to ensure
that the generated note was a multiple of 4 bytes, even if the
build-id was not.
I added a test with a 1 byte build-id, and also for the case where the
build-id has zero length. The zero length case already does what
you'd expect (no debug is loaded) as the bfd library rejects the
build-id when loading it from the objfile, but adding this additional
test is pretty cheap.
Approved-By: Kevin Buettner <kevinb@redhat.com>
|
|
Some shells automatically confirm the 'dir' command:
(gdb) dir
Reinitialize source path to empty? (y or n)
[answered Y; input not from terminal]
Source directories searched: $cdir;$cwd
(gdb) y
dir <...>/gdb/testsuite/gdb.base
Undefined command: "y". Try "help".
For example, this reproduces in a MinGW32 environment with
'TERM=dumb'. Skip sending 'y' if the command is already confirmed.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
ada-lang.c has a "sort_choices" function that claims to sort the
symbol choices, but which does not really implement sorting. This
patch changes this code to really sort the result vector, sorting
first by filename, then line number, and finally by the symbol name.
The filename sorting is done first by comparing basenames. It turns
out that gnatmake and gprbuild invoke the compiler a bit differently,
so depending on which one you use, the results of a naive sort might
be different (due to the use of absolute or relative paths).
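A simplified sketch of the resulting ordering (the types are invented
for illustration, and only the basename part of the filename comparison
is shown):
  #include <algorithm>
  #include <string>
  #include <vector>

  struct choice
  {
    std::string filename;
    int line;
    std::string symbol_name;
  };

  static std::string
  basename_of (const std::string &path)
  {
    std::string::size_type pos = path.find_last_of ('/');
    return pos == std::string::npos ? path : path.substr (pos + 1);
  }

  static void
  sort_choices (std::vector<choice> &choices)
  {
    std::sort (choices.begin (), choices.end (),
               [] (const choice &a, const choice &b)
               {
                 std::string ba = basename_of (a.filename);
                 std::string bb = basename_of (b.filename);
                 if (ba != bb)
                   return ba < bb;          /* filename first */
                 if (a.line != b.line)
                   return a.line < b.line;  /* then line number */
                 return a.symbol_name < b.symbol_name;
               });
  }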
|
|
The Intel (R) linear address masking (LAM) feature modifies the checking
applied to 64-bit linear addresses. With this so-called "modified
canonicality check" the processor masks the metadata bits in a pointer
before using it as a linear address. LAM supports two different modes that
differ regarding which pointer bits are masked and can be used for
metadata: LAM 48 resulting in a LAM width of 15 and LAM 57 resulting in a
LAM width of 6.
This patch adjusts watchpoint addresses based on the currently enabled
LAM mode using the untag mask provided in the /proc/<pid>/status file.
As LAM can be enabled at runtime or as the configuration may change
when entering an enclave, GDB checks enablement state each time a watchpoint
is updated.
In contrast to the patch implemented for ARM's Top Byte Ignore "Clear
non-significant bits of address on memory access", it is not necessary to
adjust addresses before they are passed to the target layer cache, as,
for LAM, tagged pointers are supported by the system call used to read memory.
Additionally, LAM applies only to addresses used for data accesses.
Thus, it is sufficient to mask addresses used for watchpoints.
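The adjustment itself amounts to applying the kernel-provided untag
mask to the watchpoint address; a minimal sketch (the helper name is my
own, not gdb's actual code):
  #include <cstdint>

  /* Clear the LAM metadata bits using the untag mask read from
     /proc/<pid>/status before programming the debug registers.  */
  static uint64_t
  lam_untag_address (uint64_t addr, uint64_t untag_mask)
  {
    return addr & untag_mask;
  }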
The following examples are based on a LAM57 enabled program.
Before this patch tagged pointers were not supported for watchpoints:
~~~
(gdb) print pi_tagged
$2 = (int *) 0x10007ffffffffe004
(gdb) watch *pi_tagged
Hardware watchpoint 2: *pi_tagged
(gdb) c
Continuing.
Couldn't write debug register: Invalid argument.
~~~
Once LAM 48 or LAM 57 is enabled for the current program, GDB can now
specify watchpoints for tagged addresses with LAM width 15 or 6,
respectively.
Approved-By: Felix Willgerodt <felix.willgerodt@intel.com>
|
|
From commit 667ed4b14ddaa9af196481f1757c0e517e80b6ed onward, GDB looks
for the "jit_descriptor" linkage name, instead of the normal name, during
JIT code initialization. Without this change, the function
"lookup_minimal_symbol_linkage" only matches non-static data. So if the
jit_debug_descriptor is a static type then setting a breakpoint in the JIT code
fails. The issue is seen for the Intel compilers, where jit_debug_descriptor has
the static type "mst_file_data". Hence lookup_minimal_symbol_linkage returns
nullptr for it, and in this case the breakpoint is not hit in the JIT code.
To resolve this, the commit introduces a new boolean argument to the
lookup_minimal_symbol_linkage function. This argument allows the function to
also match mst_file_data and mst_file_bss types when set to true. The
function is called with this new argument set to true only from JIT code
initialization handling, ensuring that the current behavior remains unchanged
for other cases. This is because handling static data symbols in all cases
results in a regression in the "gdb.base/print-file-var.exp" test.
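A sketch of the widened matching (a reduced enum and an invented
helper, not gdb's exact code):
  enum minimal_symbol_type { mst_data, mst_bss, mst_file_data, mst_file_bss };

  /* Only the JIT initialization path passes MATCH_STATIC_DATA as true,
     so only there are file-local data and bss symbols accepted.  */
  static bool
  data_symbol_matches (minimal_symbol_type type, bool match_static_data)
  {
    if (type == mst_data || type == mst_bss)
      return true;
    return match_static_data
           && (type == mst_file_data || type == mst_file_bss);
  }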
Here is an example of the minsym for the JIT code emitted by the Intel
compilers, where lookup_minimal_symbol_linkage fails without this change
because the jit_debug_descriptor's type is "mst_file_data".
(top-gdb) p *msymbol
$1 = {<general_symbol_info> =
{m_name = 0x7fffcc77dc95 "__jit_debug_descriptor",
m_value = {ivalue = 84325936, block = 0x506b630,
bytes = 0x506b630 <error: Cannot access memory at address 0x506b630>,
address = 0x506b630, unrel_addr = (unknown: 0x506b630),
common_block = 0x506b630, chain = 0x506b630},
language_specific = {obstack = 0x0, demangled_name = 0x0},
m_language = language_unknown, ada_mangled = 0, m_section = 29},
m_size = 24, filename = 0x55555a751b70 "JITLoaderGDB.cpp",
m_type = mst_file_data, created_by_gdb = 0,
m_target_flag_1 = 0, m_target_flag_2 = 0, m_has_size = 1,
name_set = 1, hash_next = 0x55555b86e4f0, demangled_hash_next = 0x0}
Updated the test "jit-elf-so.exp" to test the static type of jit_descriptor
object.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
It was pointed out that the recently added gdb.opt/inline-entry.exp
test would fail when run using gcc 7 and earlier, on an x86-64 target:
https://inbox.sourceware.org/gdb-patches/9fe35ea1-d99b-444d-bd1b-e3a1f108dd77@suse.de
Bernd Edlinger points out that, for gcc, the test relies on the
-gstatement-frontiers work which was added in gcc 8.x:
https://inbox.sourceware.org/gdb-patches/DU2PR08MB10263357597688D9D66EA745CE4242@DU2PR08MB10263.eurprd08.prod.outlook.com
For gcc 7.x and older, without the -gstatement-frontiers work, the
compiler uses DW_AT_entry_pc differently, which leads to a poorer
debug experience.
Here is the interesting source line from inline-entry.c:
if ((global && bar (1)) || bar (2))
And here's some of the relevant disassembly output:
Dump of assembler code for function main:
0x401020 <+0>: mov 0x3006(%rip),%eax (1)
0x401026 <+6>: test %eax,%eax (2)
0x401028 <+8>: mov 0x2ffe(%rip),%eax (3)
0x40102e <+14>: je 0x401038 <main+24> (4)
0x401030 <+16>: sub $0x1,%eax (5)
0x401033 <+19>: jne 0x40103d <main+29> (6)
Lines (1), (2), and (4) represent the check of 'global'. However,
line (3) is actually the first instruction for 'bar' which has been
inlined. Lines (5) and (6) are also part of the first inlined 'bar'
function.
If the check of 'global' returns false then the first call to 'bar'
should never happen, this is accomplished by the branch at (4) being
taken.
For gcc 8+, gcc generates a DW_AT_entry_pc with the value 0x401030,
this is where GDB places a breakpoint for 'bar', and this address is
after the branch at line (4), and so, if the call to 'bar' never
happens, the breakpoint is never hit.
For gcc 7 and older, gcc generates a DW_AT_entry_pc with the value
0x401028, which is the first address associated with the inline 'bar'
function. Unfortunately, this address is also before the check of
'global' has completed, this means that GDB hits the 'bar' breakpoint
before the inferior has decided if 'bar' should actually be called or
not.
I don't think there's really much GDB can do in the older gcc
versions; we are placing the breakpoint at the entry point, and this
is within bar. Given that this test really does depend on the newer
gcc behaviour, I think the only sensible solution is to skip this test
when an older version of gcc is being used.
I've incorporated the check for -gstatement-frontiers support that
Bernd suggested and now the test will be skipped for older versions of
GCC.
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
As with the previous two commits, this commit fixes a location where
we called PyObject_IsTrue without including an error check, this time
in bppy_init.
The 'qualified' argument is supposed to be a bool; the docs say:
The optional QUALIFIED argument is a boolean that allows
interpreting the function passed in 'spec' as a fully-qualified
name. It is equivalent to 'break''s '-qualified' flag (*note
Linespec Locations:: and *note Explicit Locations::).
It's not totally clear that the only valid values are True or False,
but I'm choosing to interpret the docs that way, and so I've added a
PyBool_Type check during argument parsing. Now, if a non-bool is
passed the user will get a TypeError during argument parsing. I've
added a test to cover this case.
This is a potentially breaking change to the Python API, but hopefully
this will not impact too many people. I've added a NEWS entry to
highlight this change.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
As with the previous commit, I discovered that in micmdpy_set_installed
we were calling PyObject_IsTrue, but not checking for a possible error
value being returned.
The micmdpy_set_installed function implements the
gdb.MICommand.installed attribute, and the documentation indicates that
this attribute should only be assigned a bool:
This attribute is read-write, setting this attribute to 'False'
will uninstall the command, removing it from the set of available
commands. Setting this attribute to 'True' will install the
command for use.
So I propose that instead of using PyObject_IsTrue we use
PyBool_Check, and if the new value fails this check we raise an
error. We can then compare the new value to Py_True directly instead
of calling PyObject_IsTrue.
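The pattern, roughly (a simplified sketch, not the exact
micmdpy_set_installed code):
  #include <Python.h>

  /* Require a real bool and compare against Py_True, instead of calling
     PyObject_IsTrue, which can return -1 with a Python error set.  */
  static int
  validate_installed_arg (PyObject *newvalue, bool *installed)
  {
    if (!PyBool_Check (newvalue))
      {
        PyErr_SetString (PyExc_TypeError,
                         "the value of 'installed' must be a bool");
        return -1;
      }
    *installed = (newvalue == Py_True);
    return 0;
  }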
This is a potentially breaking change to the Python API, but hopefully
this will not impact too many people, and the fix is pretty
easy (switch to using a bool). I've added a NEWS entry to draw
attention to this change.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Building on the previous two commits, I was auditing our uses of
PyObject_IsTrue looking for places where we were missing an error
check.
The gdb.Architecture.integer_type() function takes a 'signed' argument
which should be a 'bool', and the docs do say:
If SIGNED is not specified, it defaults to 'True'. If SIGNED is
'False', the returned type will be unsigned.
Currently we use PyObject_IsTrue, but we are missing an error check.
To fix this I've tightened the code to enforce the bool requirement at
the point that the arguments are parsed. With that done I can remove
the call to PyObject_IsTrue and instead compare to Py_True directly;
the object in question will always be a PyBool_Type.
However, we were testing that passing a non-bool argument for 'signed'
is treated as Py_False; this was added with this commit:
commit 90fe61ced1c9aa4afb263326e336330d15603fbf
Date: Mon Nov 29 13:53:06 2021 +0000
gdb/python: don't use the 'p' format for parsing args
which is when the PyObject_IsTrue call was added. Given that the docs
do seem pretty clear that only True or False are suitable argument
values, my proposal is that we just remove these tests and instead
test that any non-bool argument value for 'signed' gives a TypeError.
This is a breaking change to the Python API; however, my hope is that
this is such an edge case that it will not cause too many problems.
I've added a NEWS entry to highlight this change though.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
After the previous commit I audited all our uses of PyObject_IsTrue
looking for places where we were missing an error check. I did find
some that are missing error checks in places where we really should
have error checks, and I'll fix those in later commits.
This commit, however, focuses on those locations where PyObject_IsTrue
is called, there is no error check, and the error check isn't really
necessary because we already know that the object we are dealing with
is of type PyBool_Type.
In line with the previous commit, in these cases I have removed the
PyObject_IsTrue call, and replaced it with a comparison against
Py_True. In one location where it is not obvious that the object we
have is PyBool_Type I've added an assert, but in the other cases the
comparison to Py_True immediately follows a PyBool_Check call, so an
assert would be redundant.
I've added a test for the gdb.Value.format_string styling argument
being passed a non-bool value as this wasn't previously being tested,
though this new test will pass before and after this commit.
There should be no functional change after this commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The dictionary contains a few entries with capital letters:
...
$ grep -E '[A-Z]' .git/wikipedia-common-misspellings.txt | wc -l
143
...
but they don't look too interesting in the gdb context (for instance,
Habsbourg->Habsburg), so filter them out.
That leaves us with entries looking only like "foobat->foobar", so add
handling of capitalized words, such that we also rewrite "Foobat" to "Foobar".
Tested on aarch64-linux. Verified with shellcheck.
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
Add "doens't->doesn't" to gdb/contrib/common-misspellings.txt, and run
gdb/contrib/spellcheck.sh to fix this in a few files.
Tested on x86_64-linux.
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
Add handling of double quotes in gdb/contrib/spellcheck.sh, and fix the
following typos:
...
inheritence -> inheritance
psuedo -> pseudo
succeded -> succeeded
...
Tested on x86_64-linux.
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
Currently, text adjacent to parentheses is not spell-checked:
...
$ cat tmp.txt
(upto)
$ ./gdb/contrib/spellcheck.sh tmp.txt
$
...
Add handling of parentheses, such that we get:
...
$ ./gdb/contrib/spellcheck.sh tmp.txt
upto -> up to
$
...
Rerun spellcheck.sh, resulting in a few "thru->through" replacements.
Tested on x86_64-linux.
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
I (Andrew) have split this small change from a larger patch which was
posted here:
https://inbox.sourceware.org/gdb-patches/AS1PR01MB9465608EBD5D62642C51C428E4922@AS1PR01MB9465.eurprd01.prod.exchangelabs.com
And I have written the stand-alone test for this issue. The original
patch included this paragraph to explain this change (I've fixed one
typo in this text replacing 'program' with 'function'):
... it may happen that the infrun machinery steps from one inline
range to another inline range of the same inline function. That can
look like jumping back and forth from the calling function to the
inline function, while really the inline function just jumps from a
hot to a cold section of the code, i.e. error handling.
The important thing that happens here is that the outer function
and the inline function must both have multiple ranges. When the
inferior is within the inline function and moves from one range to
another it is critical that the address we stop at is the start of a
range in both the outer function and the inline function.
The diagram below represents how the functions are split and aligned:
(A) (B)
bar: |------------| |---|
foo: |------------------| |--------|
The inferior is stepping through 'bar' and eventually reaches
point (A) at which point control passes to point (B).
Currently, when the inferior stops, GDB notices that both 'foo' and
'bar' start at address (B), and so GDB uses the inline frame mechanism
to skip 'bar' and tells the user that the inferior is in 'foo'.
However, as we were in 'bar' before the step then it makes sense that
we should be in 'bar' after the step, and this is what the patch does.
There are two tests using the DWARF assembler; the first checks the
above situation and ensures that GDB reports 'bar' after the step.
The second test is similar, but after the step we enter a new range
where a different inline function starts, something like this:
(A) (B)
bar: |------------|
baz: |---|
foo: |------------------| |--------|
In this case as we step at (A) and land at (B) we leave 'bar' and
expect to stop in 'foo', GDB shouldn't automatically enter 'baz' as
that is a completely different inline function. And this is, indeed,
what we see.
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
|
|
The entry PC for a DIE, e.g. an inline function, might not be the base
address of the DIE. Currently though, in block::entry_pc(), GDB
always returns the base address (low-pc or the first address of the
first range) as the entry PC.
This commit extends the block class to carry the entry PC as a
separate member variable. Then the DWARF reader is extended to read
and set the entry PC for the block. Now in block::entry_pc(), if the
entry PC has been set, this is the value returned.
If the entry-pc has not been set to a specific value then the old
behaviour of block::entry_pc() remains, GDB will use the block's base
address. Not every DIE will set the entry-pc, but GDB still needs to
have an entry-pc for every block, so the existing logic supplies the
entry-pc for any block where the entry-pc was not set.
The DWARF-5 spec for reading the entry PC is a super-set of the spec
as found in DWARF-4. For example, if there is no DW_AT_entry_pc then
DWARF-4 says to use DW_AT_low_pc while DWARF-5 says to use the base
address, which is DW_AT_low_pc or the first address in the first range
specified by DW_AT_ranges if there is no DW_AT_low_pc.
I have taken the approach of just implementing the DWARF-5 spec for
everyone. There doesn't seem to be any benefit to deliberately
ignoring a ranges based entry PC value for DWARF-4. If some naughty
compiler has emitted that, then let's use it.
Similarly, DWARF-4 says that DW_AT_entry_pc is an address. DWARF-5
allows an address or a constant, where the constant is an offset from
the base address. I allow both approaches for all DWARF versions.
There doesn't seem to be any downsides to this approach.
I ran into an issue when testing this patch where GCC would have the
DW_AT_entry_pc point to an empty range. When GDB parses the ranges
any empty ranges are ignored. As a consequence, the entry-pc appears
to be outside the address range of a block.
The empty range problem is certainly something that we can, and should
address, but that is not the focus of this patch, so for now I'm
ignoring that problem. What I have done is added a check: if the
DW_AT_entry_pc is outside the range of a block then the entry-pc is
ignored; GDB will then fall back to its default algorithm for
computing the entry-pc.
If/when in the future we address the empty range problem, these
DW_AT_entry_pc attributes will suddenly become valid and GDB will
start using them. Until then, GDB continues to operate as it always
has.
An early version of this patch stored the entry-pc within the block
like this:
std::optional<CORE_ADDR> m_entry_pc;
However, a concern was raised that this, on a 64-bit host, effectively
increases the size of block by 16-bytes (8-bytes for the CORE_ADDR,
and 8-bytes for the std::optional's bool plus padding).
If we remove the std::optional part and just use a CORE_ADDR then we
need to have a "special" address to indicate if m_entry_pc is in use
or not. I don't really like using special addresses; different
targets can access different address ranges, even zero is a valid
address on some targets.
However, Bernd Edlinger suggested storing the entry-pc as an offset,
and I think that will resolve my concerns. So, we store the entry-pc
as a signed offset from the block's base address (the first address of
the first range, or the start() address value if there are now
ranges). Remember, ranges can be out of order, in which case the
first address of the first range might be greater than the entry-pc.
When GDB needs to read the entry-pc we can add the offset onto the
block's base address to recalculate it.
With this done, on a 64-bit host, block only needs to increase by
8-bytes.
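Roughly, the scheme looks like this (the member and method names are my
own, not the real block class):
  #include <cstdint>

  struct block_sketch
  {
    /* First address of the first range (the block's base address).  */
    uint64_t base_address;

    /* Signed offset of the entry PC from the base address; an offset of
       zero yields the default entry PC.  */
    int64_t entry_pc_offset = 0;

    void set_entry_pc (uint64_t pc)
    { entry_pc_offset = (int64_t) (pc - base_address); }

    uint64_t entry_pc () const
    { return base_address + entry_pc_offset; }
  };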
The inline-entry.exp test was originally contributed by Bernd here:
https://inbox.sourceware.org/gdb-patches/AS1PR01MB94659E4D9B3F4A6006CC605FE4922@AS1PR01MB9465.eurprd01.prod.exchangelabs.com
though I have made some edits, making more use of lib/gdb.exp
functions, making the gdb_test output patterns a little tighter, and
updating the test to run with Clang. I also moved the test to
gdb.opt/ as that seemed like a better home for it.
Co-Authored-By: Bernd Edlinger <bernd.edlinger@hotmail.de>
|