|
There is some inconsistency between the behaviour of objdump -D and
objdump -s, both supposedly operating on all sections by default.
objdump -s ignores bss sections, while objdump -D disassembles the
zeros. Fix this by making objdump -D ignore bss sections too.
Furthermore, "objdump -s -j .bss" doesn't dump .bss even though it should,
since the user is specifically asking to look at all those zeros.
This change does find some tests that used objdump -D with expected
output in bss-style sections. I've updated all the msp430 tests that
just wanted to find a non-empty section to look at section headers
instead, making the tests slightly more stringent. The ppc xcoff and
spu tests are fixed by adding -j options to objdump, which makes the
tests somewhat more lenient.
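Roughly, the new default amounts to a check of this shape (an illustrative
sketch only, not the actual objdump.c code; the named_with_j flag stands in
for objdump's -j section-list handling):
  #include "bfd.h"

  /* Skip sections that have no contents in the file, such as .bss,
     unless the user named them with -j.  */
  static bool
  should_dump_section (asection *sec, bool named_with_j)
  {
    /* .bss-style sections have SEC_HAS_CONTENTS clear: there is nothing
       to read from the file, only zero-initialised memory at run time.  */
    if ((bfd_section_flags (sec) & SEC_HAS_CONTENTS) == 0)
      return named_with_j;   /* -j overrides the default.  */
    return true;
  }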
binutils/
* objdump.c (disassemble_section): Ignore sections without
contents, unless overridden by -j.
(dump_section): Allow -j to override the default of not
displaying sections without contents.
* doc/binutils.texi (objdump options): Update -D, -s and -j
description.
gas/
* testsuite/gas/ppc/xcoff-tls-32.d: Select wanted objdump
sections with -j.
* testsuite/gas/ppc/xcoff-tls-64.d: Likewise.
ld/
* testsuite/ld-msp430-elf/main-bss-lower.d,
* testsuite/ld-msp430-elf/main-bss-upper.d,
* testsuite/ld-msp430-elf/main-const-lower.d,
* testsuite/ld-msp430-elf/main-const-upper.d,
* testsuite/ld-msp430-elf/main-text-lower.d,
* testsuite/ld-msp430-elf/main-text-upper.d,
* testsuite/ld-msp430-elf/main-var-lower.d,
* testsuite/ld-msp430-elf/main-var-upper.d: Expect -wh output.
* testsuite/ld-msp430-elf/msp430-elf.exp: Use objdump -wh
rather than objdump -D or objdump -d with tests checking for
non-empty given sections.
* testsuite/ld-spu/ear.d,
* testsuite/ld-spu/icache1.d,
* testsuite/ld-spu/ovl.d,
* testsuite/ld-spu/ovl2.d: Select wanted objdump sections.
|
|
* dwarf1.c (_bfd_dwarf1_find_nearest_line): Exclude .debug
sections without contents.
|
|
open_source_file relies on errno to communicate the reason for a missing
source file.
open_source_file may also call debuginfod_find_source. It is possible
for debuginfod_find_source to set errno to a value unrelated to the
reason for a failed download.
This can result in bogus error messages being reported as the reason for
a missing source file. The following error message should instead be
"No such file or directory":
Temporary breakpoint 1, 0x00005555556f4de0 in main ()
(gdb) list
Downloading source file /usr/src/debug/glibc-2.36-8.fc37.x86_64/elf/<built-in>
1 /usr/src/debug/glibc-2.36-8.fc37.x86_64/elf/<built-in>: Directory not empty.
Fix this by having open_source_file return a negative errno if it fails
to open a source file. Use this value to generate the error message
instead of errno.
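The pattern, sketched with simplified names (gdb's real open_source_file
deals in file descriptors and scoped types, so this is only an illustration
of the idea):
  #include <cerrno>
  #include <cstdio>
  #include <cstring>

  /* Capture the reason for the failure immediately and hand it back as a
     negative errno, before anything else (such as a debuginfod query) can
     clobber the global errno.  */
  static int
  open_source_file_sketch (const char *path)
  {
    std::FILE *f = std::fopen (path, "r");
    if (f == nullptr)
      return -errno;
    std::fclose (f);
    return 0;   /* the real function returns an open file descriptor */
  }

  int
  main ()
  {
    int res = open_source_file_sketch ("/no/such/file");
    if (res < 0)
      std::printf ("%s\n", std::strerror (-res));  /* "No such file or directory" */
    return 0;
  }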
Approved-By: Tom Tromey <tom@tromey.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29999
|
|
gdbsupport/errors.h declares perror_with_name and leaves the
implementation to the clients.
However, gdb's and gdbserver's implementations are essentially the
same, resulting in unnecessary code duplication.
Fix this by implementing perror_with_name in gdbsupport. Add an
optional parameter for specifying the errno used to generate the
error message.
Also move the implementation of perror_string to gdbsupport since
perror_with_name requires it.
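The shared helper then has roughly this shape (a sketch only; the signature
and the optional-errno default are approximations, and the real gdbsupport
version reports through gdb's error machinery rather than throwing a
standard exception):
  #include <cerrno>
  #include <cstring>
  #include <stdexcept>
  #include <string>

  /* Build "<prefix>: <strerror>" and report it.  An errnum of 0 means
     "use the current errno", mirroring the optional parameter described
     above.  */
  [[noreturn]] static void
  perror_with_name_sketch (const char *prefix, int errnum = 0)
  {
    int err = errnum != 0 ? errnum : errno;
    std::string msg = std::string (prefix) + ": " + std::strerror (err);
    throw std::runtime_error (msg);
  }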
Approved-By: Tom Tromey <tom@tromey.com>
|
|
|
|
This commit introduces the idea of loading only part of an array in
order to print it: what I call "limited length" arrays.
The motivation behind this work is to make it possible to print slices
of very large arrays, where very large means bigger than
`max-value-size'.
Consider this GDB session with the current GDB:
(gdb) set max-value-size 100
(gdb) p large_1d_array
value requires 400 bytes, which is more than max-value-size
(gdb) p -elements 10 -- large_1d_array
value requires 400 bytes, which is more than max-value-size
Notice that the request to print 10 elements still fails, even though 10
elements should be less than the max-value-size. With a patched version
of GDB:
(gdb) p -elements 10 -- large_1d_array
$1 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9...}
So now the print has succeeded. It has also loaded `max-value-size'
worth of data into the value history, so the recorded value can be accessed
consistently:
(gdb) p -elements 10 -- $1
$2 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9...}
(gdb) p $1
$3 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, <unavailable> <repeats 75 times>}
(gdb)
Accesses with other languages work similarly, although for Ada only
C-style [] array element/dimension accesses use history. For both Ada
and Fortran () array element/dimension accesses go straight to the
inferior, bypassing the value history just as with C pointers.
Co-Authored-By: Maciej W. Rozycki <macro@embecosm.com>
|
|
Add a `-nonl' option to `gdb_test', making it possible to match output
from commands such as `output' that do not produce a newline sequence
at the end, e.g.:
(gdb) output 0
0(gdb)
|
|
While it makes sense to allow accessing out-of-bounds elements in the
debuggee and see whatever there might happen to be there in memory (we
are a debugger and not a programming rules enforcement facility and we
want to make people's lives easier in chasing bugs), e.g.:
(gdb) print one_hundred[-1]
$1 = 0
(gdb) print one_hundred[100]
$2 = 0
(gdb)
we shouldn't really pretend that we have any meaningful data around
values recorded in history (what these commands really retrieve are the
current debuggee memory contents outside the original data accessed,
which is really confusing in my opinion). So mark values recorded in
history as such and verify that accesses to them are in range:
(gdb) print one_hundred[-1]
$1 = <unavailable>
(gdb) print one_hundred[100]
$2 = <unavailable>
Add a suitable test case, which also covers integer overflows in data
location calculation.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Consistently use the LONGEST and ULONGEST types for value byte/bit
offsets and lengths respectively, avoiding silent truncation for ranges
exceeding the 32-bit span, which may cause incorrect matching. Also
report a conversion overflow on byte ranges that cannot be expressed in
terms of bits with these data types, e.g.:
(gdb) print one_hundred[1LL << 58]
Integer overflow in data location calculation
(gdb) print one_hundred[(-1LL << 58) - 1]
Integer overflow in data location calculation
(gdb)
Previously such accesses were let through, producing unpredictable
results.
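The added guard is essentially an overflow check on the byte-to-bit
conversion, along these lines (a sketch, not the actual code; it assumes
LONGEST is a 64-bit signed type):
  #include <climits>
  #include <stdexcept>

  typedef long long LONGEST;   /* stand-in for gdb's 64-bit signed type */

  static LONGEST
  byte_offset_to_bit_offset (LONGEST byte_offset)
  {
    /* Refuse byte offsets whose bit equivalent does not fit in LONGEST.  */
    if (byte_offset > LLONG_MAX / 8 || byte_offset < LLONG_MIN / 8)
      throw std::overflow_error
        ("Integer overflow in data location calculation");
    return byte_offset * 8;
  }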
|
|
We have an inconsistency in value history accesses where array element
accesses cause an error for entries exceeding the currently selected
`max-value-size' setting even where such accesses successfully complete
for elements located in the inferior, e.g.:
(gdb) p/d one
$1 = 0
(gdb) p/d one_hundred
$2 = {0 <repeats 100 times>}
(gdb) p/d one_hundred[99]
$3 = 0
(gdb) set max-value-size 25
(gdb) p/d one_hundred
value requires 100 bytes, which is more than max-value-size
(gdb) p/d one_hundred[99]
$7 = 0
(gdb) p/d $2
value requires 100 bytes, which is more than max-value-size
(gdb) p/d $2[99]
value requires 100 bytes, which is more than max-value-size
(gdb)
According to our documentation, the `max-value-size' setting is a safety
guard against allocating an overly large amount of memory. Moreover, a
statement in the documentation says, concerning this setting: "Setting
this variable does not affect values that have already been allocated
within GDB, only future allocations." While in implementer-speak the
sentence may be unambiguous, I think the outside user may well infer
that the setting does not apply to values previously printed.
Therefore, rather than just fixing this inconsistency, it seems reasonable
to lift the setting for value history accesses, under the implication
that by having been retrieved from the debuggee they have already passed
the safety check. Do that by suppressing the value size check in
`value_copy' -- under the observation that if the original value has
already been loaded (i.e. it's not lazy), then it must have previously
passed said check -- making the last two commands succeed:
(gdb) p/d $2
$8 = {0 <repeats 100 times>}
(gdb) p/d $2 [99]
$9 = 0
(gdb)
Expand the testsuite accordingly, covering both value history handling
and the use of `value_copy' by `make_cv_value', used by Python code.
|
|
Use <climits> instead of <limits.h> and ditch local fallback definitions
for minimum and maximum value macros provided by C++11. Add LONGEST_MAX
and LONGEST_MIN definitions.
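Assuming LONGEST stays a 64-bit signed type, the new definitions amount to
something like the following (illustrative only, not the exact gdbsupport
code):
  #include <climits>

  typedef long long LONGEST;            /* assumption: 64-bit signed */
  typedef unsigned long long ULONGEST;  /* assumption: 64-bit unsigned */

  #define LONGEST_MAX LLONG_MAX
  #define LONGEST_MIN LLONG_MIN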
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Python functions implementing DAP requests should not use positional
parameters -- it only makes sense to call them with keyword arguments.
This patch changes the few remaining cases to start with the special
"*" parameter, following this rule.
|
|
Following commit 4e2a80ba606 ("gdb/testsuite: expect SIGSEGV from top
GDB spawn id"), the next failure I get in gdb.gdb/selftest.exp, using
the native-extended-gdbserver board, is:
(gdb) PASS: gdb.gdb/selftest.exp: send ^C to child process
signal SIGINT
Continuing with signal SIGINT.
FAIL: gdb.gdb/selftest.exp: send SIGINT signal to child process (timeout)
The problem is that in this gdb_test_multiple:
  set description "send SIGINT signal to child process"
  gdb_test_multiple "signal SIGINT" "$description" {
      -re "^signal SIGINT\r\nContinuing with signal SIGINT.\r\nQuit\r\n.* $" {
          pass "$description"
      }
  }
The "Continuing with signal SIGINT" portion is printed by the top GDB,
while the Quit portion is printed by the bottom GDB. As the
gdb_test_multiple is written, it expects both on the top GDB's spawn
id.
Fix this by splitting the gdb_test_multiple in two. The first one
expects the "Continuing with signal SIGINT" from the top GDB. The
second one expects "Quit" and the "(xgdb)" prompt from
$inferior_spawn_id. When debugging natively, this spawn id will be the
same as the top GDB's spawn id, but it's different when debugging with
GDBserver.
Change-Id: I689bd369a041b48f4dc9858d38bf977d09600da2
|
|
This changes main_info to use std::string. It removes some manual
memory management.
|
|
PR testsuite/30103 reports the following failure on aarch64-linux
(ubuntu 22.04):
...
(gdb) PASS: gdb.base/longjmp.exp: with_probes=0: pattern 1: next to longjmp
next
warning: Breakpoint address adjusted from 0x83dc305fef755015 to \
0xffdc305fef755015.
Warning:
Cannot insert breakpoint 0.
Cannot access memory at address 0xffdc305fef755015
__libc_siglongjmp (env=0xaaaaaaab1018 <env>, val=1) at ./setjmp/longjmp.c:30
30 }
(gdb) KFAIL: gdb.base/longjmp.exp: with_probes=0: pattern 1: gdb/26967 \
(PRMS: next over longjmp)
delete breakpoints
Delete all breakpoints? (y or n) y
(gdb) info breakpoints
No breakpoints or watchpoints.
(gdb) break 63
No line 63 in the current file.
Make breakpoint pending on future shared library load? (y or [n]) n
(gdb) FAIL: gdb.base/longjmp.exp: with_probes=0: pattern 2: setup: breakpoint \
at pattern start (got interactive prompt)
...
The test-case intends to set the breakpoint on line number 63 in
gdb.base/longjmp.c.
It tries to do so by specifying "break 63", which specifies a line in the
"current source file".
Due to the KFAIL PR, gdb stopped in __libc_siglongjmp, and because of the
presence of debug info, the "current source file" becomes glibc's
./setjmp/longjmp.c.
Consequently, setting the breakpoint fails.
Fix this by adding a $subdir/$srcfile: prefix to the breakpoint linespecs.
I've managed to reproduce the FAIL on x86_64/-m32, by installing the
glibc-32bit-debuginfo package. This allowed me to confirm the "current source
file" that is used:
...
(gdb) KFAIL: gdb.base/longjmp.exp: with_probes=0: pattern 1: gdb/26967 \
(PRMS: next over longjmp)
info source^M
Current source file is ../setjmp/longjmp.c^M
...
Tested on x86_64-linux, target boards unix/{-m64,-m32}.
Reported-By: Luis Machado <luis.machado@arm.com>
Reviewed-By: Tom Tromey <tom@tromey.com>
PR testsuite/30103
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30103
|
|
Add a new command "maint info frame-unwinders":
...
(gdb) help maint info frame-unwinders
List the frame unwinders currently in effect, starting with the highest \
priority.
...
Output for i386:
...
$ gdb -q -batch -ex "set arch i386" -ex "maint info frame-unwinders"
The target architecture is set to "i386".
dummy DUMMY_FRAME
dwarf2 tailcall TAILCALL_FRAME
inline INLINE_FRAME
i386 epilogue NORMAL_FRAME
dwarf2 NORMAL_FRAME
dwarf2 signal SIGTRAMP_FRAME
i386 stack tramp NORMAL_FRAME
i386 sigtramp SIGTRAMP_FRAME
i386 prologue NORMAL_FRAME
...
Output for x86_64:
...
$ gdb -q -batch -ex "set arch i386:x86-64" -ex "maint info frame-unwinders"
The target architecture is set to "i386:x86-64".
dummy DUMMY_FRAME
dwarf2 tailcall TAILCALL_FRAME
inline INLINE_FRAME
python NORMAL_FRAME
amd64 epilogue NORMAL_FRAME
i386 epilogue NORMAL_FRAME
dwarf2 NORMAL_FRAME
dwarf2 signal SIGTRAMP_FRAME
amd64 sigtramp SIGTRAMP_FRAME
amd64 prologue NORMAL_FRAME
i386 stack tramp NORMAL_FRAME
i386 sigtramp SIGTRAMP_FRAME
i386 prologue NORMAL_FRAME
...
Tested on x86_64-linux.
Reviewed-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Commit 43025f01a0c9 ("RISC-V: Improve link time complexity.") reduced the
time complexity of the linker relaxation but some code portions did not
reflect this change.
This commit fixes a comment describing each relaxation pass and reduces
the actual number of passes for the RISC-V linker relaxation from 3 to 2.
Though it does not change the functionality, it marginally improves the
performance while linking large programs (with many relocations).
bfd/ChangeLog:
* elfnn-riscv.c (_bfd_riscv_relax_section): Fix a comment to
reflect current roles of each relaxation pass.
ld/ChangeLog:
* emultempl/riscvelf.em: Reduce the number of linker relaxation
passes from 3 to 2.
|
|
The main one here is the section buffer, which can be quite large.
By using bfd_alloc rather than bfd_malloc we can leave tidying up memory
to the generic bfd code when the bfd is closed. bfd_check_format also
releases memory when object_p fails, so while it wouldn't be wrong
to bfd_release at bad_format_free in mmo_object_p, it's a little extra
code and work for no gain.
* mmo.c (mmo_object_p): bfd_alloc rather than bfd_malloc
lop_stab_symbol. Don't free/release on error.
(mmo_get_spec_section): bfd_zalloc rather than bfd_zmalloc
section buffer.
(mmo_scan): Free fname on another error path.
|
|
"Local labels are never absolute" says the comment. Except when they
are. Testcase
.offset
0:
a=0b
I don't see any particular reason to disallow local labels inside
struct definitions, so delete the comment and assertions.
* expr.c (integer_constant): Delete local label assertions.
|
|
The attribute really specifies that the sum of register and memory
operands is 4. Express it like that in most places, while using the 2nd
(apart from XOP) CPU feature flags (FMA4) in reversed operand matching
logic.
With the use in build_modrm_byte() gone, part of an assertion there
also becomes meaningless - simplify that at the same time.
With all uses of the opcode modifier field gone, also drop that.
|
|
The few XOP insns which used it wrongly didn't have VexVVVV specified.
With that added, the only further missing piece to use more generic code
elsewhere is SwapSources - see e.g. the BMI2 insns for similar operand
patterns.
With the only users gone, drop the #define as well as the special case
code.
|
|
The VPROT* forms with an immediate operand are entirely standard in the
way their ModR/M bytes are built. There's no reason to invoke special
case code. With that the handling of an immediate there can also be
dropped; it was partially bogus anyway, as in its "no memory operands"
portion it ignores the possibility of an immediate operand (which was
okay only because that case was already handled by more generic code).
|
|
This really isn't a "modifier" and rather ought to live next to the base
opcode anyway. Use the bits we presently have available to fit in the
field, renaming it to opcode_space. As an intended side effect this
helps readability at the use sites, by shortening the references quite a
bit.
In generated code arrange for human readable output, by using the
SPACE_* constants there rather than raw numbers. This may aid debugging
down the road.
|
|
Fold adjacent comparisons when, by ORing in a certain mask, the same
effect can be achieved by a single one. In load_insn_p() this extends
to further uses of an already available local variable.
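As an illustration of the folding (the values here are made up, not taken
from the i386 assembler): two equality tests against constants differing in
a single bit can be merged by ORing that bit in first.
  #include <cassert>

  static bool
  is_66_or_67_unfolded (unsigned char b)
  {
    return b == 0x66 || b == 0x67;
  }

  static bool
  is_66_or_67_folded (unsigned char b)
  {
    return (b | 1) == 0x67;   /* one compare instead of two */
  }

  int
  main ()
  {
    for (int b = 0; b < 256; b++)
      assert (is_66_or_67_unfolded ((unsigned char) b)
              == is_66_or_67_folded ((unsigned char) b));
    return 0;
  }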
|
|
Now that we have identifiers for the mnemonic strings we can avoid
opcode based comparisons, for (in many cases) being more expensive and
(in a few cases) being a little fragile and not self-documenting.
Note that the MOV optimization can be engaged by the earlier LEA one,
and hence LEA also needs checking for there.
|
|
Anti-fuzzer measure. I'm not sure what the correct fix is for
objcopy. Probably the BFD_MACH_O_S_NON_LAZY_SYMBOL_POINTERS,
BFD_MACH_O_S_LAZY_SYMBOL_POINTERS and BFD_MACH_O_S_SYMBOL_STUBS
contents should be read.
* mach-o.c (bfd_mach_o_section_get_nbr_indirect): Omit sections
with NULL sec->indirect_syms.
|
|
|
|
I've found that I often use dwarf-mode with relatively small test
files. In this situation, it's handy to be able to expand all the
DWARF, rather than moving to each "..." separately and using C-u C-m.
This patch implements this feature. It also makes a couple of other
minor changes:
* I removed a stale FIXME from dwarf-mode. In practice I find I often
use "g" to restore the buffer to a pristine state; checking the file
mtime would work against this.
* I tightened the regexp in dwarf-insert-substructure. This prevents
the C-m binding from trying to re-read a DIE which has already been
expanded.
* Finally, I've bumped the dwarf-mode version number so that this
version can easily be installed using package.el.
2023-02-09 Tom Tromey <tromey@adacore.com>
* dwarf-mode.el: Bump version to 1.8.
(dwarf-insert-substructure): Tighten regexp.
(dwarf-refresh-all): New defun.
(dwarf-mode-map): Bind "A" to dwarf-refresh-all.
(dwarf-mode): Remove old FIXME.
|
|
gdb.rust/fnfield.exp has a comment that, I assume, I copied from some
other test. This patch fixes it.
|
|
rust_language::print_enum computes:
int nfields = variant_type->num_fields ();
... but then does not reuse this in one spot. This patch corrects the
oversight.
|
|
Clang doesn't accept initializer syntax for variable-length
arrays in C. Just use memset instead.
|
|
The command has no effect on the loading of GDB pretty printers and is
removed by this patch to avoid confusion. The documentation for
"set print pretty" reads: "Cause GDB to print structures in an indented
format with one member per line".
|
|
main_type::nfields is a 'short', and has been for many years. PR
c++/29985 points out that 'short' is too narrow for an enum that
contains more than 2^15 constants.
This patch bumps the size of 'nfields'. To verify that the field
isn't directly used, it is also renamed. Note that this does not
affect the size of main_type on x86-64 Fedora 36. And, if it does
have a negative effect somewhere, it's worth considering that types
could be shrunk more drastically by using subclasses for the different
codes.
This is v2 of this patch, which has these changes:
* I changed nfields to 'unsigned', per Simon's request. I looked at
changing all the uses, but this quickly fans out into a very large
patch. (One additional tweak was needed, though.)
* I wrote a test case. I discovered that GCC cannot compile a large
enough C test case, so I resorted to using the DWARF assembler.
This test doesn't reproduce the crash, but it does fail without the
patch.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29985
|
|
I noticed a leftover mention of cooked_index_vector. This updates the
text.
|
|
In PR gdb/29854, Simon pointed out that it would be good to be able to
use C-c when the DWARF cooked index is waiting for finalization. The
idea here is to be able to interrupt a command like "break" -- not to
stop the finalization process itself, which runs in a worker thread.
This patch implements this idea, by changing the index wait functions
to, by default, allow a quit. Polling is done, because there doesn't
seem to be a better way to interrupt a wait on a std::future.
For v2, I realized that the thread compatibility code in thread-pool.h
also needed an update.
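The wait then boils down to polling the future so the quit check gets a
chance to run, roughly like this (a sketch; the interval is arbitrary and
gdb's QUIT check is stubbed out as a comment):
  #include <chrono>
  #include <future>

  template<typename T>
  static void
  wait_allowing_quit (std::future<T> &fut)
  {
    /* There is no portable way to wait on a std::future and on C-c at
       the same time, so wake up periodically and let the quit check run.  */
    while (fut.wait_for (std::chrono::milliseconds (20))
           == std::future_status::timeout)
      {
        /* QUIT;  -- in gdb this throws gdb_exception_quit on C-c.  */
      }
  }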
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29854
|
|
keep_relocs is set by pe_ILF_save_relocs but not used anywhere in the
coff/pe code. It is tested by the xcoff backend but not set.
keep_contents is only used by the xcoff backend when dealing with
the .loader section, and it's easy enough to dispense with it there.
keep_contents is set in various places but that's fairly useless when
the contents aren't freed anyway until later linker support functions,
add_dynamic_symbols and check_dynamic_ar_symbols. There the contents
were freed if keep_contents wasn't set. I reckon we can free them
unconditionally.
* coff-bfd.h (struct coff_section_tdata): Delete keep_relocs
and keep_contents.
* peicode.h (pe_ILF_save_relocs): Don't set keep_relocs.
* xcofflink.c (xcoff_get_section_contents): Cache contents.
Return the contents. Update callers.
(_bfd_xcoff_canonicalize_dynamic_symtab): Don't set
keep_contents for .loader.
(xcoff_link_add_dynamic_symbols): Free .loader contents
unconditionally.
(xcoff_link_check_dynamic_ar_symbols): Likewise.
|
|
|
|
keep_relocs and keep_contents are unused nowadays except by
xcofflink.c, and I can't see a reason why keep_syms needs to be set.
The external syms are read and used by sh_relax_section and used by
sh_relax_delete_bytes. There doesn't appear to be any way that
freeing them will cause trouble.
* coff-sh.c (sh_relax_section): Don't set keep_relocs,
keep_contents or keep_syms.
(sh_relax_delete_bytes): Don't set keep_contents.
|
|
* compress.c (bfd_init_section_compress_status): Free
uncompressed_buffer on error return.
|
|
If file size is calculated by bfd_get_file_size, as it is by
_bfd_alloc_and_read calls in coff_object_p, then it is cached and when
pe_ILF_build_a_bfd converts an archive entry over to BFD_IN_MEMORY,
the file size is no longer valid. Found when attempting objdump -t on
a very small (27 bytes) ILF file and hitting the pr24707 fix (commit
781152ec18f5). So, clear file size when setting BFD_IN_MEMORY on bfds
that may have been read. (It's not necessary in writable bfds,
because caching is ignored by bfd_get_size when bfd_write_p.)
I also think the PR 24707 fix is no longer needed. All of the
testcases in that PR and in PR24712 are caught earlier by file size
checks when reading the symbols from file. So I'm reverting that fix,
which just compared the size of an array of symbol pointers against
file size. That's only valid if on-disk symbols are larger than a
host pointer, so the test is better done in format-specific code.
bfd/
* coff-alpha.c (alpha_ecoff_get_elt_at_filepos): Clear cached
file size when making a BFD_IN_MEMORY bfd.
* opncls.c (bfd_make_readable): Likewise.
* peicode.h (pe_ILF_build_a_bfd): Likewise.
binutils/
PR 24707
* objdump.c (slurp_symtab): Revert PR24707 fix. Tidy.
(slurp_dynamic_symtab): Tidy.
|
|
This fixes the assertion
  know (*input_line_pointer != ' ');
seen after calling operand. The usual exit from operand calls
SKIP_ALL_WHITESPACE, but the path through operand that calls expr did not.
* expr.c (operand): Call SKIP_ALL_WHITESPACE after call to expr.
|
|
The test gdb.base/frame-view.exp fails like this on AArch64:
frame^M
#0 baz (z1=hahaha, /home/simark/src/binutils-gdb/gdb/value.c:4056: internal-error: value_fetch_lazy_register: Assertion `next_frame != NULL' failed.^M
A problem internal to GDB has been detected,^M
further debugging may prove unreliable.^M
FAIL: gdb.base/frame-view.exp: with_pretty_printer=true: frame (GDB internal error)
The sequence of events leading to this is the following:
- When we create the user frame (the "select-frame view" command), we
create a sentinel frame just for our user-created frame, in
create_new_frame. This sentinel frame has the same id as the regular
sentinel frame.
- When printing the frame, after doing the "select-frame view" command,
the argument's pretty printer is invoked, which does an inferior
function call (this is the point of the test). This clears the frame
cache, including the "real" sentinel frame, which sets the
sentinel_frame global to nullptr.
- Later in the frame-printing process (when printing the second
argument), the auto-reinflation mechanism re-creates the user frame
by calling create_new_frame again, creating its own special sentinel
frame again. However, note that the "real" sentinel frame, the
sentinel_frame global, is still nullptr. If the selected frame had
been a regular frame, we would have called get_current_frame at some
point during the reinflation, which would have re-created the "real"
sentinel frame. But it's not the case when reinflating a user frame.
- Deep down the stack, something wants to fill in the unwind stop
reason for frame 0, which requires trying to unwind frame 1. This
leads us to trying to unwind the PC of frame 1:
#0 gdbarch_unwind_pc (gdbarch=0xffff8d010080, next_frame=...) at /home/simark/src/binutils-gdb/gdb/gdbarch.c:2955
#1 0x000000000134569c in dwarf2_tailcall_sniffer_first (this_frame=..., tailcall_cachep=0xffff773fcae0, entry_cfa_sp_offsetp=0xfffff7f7d450)
at /home/simark/src/binutils-gdb/gdb/dwarf2/frame-tailcall.c:390
#2 0x0000000001355d84 in dwarf2_frame_cache (this_frame=..., this_cache=0xffff773fc928) at /home/simark/src/binutils-gdb/gdb/dwarf2/frame.c:1089
#3 0x00000000013562b0 in dwarf2_frame_unwind_stop_reason (this_frame=..., this_cache=0xffff773fc928) at /home/simark/src/binutils-gdb/gdb/dwarf2/frame.c:1101
#4 0x0000000001990f64 in get_prev_frame_always_1 (this_frame=...) at /home/simark/src/binutils-gdb/gdb/frame.c:2281
#5 0x0000000001993034 in get_prev_frame_always (this_frame=...) at /home/simark/src/binutils-gdb/gdb/frame.c:2376
#6 0x000000000199b814 in get_frame_unwind_stop_reason (frame=...) at /home/simark/src/binutils-gdb/gdb/frame.c:3051
#7 0x0000000001359cd8 in dwarf2_frame_cfa (this_frame=...) at /home/simark/src/binutils-gdb/gdb/dwarf2/frame.c:1356
#8 0x000000000132122c in dwarf_expr_context::execute_stack_op (this=0xfffff7f80170, op_ptr=0xffff8d8883ee "\217\002", op_end=0xffff8d8883ee "\217\002")
at /home/simark/src/binutils-gdb/gdb/dwarf2/expr.c:2110
#9 0x0000000001317b30 in dwarf_expr_context::eval (this=0xfffff7f80170, addr=0xffff8d8883ed "\234\217\002", len=1) at /home/simark/src/binutils-gdb/gdb/dwarf2/expr.c:1239
#10 0x000000000131d68c in dwarf_expr_context::execute_stack_op (this=0xfffff7f80170, op_ptr=0xffff8d88840e "", op_end=0xffff8d88840e "") at /home/simark/src/binutils-gdb/gdb/dwarf2/expr.c:1811
#11 0x0000000001317b30 in dwarf_expr_context::eval (this=0xfffff7f80170, addr=0xffff8d88840c "\221p", len=2) at /home/simark/src/binutils-gdb/gdb/dwarf2/expr.c:1239
#12 0x0000000001314c3c in dwarf_expr_context::evaluate (this=0xfffff7f80170, addr=0xffff8d88840c "\221p", len=2, as_lval=true, per_cu=0xffff90b03700, frame=..., addr_info=0x0,
type=0xffff8f6c8400, subobj_type=0xffff8f6c8400, subobj_offset=0) at /home/simark/src/binutils-gdb/gdb/dwarf2/expr.c:1078
#13 0x000000000149f9e0 in dwarf2_evaluate_loc_desc_full (type=0xffff8f6c8400, frame=..., data=0xffff8d88840c "\221p", size=2, per_cu=0xffff90b03700, per_objfile=0xffff9070b980,
subobj_type=0xffff8f6c8400, subobj_byte_offset=0, as_lval=true) at /home/simark/src/binutils-gdb/gdb/dwarf2/loc.c:1513
#14 0x00000000014a0100 in dwarf2_evaluate_loc_desc (type=0xffff8f6c8400, frame=..., data=0xffff8d88840c "\221p", size=2, per_cu=0xffff90b03700, per_objfile=0xffff9070b980, as_lval=true)
at /home/simark/src/binutils-gdb/gdb/dwarf2/loc.c:1557
#15 0x00000000014aa584 in locexpr_read_variable (symbol=0xffff8f6cd770, frame=...) at /home/simark/src/binutils-gdb/gdb/dwarf2/loc.c:3052
- AArch64 defines a special "prev register" function,
aarch64_dwarf2_prev_register, to handle unwinding the PC. This
function does
frame_unwind_register_unsigned (this_frame, AARCH64_LR_REGNUM);
- frame_unwind_register_unsigned ultimately creates a lazy register
value, saving the frame id of this_frame->next. this_frame is the
user-created frame, so this_frame->next is the special sentinel frame
we created for it. So the saved ID is the sentinel frame ID.
- When time comes to un-lazify the value, value_fetch_lazy_register
calls frame_find_by_id, to find the frame with the ID we saved.
- frame_find_by_id sees it's the sentinel frame ID, so returns the
sentinel_frame global, which is, if you remember, nullptr.
- We hit the `gdb_assert (next_frame != NULL)` assertion in
value_fetch_lazy_register.
The issues I see here are:
- The ID of the sentinel frame created for the user-created frame is
not distinguishable from the ID of the regular sentinel frame. So
there's no way frame_find_by_id could find the right frame, in
value_fetch_lazy_register.
- Even if they had distinguishable IDs, sentinel frames created for
user frames are not registered anywhere, so there's no easy way
frame_find_by_id could find it.
This patch addresses these two issues:
- Give sentinel frames created for user frames their own distinct IDs
- Register sentinel frames in the frame cache, so they can be found
with frame_find_by_id.
I initially had this split in two patches, but I then found that it was
easier to explain as a single patch.
Regarding the first part of the change: with this patch, the sentinel
frames created for user frames (in create_new_frame) still have
stack_status == FID_STACK_SENTINEL, but their code_addr and stack_addr
fields are now filled with the addresses used to create the user frame.
This ensures this sentinel frame ID is different from the "target"
sentinel frame ID, as well as any other "user" sentinel frame ID. If
the user tries to create the same frame, with the same addresses,
multiple times, create_sentinel_frame just reuses the existing frame.
So we won't end up with multiple user sentinels with the same ID.
Regular "target" sentinel frames remain with code_addr and stack_addr
unset.
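In other words, the distinguishing information is carried by the ID itself,
roughly like this (field names simplified; this is not gdb's actual
struct frame_id):
  /* User-created sentinel frames carry the user-supplied addresses, so
     each one compares different from the regular "target" sentinel,
     whose addresses stay (0, 0).  */
  struct sentinel_id_sketch
  {
    unsigned long long stack_addr;   /* 0 for the "target" sentinel */
    unsigned long long code_addr;    /* 0 for the "target" sentinel */
  };

  static bool
  same_sentinel_id (const sentinel_id_sketch &a, const sentinel_id_sketch &b)
  {
    return a.stack_addr == b.stack_addr && a.code_addr == b.code_addr;
  }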
The concrete changes for that part are:
- Remove the sentinel_frame_id constant, since there isn't one
"sentinel frame ID" now. Add the frame_id_build_sentinel function
for building sentinel frame IDs and a is_sentinel_frame_id function
to check if a frame id represents a sentinel frame.
- Replace the sentinel_frame_id check in frame_find_by_id with a
comparison to `frame_id_build_sentinel (0, 0)`. The sentinel_frame
global is meant to contain a reference to the "target" sentinel, so
the one with addresses (0, 0).
- Add stack and code address parameters to create_sentinel_frame, to be
able to create the various types of sentinel frames.
- Adjust get_current_frame to create the regular "target" sentinel.
- Adjust create_new_frame to create a sentinel with the ID specific to
the created user frame.
- Adjust sentinel_frame_prev_register to get the sentinel frame ID from
the frame_info object, since there isn't a single "sentinel frame ID"
now.
- Change get_next_frame_sentinel_okay to check for a
sentinel-frame-id-like frame ID, rather than for sentinel_frame
specifically, since this function could be called with another
sentinel frame (and we would want the assert to catch it).
The rest of the change is about registering the sentinel frame in the
frame cache:
- Change frame_stash_add's assertion to allow sentinel frame levels
(-1).
- Make create_sentinel_frame add the frame to the frame cache.
- Change the "sentinel_frame != NULL" check in reinit_frame_cache for a
check that the frame stash is not empty. The idea is that if we only
have some user-created frames in the cache when reinit_frame_cache is
called, we probably want to emit the frames invalid annotation. The
goal of that check is to avoid unnecessary repeated annotations, I
suppose, so the "frame cache not empty" check should achieve that.
After this change, I think we could theoretically get rid of the
sentinel_frame global. That sentinel frame could always be found by
looking up `frame_id_build_sentinel (0, 0)` in the frame cache.
However, I left the global there to avoid slowing the typical case down
for nothing. I did, however, note in its comment that it is an
optimization.
With this fix applied, the gdb.base/frame-view.exp now passes for me on
AArch64. value_of_register_lazy now saves the special sentinel frame ID
in the value, and value_fetch_lazy_register is able to find that
sentinel frame after the frame cache reinit and after the user-created
frame was reinflated.
Tested-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
Tested-By: Luis Machado <luis.machado@arm.com>
Change-Id: I8b77b3448822c8aab3e1c3dda76ec434eb62704f
|
|
Currently, some frame resources are deallocated by iterating on the
frame chain (starting from the sentinel), calling dealloc_cache. The
problem is that user-created frames are not part of that chain, so we
never call dealloc_cache for them.
I propose to make it so the dealloc_cache callbacks are called when the
frames are removed from the frame_stash hash table, by registering a
deletion function to the hash table. This happens when
frame_stash_invalidate is called by reinit_frame_cache. This way, all
frames registered in the cache will get their unwinder's dealloc_cache
callbacks called.
Note that at the moment, the sentinel frames are not registered in the
cache, so we won't call dealloc_cache for them. However, it's just a
theoretical problem, because the sentinel frame unwinder does not
provide this callback. Also, a subsequent patch will change things so
that sentinel frames are registered to the cache.
I moved the obstack_free / obstack_init pair below the
frame_stash_invalidate call in reinit_frame_cache, because I assumed
that some dealloc_cache would need to access some data on that obstack,
so it would be better to free it after clearing the hash table.
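Concretely, the registration is the hash table's deletion callback, roughly
as below (a sketch with made-up entry and callback names, using libiberty's
htab API; in gdb the deletion function is where the unwinder's
dealloc_cache gets called):
  #include <cstdlib>
  #include "hashtab.h"

  /* Called for every entry removed from the table, whether it is deleted
     individually or the whole table is emptied by htab_empty.  In gdb this
     is where the frame unwinder's dealloc_cache would run for ENTRY.  */
  static void
  frame_entry_del (void *entry)
  {
    /* ... release the unwinder cache attached to ENTRY ...  */
  }

  static htab_t frame_stash_sketch
    = htab_create_alloc (100, htab_hash_pointer, htab_eq_pointer,
                         frame_entry_del, calloc, free);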
Change-Id: If4f9b38266b458c4e2f7eb43e933090177c22190
|
|
A few tdep files include block.h but do not need to. This patch
removes the inclusions. I checked that this worked correctly by
examining the resulting .Po file to make sure that block.h was not
being included by some other route.
|
|
expop.h needs block.h for a single inline function. However, I don't
think most of the check_objfile functions need to be defined in the
header (just the templates). This patch moves the one offending
function and removes the include.
|
|
This patch implements a simplification that I suggested here:
https://sourceware.org/pipermail/gdb-patches/2022-March/186320.html
Currently, the interp::exec virtual method interface is such that
subclass implementations must catch exceptions and then return them
via normal function return.
However, higher up in the chain, for the CLI we get to
interpreter_exec_cmd, which does:
  for (i = 1; i < nrules; i++)
    {
      struct gdb_exception e = interp_exec (interp_to_use, prules[i]);
      if (e.reason < 0)
        {
          interp_set (old_interp, 0);
          error (_("error in command: \"%s\"."), prules[i]);
        }
    }
and for MI we get to mi_cmd_interpreter_exec, which has:
  void
  mi_cmd_interpreter_exec (const char *command, char **argv, int argc)
  {
    ...
    for (i = 1; i < argc; i++)
      {
        struct gdb_exception e = interp_exec (interp_to_use, argv[i]);
        if (e.reason < 0)
          error ("%s", e.what ());
      }
  }
Note that if those errors are reached, we lose the original
exception's error code. I can't see why we'd want that.
And, I can't see why we need to have interp_exec catch the exception
and return it via the normal return path. That's normally needed when
we need to handle propagating exceptions across C code, like across
readline or ncurses, but that's not the case here.
It seems to me that we can simplify things by removing some
try/catch-ing and just letting exceptions propagate normally.
Note, the "error in command" error shown above, which only exists in
the CLI interpreter-exec command, is only ever printed AFAICS if you
run "interpreter-exec console" when the top level interpreter is
already the console/tui. Like:
(gdb) interpreter-exec console "foobar"
Undefined command: "foobar". Try "help".
error in command: "foobar".
You won't see it with MI's "-interpreter-exec console" from a top
level MI interpreter:
(gdb)
-interpreter-exec console "foobar"
&"Undefined command: \"foobar\". Try \"help\".\n"
^error,msg="Undefined command: \"foobar\". Try \"help\"."
(gdb)
nor with MI's "-interpreter-exec mi" from a top level MI interpreter:
(gdb)
-interpreter-exec mi "-foobar"
^error,msg="Undefined MI command: foobar",code="undefined-command"
^done
(gdb)
in both these cases because MI's -interpreter-exec just does:
error ("%s", e.what ());
You won't see it either when running an MI command with the CLI's
"interpreter-exec mi":
(gdb) interpreter-exec mi "-foobar"
^error,msg="Undefined MI command: foobar",code="undefined-command"
(gdb)
This last case is because MI's interp::exec implementation never
returns an error:
  gdb_exception
  mi_interp::exec (const char *command)
  {
    mi_execute_command_wrapper (command);
    return gdb_exception ();
  }
Thus I think that "error in command" error is pretty pointless, and
since it simplifies things to not have it, the patch just removes it.
The patch also ends up addressing an old FIXME.
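For illustration, the shape of the change is the following (hypothetical
names and a plain standard exception, not the actual interp::exec or
gdb_exception interfaces):
  #include <optional>
  #include <stdexcept>
  #include <string>

  struct command_error : std::runtime_error
  {
    using std::runtime_error::runtime_error;
  };

  static void
  do_exec (const std::string &cmd)
  {
    if (cmd == "-foobar")
      throw command_error ("Undefined MI command: foobar");
  }

  /* Before: catch the exception and pass it back through the return
     value, forcing the caller to re-raise (and possibly lose) it.  */
  static std::optional<command_error>
  exec_catching (const std::string &cmd)
  {
    try
      {
        do_exec (cmd);
      }
    catch (const command_error &e)
      {
        return e;
      }
    return std::nullopt;
  }

  /* After: just let the exception propagate; the caller sees the
     original error, error code and all, with no extra plumbing.  */
  static void
  exec_propagating (const std::string &cmd)
  {
    do_exec (cmd);
  }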
Change-Id: I5a6432a80496934ac7127594c53bf5221622e393
Approved-By: Tom Tromey <tromey@adacore.com>
Approved-By: Kevin Buettner <kevinb@redhat.com>
|
|
Many gdb.compile C++ tests fail for me on Fedora 36. I think these
are largely bugs in the plugin, though I didn't investigate too
deeply. Once one failure is seen, this often cascades and sometimes
there are many timeouts.
For example, this can happen:
(gdb) compile code var = a->get_var ()
warning: Could not find symbol "_ZZ9_gdb_exprP10__gdb_regsE1a" for compiled module "/tmp/gdbobj-0xdI6U/out2.o".
1 symbols were missing, cannot continue.
I think this is probably a plugin bug because, IIRC, in theory these
symbols should be exempt from a lookup via gdb.
This patch arranges to catch any catastrophic failure and then simply
exit the entire .exp file.
|
|
I had a .gdb_history file in my testsuite directory in the build tree,
and this provoked a failure in gdbhistsize-history.exp. It seems
simple to prevent this file from causing a failure.
|
|
fixup_symbol_section delegates all its work to fixup_section, so merge
the two.
Because there is only a single caller to fixup_symbol_section, we can
also remove some of the introductory logic. For example, this will
never be called with a NULL objfile any more.
The LOC_BLOCK case can be removed, because such symbols are handled by
the buildsym code now.
Finally, a symbol can only appear in a SEC_ALLOC section, so the loop
is modified to skip sections that do not have this flag set.
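The resulting loop is roughly of this shape (a sketch using raw bfd
accessors, not the actual fixup code):
  #include "bfd.h"

  /* Only allocated sections can contain a symbol's runtime address, so
     skip everything else (e.g. the .debug_* sections).  */
  static asection *
  find_section_for_addr (bfd *abfd, bfd_vma addr)
  {
    for (asection *sec = abfd->sections; sec != nullptr; sec = sec->next)
      {
        if ((bfd_section_flags (sec) & SEC_ALLOC) == 0)
          continue;
        bfd_vma start = bfd_section_vma (sec);
        if (addr >= start && addr < start + bfd_section_size (sec))
          return sec;
      }
    return nullptr;
  }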
|
|
Nearly every call to fixup_symbol_section in gdb is incorrect, and if
any such call has an effect, it's purely by happenstance.
fixup_section has a long comment explaining that the call should only
be made before runtime section offsets are applied. And, the loop in
this code (the fallback loop -- the minsym lookup code is "ok") is
careful to remove these offsets before comparing addresses.
However, aside from a single call in dwarf2/read.c, every call in gdb
is actually done after section offsets have been applied. So, these
calls are incorrect.
Now, these calls could be made when the symbol is created. I
considered this approach, but I reasoned that the code has been this
way for many years, seemingly without ill effect. So, instead I chose
to simply remove the offending calls.
|