|
With commit 18a5766d09c ("[gdbsupport] Add sequential_for_each") I added a
drop-in replacement for gdb::parallel_for_each, but there's nothing making
sure that the two remain in sync.
Extend the unit test for gdb::parallel_for_each to test both.
Do this using a slightly unusual file-self-inclusion. Doing so keeps things
readable and maintainable, and avoids macrofying functions.
Tested on x86_64-linux.
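As a rough standalone sketch of the self-inclusion pattern (the stub for_each
implementations and names here are hypothetical, not the actual gdbsupport
unit test), the idea is that the file includes itself once per implementation,
so a single test body is compiled against both:
...
#if !defined (SELF_INCLUDED)
#define SELF_INCLUDED

#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

/* Stand-ins for gdb::parallel_for_each and gdb::sequential_for_each.  */
template<class It, class F>
void parallel_for_each_stub (It first, It last, F f)
{ std::for_each (first, last, f); }
template<class It, class F>
void sequential_for_each_stub (It first, It last, F f)
{ std::for_each (first, last, f); }

/* First pass: instantiate the test body for the "parallel" variant.  */
#define FOR_EACH parallel_for_each_stub
#define TEST_NAME test_parallel
#include __FILE__
#undef FOR_EACH
#undef TEST_NAME

/* Second pass: same body, "sequential" variant.  */
#define FOR_EACH sequential_for_each_stub
#define TEST_NAME test_sequential
#include __FILE__
#undef FOR_EACH
#undef TEST_NAME

int
main ()
{
  test_parallel ();
  test_sequential ();
}

#else /* SELF_INCLUDED */

/* The shared test body, compiled once per FOR_EACH definition.  */
static void
TEST_NAME ()
{
  std::vector<int> v (100);
  FOR_EACH (v.begin (), v.end (), [] (int &x) { x = 1; });
  assert (std::accumulate (v.begin (), v.end (), 0) == 100);
}

#endif /* SELF_INCLUDED */
...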
|
|
In dwarf2_build_psymtabs_hard, we use a parallel_for_each to distribute CUs
over threads.
Ensuring a fair distribution over the worker threads and the main thread in
terms of the number of CUs might not be the most efficient approach, given
that CUs can vary in size.
Fix this by using per_cu->get_length () as the task size.
I've used this experiment to verify the performance impact:
...
$ for n in $(seq 1 10); do \
time gdb -q -batch ~/firefox/libxul.so-93.0-1.1.x86_64.debug \
2>&1 \
| grep "real:"; \
done
...
and without the patch got:
...
real: 4.71
real: 4.88
real: 4.29
real: 4.30
real: 4.65
real: 4.27
real: 4.27
real: 4.27
real: 4.75
real: 4.41
...
and with the patch:
...
real: 3.68
real: 3.81
real: 3.80
real: 3.68
real: 3.75
real: 3.69
real: 3.69
real: 3.74
real: 3.67
real: 3.74
...
so that seems a reasonable improvement.
With parallel_for_each_debug set to true, we get some more detail about
the difference in behaviour. Without the patch we have:
...
Parallel for: n_elements: 2818
Parallel for: minimum elements per thread: 1
Parallel for: elts_per_thread: 704
Parallel for: elements on worker thread 0 : 705
Parallel for: elements on worker thread 1 : 705
Parallel for: elements on worker thread 2 : 704
Parallel for: elements on worker thread 3 : 0
Parallel for: elements on main thread : 704
...
and with the patch:
...
Parallel for: n_elements: 2818
Parallel for: total_size: 1483674865
Parallel for: size_per_thread: 370918716
Parallel for: elements on worker thread 0 : 752 (size: 371811790)
Parallel for: elements on worker thread 1 : 360 (size: 371509370)
Parallel for: elements on worker thread 2 : 1130 (size: 372681710)
Parallel for: elements on worker thread 3 : 0 (size: 0)
Parallel for: elements on main thread : 576 (size: 367671995)
...
Tested on x86_64-linux.
|
|
Add a task_size parameter to parallel_for_each, defaulting to nullptr, and use
the task size to distribute similarly-sized chunks to the threads.
Tested on x86_64-linux.
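As a rough illustration of the idea (a standalone sketch, not GDB's actual
parallel_for_each implementation), distributing by accumulated task size
rather than by element count looks roughly like this: each chunk is grown
until it reaches total_size / n_threads, similar to the per-thread "size"
numbers quoted in the debug output of the entry above:
...
#include <cstddef>
#include <cstdio>
#include <vector>

struct chunk { size_t first, last, size; };

static std::vector<chunk>
split_by_size (const std::vector<size_t> &task_sizes, size_t n_threads)
{
  size_t total = 0;
  for (size_t s : task_sizes)
    total += s;
  size_t size_per_thread = total / n_threads;

  std::vector<chunk> chunks;
  size_t begin = 0, acc = 0;
  for (size_t i = 0; i < task_sizes.size (); i++)
    {
      acc += task_sizes[i];
      if (acc >= size_per_thread && chunks.size () + 1 < n_threads)
        {
          chunks.push_back ({ begin, i + 1, acc });
          begin = i + 1;
          acc = 0;
        }
    }
  chunks.push_back ({ begin, task_sizes.size (), acc });
  return chunks;
}

int
main ()
{
  /* Pretend these are per_cu->get_length () values.  */
  std::vector<size_t> sizes = { 400, 10, 10, 500, 80, 300, 200 };
  for (const chunk &c : split_by_size (sizes, 3))
    std::printf ("elements [%zu, %zu): size %zu\n", c.first, c.last, c.size);
}
...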
|
|
This adds gdb::make_function_view, which lets you create a function
view from a callable without specifying the function_view's template
parameter. For example, this:
auto lambda = [&] (int) { ... };
auto fv = gdb::make_function_view (lambda);
instead of:
auto lambda = [&] (int) { ... };
gdb::function_view<void (int)> fv = lambda;
It is particularly useful if you have a template function with an
optional function_view parameter, whose type depends on the function's
template parameters. Like:
template<typename T>
void my_function (T v, gdb::function_view<void(T)> callback = nullptr);
For such a function, the type of the callback argument you pass must
already be a function_view. I.e., this wouldn't compile:
auto lambda = [&] (int) { ... };
my_function (1, lambda);
With gdb::make_function_view, you can write the call like so:
auto lambda = [&] (int) { ... };
my_function (1, gdb::make_function_view (lambda));
Unit tests included.
Tested by building with GCC 9.4, Clang 10, and GCC 4.8.5, on x86_64
GNU/Linux, and running the unit tests.
Change-Id: I5c4b3b4455ed6f0d8878cf1be189bea3ee63f626
|
|
When debugging cc1 I heavily rely on simple one-parameter debug functions
that allow me to inspect a variable of a common type, like:
- debug_generic_expr
- debug_gimple_stmt
- debug_rtx
and I miss similar functions in gdb.
Add functions to dump variables of types 'value' and 'expression':
- debug_exp, and
- debug_val.
Tested on x86_64-linux, by breaking on varobj_create, and doing:
...
(gdb) call debug_exp (var->root->exp.get ())
&"Operation: OP_VAR_VALUE\n"
&" Block symbol:\n"
&" Symbol: aaa\n"
&" Block: 0x2d064f0\n"
(gdb)
...
and:
...
(gdb) call debug_val (value)
&"5"
(gdb)
...
|
|
gdbarch implements its own registry-like approach. This patch changes
it to instead use registry.h. It's a rather large patch but largely
uninteresting -- it's mostly a straightforward conversion from the old
approach to the new one.
The main benefit of this change is that it introduces type safety to
the gdbarch registry. It also removes a bunch of code.
One possible drawback is that, previously, the gdbarch registry
differentiated between pre- and post-initialization setup. This
doesn't seem very important to me, though.
|
|
So far, the registry hasn't been used to refer to a 'const' type, but
this changes with the gdbarch change. This patch arranges to let the
registry store a pointer-to-const, by removing const in the 'set'
method.
|
|
This changes gdbarch to use new and delete.
|
|
This changes gdbarch to use bool for initialized_p.
|
|
When running test-case gdb.dwarf2/fission-loclists.exp, I noticed:
...
warning: Section .debug_aranges in fission-loclists has duplicate \
debug_info_offset 0x8f, ignoring .debug_aranges.^M
...
Fix this by removing the duplicate .debug_aranges entry.
Tested on x86_64-linux.
|
|
In PR23888 an error is reported:
...
ERROR: tcl error sourcing watchpoint-unaligned.exp.
ERROR: expected boolean value but got ""
while executing
"if {$wpnum} {
...
This presumably happens when:
- skip_hw_watchpoint_tests returns 0 meaning hw watchpoints are supported
- gdb fails to set a hw watchpoint and instead sets a sw watchpoint
That particular situation is handled for arm:
...
-re "Watchpoint (\[0-9\]+): .*\r\n$gdb_prompt $" {
if {[istarget "arm*-*-*"]} {
untested $test
set wpnum 0
}
}
...
but not for any other targets so wpnum remains "", triggering the ERROR.
Possibly this has been fixed for powerpc by commit 8d4e4d13afb ("gdb Power 9
add test for HW watchpoint support."), but it's still possible for other
targets.
Fix this by:
- initializing wpnum to 0 instead of ""
- signalling the failure to set a hw watchpoint by a fail
Tested on x86_64-linux, also by adding:
...
gdb_test_no_output "set can-use-hw-watchpoints 0"
...
and verifying that it triggers the fail.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=23888
|
|
On aarch64, I run into:
...
FAIL: gdb.base/large-frame.exp: optimize=-O0: backtrace
...
The problem is that the architecture-specific prologue analyzer fails to
handle the first two insns in the prologue properly:
...
0000000000400610 <func>:
400610: d2880210 mov x16, #0x4010
400614: cb3063ff sub sp, sp, x16
400618: a9007bfd stp x29, x30, [sp]
40061c: 910003fd mov x29, sp
400620: 910043a0 add x0, x29, #0x10
400624: 97fffff0 bl 4005e4 <blah>
...
so we get:
...
$ gdb -q -batch ./outputs/gdb.base/large-frame/large-frame-O0 -ex "b func"
Breakpoint 1 at 0x400614
...
Fix this by:
- fixing the support for the first insn to extract the immediate operand, and
- adding support for the second insn,
such that we have:
...
Breakpoint 1 at 0x400624
...
Note that we're overshooting by one insn (0x400620 is the first insn after the
prologue), but that's a pre-existing problem.
Tested on aarch64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29408
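For reference, a standalone sketch of the decoding involved (this is an
illustration following the Arm ARM encodings for MOVZ and SUB (extended
register), not GDB's actual aarch64 prologue analyzer code):
...
#include <cassert>
#include <cstdint>

/* Extract bits [hi:lo] (inclusive) of INSN.  */
static uint64_t
bits (uint32_t insn, int lo, int hi)
{
  return (insn >> lo) & ((1u << (hi - lo + 1)) - 1);
}

int
main ()
{
  /* 400610: d2880210  mov x16, #0x4010   (MOVZ x16, #0x4010, LSL #0) */
  uint32_t movz = 0xd2880210;
  uint64_t imm = bits (movz, 5, 20) << (16 * bits (movz, 21, 22));
  unsigned rd = bits (movz, 0, 4);
  assert (rd == 16 && imm == 0x4010);

  /* 400614: cb3063ff  sub sp, sp, x16    (SUB extended register) */
  uint32_t sub = 0xcb3063ff;
  unsigned rm = bits (sub, 16, 20);
  unsigned rn = bits (sub, 5, 9);
  unsigned rd2 = bits (sub, 0, 4);
  assert (rm == 16 && rn == 31 && rd2 == 31);  /* 31 encodes sp here.  */

  /* Net prologue effect the analyzer has to model: sp -= 0x4010.  */
  return 0;
}
...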
|
|
In some cases, the objfile owns the per-bfd object. This is yet
another object that can sometimes be destroyed before the registry is
destroyed, possibly resulting in a use-after-free. Also, I noticed
that the condition for deleting the object is not the same as the
condition used to create it -- so it could possibly result in a memory
leak in some situations. This patch fixes the problem by introducing
a new unique_ptr that holds this object when necessary.
|
|
This changes objfile to use an auto_obstack. This helps prevent
use-after-free bugs, because it ensures that anything allocated on the
objfile obstack will live past the point at which the registry object
is destroyed.
|
|
This changes struct objfile to use a gdb_bfd_ref_ptr. In addition to
removing some manual memory management, this fixes a use-after-free
that was introduced by the registry rewrite series. The issue there
was that, in some cases, registry shutdown could refer to memory that
had already been freed. This helps fix the bug by delaying the
destruction of the BFD reference (and thus the per-bfd object) until
after the registry has been shut down.
|
|
When building with clang 15, I got this,
CXX amd64-tdep.o
amd64-tdep.c:1410:13: error: variable 'insn' set but not used [-Werror,-Wunused-but-set-variable]
gdb_byte *insn = insn_details->raw_insn + modrm_offset;
^
1 error generated.
The function that uses this variable has been removed in this commit,
commit 870f88f7551b0f2d6aaaa36fb684b5ff8f468107
Date: Mon Apr 18 13:16:27 2016 -0400
remove trivialy unused variables
Fix this by removing the unused variable.
Tested by rebuilding on x86_64-linux with clang 15 and gcc 12.
|
|
Commit bc20e562ec0 "Fix use after free in varobj" introduced a
regression. This commit makes sure that the varobj object does not
keep stale references to objects being freed when we unload an objfile.
This includes the "valid_block" field, which is reset to nullptr if the
pointed-to block is tied to an objfile being freed.
However, at some point varobj_invalidate_iter might try to recreate
varobjs that are either floating or tracking globals. Varobjs tracking
globals are identified by having the "valid_block" field set to nullptr,
but as bc20e562ec0 might clear this field, we have lost the ability to
distinguish between varobjs referring to globals and non-globals.
Fix this by introducing a "global" flag which tracks if a given varobj
was initially created as tracking a global.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29426
|
|
During the development of 40c23d880386d6e8202567eaa2a6b041feb1a652,
the return value of fbsd_nat_target::have_regset was changed from a
simple boolean to returning the size of the register set. The
comments and callers were all updated for this change, but the actual
return type was accidentally left as a bool. This change fixes the
return type to be a size_t.
Current callers of this only checked the value against 0 and thus
still worked correctly.
|
|
The function read_addrmap_from_aranges contains code to issue a warning:
...
if (!insertpair.second)
{
warning (_("Section .debug_aranges in %s has duplicate "
"debug_info_offset %s, ignoring .debug_aranges."),
objfile_name (objfile), sect_offset_str (per_cu->sect_off));
return false;
}
...
but the warning is in fact activated when all_comp_units has duplicate
entries, which is very misleading.
Fix this by:
- adding a test-case that should trigger the warning,
- replacing the current implementation of the warning with an
assert that all_comp_units should not contain duplicates, and
- properly re-implementing the warning, such that it is triggered
by the test-case.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29381
|
|
These two macros print either a 16 digit hex number or an 8 digit
hex number. Unfortunately they depend on both target and host, which
means that the output for 32-bit targets may be either 8 or 16 hex
digits.
Replace them in most cases with code that prints a bfd_vma using
PRIx64. In some cases, deliberately lose the leading zeros.
This changes some output, notably the base/offset fields of m68k
disassembly, which I think look better that way, and some error
messages. I've kept leading zeros in symbol dumps (objdump -t)
and in PE header dumps.
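The mechanical shape of most of these conversions is roughly the following
(an illustration, not a verbatim hunk from the patch; the 64-bit bfd_vma
typedef is assumed here):
...
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t bfd_vma;   /* Assumed 64-bit bfd_vma for this sketch.  */

int
main ()
{
  bfd_vma value = 0x4010;

  /* Old: fprintf_vma (stdout, value); -- width depended on host/target.  */
  /* New: always 64 bits, with or without leading zeros as appropriate.  */
  printf ("%016" PRIx64 "\n", (uint64_t) value);
  printf ("%" PRIx64 "\n", (uint64_t) value);
}
...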
bfd/
* bfd-in.h (fprintf_vma, sprintf_vma, printf_vma): Delete.
* bfd-in2.h: Regenerate.
* bfd.c (bfd_sprintf_vma): Don't use sprintf_vma.
(bfd_fprintf_vma): Don't use fprintf_vma.
* coff-rs6000.c (xcoff_reloc_type_tls): Don't use sprintf_vma.
Instead use PRIx64 to print bfd_vma values.
(xcoff_ppc_relocate_section): Likewise.
* cofflink.c (_bfd_coff_write_global_sym): Likewise.
* mmo.c (mmo_write_symbols_and_terminator): Likewise.
* srec.c (srec_write_symbols): Likewise.
* elf32-xtensa.c (print_r_reloc): Similarly for fprintf_vma.
* pei-x86_64.c (pex64_dump_xdata): Likewise.
(pex64_bfd_print_pdata_section): Likewise.
* som.c (som_print_symbol): Likewise.
* ecoff.c (_bfd_ecoff_print_symbol): Use bfd_fprintf_vma.
opcodes/
* dis-buf.c (perror_memory, generic_print_address): Don't use
sprintf_vma. Instead use PRIx64 to print bfd_vma values.
* i386-dis.c (print_operand_value, print_displacement): Likewise.
* m68k-dis.c (print_base, print_indexed): Likewise.
* ns32k-dis.c (print_insn_arg): Likewise.
* ia64-gen.c (_opcode_int64_low, _opcode_int64_high): Delete.
(opcode_fprintf_vma): Delete.
(print_main_table): Use PRIx64 to print opcode.
binutils/
* od-macho.c: Replace all uses of printf_vma with bfd_printf_vma.
* objcopy.c (copy_object): Don't use sprintf_vma. Instead use
PRIx64 to print bfd_vma values.
(copy_main): Likewise.
* readelf.c (CHECK_ENTSIZE_VALUES): Likewise.
(dynamic_section_mips_val): Likewise.
(print_vma): Don't use printf_vma. Instead use PRIx64 to print
bfd_vma values.
(dump_ia64_vms_dynamic_fixups): Likewise.
(process_version_sections): Likewise.
* rddbg.c (stab_context): Likewise.
gas/
* config/tc-i386.c (offset_in_range): Don't use sprintf_vma.
Instead use PRIx64 to print bfd_vma values.
(md_assemble): Likewise.
* config/tc-mips.c (load_register, macro): Likewise.
* messages.c (as_internal_value_out_of_range): Likewise.
* read.c (emit_expr_with_reloc): Likewise.
* config/tc-ia64.c (note_register_values): Don't use fprintf_vma.
Instead use PRIx64 to print bfd_vma values.
(print_dependency): Likewise.
* listing.c (list_symbol_table): Use bfd_sprintf_vma.
* symbols.c (print_symbol_value_1): Use %p to print pointers.
(print_binary): Likewise.
(print_expr_1): Use PRIx64 to print bfd_vma values.
* write.c (print_fixup): Use %p to print pointers. Don't use
fprintf_vma.
* testsuite/gas/all/overflow.l: Update expected output.
* testsuite/gas/m68k/mcf-mov3q.d: Likewise.
* testsuite/gas/m68k/operands.d: Likewise.
* testsuite/gas/s12z/truncated.d: Likewise.
ld/
* deffilep.y (def_file_print): Don't use fprintf_vma. Instead
use PRIx64 to print bfd_vma values.
* emultempl/armelf.em (gld${EMULATION_NAME}_finish): Don't use
sprintf_vma. Instead use PRIx64 to print bfd_vma values.
* emultempl/pe.em (gld${EMULATION_NAME}_finish): Likewise.
* ldlang.c (lang_map): Use %V to print region origin.
(lang_one_common): Don't use sprintf_vma.
* ldmisc.c (vfinfo): Don't use fprintf_vma or sprintf_vma.
* pe-dll.c (pe_dll_generate_def_file): Likewise.
gdb/
* remote.c (remote_target::trace_set_readonly_regions): Replace
uses of sprintf_vma with bfd_sprintf_vma.
|
|
On aarch64-linux, I run into:
...
(gdb) print 16#ffffffffffffffff#^M
$7 = 18446744073709551615^M
(gdb) FAIL: gdb.ada/literals.exp: print 16#ffffffffffffffff#
...
while on x86_64-linux instead, I get:
...
(gdb) print 16#ffffffffffffffff#^M
$7 = -1^M
(gdb) PASS: gdb.ada/literals.exp: print 16#ffffffffffffffff#
...
We can easily reproduce this on x86_64-linux using:
...
$ gdb -q -batch -ex "set lang ada" -ex "set arch i386" \
-ex "print 16#ffffffffffffffff#"
$1 = -1
$ gdb -q -batch -ex "set lang ada" -ex "set arch aarch64" \
-ex "print 16#ffffffffffffffff#"
$1 = 18446744073709551615
...
With i386, we have:
...
(gdb) p int_bits
$3 = 32
(gdb) p long_bits
$4 = 32
(gdb) p long_long_bits
$5 = 64
...
and so in processInt we hit the fits-in-unsigned-long-long case, where we use
long long as the type:
...
/* Note: Interprets ULLONG_MAX as -1. */
yylval.typed_val.type = type_long_long (par_state);
...
With aarch64, we have instead:
...
(gdb) p int_bits
$1 = 32
(gdb) p long_bits
$2 = 64
(gdb) p long_long_bits
$3 = 64
...
and so in processInt we hit the fits-in-unsigned-long case, where we use
unsigned long as the type:
...
yylval.typed_val.type
= builtin_type (par_state->gdbarch ())->builtin_unsigned_long;
...
It's not clear why, for Ada, we're using long long for the
fits-in-unsigned-long-long case.
Fix this by using unsigned long long for the fits-in-unsigned-long-long case,
meaning the new reference output is 18446744073709551615 instead of -1.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29416
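Outside of GDB, the difference in the printed value boils down to which type
the same 64-bit pattern is given (a standalone illustration; the conversion
to long long is implementation-defined before C++20, but yields -1 on the
usual two's-complement targets):
...
#include <stdio.h>

int
main ()
{
  unsigned long long u = 0xffffffffffffffffULL;   /* 16#ffffffffffffffff# */

  /* Interpreted as long long (the old choice of type): */
  printf ("%lld\n", (long long) u);   /* prints -1 */

  /* Interpreted as unsigned long long (the new choice): */
  printf ("%llu\n", u);               /* prints 18446744073709551615 */
}
...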
|
|
Using different ways of passing source file paths to compilers results in
different file and directory paths in the line header. For example:
- gcc foo.c
- gcc ./foo.c
- gcc ../cwd/foo.c
- gcc $PWD/foo.c
Because of this, GDB sometimes failed to look up macros. The previous
patch fixed that as much as possible. This patch adds the corresponding
tests.
Add both a DWARF assembler-based test and a regular test. The DWARF
assembler-based one tests some hard-coded debug info based on what I
have observed some specific versions of gcc and clang generate. We want
to make sure that GDB keeps handling all these cases correctly, even if
it's not always clear whether they are really valid DWARF. Also, they
will be tested no matter what the current target compiler is for a given
test run.
The regular test is compiled using the target compiler, so it may help
find bugs when testing against some other toolchains than what was used
to generate the DWARF assembler-based test.
For the DWARF assembler-based test, add to testsuite/lib/dwarf.exp the
necessary code to generate a DWARF5 .debug_macro section. The design of
the new procs is based on what was done for rnglists and loclists.
To test against a specific compiler one can use this command, for
example:
$ make check TESTS="gdb.base/macro-source-path.exp" RUNTESTFLAGS="CC_FOR_TARGET=clang --target_board unix/gdb:debug_flags=-gdwarf-5"
Change-Id: Iab8da498e57d10cc2a3d09ea136685d9278cfcf6
|
|
The bit of code removed by this patch was introduced to fix the same
kind of problem that the previous patch fixes. That is, to try to match
existing subfiles when different name forms are used to refer to a same
file.
The thread for the patch that introduced this code is:
https://pi.simark.ca/gdb-patches/45F8CBDF.9090501@hq.tensilica.com/
The important bits are that the compiler produced a compilation unit
with:
DW_AT_name : test.c
DW_AT_comp_dir : /home/maxim/W/BadgerPass/PR_14999
and DWARF v2 line table with:
The Directory Table:
/home/maxim/W/BadgerPass/PR_14999
The File Name Table:
Entry Dir Time Size Name
1 1 1173897037 152 test.c
Because the main symtab was created with only DW_AT_name, it was named
"test.c". And because the path built from the line header contained the
"directory" part, it was "/home/maxim/W/BadgerPass/PR_14999/test.c".
Because of this mismatch, things didn't work, so they added this code to
prepend the compilation directory to the existing subfile names, so that
this specific case would work.
With the changes done earlier in this series, where subfiles are
identified using the "most complete path possible", this case would be
handled. The main subfile's name would be
"/home/maxim/W/BadgerPass/PR_14999/test.c" from the start
(DW_AT_comp_dir + DW_AT_name). It's not so different from some DWARF 5
cases actually, which make the compilation directory explicit in the
line table header.
I therefore think that this code is no longer needed. It does feel like
a quick hack to make one specific case work, and we have a more general
solution now. Also, this code was introduced to work around a problem
in the DWARF debug info or the DWARF debug info reader. In general, I
think it's preferable for these hacks to be located in the specific
debug info reader code, rather than in the common code.
Even though this code was added to work around a DWARF reader problem,
it's possible that some other debug info reader has started taking
advantage of this code in the mean time. It's very difficult to
know or verify, but I think the likelihood is quite small, so I'm
proposing to get rid of it to simplify things a little bit.
Change-Id: I710b8ec0d449d1b110d67ddf9fcbdb2b37108306
|
|
Printing macros defined in the main source file doesn't work reliably
using various toolchains, especially when DWARF 5 is used. For example,
using the binaries produced by either of these commands:
$ gcc --version
gcc (GCC) 11.2.0
$ ld --version
GNU ld (GNU Binutils) 2.38
$ gcc test.c -g3 -gdwarf-5
$ clang --version
clang version 13.0.1
$ clang test.c -gdwarf-5 -fdebug-macro
I get:
$ ./gdb -nx -q --data-directory=data-directory a.out
(gdb) start
Temporary breakpoint 1 at 0x111d: file test.c, line 6.
Starting program: /home/simark/build/binutils-gdb-one-target/gdb/a.out
Temporary breakpoint 1, main () at test.c:6
6 return ZERO;
(gdb) p ZERO
No symbol "ZERO" in current context.
When starting to investigate this (taking the gcc-compiled binary as an
example), we see that GDB fails to look up the appropriate macro scope
when evaluating the expression. While stopped in
macro_lookup_inclusion:
(top-gdb) p name
$1 = 0x62100011a980 "test.c"
(top-gdb) p source.filename
$2 = 0x62100011a9a0 "/home/simark/build/binutils-gdb-one-target/gdb/test.c"
`source` is the macro_source_file that we would expect GDB to find.
`name` comes from the symtab::filename field of the symtab we are
stopped in. GDB doesn't find the appropriate macro_source_file because
the name of the macro_source_file doesn't match exactly the name of the
symtab.
The name of the main symtab comes from the compilation unit's
DW_AT_name, passed to the buildsym_compunit's constructor:
https://gitlab.com/gnutools/binutils-gdb/-/blob/4815d6125ec580cc02a1094d61b8c9d1cc83c0a1/gdb/dwarf2/read.c#L10627-10630
The contents of DW_AT_name, in this case, is "test.c". It is typically
(what I witnessed all compilers do) the same string that was passed to
the compiler on the command-line.
The name of the macro_source_file comes from the line number program
header's file table, from the call to the line_header::file_file_name
method:
https://gitlab.com/gnutools/binutils-gdb/-/blob/4815d6125ec580cc02a1094d61b8c9d1cc83c0a1/gdb/dwarf2/macro.c#L54-65
line_header::file_file_name prepends the directory path that the file
entry refers to, in the file table (if the file name is not already
absolute). In this case, the file name is "test.c", appended to the
directory "/home/simark/build/binutils-gdb-one-target/gdb".
Because the symtab's name is not created the same way as the
macro_source_file's name is created, we get this mismatch. GDB fails to
find the appropriate macro scope for the symtab, and we can't print
macros when stopped in that symtab.
To make this work, we must ensure that paths produced in these two ways
end up identical. This can be tricky because of the different ways a
path can be passed to the compiler by the user.
Another thing to consider is that while the main symtab's name (or
subfile, before it becomes a symtab) is created using DW_AT_name, the
main symtab is also referred to using its entry in the line table
header's file table, when processing the line table. We must therefore
ensure that the same name is produced in both cases, so that a call to
"start_subfile" for the main subfile will correctly find the
already-created subfile, created by buildsym_compunit's constructor. If
we fail to do that, things still often work, because of a fallback: the
watch_main_source_file_lossage method. This method determines that if
the main subfile has no symbols but there exists another subfile with
the same basename (e.g. "test.c") that does have symbols, it's probably
because there was some filename mismatch. So it replaces the main
subfile with that other subfile. I think that heuristic is useful as a
last effort to work around any bug or bad debug info, but I don't think
we should design things such as to rely on it. It's a heuristic, it can
get things wrong. So in my search for a fix, it is important that given
some good debug info, we don't end up relying on that for things to
work.
A first attempt at fixing this was to try to prepend the compilation
directory here or not prepend it there. In practice, because of all the
possible combinations of debug info the compilers produce, it was not
possible to get something that would produce reliable, consistent paths.
Another attempt at fixing this was to make both macro_source_file
objects and symtab objects use the most complete form of path possible.
That means to prepend directories at least until we get an absolute
path. In theory, we should end up with the same path in all cases.
This generally worked, but because it changed the symtab names, it
resulted in user-visible changes (for example, paths to source files in
Breakpoint hit messages becoming always absolute). I didn't find this
very good, first because there is a "set filename-display" setting that
lets the user control how they want the paths to be displayed, and that
would suddenly make this setting completely ineffective (although even
today, it is a bit dependent on the debug info). Second, it would
require a good amount of testsuite tweaks to make tests accept these
suddenly absolute paths.
This new patch is a slight variation of that: it adds a new field called
"filename_for_id" in struct symtab and struct subfile, next to the
existing filename field. The goal is to separate the internal ids used
for finding objects from the names used for presentation. This field is
used for identifying subfiles, symtabs and macro_source_files
internally. For DWARF symtabs, this new field is meant to contain the
"most complete possible" path, as discussed above. So for a given file,
it must always be in the same form, everywhere. The existing
symtab::filename field remains the one used for printing to the user, so
there shouldn't be any change in how paths are printed.
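As a rough standalone sketch of the "most complete path possible" idea (this
illustrates the rule, not GDB's actual line_header::file_file_name
implementation; the paths are made up), directories are prepended until the
name is absolute:
...
#include <cassert>
#include <string>

static bool
is_absolute (const std::string &p)
{
  return !p.empty () && p[0] == '/';
}

/* NAME is the file table entry's name, DIR the include directory it
   refers to, COMP_DIR the compilation unit's DW_AT_comp_dir.  */
static std::string
name_for_id (std::string name, const std::string &dir,
             const std::string &comp_dir)
{
  if (!is_absolute (name) && !dir.empty ())
    name = dir + "/" + name;
  if (!is_absolute (name) && !comp_dir.empty ())
    name = comp_dir + "/" + name;
  return name;
}

int
main ()
{
  /* DWARF 5 style: dir_index refers to ".", comp_dir is absolute.  */
  assert (name_for_id ("test.c", ".", "/tmp/build")
          == "/tmp/build/./test.c");

  /* DW_AT_name relative to DW_AT_comp_dir, no directory component.  */
  assert (name_for_id ("test.c", "", "/tmp/build")
          == "/tmp/build/test.c");
}
...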
Changes in the core symtab files are:
- Add "name_for_id" and "filename_for_id" fields to "struct subfile"
and "struct symtab", next to existing "name" and "filename" fields.
- Make buildsym_compunit::buildsym_compunit and
buildsym_compunit::start_subfile accept a "name_for_id" parameter
next to the existing "name" ones.
- Make buildsym_compunit::start_subfile use "name_for_id" for looking
up existing subfiles. This is the key thing for making calls
to start_subfile for the main source file look up the existing
subfile successfully, and avoid relying on
watch_main_source_file_lossage.
- Make sal_macro_scope pass "filename_for_id", rather than "filename",
to macro_lookup_inclusion. This is the key thing to making the
lookup work and macro printing work.
Changes in the DWARF files are:
- Make line_header::file_file_name return the "most complete possible"
name. The only pre-existing user of this method is the macro code,
to give the macro_source_file objects their name. And we now want
them to have this "most complete possible" name, which will match the
corresponding symtab's "filename_for_id".
- Make dwarf2_cu::start_compunit_symtab pass the "most complete
possible" name for the main symtab's "filename_for_id". In this
context, where the info comes from the compilation unit's DW_AT_name
/ DW_AT_comp_dir, it means prepending DW_AT_comp_dir to DW_AT_name if
DW_AT_name is not already absolute.
- Change dwarf2_start_subfile to build a name_for_id for the subfile
being started. The simplest way is to re-use
line_header::file_file_name, since the callers always have a
file_entry handy. This ensures that it will get the exact same path
representation as the macro code does, for the same file (since it
also uses line_header::file_file_name).
- Update calls to allocate_symtab to pass the "name_for_id" from the
subfile.
Tests exercising all this are added by the following patch.
Of all the cases I tried, the only one I found that ends up relying on
watch_main_source_file_lossage is the following one:
$ clang --version
clang version 13.0.1
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
$ clang ./test.c -g3 -O0 -gdwarf-4
$ ./gdb -nx --data-directory=data-directory -q -readnow -iex "set debug symtab-create 1" a.out
...
[symtab-create] start_subfile: name = test.c, name_for_id = /home/simark/build/binutils-gdb-one-target/gdb/test.c
[symtab-create] start_subfile: name = ./test.c, name_for_id = /home/simark/build/binutils-gdb-one-target/gdb/./test.c
[symtab-create] start_subfile: name = ./test.c, name_for_id = /home/simark/build/binutils-gdb-one-target/gdb/./test.c
[symtab-create] start_subfile: found existing symtab with name_for_id /home/simark/build/binutils-gdb-one-target/gdb/./test.c (/home/simark/build/binutils-gdb-one-target/gdb/./test.c)
[symtab-create] watch_main_source_file_lossage: using subfile ./test.c as the main subfile
As we can see, there are two forms used for "test.c", one with a "." and
one without. This comes from the fact that the compilation unit DIE
contains:
DW_AT_name ("test.c")
DW_AT_comp_dir ("/home/simark/build/binutils-gdb-one-target/gdb")
without a ".", and the line table for that file contains:
include_directories[ 1] = "."
file_names[ 1]:
name: "test.c"
dir_index: 1
When assembling the filename from that entry, we get a ".".
It is a bit unexpected that the main filename resulting from the line
table header does not match exactly the name in the compilation unit.
For instance, gcc uses "./test.c" for the DW_AT_name, which gives
identical paths in the compilation unit and in the line table header.
Similarly, with DWARF 5:
$ clang ./test.c -g3 -O0 -gdwarf-5
clang creates two entries that refer to the same file, but in a different
form.
include_directories[ 0] = "/home/simark/build/binutils-gdb-one-target/gdb"
include_directories[ 1] = "."
file_names[ 0]:
name: "test.c"
dir_index: 0
file_names[ 1]:
name: "test.c"
dir_index: 1
The first file name produces a path without a "." while the second does.
This is not caught by watch_main_source_file_lossage, because of
dwarf_decode_lines that creates a symtab for each file entry in the line
table. It therefore appears as "non-empty" to
watch_main_source_file_lossage. This results in two symtabs:
(gdb) maintenance info symtabs
{ objfile /home/simark/build/binutils-gdb-one-target/gdb/a.out ((struct objfile *) 0x613000005d00)
{ ((struct compunit_symtab *) 0x62100011aca0)
debugformat DWARF 5
producer clang version 13.0.1
name test.c
dirname /home/simark/build/binutils-gdb-one-target/gdb
blockvector ((struct blockvector *) 0x621000129ec0)
user ((struct compunit_symtab *) (null))
{ symtab test.c ((struct symtab *) 0x62100011ad20)
fullname (null)
linetable ((struct linetable *) 0x0)
}
{ symtab ./test.c ((struct symtab *) 0x62100011ad60)
fullname (null)
linetable ((struct linetable *) 0x621000129ef0)
}
}
}
I am not sure what the consequence of this is, but this is also what
happens before my patch, so I think it's acceptable to leave it as-is.
To handle these two cases nicely, I think we will need a function that
removes the unnecessary "." from path names, something that can be done
later.
Finally, I made a change in find_file_and_directory that is necessary to
avoid breaking the test
gdb.dwarf2/dw2-compdir-oldgcc.exp: info source gcc42
Without that change, we would get:
(gdb) info source
Current source file is /dir/d/dw2-compdir-oldgcc42.S
Compilation directory is /dir/d
whereas the expected result is:
(gdb) info source
Current source file is dw2-compdir-oldgcc42.S
Compilation directory is /dir/d
This test was added here:
https://sourceware.org/pipermail/gdb-patches/2012-November/098144.html
Long story short, GCC <= 4.2 apparently had a bug where it would
generate a DW_AT_name with a full path ("/dir/d/dw2-compdir-oldgcc42.S")
and no DW_AT_comp_dir. The line table has one entry with filename
"dw2-compdir-oldgcc42.S", which refers to directory 0. Directory 0
normally refers to the compilation unit's comp dir, but it is
non-existent in this case.
This caused some symtab lookup problems, and to work around them, some
workaround was added, which today reads as:
if (res.get_comp_dir () == nullptr
&& producer_is_gcc_lt_4_3 (cu)
&& res.get_name () != nullptr
&& IS_ABSOLUTE_PATH (res.get_name ()))
res.set_comp_dir (ldirname (res.get_name ()));
Source: https://gitlab.com/gnutools/binutils-gdb/-/blob/6577f365ebdee7dda71cb996efa29d3714cbccd0/gdb/dwarf2/read.c#L9428-9432
It extracts an artificial DW_AT_comp_dir from DW_AT_name, if there is no
DW_AT_comp_dir and DW_AT_name is absolute.
Prior to my patch, a subfile would get created with filename
"/dir/d/dw2-compdir-oldgcc42.S", from DW_AT_name, and another would get
created with filename "dw2-compdir-oldgcc42.S" from the line table's
file table. Then watch_main_source_file_lossage would kick in and merge
them, keeping only the "dw2-compdir-oldgcc42.S" one:
[symtab-create] start_subfile: name = /dir/d/dw2-compdir-oldgcc42.S
[symtab-create] start_subfile: name = dw2-compdir-oldgcc42.S
[symtab-create] start_subfile: name = dw2-compdir-oldgcc42.S
[symtab-create] start_subfile: found existing symtab with name dw2-compdir-oldgcc42.S (dw2-compdir-oldgcc42.S)
[symtab-create] watch_main_source_file_lossage: using subfile dw2-compdir-oldgcc42.S as the main subfile
And so "info source" would show "dw2-compdir-oldgcc42.S" as the
filename.
With my patch applied, but without the change in
find_file_and_directory, both DW_AT_name and the line table would try to
start a subfile with the same filename_for_id, and there was no need for
watch_main_source_file_lossage - which is what we want:
[symtab-create] start_subfile: name = /dir/d/dw2-compdir-oldgcc42.S, name_for_id = /dir/d/dw2-compdir-oldgcc42.S
[symtab-create] start_subfile: name = dw2-compdir-oldgcc42.S, name_for_id = /dir/d/dw2-compdir-oldgcc42.S
[symtab-create] start_subfile: found existing symtab with name_for_id /dir/d/dw2-compdir-oldgcc42.S (/dir/d/dw2-compdir-oldgcc42.S)
[symtab-create] start_subfile: name = dw2-compdir-oldgcc42.S, name_for_id = /dir/d/dw2-compdir-oldgcc42.S
[symtab-create] start_subfile: found existing symtab with name_for_id /dir/d/dw2-compdir-oldgcc42.S (/dir/d/dw2-compdir-oldgcc42.S)
But since the one with name == "/dir/d/dw2-compdir-oldgcc42.S", coming
from DW_AT_name, gets created first, it wins, and the symtab ends up
with "/dir/d/dw2-compdir-oldgcc42.S" as the name, "info source" shows
"/dir/d/dw2-compdir-oldgcc42.S" and the test breaks.
This is not wrong per se; after all, DW_AT_name is
"/dir/d/dw2-compdir-oldgcc42.S", so it wouldn't be wrong to report the
current source file as "/dir/d/dw2-compdir-oldgcc42.S". If you compile
a file passing "/an/absolute/path.c", DW_AT_name typically contains (at
least with GCC) "/an/absolute/path.c" and GDB tells you that the source
file is "/an/absolute/path.c". But we can also keep the existing
behavior fairly easily with a little change in find_file_and_directory.
When extracting an artificial DW_AT_comp_dir from DW_AT_name, we now
modify the name to just keep the file part. The result is coherent with
what compilers do when you compile a file by just passing its filename
("gcc path.c -g"):
DW_AT_name ("path.c")
DW_AT_comp_dir ("/home/simark/build/binutils-gdb-one-target/gdb")
With this change, filename_for_id is still the full name,
"/dir/d/dw2-compdir-oldgcc42.S", but the filename of the subfile /
symtab (what ends up shown by "info source") is just
"dw2-compdir-oldgcc42.S", and that makes the test happy.
Change-Id: I8b5cc4bb3052afdb172ee815c051187290566307
|
|
In the following patch, there will be some callers of file_file_name
that will already have access to the file_entry object for which they
want the file name. It would be inefficient to have them pass an index,
only for line_header::file_file_name to re-lookup the same file_entry
object. Change line_header::file_file_name to accept a file_entry
object reference, instead of an index to look up.
I think this change makes sense in any case. Callers that have an index
can first obtain a file_entry using line_header::file_name_at or
line_header::file_names.
When passing a file_entry object, we can assume that the file_entry's
index is valid, unlike when passing an index. So, push the special case
about an invalid index to the sole current caller of file_file_name,
macro_start_file. I think that error belongs there anyway, since it
specifically talks about "bad file number in macro information".
This requires recording the file index in the file_entry structure, so
add that.
Change-Id: Ic6e44c407539d92b7863d7ba82405ade17f384ad
|
|
The following patch changes line_header::file_file_name to prepend the
compilation directory to the file name, if needed. For that, the line
header needs to know about the compilation directory. Prepare for that
by adding a constructor that takes it as a parameter, and passing the
value down everywhere needed. Add a second constructor for the special
case of building a line_header for doing a hash table lookup, since that
case doesn't require a compilation directory value.
Change-Id: Iba3ba0293e4e2d13a64b257cf9a3094684d54330
|
|
Add a few debug prints in buildsym.c that were helpful to me in writing
this series.
Change-Id: If10a818feaee3ce1b78a2a254013b62dd578002b
|
|
Introduce symtab_create_debug_printf and symtab_create_debug_printf_v,
to print the debug messages enabled by "set debug symtab-create".
Change-Id: I442500903f72d4635c2dd9eaef770111f317dc04
|
|
On aarch64-linux I run into this failure with gcc 7.5.0:
...
(gdb) print $item.started^M
$1 = (-5312, 65535, 4202476)^M
(gdb) FAIL: gdb.ada/convvar_comp.exp: print $item.started
...
The test-case expects (0, 0, 0), but we're getting another value due to
incorrect location information.
Work around this by:
- first printing the value, and then
- verifying that the convenience variable matches the printed value.
I've verified that the test-case still checks what it should by disabling
the fix from commit cc0e770c0d0 ("memory error printing component of record
from convenience variable") and observing the test-case fail.
Tested on x86_64-linux and aarch64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29420
|
|
On aarch64 (and likewise on arm), I run into:
...
(gdb) PASS: gdb.threads/killed-outside.exp: get pid of inferior
Executing on target: kill -9 11516 (timeout = 300)
builtin_spawn -ignore SIGHUP kill -9 11516^M
continue^M
Continuing.^M
Unable to fetch general registers: No such process.^M
(gdb) [Thread 0xfffff7d511e0 (LWP 11518) exited]^M
^M
Program terminated with signal SIGKILL, Killed.^M
The program no longer exists.^M
FAIL: gdb.threads/killed-outside.exp: prompt after first continue (timeout)
...
due to a mismatch between the actual "No such process" line and the expected
one:
...
set no_such_process_msg "Couldn't get registers: No such process\."
...
Fix this by updating the regexp.
Tested on aarch64-linux, and x86_64-linux.
|
|
The Guile code generally checks to see if an htab is non-null before
destroying it. However, the registry code already ensures this, so we
can change these checks to asserts and simplify the code a little.
|
|
The registry code creates "registry_data" objects that hold the free
function and the index; then the registry keys refer to this object.
However, only the index is really useful, and now that registries have
a private implementation, just the index can be stored and we can
reduce the memory use of registries a little bit. This also
simplifies the code somewhat.
|
|
This rewrites registry.h, removing all the macros and replacing it
with relatively ordinary template classes. The result is less code
than the previous setup. It replaces large macros with a relatively
straightforward C++ class, and now manages its own cleanup.
The existing type-safe "key" class is replaced with the equivalent
template class. This approach ended up requiring relatively few
changes to the users of the registry code in gdb -- code using the key
system just required a small change to the key's declaration.
All existing users of the old C-like API are now converted to use the
type-safe API. This mostly involved changing explicit deletion
functions to be an operator() in a deleter class.
The old "save/free" two-phase process is removed, and replaced with a
single "free" phase. No existing code used both phases.
The old "free" callbacks took a parameter for the enclosing container
object. However, this wasn't truly needed and is removed here as
well.
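As a rough standalone analogue of the resulting API shape (an illustration of
the type-safe key plus deleter pattern, not GDB's actual registry.h), a key
identifies one slot of per-container data and carries both the type and the
deleter:
...
#include <cassert>
#include <memory>
#include <unordered_map>

/* A container with an attached "registry": data slots keyed by the
   address of the key object, each owning its payload.  */
struct container
{
  std::unordered_map<const void *, std::shared_ptr<void>> slots;
};

/* Type-safe key: all access to the stored DATA goes through the key,
   and cleanup goes through DELETER's operator().  */
template<typename Data, typename Deleter = std::default_delete<Data>>
struct key
{
  void set (container &c, Data *data) const
  {
    c.slots[this] = std::shared_ptr<void>
      (data, [] (void *p) { Deleter () (static_cast<Data *> (p)); });
  }

  Data *get (container &c) const
  {
    auto it = c.slots.find (this);
    return it == c.slots.end ()
           ? nullptr : static_cast<Data *> (it->second.get ());
  }
};

struct my_data { int value; };

static const key<my_data> my_key {};

int
main ()
{
  container c;
  my_key.set (c, new my_data { 42 });
  assert (my_key.get (c)->value == 42);
  /* When C is destroyed, the deleter runs automatically.  */
}
...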
|
|
The guile code has a couple of unused functions that touch on the
registry API. This patch removes them.
|
|
When an objfile is destroyed, types that are still in use and
allocated on that objfile are copied. A temporary hash map is created
during this process, and it is allocated on the destroyed objfile's
obstack -- which normally is fine, as that is going to be destroyed
shortly anyway.
However, this approach requires that the objfile be passed to registry
destruction, and this won't be possible in the rewritten registry.
This patch changes the copied type hash table to simply use the heap
instead. It also removes the 'objfile' parameter from
copy_type_recursive, to make this all more clear.
This patch also fixes an apparent bug in copy_type_recursive.
Previously it was copying the dynamic property list to the dying
objfile's obstack:
- = copy_dynamic_prop_list (&objfile->objfile_obstack,
However I think this is incorrect -- that obstack is about to be
destroyed.
|
|
This changes address_space to use new and delete, and makes some other
small C++-ification changes as well, like changing address_space_num
to be a method.
This patch was needed for the subsequent patch to rewrite the registry
system.
|
|
PR python/18385
v7:
This version addresses the issues pointed out by Tom.
Added nullchecks for Python object creations.
Changed from using PyLong_FromLong to the gdb_py-versions.
Re-factored some code to make it look more cohesive.
Also added the more safe Python reference count decrement PY_XDECREF,
even though the BreakpointLocation type is never instantiated by the
user (explicitly documented in the docs) decrementing < 0 is made
impossible with the safe call.
Tom pointed out that using the policy class explicitly to decrement a
reference counted object was not the way to go, so this has instead been
wrapped in a ref_ptr that handles that for us in blocpy_dealloc.
Moved macro from py-internal to py-breakpoint.c.
Renamed section at the bottom of commit message "Patch Description".
v6:
This version addresses the points Pedro gave in review to this patch.
Added the attributes `function`, `fullname` and `thread_groups`
as per request by Pedro with the argument that it more resembles the
output of the MI-command "-break-list". Added documentation for these attributes.
Cleaned up leftovers from copy+paste in the test suite, removed hard-coding
of line numbers where possible.
Refactored some code to use more C++-style range-for loops with respect
to breakpoint locations.
Changed terminology; the naming was very inconsistent, using a variety of
"parent" and "owner". Now "owner" is the only term used, and the field in
the gdb_breakpoint_location_object is now also called "owner".
v5:
Changes in response to review by Tom Tromey:
- Replaced manual INCREF/DECREF calls with
gdbpy_ref ptrs in places where possible.
- Fixed non-gdb style conforming formatting
- Get parent of bploc increases ref count of parent.
- moved bploc Python definition to py-breakpoint.c
The INCREF of self in bppy_get_locations is due
to the individual locations holding a reference to
its owner. This is decremented at de-alloc time.
The reason why this needs to be here is that if the user writes,
for instance:
py loc = gdb.breakpoints()[X].locations[Y]
the breakpoint owner object immediately goes
out of scope (GC'd/dealloced), and the location
object requires its owner to stay alive for as long as the
location itself is alive.
Thanks for your review, Tom!
v4:
Fixed remaining doc issues as per request
by Eli.
v3:
Rewritten commit message, shortened + reworded,
added tests.
Patch Description
Currently, the Python API lacks the ability to
query breakpoints for their installed locations,
and subsequently, can't query any information about them, or
enable/disable individual locations.
This patch solves this by adding Python type gdb.BreakpointLocation.
The type is never instantiated by the user of the Python API directly,
but is produced by the gdb.Breakpoint.locations attribute returning
a list of gdb.BreakpointLocation.
gdb.Breakpoint.locations:
The attribute for retrieving the currently installed breakpoint
locations for gdb.Breakpoint. Matches behavior of
the "info breakpoints" command in that it only
returns the last known or currently inserted breakpoint locations.
BreakpointLocation contains 7 attributes
6 read-only attributes:
owner: location owner's Python companion object
source: file path and line number tuple: (string, long) / None
address: installed address of the location
function: function name where location was set
fullname: fullname where location was set
thread_groups: thread groups (inferiors) where location was set.
1 writeable attribute:
enabled: get/set enable/disable this location (bool)
Access/calls to these can all throw Python exceptions (documented in
the online documentation), and that's due to the nature
of how breakpoint locations can be invalidated
"behind the scenes", either by them being removed
from the original breakpoint or changed,
like for instance when a new symbol file is loaded, at
which point all breakpoint locations are re-created by GDB.
Therefore this patch has chosen to be non-intrusive:
it's up to the Python user to re-request the locations if
they become invalid.
Also, there are event handlers that handle new object files etc.; if a Python
user is storing breakpoint locations in some larger state they've
built up, refreshing the locations is easy, and it only incurs
runtime overhead when the Python user wants to use them.
gdb.BreakpointLocation Python type
struct "gdbpy_breakpoint_location_object" is found in python-internal.h
Its definition, layout, methods and functions
are found in the same file as gdb.Breakpoint (py-breakpoint.c)
1 change was also made to breakpoint.h/c to make it possible
to enable and disable a bp_location* specifically,
without having its LOC_NUM, as this number
also can change arbitrarily behind the scenes.
Updated docs & news file as per request.
Testsuite: tests the .source attribute and the disabling of
individual locations.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=18385
Change-Id: I302c1c50a557ad59d5d18c88ca19014731d736b0
|
|
Fix:
In gdb_mbuild.sh line 174:
continue
^------^ SC2104 (error): In functions, use return instead of continue.
Change-Id: I5ce95b01359c5cfbb1612f2f48b80bfeea66c96c
|
|
Commit 05c06f318fd9a112529dfc313e6512b399a645e4 enabled GDB to access
memory while threads are running. It did this by accessing
/proc/PID/task/LWP/mem.
Unfortunately, this interface is not implemented for writing in older
kernels (such as RHEL6). This means that GDB is unable to insert
breakpoints on these hosts:
$ ./gdb -q gdb -ex start
Reading symbols from gdb...
Temporary breakpoint 1 at 0x40fdd5: file ../../src/gdb/gdb.c, line 28.
Starting program: /home/rhel6/fsf/linux/gdb/gdb
Warning:
Cannot insert breakpoint 1.
Cannot access memory at address 0x40fdd5
(gdb)
Before this patch, linux_proc_xfer_memory_partial (previously called
linux_proc_xfer_partial) would return TARGET_XFER_EOF if the write to
/proc/PID/mem failed. [More specifically, linux_proc_xfer_partial
would not "bother for one word," but the effect is the essentially
same.]
This status was checked by linux_nat_target::xfer_partial, which would
then fallback to using ptrace to perform the operation.
This is the specific hunk that removed the fallback:
- xfer = linux_proc_xfer_partial (object, annex, readbuf, writebuf,
- offset, len, xfered_len);
- if (xfer != TARGET_XFER_EOF)
- return xfer;
+ return linux_proc_xfer_memory_partial (readbuf, writebuf,
+ offset, len, xfered_len);
+ }
return inf_ptrace_target::xfer_partial (object, annex, readbuf, writebuf,
offset, len, xfered_len);
This patch makes linux_nat_target::xfer_partial go straight to writing
memory via ptrace if writing via /proc/pid/mem is not possible in the
running kernel, enabling GDB to insert breakpoints on these older
kernels. Note that a recent patch changed the return status from
TARGET_XFER_EOF to TARGET_XFER_E_IO.
Tested on {unix,native-gdbserver,native-extended-gdbserver}/-m{32,64}
on x86_64, s390x, aarch64, and ppc64le.
Change-Id: If1d884278e8c4ea71d8836bedd56e6a6c242a415
|
|
Probe whether /proc/pid/mem is writable, by using it to write to a GDB
variable. This will be used in the following patch to avoid falling
back to writing to inferior memory with ptrace if /proc/pid/mem _is_
writable.
Change-Id: If87eff0b46cbe5e32a583e2977a9e17d29d0ed3e
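A minimal standalone sketch of such a probe (the names are hypothetical and
this is not GDB's actual implementation): try overwriting a known variable in
our own address space through /proc/<pid>/mem and check whether the new value
sticks:
...
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static volatile unsigned char probe_target = 0x55;

int
main ()
{
  char path[64];
  snprintf (path, sizeof (path), "/proc/%d/mem", (int) getpid ());

  bool writable = false;
  int fd = open (path, O_RDWR);
  if (fd >= 0)
    {
      unsigned char newval = 0xaa;
      ssize_t n = pwrite (fd, &newval, 1,
                          (off_t) (uintptr_t) &probe_target);
      writable = (n == 1 && probe_target == 0xaa);
      close (fd);
    }

  printf ("/proc/pid/mem %s writable\n", writable ? "is" : "is not");
}
...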
|
|
According to LoongArch ELF ABI specification [1], handle the function
return value of various types.
[1] https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html#_return_values
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
|
|
Fix some code style issues suggested by Tom Tromey and Andrew Burgess,
thank you.
(1) Put an introductory comment to explain the purpose for some functions.
(2) Modify the attribute code to make it portable.
(3) Remove globals and pass pointers to locals.
(4) Remove "*" in the subsequent comment lines.
(5) Put two spaces before "{" and "}".
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
|
|
When running test-case gdb.opt/inline-small-func.exp with clang 12.0.1, I run
into:
...
gdb compile failed, /usr/bin/ld: inline-small-func0.o: in function `main':
inline-small-func.c:21: undefined reference to `callee'
clang-12.0: error: linker command failed with exit code 1 \
(use -v to see invocation)
UNTESTED: gdb.opt/inline-small-func.exp: failed to prepare
...
Fix this by using __attribute__((always_inline)).
Tested on x86_64-linux.
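For reference, the general shape of such an annotation (a hypothetical
standalone snippet rather than the actual inline-small-func.c source; the
real test is C, this just shows the attribute on an inline function):
...
/* always_inline makes the compiler inline the call even at -O0, so no
   out-of-line definition of callee needs to survive to link time.  */
__attribute__ ((always_inline))
static inline int
callee (int x)
{
  return x + 1;
}

int
main ()
{
  return callee (41) == 42 ? 0 : 1;
}
...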
|
|
I tried building GDB on GNU/Hurd, and ran into this error:
CXX gnu-nat.o
gnu-nat.c: In member function ‘virtual int gnu_nat_target::find_memory_regions(find_memory_region_ftype, void*)’:
gnu-nat.c:2620:21: error: too few arguments to function
2620 | (*func) (last_region_address,
| ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
2621 | last_region_end - last_region_address,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2622 | last_protection & VM_PROT_READ,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2623 | last_protection & VM_PROT_WRITE,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2624 | last_protection & VM_PROT_EXECUTE,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2625 | 1, /* MODIFIED is unknown, pass it as true. */
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2626 | data);
| ~~~~~
gnu-nat.c:2635:13: error: too few arguments to function
2635 | (*func) (last_region_address, last_region_end - last_region_address,
| ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2636 | last_protection & VM_PROT_READ,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2637 | last_protection & VM_PROT_WRITE,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2638 | last_protection & VM_PROT_EXECUTE,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2639 | 1, /* MODIFIED is unknown, pass it as true. */
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2640 | data);
| ~~~~~
make[2]: *** [Makefile:1926: gnu-nat.o] Error 1
This is because in this commit:
commit 68cffbbd4406b4efe1aa6e18460b1d7ca02549f1
Date: Thu Mar 31 11:42:35 2022 +0100
[AArch64] MTE corefile support
Added a new argument to find_memory_region_ftype, but did not pass it to
the function in gnu-nat.c. Fix this by passing memory_tagged as false.
As Luis pointed out, similar bugs may also appear on FreeBSD and NetBSD,
and I have reproduced them on both systems. This patch fixes them
incidentally.
Tested by rebuilding on GNU/Hurd, FreeBSD/amd64 and NetBSD/amd64.
|
|
I ran into this error when building GDB on NetBSD:
CXX netbsd-nat.o
netbsd-nat.c: In member function 'virtual bool nbsd_nat_target::info_proc(const char*, info_proc_what)':
netbsd-nat.c:314:3: error: 'gdb_argv' was not declared in this scope
gdb_argv built_argv (args);
^~~~~~~~
netbsd-nat.c:314:3: note: suggested alternative: 'gdbarch'
gdb_argv built_argv (args);
^~~~~~~~
gdbarch
netbsd-nat.c:315:7: error: 'built_argv' was not declared in this scope
if (built_argv.count () == 0)
^~~~~~~~~~
netbsd-nat.c:315:7: note: suggested alternative: 'buildargv'
if (built_argv.count () == 0)
^~~~~~~~~~
buildargv
gmake[2]: *** [Makefile:1893: netbsd-nat.o] Error 1
Fix this by adding the missing header file, which is the obvious fix.
Tested by rebuilding on NetBSD/amd64.
|
|
After the commit:
commit 08106042d9f5fdff60c129bf33190639f1a98b2a
Date: Thu May 19 13:20:17 2022 +0100
gdb: move the type cast into gdbarch_tdep
GDB would no longer build using g++ 4.8. The issue appears to be some
confusion caused by GDB having 'struct gdbarch_tdep', but also a
templated function called 'gdbarch_tdep'. Prior to the above commit
the gdbarch_tdep function was not templated, and this compiled just
fine. Note that the above commit compiles just fine with later
versions of g++, so this issue was clearly fixed at some point, though
I've not tried to track down exactly when.
In this commit I propose to fix the g++ 4.8 build problem by renaming
'struct gdbarch_tdep' to 'struct gdbarch_tdep_base'. This rename
better represents that the struct is only ever used as a base class,
and removes the overloading of the name, which allows GDB to build
with g++ 4.8.
I've also updated the comment on 'struct gdbarch_tdep_base' to fix a
typo, and the comment on the 'gdbarch_tdep' function, to mention that
in maintainer mode a run-time type check is performed.
|
|
The varobj_invalidate function is meant to be called when restarting a
process, and checks at this point whether some of the previously existing
varobjs can be recreated in the context of the new process.
Two kinds of varobj are subject to re-creation: global varobjs (i.e.
varobjs which reference a global variable), and floating varobjs (i.e.
varobjs which are always re-evaluated in the context of whatever is
the currently selected frame at the time of evaluation).
However, in the re-creation process, the varobj_invalidate_iter
recreates floating varobj as non-floating, due to an invalid parameter.
This patch fixes this and adds an assertion to check that if a varobj
is indeed recreated, it matches the original varobj "floating" property.
Another issue is that if at this recreation time the expression watched
by the floating varobj is not in scope, then the varobj is marked as
invalid. If later the user selects a frame where the expression becomes
valid, the varobj remains invalid and this is wrong. This patch also
make sure that floating varobj are not invalidated if they cannot be
evaluated.
The last important thing to note is that, due to the previous patch, when
varobj_invalidate is executed (in the context of a new process), any
global vars have already been invalidated (this was done when the
objfile they referred to got invalidated). As a consequence,
varobj_invalidate tries to recreate vars which are already marked as
invalid. This does not feel entirely right, but I keep this behavior
for backward compatibility.
Tested on x86_64-linux
|
|
A varobj object contains references to types, variables (i.e. struct
variable) and expressions. All of those can reference data on an
objfile's obstack. It is possible for this objfile to be deleted (and
the obstack to be freed) while the varobj remains valid. Later, if the
user uses the varobj, this will result in a use-after-free error. With
an address sanitizer build, this leads to a plain error. For
non-address-sanitizer builds we might see undefined behaviour, which
manifests itself as assertion failures when accessing data backed by
freed memory.
This can be observed if we create a varobj that refers to a symbol in a
shared library, and then either the objfile gets reloaded (using the `file`
command) or the shared library is unloaded (with a call to dlclose,
for example).
This patch fixes those issues by:
- Adding a cleanup procedure to the free_objfile observable. When
activated, this observer clears expressions referencing the objfile
being freed, and removes references to blocks belonging to this
objfile.
- Adding varobj support in the `preserve_values` (gdb.value.c). This
ensures that before the objfile is unloaded, any type owned by the
objfile referenced by the varobj is replaced by an equivalent type
not owned by the objfile. This process is done here instead of in the
free_objfile observer in order to reuse the type hash table already
used for similar purpose when replacing types of values kept in the
value history.
This patch also makes sure to keep a reference to the expression's
gdbarch and language_defn members when the varobj->root->exp is
initialized. Those structures outlive the objfile, so this is safe.
This is done because those references might be used to initialize a Python
context even after exp is invalidated. Another approach could have been
to initialize the python context with default gdbarch and language_defn
(i.e. nullptr) if expr is NULL, but since we might still try to display
the value which was obtained by evaluating exp when it was still valid,
keeping track of the context which was used at this time seems
reasonable.
Tested on x86_64-Linux.
Co-Authored-By: Pedro Alves <pedro@palves.net>
|
|
With the CLI testsuite's runto proc, we can pass "allow-pending" as an
option, like:
runto func allow-pending
That is currently not possible with MI's mi_runto, however. This
patch makes it possible, by adding a new "-pending" option to
mi_runto.
A pending breakpoint shows different MI attributes compared to a
breakpoint with a location, so the regexp returned by
mi_make_breakpoint isn't suitable. Thus, add a new
mi_make_breakpoint_pending proc for pending breakpoints.
Tweak mi_runto to let it take and pass down arguments.
Change-Id: I185fef00ab545a1df2ce12b4dbc3da908783a37c
|
|
GDB uses the environment variable PYTHONDONTWRITEBYTECODE to
determine whether or not to write the result of byte-compiling
python modules when the "python dont-write-bytecode" setting
is "auto". Simon noticed that GDB's implementation doesn't
follow the Python documentation.
At present, GDB only checks for the existence of this environment
variable. That is not sufficient though. Regarding
PYTHONDONTWRITEBYTECODE, this document...
https://docs.python.org/3/using/cmdline.html
...says:
If this is set to a non-empty string, Python won't try to write
.pyc files on the import of source modules.
This commit fixes GDB's handling of PYTHONDONTWRITEBYTECODE by adding
an empty string check.
This commit also corrects the set/show command documentation for
"python dont-write-bytecode". The current doc was just a copy
of that for set/show python ignore-environment.
During his review of an earlier version of this patch, Eli Zaretskii
asked that the help text that I proposed for "set/show python
dont-write-bytecode" be expanded. I've done that in addition to
clarifying the documentation of this option in the GDB manual.
|