Seen on x86_64-linux Ubuntu 24.04.2 using gcc-13.3.0 with
CFLAGS="-m32 -g -O2 -fsanitize=address,undefined"
In function ‘sprintf’,
inlined from ‘s_mri_for’ at gas/config/tc-m68k.c:6941:5:
/usr/include/bits/stdio2.h:30:10: error: null destination pointer [-Werror=format-overflow=]
30 | return __builtin___sprintf_chk (__s, __USE_FORTIFY_LEVEL - 1,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
31 | __glibc_objsize (__s), __fmt,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
32 | __va_arg_pack ());
| ~~~~~~~~~~~~~~~~~
Rewrite the code without sprintf, as in other parts of s_mri_for.
See also commit 760fb390fd4c and following commits.
Note that adding -D_FORTIFY_SOURCE=0 to CFLAGS (which is a good idea
when building with sanitizers) merely transforms the sprintf_chk error
here into one regarding plain sprintf.
---
Tidy early-out errors that didn't free matching_vector. Don't
bfd_preserve_restore if we get to err_ret from the first
bfd_preserve_save, which might fail on a memory allocation, leaving
preserve.marker NULL. Also take bfd_lock a little earlier, before
modifying abfd->format, to simplify the error return path from a lock
failure.
---
Also free malloc'd relocs.
---
These are only used by cutu_reader, so make them methods of cutu_reader.
This makes it a bit more obvious in which context this code is called.
lookup_dwo_unit_in_dwp can't be made a method of cutu_reader, as it is
used in another context (lookup_dwp_signatured_type /
lookup_signatured_type), which happens during CU expansion.
Change-Id: Ic62c3119dd6ec198411768323aaf640ed165f51b
Approved-By: Tom Tromey <tom@tromey.com>
---
get_dwp_file lazily looks for a .dwp file for the given objfile. It is
called by indexing workers, when a cutu_reader object looks for a DWO
file. It is called with the "dwo_lock" held, meaning that the first
worker to get there will do the work, while the others will wait at the
lock.
I'm trying to reduce the time during which this lock is held and do other
refactorings to make it easier to reason about the DWARF reader code.
Moving the lookup of the .dwp file ahead, before we start parallelizing
work, helps make things simpler, because we can then assume everywhere
else that we have already checked for a .dwp file.
Put the call to open_and_init_dwp_file in dwarf2_has_info, right next to
where we look up .dwz files. I used the same try-catch pattern as for
the .dwz file lookup.
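A minimal sketch of that pattern (the exact signature of
open_and_init_dwp_file and the surrounding error handling here are
assumptions, not the verbatim patch):

  try
    {
      per_bfd->dwp_file = open_and_init_dwp_file (per_objfile);
    }
  catch (const gdb_exception_error &err)
    {
      /* Mirror the .dwz lookup: report the problem but carry on
         without a .dwp file.  */
      warning ("%s", err.what ());
    }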
Change-Id: I615da85f62a66d752607f0dbe9f0372dfa04b86b
Approved-By: Tom Tromey <tom@tromey.com>
---
Following a previous patch, these functions can accept a per_bfd
instead of a per_objfile.
Change-Id: Iacc8924d2e49a05920d9a7fde2f7584f709fbdd2
Approved-By: Tom Tromey <tom@tromey.com>
---
Instead of passing a boolean to create_dwp_hash_table to select the
section to read, it's simpler to just pass the section.
Change-Id: Ie043c31e80518239f6403288dcf03f7769c58e8c
Approved-By: Tom Tromey <tom@tromey.com>
---
The sections would have been read already in
dwarf2_locate_common_dwp_sections or dwarf2_locate_dwo_sections, with
this call:
  dw_sect->read (objfile);
Change-Id: Ice0ed5d9a2070967826a59b2d6f724451ace22f4
Approved-By: Tom Tromey <tom@tromey.com>
---
It is no longer needed.
Change-Id: I22b21b12dc9f74a423bca355d4d83f0167e75f34
Approved-By: Tom Tromey <tom@tromey.com>
---
The comment over dwarf2_section_info::get_size says:

  In other cases, you must call this function, because for compressed
  sections the size field is not set correctly until the section has
  been read
From what I can see (while debugging a test case compiled with -gz on
Linux), that's not true. For compressed sections, bfd_section_size
returns the uncompressed size. asection::size contains the uncompressed
size while asection::compressed_size contains the compressed size:
(top-gdb) p sec
$13 = (asection *) 0x521000119778
(top-gdb) p sec.compressed_size
$14 = 6191
(top-gdb) p sec.size
$15 = 12116
I therefore propose to remove dwarf2_section_info::get_size, as it
appears that reading in the section is orthogonal to knowing its size.
If the assumption above is false, it would be nice to document in which
case it's false.
I checked the callers, and I don't think that we need to add any
dwarf2_section_info::read calls to compensate for the fact that get_size
used to do it.
Change-Id: I428571e532301d49f1d8242d687e1fcb819b75c1
Approved-By: Tom Tromey <tom@tromey.com>
---
I noticed a few spots in ada-varobj.c that refer to calling xfree,
where the type in question has changed to std::string. This patch
removes these obsolete comments.
---
gprofng/ChangeLog
2025-04-17 Vladimir Mezentsev <vladimir.mezentsev@oracle.com>
* src/gp-archive.cc: Fix the --help output.
* src/gp-collect-app.cc: Likewise.
* src/gp-display-src.cc: Likewise.
* src/gp-display-text.cc: Likewise.
* src/gprofng.cc: Likewise.
---
A fix for commit c4fce3ef2927.
---
The test added in commit 4fe96ddaf614 results in an asan complaint:
loongarch-parse.y:225:16: runtime error: left shift of negative value -1
To avoid the complaint, perform left shifts as unsigned (which gives
the same result on 2's complement machines). Do the same for
addition, subtraction and multiplication. Furthermore, warn on
divide/modulus by zero.
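The idea in miniature (the real change is in loongarch-parse.y's
expression handling; this standalone helper is only illustrative):

  #include <stdint.h>

  /* Do the shift on the unsigned representation to avoid undefined
     behaviour; on 2's complement machines the value is unchanged.  */
  static int64_t
  shift_left (int64_t lhs, unsigned int amount)
  {
    return (int64_t) ((uint64_t) lhs << amount);
  }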
---
After building gdb with -fsanitize=threads, and running test-case
gdb.cp/cplusfuncs.exp, I run into a single timeout:
...
FAIL: gdb.cp/cplusfuncs.exp: info function operator=( (timeout)
...
and the test-case takes 2m33s to finish.
This is due to expanding CUs from libstdc++.
After uninstalling the libstdc++6-debuginfo package, the timeout disappears and
testing time goes down to 9 seconds.
Fix this by not running to main, which brings testing time down to 3 seconds.
With a gdb built without -fsanitize=threads, testing time goes down from 11
seconds to less than 1 second.
Tested on x86_64-linux.
Reviewed-By: Keith Seitz <keiths@redhat.com>
---
value_struct_elt_bitpos is weird: it takes an in/out value parameter,
and it takes an error string parameter. However, it only has a single
caller, which never uses the "out" value.
I think it was done this way to mimic value_struct_elt. However,
value_struct_elt is pretty ugly and I don't think it's worth
imitating.
This patch cleans up value_struct_elt_bitpos a bit.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
This test is not supposed to use -z force-bti.
---
Both BTI and memory sealing require use of GNU properties and
we should check that there is no interference.
---
Fixes [1]. Previously, iteration over the GNU properties of an input
file could stop before reaching GNU_PROPERTY_AARCH64_FEATURE_1_AND,
which would result in incorrect inference of the properties of the
output file.
In the particular use case described in [1], the memory seal property
GNU_PROPERTY_MEMORY_SEAL with number 3, if present in the input file,
prevented reading information from GNU_PROPERTY_AARCH64_FEATURE_1_AND
property due to filtering by property number.
[1] PR32818 https://sourceware.org/bugzilla/show_bug.cgi?id=32818
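A condensed sketch of the corrected scan (layout and names simplified;
the real code walks bfd's parsed GNU property lists):

  #include <stddef.h>
  #include <stdint.h>

  #define GNU_PROPERTY_AARCH64_FEATURE_1_AND 0xc0000000u

  struct gnu_property { uint32_t pr_type; uint32_t pr_data; };

  /* Keep scanning past unrelated properties, e.g.
     GNU_PROPERTY_MEMORY_SEAL, rather than stopping at the first
     non-matching property number.  */
  static uint32_t
  aarch64_feature_1_and (const struct gnu_property *props, size_t count)
  {
    for (size_t i = 0; i < count; i++)
      if (props[i].pr_type == GNU_PROPERTY_AARCH64_FEATURE_1_AND)
        return props[i].pr_data;
    return 0;
  }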
---
With test-case gdb.threads/clone-attach-detach.exp I usually get:
...
(gdb) attach <pid> &^M
Attaching to program: clone-attach-detach, process <pid>^M
[New LWP <lwp>]^M
(gdb) PASS: $exp: bg attach <n>: attach
[Thread debugging using libthread_db enabled]^M
Using host libthread_db library "/lib64/libthread_db.so.1".^M
...
but sometimes I run into:
...
(gdb) attach <pid> &^M
Attaching to program: clone-attach-detach, process <pid>^M
[New LWP <lwp>]^M
(gdb) [Thread debugging using libthread_db enabled]^M
Using host libthread_db library "/lib64/libthread_db.so.1".^M
FAIL: $exp: bg attach <n>: attach (timeout)
...
I managed to reproduce this using make target check-readmore and
READMORE_SLEEP=100.
Fix this using -no-prompt-anchor.
Tested on x86_64-linux.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
gdb/copyright.py currently changes some files that it shouldn't:
- despite having a `gnulib/import` entry in EXCLUDE_LIST, it does
change the files under that directory
- it is missing `sim/Makefile.in`
Change the exclude list logic to use glob patterns. This makes it
easier to specify exclusions of full directories or files by basename,
while simplifying the code.
Merge EXCLUDE_LIST and NOT_FSF_LIST, since there's no fundamental reason
to keep them separate (they are treated identically). I kept the
comment that explains that some files are excluded due to not being
FSF-licensed.
Merge EXCLUDE_ALL_LIST into EXCLUDE_LIST, converting the entries to glob
patterns that match everywhere in the tree (e.g. `**/configure`).
Tested by running the script on the parent commit of d01e823438c7
("Update copyright dates to include 2025") and diff'ing the result with
d01e823438c7. The only differences are:
- the files that we don't want to modify (gnulib/import and
sim/Makefile.in)
- the files that need to be modified by hand
Running the script on latest master produces no diff.
Change-Id: I318dc3bff34e4b3a9b66ea305d0c3872f69cd072
Reviewed-By: Guinevere Larsen <guinevere@redhat.com>
---
Add `pyright: strict` at the top of the file, then fix the fallout.
This annotation is understood by pyright, and thus any IDE using pyright
behind the scenes (VSCode and probably others).
I presume that any GDB developer running this script is using a recent
enough version of Python, so specify the type annotations using the
actual types when possible (e.g. `list[str]` instead of
`typing.List[str]`). I believe this requires Python 3.9.
Change-Id: I3698e28555e236a03126d4cd010dae4b5647ce48
Reviewed-By: Guinevere Larsen <guinevere@redhat.com>
---
With test-case gdb.base/bg-execution-repeat.exp, occasionally I run into a
timeout:
...
(gdb) c 1&
Will stop next time breakpoint 1 is reached. Continuing.
(gdb) PASS: $exp: c 1&: c 1&
Breakpoint 2, foo () at bg-execution-repeat.c:23
23 return 0; /* set break here */
PASS: $exp: c 1&: breakpoint hit 1
Will stop next time breakpoint 2 is reached. Continuing.
(gdb) PASS: $exp: c 1&: repeat bg command
print 1
$1 = 1
(gdb) PASS: $exp: c 1&: input still accepted
interrupt
(gdb) PASS: $exp: c 1&: interrupt
Program received signal SIGINT, Interrupt.
foo () at bg-execution-repeat.c:24
24 }
PASS: $exp: c 1&: interrupt received
set var do_wait=0
(gdb) PASS: $exp: c 1&: set var do_wait=0
continue&
Continuing.
(gdb) PASS: $exp: c 1&: continue&
FAIL: $exp: c 1&: breakpoint hit 2 (timeout)
...
I can reproduce it reliably by adding a "sleep (1)" before the "do_wait = 1"
in the .c file.
The timeout happens as follows:
- with the inferior stopped at main, gdb continues (command c 1&)
- the inferior hits the breakpoint at foo
- gdb continues (using the repeat command functionality)
- the inferior is interrupted
- inferior variable do_wait gets set to 0. The assumption here is that the
inferior has progressed enough that do_wait is set to 1 at that point, but
that happens not to be the case. Consequently, this has no effect.
- gdb continues
- the inferior sets do_wait to 1
- the inferior enters the wait function, and waits for do_wait to become 0,
which never happens.
Fix this by moving the "do_wait = 1" to before the first call to foo.
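A condensed, illustrative shape of the test program after the fix (not
the verbatim source):

  volatile int do_wait;

  static int
  foo (void)
  {
    return 0; /* set break here */
  }

  int
  main (void)
  {
    do_wait = 1; /* moved before the first call to foo */
    foo ();
    while (do_wait)
      ; /* gdb's "set var do_wait=0" releases us */
    foo ();
    return 0;
  }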
Tested on x86_64-linux.
Reviewed-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
---
* rescoff.c (read_coff_res_dir): Add more sanity checking.
Tidy and correct existing checks.
---
Also remove unnecessary casts on memory alloc function returns.
---
Commit 9e68cae4fdfb broke the check I added in commit 4846e543de95.
Add missing "return NULL".
---
This changes the Ada symbol cache to use gdb::unordered_set rather
than an htab.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
This patch changes decoded_names_store to use a gdb::string_set rather
than an htab.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
On s390x-linux, with test-case gdb.ada/overloads.exp and gcc 7.5.0 I run into:
...
(gdb) print Oload(CA)^M
Could not find a match for oload^M
(gdb) FAIL: $exp: print Oload(CA)
...
The mismatch happens here in ada_type_match:
...
return ftype->code () == atype->code ();
...
with:
...
(gdb) p ftype->code ()
$3 = TYPE_CODE_TYPEDEF
(gdb) p atype->code ()
$4 = TYPE_CODE_ARRAY
...
At the start of ada_type_match, typedefs are skipped:
...
ftype = ada_check_typedef (ftype);
atype = ada_check_typedef (atype);
...
but immediately after this, refs are skipped:
...
if (ftype->code () == TYPE_CODE_REF)
ftype = ftype->target_type ();
if (atype->code () == TYPE_CODE_REF)
atype = atype->target_type ();
...
which in this case makes ftype a typedef.
Fix this by using ada_check_typedef after skipping the refs as well.
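In terms of the fragments quoted above, the fix amounts to (sketch):

  if (ftype->code () == TYPE_CODE_REF)
    ftype = ada_check_typedef (ftype->target_type ());
  if (atype->code () == TYPE_CODE_REF)
    atype = ada_check_typedef (atype->target_type ());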
Tested on x86_64-linux and s390x-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR ada/32409
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32409
---
On s390x-linux, with test-case gdb.ada/scalar_storage.exp we have:
...
(gdb) print V_LE^M
$1 = (value => 126, another_value => 12, color => 3)^M
(gdb) FAIL: gdb.ada/scalar_storage.exp: print V_LE
print V_BE^M
$2 = (value => 125, another_value => 9, color => green)^M
(gdb) KFAIL: $exp: print V_BE (PRMS: DW_AT_endianity on enum types)
...
The KFAIL is incorrect in the sense that gdb is behaving as expected.
The problem is incorrect debug info, so change this into an xfail.
Furthermore, extend the xfail to cover V_LE.
Tested on s390x-linux and x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR testsuite/32875
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32875
---
When compiling with -gsplit-dwarf -fdebug-types-section, DWARF 5
.debug_info.dwo sections may contain some type units:
$ llvm-dwarfdump -F -color a-test.dwo | head -n 5
a-test.dwo: file format elf64-x86-64
.debug_info.dwo contents:
0x00000000: Type Unit: length = 0x000008a0, format = DWARF32, version = 0x0005, unit_type = DW_UT_split_type, abbr_offset = 0x0000, addr_size = 0x08, name = 'vector<int, std::allocator<int> >', type_signature = 0xb499dcf29e2928c4, type_offset = 0x0023 (next unit at 0x000008a4)
In this case, create_dwo_cus_hash_table wrongly creates a dwo_unit for
it and adds it to dwo_file::cus. create_dwo_debug_type_hash_table later
correctly creates a dwo_unit that it puts in dwo_file::tus.
This can be observed with:
$ ./gdb -nx -q --data-directory=data-directory -ex 'maint set dwarf sync on' -ex "maint set worker-threads 0" -ex "set debug dwarf-read 2" -ex "file a.out" -batch
...
[dwarf-read] create_dwo_cus_hash_table: Reading .debug_info.dwo for /home/smarchi/build/binutils-gdb/gdb/a-test.dwo:
[dwarf-read] create_dwo_cus_hash_table: offset 0x0, dwo_id 0xb499dcf29e2928c4
[dwarf-read] create_dwo_cus_hash_table: offset 0x8a4, dwo_id 0x496a8791a842701b
[dwarf-read] create_dwo_cus_hash_table: offset 0x941, dwo_id 0xefd13b3f62ea9fea
...
[dwarf-read] create_dwo_debug_type_hash_table: Reading .debug_info.dwo for /home/smarchi/build/binutils-gdb/gdb/a-test.dwo
[dwarf-read] create_dwo_debug_type_hash_table: offset 0x0, signature 0xb499dcf29e2928c4
[dwarf-read] create_dwo_debug_type_hash_table: offset 0x8a4, signature 0x496a8791a842701b
[dwarf-read] create_dwo_debug_type_hash_table: offset 0x941, signature 0xefd13b3f62ea9fea
...
Fix it by skipping anything that isn't a compile unit in
create_dwo_cus_hash_table. After this patch, the debug output of
create_dwo_cus_hash_table only shows one created dwo_unit, as we expect.
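Sketched, with the field and enumerator usage here as assumptions:

  /* In create_dwo_cus_hash_table's unit loop: only compile units
     belong in dwo_file::cus; type units are handled by
     create_dwo_debug_type_hash_table.  */
  if (unit_type != DW_UT_split_compile)
    continue;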
I couldn't find any user-visible problem related to this; I just noticed
it while debugging.
Change-Id: I7dddf766fe1164123b6702027b1beb56114f25b1
Reviewed-By: Tom de Vries <tdevries@suse.de>
---
Rename some functions to make it clearer that they are only relevant
when dealing with DWO files.
Change-Id: Ia0cd3320bf16ebdbdc3c09d7963f372e6679ef7c
Reviewed-By: Tom de Vries <tdevries@suse.de>
---
The flag already exists, but it has not been exposed to the user.
Signed-off-by: Marek Pikuła <m.pikula@partner.samsung.com>
---
Fix brace position from commit c4fce3ef2927.
---
windres code has the habit of exiting on any error. That's not so
bad, but it does make oss-fuzz ineffective when testing windres. Fix
many places that print errors and exit to instead print the error and
pass status up the call chain. In the process of doing this, I
noticed write_res_file was calling bfd_close without checking return
status. Fixing that resulted in lots of testsuite failures. The
problem was a lack of bfd_set_format in windres_open_as_binary, which
leaves the output file in bfd_unknown format. As it happens, this
doesn't make any difference in writing the output binary file, except
for the bfd_close return status.
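The pattern looks roughly like this (non_fatal, bfd_set_format and
bfd_errmsg are real binutils/bfd interfaces; the surrounding function
shape is illustrative):

  static bool
  close_output_bfd (bfd *abfd, const char *filename)
  {
    /* Without an explicit format the file stays bfd_unknown and
       bfd_close reports failure.  */
    if (!bfd_set_format (abfd, bfd_object))
      {
        non_fatal ("%s: %s", filename, bfd_errmsg (bfd_get_error ()));
        return false;
      }
    if (!bfd_close (abfd))
      {
        non_fatal ("%s: %s", filename, bfd_errmsg (bfd_get_error ()));
        return false;
      }
    return true; /* callers pass status up instead of exiting */
  }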
---
An oss-fuzz testcase manages to hit a buffer overflow. Sanity check
by passing the buffer length to bin_to_res_toolbar and ensuring reads
don't go off the end of the buffer.
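For instance, reads can go through a helper that tracks the remaining
length (illustrative, not windres's actual interface):

  #include <stddef.h>
  #include <stdint.h>

  /* Read a little-endian 32-bit value, refusing to run off the end.  */
  static bool
  get_u32 (const uint8_t **p, size_t *len, uint32_t *out)
  {
    if (*len < 4)
      return false;
    *out = (uint32_t) (*p)[0] | ((uint32_t) (*p)[1] << 8)
           | ((uint32_t) (*p)[2] << 16) | ((uint32_t) (*p)[3] << 24);
    *p += 4;
    *len -= 4;
    return true;
  }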
---
Add -fPIC when compiling the test, to fix complaints on some targets
about certain relocations not being valid for shared libraries.
---
There's currently no test for unwinding the SVE registers from a signal
frame, so add one.
Tested on aarch64-linux-gnu native.
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
---
While commit ef6379e16dd1 ("Set the default DLL characteristics to 0 for
Cygwin based targets") tried to undo the too broad earlier 514b4e191d5f
("Change the default characteristics of DLLs built by the linker to more
secure settings"), it didn't go quite far enough. Apparently the
assumption was that if it's not MinGW, it must be Cygwin. Whether it
really is okay to default three of the flags to non-zero on MinGW also
remains unclear - sadly neither of the commits came with any description
whatsoever. (Documentation also wasn't updated to indicate the restored
default.)
Setting effectively any of the DLL characteristics flags depends on
properties of the binary being linked. While defaulting to "more secure"
is a fair goal, it's only the programmer who can know whether their code
is actually compatible with the respective settings. On the assumption
that the change of defaults was indeed deliberate (and justifiable) for
MinGW, limit them to just that. In particular, don't default any of the
flags to being set for non-MinGW, non-Cygwin targets, like e.g. UEFI. At
least the mere applicability of the high-entropy-VA bit is pretty
questionable there in the first place - UEFI applications, after all,
run in "physical mode", i.e. either unpaged or (where paging is a
requirement, like for x86-64) direct-mapped.
The situation is particularly problematic with NX-compat: Many UEFI
implementations respect the "physical mode" property, where permissions
can't be enforced anyway. Some, like reportedly OVMF, even have a build
option to behave either way. Hence successfully testing a UEFI binary on
any number of systems does not guarantee it won't crash elsewhere if the
flag is wrongly set.
Get rid of excess semicolons as well.
---
There's no reason not to do as the comment says, just like all other
architectures do when they need a custom field: call the allocation method
of the "superclass". Which is the ELF one, of which in turn the BFD one
is the "superclass", dealt with accordingly by
_bfd_elf_link_hash_newfunc().
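The conventional shape of that pattern (the entry type and its extra
field here are hypothetical):

  struct foo_elf_link_hash_entry
  {
    struct elf_link_hash_entry root;
    int custom; /* the architecture's extra field */
  };

  static struct bfd_hash_entry *
  foo_elf_link_hash_newfunc (struct bfd_hash_entry *entry,
                             struct bfd_hash_table *table,
                             const char *string)
  {
    /* Allocate here so this type can itself be subclassed further.  */
    if (entry == NULL)
      {
        entry = (struct bfd_hash_entry *)
          bfd_hash_allocate (table, sizeof (struct foo_elf_link_hash_entry));
        if (entry == NULL)
          return NULL;
      }

    /* Let the ELF "superclass" (which chains to the BFD one) do its
       initialization, then set up the custom field.  */
    entry = _bfd_elf_link_hash_newfunc (entry, table, string);
    if (entry != NULL)
      ((struct foo_elf_link_hash_entry *) entry)->custom = 0;
    return entry;
  }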
---
The need for this has disappeared with c65c21e1ffd1 ("various i386-aout
and i386-coff target removal"), with a few other users having got
removed just a few days earlier; avoid the unnecessary indirection.
---
While COFF, unlike ELF, doesn't have a generic way to express symbol
size, there is a means to do so for functions. When inputs are ELF,
propagate function sizes, including the fact that a symbol denotes a
function, to the output's symbol table.
Note that this requires hackery (cross-object-format processing) in two
places: when linking, global symbols are entered into a global hash
table, and hence relevant information needs to be updated there in that
case, while otherwise the original symbol structures can be consulted.
For the setting of ->u.syment.n_type, the later writing of the field to
literal 0 needs to be dropped from coff_write_alien_symbol(). It was
redundant anyway with an earlier write of the field using C_NUL.
|
---
  || is_cpu (&i.tm, CpuPadLock)
  || is_cpu (&i.tm, CpuPadLockRNG2)
  || is_cpu (&i.tm, CpuPadLockPHE2)
  || is_cpu (&i.tm, CpuPadLockXMODX)
that we have (effectively) twice.