|
GDB's Python documentation does make it clear that keyword arguments
are supported for functions that take 2 or more arguments. The
documentation makes no promise for keyword argument support on
functions that only take a single argument.
That said, I'm a fan of keyword arguments: I think they help document
the code and make intentions clearer, even for single-argument
functions.
As I'm changing gdb.Color anyway (see previous commit), I'd like to
add keyword argument support to gdb.Color.escape_sequence, even though
this is a single argument method. This should be harmless for anyone
who doesn't want to use keywords, but adds the option for those of us
that do.
I've also removed a redundant check that the 'self' argument was a
gdb.Color object; Python already ensures this is the case.
And I have folded the check that the single argument is a bool into
the gdb_PyArg_ParseTupleAndKeywords call; this means that the error
message will now include the name of the incorrect type, which should
make debugging issues easier.
Tests have been extended to cover both cases -- it appears the
incorrect argument type error was not previously tested, so it is
now.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
GDB's Python API documentation is clear:
Functions and methods which have two or more optional arguments allow
them to be specified using keyword syntax.
The gdb.Color.__init__ method matches this description, but doesn't
support keyword arguments.
This commit fixes this by adding keyword argument support.
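As a rough illustration (the 'value' keyword name and the specific
COLORSPACE_ constant below are assumptions, not taken from this
patch; 'color_space' matches the documentation change in the previous
commit), a keyword call might look like:
(gdb) python c = gdb.Color(value="red", color_space=gdb.COLORSPACE_ANSI_8COLORS)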
There's a new test to cover this functionality.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
While reading through the documentation for the new gdb.Color class I
spotted a couple of things which I thought could be improved:
* I replaced @code{Color} with @code{gdb.Color}. Most of the other
classes are referenced with the 'gdb.' prefix, so this makes
gdb.Color consistent. Including the 'gdb.' prefix makes it far
easier to search the documentation to find relevant content. And
finally, my understanding is that usually in Python code, the
class would be written as 'gdb.Color' unless the user specifically
pulls 'Color' into the current scope using 'from gdb import
Color'.
* Replace 'colorspace' with 'color space'. There was already a use
of the two word form in the documentation (for gdb.Color), so this
just makes things consistent.
* Removed use of @var on two @defun lines. No other @defun lines
use @var, so the use of @var here was making the output
inconsistent, e.g. in the 'info' output, @var causes the string to
be capitalised.
* Rename the 'color-space' argument to 'color_space' for
Color.__init__. In the next commit I plan to add Python keyword
argument support to this function, which means the argument name
needs to be a valid keyword (i.e. must not contain the '-'
character).
* Added a pointer to where the @samp{COLORSPACE_} constants can be
found. These constants are referenced before they are defined in
the documentation, which is fine, but I think it is a good idea to
let the user know where the constants can be found when we first
reference them.
* Remove use of 'self' for the Color.escape_sequence documentation.
There are a few functions that do include 'self' as an argument (I
think this is a mistake) but the vast majority don't. I think not
including 'self' is the better approach; a user wouldn't be
expected to explicitly pass 'self'; this is done automatically by
Python as a result of calling the method on an object. So I've
removed the reference to 'self' from this method.
Approved-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I've been reviewing all uses of PyObject_IsInstance, and I believe
that the use of PyObject_IsInstance in py-unwind.c is not entirely
correct. The use of PyObject_IsInstance is in this code in
frame_unwind_python::sniff:
if (PyObject_IsInstance (pyo_unwind_info,
(PyObject *) &unwind_info_object_type) <= 0)
error (_("A Unwinder should return gdb.UnwindInfo instance."));
The problem is that PyObject_IsInstance can return -1 to indicate an
error, in which case a Python error will have been set. Now, the
above code appears to handle this case (it checks for '<= 0'); however,
frame_unwind_python::sniff has this near the start:
gdbpy_enter enter_py (gdbarch);
And looking in python.c at 'gdbpy_enter::~gdbpy_enter ()', you'll
notice that if an error is set then the error is printed, but also, we
get a warning about an unhandled Python exception. Clearly, all
exceptions should have been handled by the time the gdbpy_enter
destructor is called.
I've added a test as part of this commit that exposes this problem;
the current output is:
(gdb) backtrace
Python Exception <class 'RuntimeError'>: error in Blah.__class__
warning: internal error: Unhandled Python exception
Python Exception <class 'gdb.error'>: A Unwinder should return gdb.UnwindInfo instance.
#0 corrupt_frame_inner () at /home/andrew/projects/binutils-gdb/build.dev-g/gdb/testsuite/../../../src.dev-g/gdb/test>
(gdb)
An additional observation is that we use PyObject_IsInstance to check
that the return value is a gdb.UnwindInfo, or a sub-class. However,
gdb.UnwindInfo lacks the Py_TPFLAGS_BASETYPE flag, and so cannot be
sub-classed. As such, PyObject_IsInstance is not really needed; we
could use PyObject_TypeCheck instead. The PyObject_TypeCheck function
only returns 0 or 1; there is no -1 error case. Switching to
PyObject_TypeCheck, then, fixes the above problem.
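As a sketch, the check after this change looks roughly like this (the
exact surrounding code may differ):
  if (!PyObject_TypeCheck (pyo_unwind_info, &unwind_info_object_type))
    error (_("A Unwinder should return gdb.UnwindInfo instance."));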
There's a new test that exposes the problems that originally existed.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In python/py-registers.c we make use of PyObject_IsInstance.
PyObject_IsInstance can return -1 for an error, 0 for false, or 1 for
true.
In py-registers.c we treat the return value from PyObject_IsInstance
as a boolean, which means both -1 and 1 will be treated as true.
If PyObject_IsInstance returns -1 for an error, this will be treated
as true, and we will then invoke undefined behaviour, as the pyo_reg_id
object will be treated as a gdb.RegisterDescriptor even though it
might not be.
I noticed that the gdb.RegisterDescriptor class does not have the
Py_TPFLAGS_BASETYPE flag, and therefore cannot be inherited from. As
such, using PyObject_IsInstance is not necessary; we can use
PyObject_TypeCheck instead. The PyObject_TypeCheck function only
returns 0 or 1, so we don't need to worry about the error case.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Because it runs so many variations, the test
gdb.dwarf2/macro-source-path.exp takes about 2:40 minutes to run for me,
in a non-optimized build. These days I often run all tests under
gdb.dwarf2, as a sanity test for my changes, and so I often have to wait
for this test to complete.
Split the test, to allow it to complete faster when running the
testsuite in parallel. After this patch, running all the
gdb.dwarf2/macro-source-path-*.exp tests in parallel takes me about 1
minute. That's more than I would expect (I would expect the time to be
divided by nearly 5), but it's already better than what we have now.
Change-Id: I07e4e1f234cf57d9b0c1c027f08061615714a4d5
Acked-By: Tom de Vries <tdevries@suse.de>
|
|
When I (Guinevere) pushed commit
b9c7eed0c2409fc640129a38d80a2bf1212b464a I accidentally used an outdated
version of the patch. This patch fixes that import, using the
actually approved version instead.
|
|
With a gdb 16.2 based package, I ran into:
...
(gdb) PASS: gdb.base/bg-execution-repeat.exp: c 1&: input still accepted
interrupt
(gdb) PASS: gdb.base/bg-execution-repeat.exp: c 1&: interrupt
set var do_wait=0
(gdb) PASS: gdb.base/bg-execution-repeat.exp: c 1&: set var do_wait=0
continue&
Cannot execute this command while the selected thread is running.
(gdb)
Program received signal SIGINT, Interrupt.
PASS: gdb.base/bg-execution-repeat.exp: c 1&: continue&
0x00007ffff7cf1503 in clock_nanosleep@GLIBC_2.2.5 () from /lib64/libc.so.6
FAIL: gdb.base/bg-execution-repeat.exp: c 1&: breakpoint hit 2 (timeout)
...
Fix this by waiting for "Program received signal SIGINT, Interrupt" after
issuing the interrupt command.
Tested on x86_64-linux.
|
|
The gdbpy_is_color function uses PyObject_IsInstance, and converts the
return from PyObject_IsInstance to a bool.
Unfortunately, PyObject_IsInstance can return -1, 0, or 1, for error,
false, or true respectively. When converting to a bool, both -1
and 1 will convert to true.
Additionally, when PyObject_IsInstance returns -1 an error will be
set.
What this means is that, if gdbpy_is_color is called with a non
gdb.Color object, and the PyObject_IsInstance check raises an error,
then (a) GDB will continue as if the object is a gdb.Color object,
which is likely going to invoke undefined behaviour, see
gdbpy_get_color for example, and (b) when GDB eventually returns to
the Python interpreter, due to an error being set, we'll see:
Python Exception <class 'SystemError'>: PyEval_EvalFrameEx returned a result with an error set
Error occurred in Python: PyEval_EvalFrameEx returned a result with an error set
However, after the previous commit, gdb.Color can no longer be
sub-classed, which means that fixing the above problems is easy: we
can replace the PyObject_IsInstance check with a PyObject_TypeCheck.
The PyObject_TypeCheck function only returns 0 or 1; there's no -1
error case.
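A minimal sketch of the fixed predicate (the exact name of the type
object is illustrative here) would be:
bool
gdbpy_is_color (PyObject *obj)
{
  return PyObject_TypeCheck (obj, &color_object_type) != 0;
}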
It's also worth noting that PyObject_TypeCheck is the function that is
more commonly used within GDB's Python API implementation; including
the py-color.c use, there were only 4 PyObject_IsInstance uses. Of the
remaining 3, 2 are fine, and the other (in py-disasm.c) is also
wrong. I'll address that in a separate patch.
There's also a new test included which exposes the above issue.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Remove the Py_TPFLAGS_BASETYPE flag from the gdb.Color type. This
effectively makes gdb.Color final; users can no longer create classes
that inherit from gdb.Color.
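With the flag removed, an attempt to subclass should fail at class
creation time with something like this (the exact wording and how GDB
presents the Python error may vary):
(gdb) python class MyColor(gdb.Color): pass
TypeError: type 'gdb.Color' is not an acceptable base type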
Right now I cannot think of any cases where inheritance would be
needed over composition for a simple type like gdb.Color. If I'm
wrong, then it's easy to add Py_TPFLAGS_BASETYPE back in later, as this
would be an extension of the API. But it's much harder to remove the
flag later as that might break existing user code (note: there has
been no release of GDB yet that includes the gdb.Color type).
Introducing this restriction makes the next commit easier.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The PyObject_IsInstance function can return -1 for errors, 0 to
indicate false, and 1 to indicate true.
I noticed in python/py-disasm.c that we treat the result of
PyObject_IsInstance as a bool. This means that if PyObject_IsInstance
returns -1, then this will be treated as true. The consequence of
this is that we will invoke undefined behaviour by treating the result
from the _print_insn call as if it was a DisassemblerResult object,
even though PyObject_IsInstance raised an error, and the result might
not be of the required type.
I could fix this by taking the -1 result into account; however,
gdb.DisassemblerResult cannot be sub-classed, as the type doesn't have
the Py_TPFLAGS_BASETYPE flag. As such, we can switch to using
PyObject_TypeCheck instead, which only returns 0 or 1, with no error
case.
I have also taken the opportunity to improve the error message emitted
if the result has the wrong type. Better error messages make debugging
issues easier.
I've added a test which exposes the problem when using
PyObject_IsInstance, and I've updated the existing test for the
improved error message.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Commit b9c7eed0c2409fc640129a38d80a2bf1212b464a recently introduced
a build failure, because the file gdb/riscv-canonicalize-syscall-gen.c
hasn't been added to the ALL_64_TARGET_OBS variable in the makefile,
leading to a linker issue. This commit fixes that.
Also, it turns out the new file was slightly outdated, as the gdb_old_mmap
syscall has been renamed to gdb_sys_old_mmap in commit
432eca4113d5748ad284a068873455f9962b44fe. This commit also fixes that
on the generated file itself, to quickly fix the build. A followup
commit will fix the python file responsible for generating the .c file.
|
|
Continuing to improve GDB's ability to debug linker namespaces, this
commit adds the command "info linker-namespaces". The command is
similar to "info sharedlibrary" but focused on improved readability
when the inferior has multiple linker namespaces active. This command
can be used in 2 different ways, with or without an argument.
When called without an argument, the command will print the number of
namespaces, and for each active namespace, its identifier, how many
libraries are loaded in it, and all the libraries (in a similar table to
what "info sharedlibrary" shows). As an example, this is what GDB's
output could look like:
(gdb) info linker-namespaces
There are 2 linker namespaces loaded
There are 3 libraries loaded in linker namespace [[0]]
Displaying libraries for linker namespace [[0]]:
From To Syms Read Shared Object Library
0x00007ffff7fc6000 0x00007ffff7fff000 Yes /lib64/ld-linux-x86-64.so.2
0x00007ffff7ebc000 0x00007ffff7fa2000 Yes (*) /lib64/libm.so.6
0x00007ffff7cc9000 0x00007ffff7ebc000 Yes (*) /lib64/libc.so.6
(*): Shared library is missing debugging information.
There are 4 libraries loaded in linker namespace [[1]]
Displaying libraries for linker namespace [[1]]:
From To Syms Read Shared Object Library
0x00007ffff7fc6000 0x00007ffff7fff000 Yes /lib64/ld-linux-x86-64.so.2
0x00007ffff7fb9000 0x00007ffff7fbe000 Yes gdb.base/dlmopen-ns-ids/dlmopen-lib.so
0x00007ffff7bc4000 0x00007ffff7caa000 Yes (*) /lib64/libm.so.6
0x00007ffff79d1000 0x00007ffff7bc4000 Yes (*) /lib64/libc.so.6
(*): Shared library is missing debugging information.
When called with an argument, the argument must be a namespace
identifier (either with or without the square bracket decorators). In
this situation, the command will truncate the output to only show the
relevant information for the requested namespace. For example:
(gdb) info linker-namespaces 0
There are 3 libraries loaded in linker namespace [[0]]
Displaying libraries for linker namespace [[0]]:
From To Syms Read Shared Object Library
0x00007ffff7fc6000 0x00007ffff7fff000 Yes /lib64/ld-linux-x86-64.so.2
0x00007ffff7ebc000 0x00007ffff7fa2000 Yes (*) /lib64/libm.so.6
0x00007ffff7cc9000 0x00007ffff7ebc000 Yes (*) /lib64/libc.so.6
(*): Shared library is missing debugging information.
The test gdb.base/dlmopen-ns-id.exp has been extended to test this new
command. Because some gcc and glibc defaults can change between
systems, we are not guaranteed to always have libc and libm loaded in a
namespace, so we can't guarantee the number of libraries, but the range
of the result is 2, so we can still check for glaring issues.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
The next patch will add a new command that will print libraries in a
manner very similar to the existing "info sharedlibrary" command. To
make that patch simpler to review, this commit does the bulk of
refactoring work, since it ends up being a non-trivial diff to review.
No functional changes are expected after this commit.
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
This commit adds 2 simple built-in convenience variables to help users
debug an inferior with multiple linker namespaces. The first is
$_active_linker_namespaces, which just counts how many namespaces have SOs
loaded onto them. The second is $_current_linker_namespace, and it tracks
which namespace the current location in the inferior belongs to.
This commit also introduces a test ensuring that we track namespaces
correctly, and that a user can use the $_current_linker_namespace
variable to set a conditional breakpoint, until linespec changes are
finalized to make this more convenient.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-by: Kevin Buettner <kevinb@redhat.com>
|
|
Extend `print_target_wait_results` to print the target from which the
wait result came.
Approved-By: Pedro Alves <pedro@palves.net>
|
|
This commit enables GDB's record mode for RISC-V targets. It includes
changes to the following files:
- gdb/riscv-linux-tdep.c, gdb/riscv-linux-tdep.h: adds facilities to record
syscalls.
- gdb/riscv-tdep.c, gdb/riscv-tdep.h: adds facilities to record execution of
rv64gc instructions.
- gdb/configure.tgt: adds new files for compilation.
- gdb/testsuite/lib/gdb.exp: enables testing of full record mode for RISC-V
targets.
- gdb/syscalls/riscv-canonicalize-syscall-gen.py: a script to generate the
function that canonicalizes RISC-V syscalls. This script can simplify support
for syscalls on rv32 and rv64 systems (currently only rv64 is supported). To
use this script you need to pass a path to a file with the syscall descriptions
from riscv-glibc (an example is in the help message). The script produces a
mapping from syscall names to the gdb_syscall enum.
- gdb/riscv-canonicalize-syscall.c: the file generated by the previous script.
- gdb/doc/gdb.texinfo: notification that record mode is enabled in RISC-V.
- gdb/NEWS: notification of new functionality.
Approved-By: Guinevere Larsen <guinevere@redhat.com>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
Use '=', not '==', as configure has a #!/bin/sh shebang and must work
with non-bash shells.
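For example (the variable name here is only illustrative), the
portable test syntax is:
if test "$enable_feature" = yes; then
  echo "feature enabled"
fi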
Fixes: c4375bc51c861dfa384a01bdb2e460e115710bf9
|
|
In commit a98a6fa2d8e ("s390: Add arch15 instructions"), support for
new instructions was added to libopcodes, but the added tests only exercise
this for gas.
Add a unit test disassemble-s390x that checks gdb's ability to
disassemble one of these instructions:
...
$ gdb -q -batch -ex "maint selftest -v disassemble-s390x"
Running selftest disassemble-s390x.
0xb9 0x68 0x00 0x03 -> clzg %r0,%r3
Ran 1 unit tests, 0 failed
...
Tested on x86_64-linux and s390x-linux.
|
|
Since commit 7b80401da00 ("Handle DWARF 5 separate debug sections"), test-case
gdb.debuginfod/fetch_src_and_symbols.exp fails here:
...
(gdb) file fetch_src_and_symbols_alt.o^M
Reading symbols from fetch_src_and_symbols_alt.o...^M
warning: could not find supplementary DWARF file \
(fetch_src_and_symbols_dwz.o) for fetch_src_and_symbols_alt.o^M
(gdb) FAIL: $exp: no_url: file fetch_src_and_symbols_alt.o
...
because the test expects this:
...
(gdb) file fetch_src_and_symbols_alt.o^M
Reading symbols from fetch_src_and_symbols_alt.o...^M
warning: could not find '.gnu_debugaltlink' file for \
fetch_src_and_symbols_alt.o^M
(gdb) PASS: $exp: no_url: file fetch_src_and_symbols_alt.o
...
Fix this by updating the regexp.
Tested on x86_64-linux.
|
|
Added tests for division/modulo by zero for instruction expressions.
|
|
This fixes an inconsistency in the linker map file, where string merge
sections (other than the first) kept their sizes. String merge
sections of like entsize are all accounted for in the first string
merge section's size.
* ldlang.c (print_input_section): Print SEC_EXCLUDE section size
as zero.
|
|
No uses of %F remain.
* ldmisc.c (vfinfo): Remove %F handling.
|
|
This adds a "-5" flag to cc-with-tweaks, mirroring dwz's "-5" flag,
and also adds a new cc-with-dwz-5 target board.
The "-5" flag tells dwz to use the DWARF 5 .debug_sup section in
multi-file mode.
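Assuming the usual way of selecting a testsuite board, the new board
can be used from the gdb build directory with something like:
make check RUNTESTFLAGS="--target_board=cc-with-dwz-5"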
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32808
|
|
DWARF 5 standardized the .gnu_debugaltlink section that dwz emits in
multi-file mode. This is handled via some new forms, and a new
.debug_sup section.
This patch adds support for this to gdb. It is largely
straightforward, I think, though one oddity is that I chose not to
have this code search the system build-id directories for the
supplementary file. My feeling was that, while it makes sense for a
distro to unify the build-id concept with the hash stored in the
.debug_sup section, there's no intrinsic need to do so.
This in turn means that a few tests -- for example those that test the
index cache -- will not work in this mode.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32808
Acked-By: Simon Marchi <simon.marchi@efficios.com>
|
|
dwz_file::read_string calls 'read' on the section, but this isn't
needed as the sections have all been pre-read.
This patch removes the unneeded call, and refactors dwz_file a bit to make
this more obvious -- by making it clear that only the "static
constructor" can create a dwz_file.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
Tested-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
|
|
There was a comment in gdb.python/py-color.exp that was probably left
over from a copy & paste, it incorrectly described what the test
script was testing.
Fixed in this commit.
There's no change in what is tested with this commit.
|
|
A few minor GNU/GDB coding style issues in py-color.c:
- Space after '&' reference operator in one place.
- Some excessive indentation on a couple of lines.
- Spaces after '!' logical negation operator.
- Using a pointer as a bool in a couple of places.
There should be no functional changes after this commit.
|
|
Spotted a stray white space at the end of an error message. Removed,
and updated the py-breakpoint.exp test to check this case.
|
|
In this review:
https://inbox.sourceware.org/gdb-patches/86sem6ase5.fsf@gnu.org
it was pointed out that I should use @samp{} around some text I was
adding to the documentation. However, the offending snippet of
documentation was something I copied from elsewhere in python.texi.
This commit fixes the original to use @samp{}.
|
|
I noticed that this commit:
commit 6447969d0ac774b6dec0f95a0d3d27c27d158690
Date: Sat Oct 5 22:27:44 2024 +0300
Add an option with a color type.
has an unnecessary `Py_INCREF (self);` in gdb.Color.__init__. This
means that the reference count on all gdb.Color objects (that pass
through __init__) will be +1 from where they should normally be, and
this will stop the gdb.Color objects from being deallocated.
Fix by removing the Py_INCREF call.
Add a test which exposes the memory leak.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
We currently have two memory leak tests in gdb.python/ and there's a
lot of duplication between these two.
In the next commit I'd like to add yet another memory leak test, which
would mean a third set of scripts which duplicate the existing two.
And three is where I draw the line.
This commit factors out the core of the memory leak tests into a new
module gdb_leak_detector.py, which can then be imported by each
tests's Python file in order to make writing the memory leak tests
easier.
I've also added a helper function to lib/gdb-python.exp which captures
some of the common steps needed in the TCL file in order to run a
memory leak test.
Finally, I use this new infrastructure to rewrite the two existing
memory leak tests.
What I considered, but ultimately didn't do, is merge the two memory
leak tests into a single TCL script. For the existing tests this
would be possible, but future tests might require different enough
setup that this wouldn't work for all of them; and now that we have
helper functions in a central location, each individual test is
actually pretty small, so leaving them separate seemed OK.
There should be no change in what is actually being tested after this
commit.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
ui_file::reset_style doesn't seem to be needed. This patch removes
it. Regression tested on x86-64 Fedora 40.
|
|
This fixes a crash on Windows NT 4.0, where windows-nat fails to
dynamically load some Win32 functions and prints a warning message with
a styled string, which depends on the ui-style regex. By using the
`compiled_regex` constructor, the regex is guaranteed to be initialized
before the `_initialize_xxx` functions run.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
gas/config/
* tc-aarch64.c (aarch64_sframe_get_abi_arch): Fix typo in
comment on SFrame identifier.
* tc-aarch64.h (aarch64_sframe_get_abi_arch,
sframe_get_abi_arch): Likewise.
* tc-i386.c (x86_sframe_get_abi_arch): Likewise.
* tc-i386.h (x86_sframe_get_abi_arch, sframe_get_abi_arch):
Likewise.
Reported-by: Indu Bhagat <indu.bhagat@oracle.com>
Signed-off-by: Jens Remus <jremus@linux.ibm.com>
|
|
gprofng/ChangeLog
2025-04-18 Vladimir Mezentsev <vladimir.mezentsev@oracle.com>
* doc/gprofng_ug.texi: Fix typo.
|
|
On Intel, gprofng should adjust return addresses, including for user leaf functions.
gprofng/ChangeLog
2025-04-18 Vladimir Mezentsev <vladimir.mezentsev@oracle.com>
* src/CallStack.cc (add_stack): Adjust return addresses on Intel.
|
|
For the Linux target, when trying to run a program from gdb, the
following defect is seen:
Program received signal SIGILL, Illegal instruction.
0x48004674 in _dl_debug_state () from target:/lib/ld.so.1
* microblaze-linux-tdep.c (microblaze_linux_memory_remove_breakpoint):
Call make_scoped_restore_show_memory_breakpoints.
Signed-off-by: Gopi Kumar Bulusu <gopi@sankhya.com>
Signed-off-by: Michael J. Eager <eager@eagercon.com>
|
|
Seen on x86_64-linux Ubuntu 24.04.2 using gcc-13.3.0 with
CFLAGS="-m32 -g -O2 -fsanitize=address,undefined"
In function ‘sprintf’,
inlined from ‘s_mri_for’ at gas/config/tc-m68k.c:6941:5:
/usr/include/bits/stdio2.h:30:10: error: null destination pointer [-Werror=format-overflow=]
30 | return __builtin___sprintf_chk (__s, __USE_FORTIFY_LEVEL - 1,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
31 | __glibc_objsize (__s), __fmt,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
32 | __va_arg_pack ());
| ~~~~~~~~~~~~~~~~~
Rewrite the code without sprintf, as in other parts of s_mri_for.
See also commit 760fb390fd4c and following commits.
Note that adding -D_FORTIFY_SOURCE=0 to CFLAGS (which is a good idea
when building with sanitizers) merely transforms the sprintf_chk error
here into one regarding plain sprintf.
|
|
Tidy early out errors which didn't free matching_vector. Don't
bfd_preserve_restore if we get to err_ret from the first
bfd_preserve_save, which might fail from a memory allocation leaving
preserve.marker NULL. Also take bfd_lock a little earlier before
modifying abfd->format to simplify error return path from a lock
failure.
|
|
Also free malloc'd relocs.
|
|
These are only used by cutu_reader, so make them methods of cutu_reader.
This makes it a bit more obvious in which context this code is called.
lookup_dwo_unit_in_dwp can't be made a method of cutu_reader, as it is
used in another context (lookup_dwp_signatured_type /
lookup_signatured_type), which happens during CU expansion.
Change-Id: Ic62c3119dd6ec198411768323aaf640ed165f51b
Approved-By: Tom Tromey <tom@tromey.com>
|
|
get_dwp_file lazily looks for a .dwp file for the given objfile. It is
called by indexing workers, when a cutu_reader object looks for a DWO
file. It is called with the "dwo_lock" held, meaning that the first
worker to get there will do the work, while the others will wait at the
lock.
I'm trying to reduce the time where this lock is taken and do other
refactorings to make it easier to reason about the DWARF reader code.
Moving the lookup of the .dwp file ahead, before we start parallelizing
work, helps make things simpler, because we can then assume everywhere
else that we have already checked for a .dwp file.
Put the call to open_and_init_dwp_file in dwarf2_has_info, right next to
where we look up .dwz files. I used the same try-catch pattern as for
the .dwz file lookup.
Change-Id: I615da85f62a66d752607f0dbe9f0372dfa04b86b
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Following a previous patch, these functions can accept a per_bfd
instead of a per_objfile.
Change-Id: Iacc8924d2e49a05920d9a7fde2f7584f709fbdd2
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Instead of passing a boolean to create_dwp_hash_table to select the
section to read, it's simpler to just pass the section.
Change-Id: Ie043c31e80518239f6403288dcf03f7769c58e8c
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The sections would have been read already in
dwarf2_locate_common_dwp_sections or dwarf2_locate_dwo_sections, with
this call:
dw_sect->read (objfile);
Change-Id: Ice0ed5d9a2070967826a59b2d6f724451ace22f4
Approved-By: Tom Tromey <tom@tromey.com>
|
|
It is no longer needed.
Change-Id: I22b21b12dc9f74a423bca355d4d83f0167e75f34
Approved-By: Tom Tromey <tom@tromey.com>
|