|
Remove the 'struct' keyword in occurrences of 'struct thread_info'.
This is a code clean-up.
Tested by rebuilding.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
|
|
After the commit:
commit 03ad29c86c232484f9090582bbe6f221bc87c323
Date: Wed Jun 19 11:14:08 2024 +0100
gdb: 'target ...' commands now expect quoted/escaped filenames
it was no longer possible to pass GDB the name of a core file
containing any special characters (white space or quote characters) on
the command line. For example:
$ gdb -c /tmp/core\ file.core
Junk after filename "/tmp/core": file.core
(gdb)
The problem is that the above commit changed the 'target core' command
to expect quoted filenames, so before the above commit a user could
write:
(gdb) target core /tmp/core file.core
[New LWP 2345783]
Core was generated by `./mkcore'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0000000000401111 in ?? ()
(gdb)
But after the above commit the user must write:
(gdb) target core /tmp/core\ file.core
or
(gdb) target core "/tmp/core file.core"
This is part of a move to make GDB's filename argument handling
consistent.
Anyway, the problem with the '-c' command line flag is that it
forwards the filename unmodified through to the 'core-file' command,
which in turn forwards to the 'target core' command.
So when the user, at a shell writes:
$ gdb -c "core file.core"
this arrives in GDB as the unquoted string 'core file.core' (without
the single quotes). GDB then forwards this to the 'core-file'
command as if the user had written this at a GDB prompt:
(gdb) core-file core file.core
Which then fails to parse due to the unquoted white space between
'core' and 'file.core'.
The solution I propose is to escape any special characters in the core
file name passed from the command line before calling the 'core-file'
command from main.c.
I've updated the corefile.exp test to include a test for passing a
core file containing a white space character. While I was at it I've
modernised the part of corefile.exp that I was touching.
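As a rough illustration of that escaping step (a standalone sketch, not
the actual GDB patch; the helper name is made up):
#include <string>
/* Hypothetical helper: backslash-escape the characters that the
   quoted/escaped filename parser treats specially, so the -c argument
   can be forwarded safely to the 'core-file' command.  */
static std::string
escape_core_filename (const std::string &name)
{
  std::string result;
  for (char c : name)
    {
      if (c == ' ' || c == '\t' || c == '\'' || c == '"' || c == '\\')
        result += '\\';
      result += c;
    }
  return result;
}
For example, escape_core_filename ("/tmp/core file.core") yields
"/tmp/core\ file.core", which 'core-file' parses as a single filename.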
|
|
The core_target_open function is only used in corelow.c, so lets make
it static.
There should be no user visible changes after this commit.
|
|
Make the 'struct breakpoint *' argument 'const' in user_breakpoint_p
and pending_breakpoint_p. And make the 'struct bp_location *'
argument 'const' in bl_address_is_meaningful.
There should be no user visible changes after this commit.
|
|
.cfi directives only support the use of register numbers and not
register names or aliases.
This commit adds support for 4 formats, for example:
.cfi_offset r1, 8
.cfi_offset ra, 8
.cfi_offset $r1,8
.cfi_offset $ra,8
The above .cfi directives are equivalent and all represent dwarf
register number 1.
Display register aliases as specified in the psABI during disassembly.
|
|
|
|
|
|
In the spirit of encapsulation, I'm looking to remove the need for
external code to access the "ptid -> thread" map of process_info, making
it an internal implementation detail. The only remaining use is in
function clear_inferiors, and it led me down this rabbit hole:
- clear_inferiors is really only used by the Windows port and doesn't
really make sense in the grand scheme of things, I think (when would
you want to remove all threads of all processes, without removing
those processes?)
- ok, so let's remove clear_inferiors and inline the code where it's
called, in function win32_clear_inferiors
- the Windows port does not support multi-process, so it's not really
necessary to loop over all processes like this:
for_each_process ([] (process_info *process)
{
process->thread_list ().clear ();
process->thread_map ().clear ();
});
We could just do:
current_process ()->thread_list ().clear ();
current_process ()->thread_map ().clear ();
(or pass down the process from the caller, but it's not important
right now)
- so, the code that we've inlined in win32_clear_inferiors does 3
things:
- clear the process' thread list and map (which deletes the
thread_info objects)
- clear the dll list, which just basically frees some objects
- switch to no current process / no current thread
- let's now look at where this win32_clear_inferiors function is used:
- in win32_process_target::kill, where the process is removed just
after
- in win32_process_target::detach, where the process is removed
just after
- in win32_process_target::wait, when handling a process exit.
After this returns, we could be in handle_target_event (if async)
or resume (if sync), both in `server.cc`. In both of these
cases, target_mourn_inferior gets called, we end up in
win32_process_target::mourn, which removes the process
- in all 3 cases above, we end up removing the process, which takes
care of the 3 actions listed above:
- the thread list and map get cleared when the process gets
destroyed
- same with the dll list
- remove_process switches to no current process / current thread
if the process being removed is the current one
- I conclude that it's probably unnecessary to do the cleanup in
win32_clear_inferiors, because it's going to get done right after
anyway.
Therefore, this patch does:
- remove clear_inferiors, remove the call in win32_clear_inferiors
- remove clear_dlls, which is now unused
- remove process_info::thread_map, which is now unused
- rename win32_clear_inferiors to win32_clear_process, which seems more
accurate
win32_clear_inferiors also does:
for_each_thread (delete_thread_info);
which also makes sure to delete all threads, but it also deletes the
Windows private data object (windows_thread_info), so I'll leave this
one there for now. But if we could make the thread private data
destruction automatic, on thread destruction, it could be removed, I
think.
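As a hypothetical sketch (based only on the description above, not the
actual patch), the renamed function could end up as little more than:
static void
win32_clear_process (void)
{
  /* Delete all threads; this also frees each thread's
     windows_thread_info private data.  */
  for_each_thread (delete_thread_info);
}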
There should be no user-visible change with this patch. Of course,
operations don't happen in the same order as before, so there might be
some important detail I'm missing. I'm only able to build-test this; if
someone could give it a test run on Windows, it would be appreciated.
Change-Id: I4a560affe763a2c965a97754cc02f3083dbe6fbf
|
|
|
|
Don't use "." to source .em files, use "source_em".
|
|
Fix an oversight in commit 8991986e2413 ("gdb: pass program space to
objfile::make").
Change-Id: I263eec6e94dde0a9763f831d2d87b4d300b6a36a
|
|
Remove some includes reported as unused by clangd. Add some to files
that actually need it.
Change-Id: I01c61c174858c1ade5cb54fd7ee1f582b17c3363
|
|
The recent commit <HASH> moved an initialization of an objfile_holder in
syms_from_objfile_1 much earlier in the function, to better deal with
when GDB is unable to read the objfile format.
However, there is an early exit from syms_from_objfile_1 when the
objfile can be understood, but has no symbols. That was not releasing
the objfile_holder, so the objfile was being unlinked from the program
space, but the process of reading the objfile was being continued,
leading to use-after-frees flagged by the Address Sanitizer.
This commit fixes that UAF by making the objfile_holder release the
objfile right before the early exit.
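A minimal sketch of the shape of that fix (here 'have_symbols' stands in
for the real early-exit condition, and objfile_holder is the RAII object
mentioned above):
  if (!have_symbols)
    {
      /* The objfile is readable, just empty: disarm the holder so the
         objfile stays linked into the program space, then take the
         early exit as before.  */
      objfile_holder.release ();
      return;
    }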
This commit also changes the test gdb.base/dump.exp since that was the
original test that flagged the UAF, but at the end of the test the
generated files were being deleted, meaning we couldn't redo the test
manually after the fact. That final deletion was removed.
Reported-by: Simon Marchi <simark@simark.ca>
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
Currently we have duplicate code for each place where
windows_thread_info::context is touched, since for WOW64 processes
it has to do the equivalent with wow64_context instead.
For example this code...:
#ifdef __x86_64__
if (windows_process.wow64_process)
{
th->wow64_context.ContextFlags = WOW64_CONTEXT_ALL;
CHECK (Wow64GetThreadContext (th->h, &th->wow64_context));
...
}
else
#endif
{
th->context.ContextFlags = CONTEXT_DEBUGGER_DR;
CHECK (GetThreadContext (th->h, &th->context));
...
}
...changes to look like this instead:
windows_process.with_context (th, [&] (auto *context)
{
context->ContextFlags = WindowsContext<decltype(context)>::all;
CHECK (get_thread_context (th->h, context));
...
});
The actual choice of whether context or wow64_context is used is handled
by this new function in windows_process_info:
template<typename Function>
auto with_context (windows_thread_info *th, Function function)
{
#ifdef __x86_64__
if (wow64_process)
return function (th != nullptr ? &th->wow64_context : nullptr);
else
#endif
return function (th != nullptr ? &th->context : nullptr);
}
The other parts that make this work are the templated WindowsContext
class, which gives the appropriate ContextFlags for both types, and
overloaded helper functions which, as with get_thread_context here, call
either GetThreadContext or Wow64GetThreadContext.
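As a sketch of that overload idea (the exact signatures here are
assumptions, not necessarily the committed code), overload resolution
picks the right Win32 call for the context type handed out by
with_context:
static BOOL
get_thread_context (HANDLE h, CONTEXT *context)
{
  return GetThreadContext (h, context);
}
#ifdef __x86_64__
static BOOL
get_thread_context (HANDLE h, WOW64_CONTEXT *context)
{
  return Wow64GetThreadContext (h, context);
}
#endif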
According to git log --stat, this results in 120 fewer lines of code.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Update expected outputs in testsuite/pr26936.sh to match the assembler
outputs changed by:
a96a8b7367b2cd51ff32c69e516dfbe0204c8008 is the first bad commit
commit a96a8b7367b2cd51ff32c69e516dfbe0204c8008 (HEAD)
Author: Jan Beulich <jbeulich@suse.com>
Date: Mon Dec 2 09:38:47 2024 +0100
x86: always set ISA_1_BASELINE property for 64-bit objects
PR gold/32422
* testsuite/pr26936.sh: Updated.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
|
|
For the default linker script, if a symbol's value is outside the bounds of
its defined section, then it may cross the data segment alignment, so we
should reserve extra space for MAXPAGESIZE and COMMONPAGESIZE when doing gp
relaxations. Otherwise we may hit relocation truncation errors, since the
data segment alignment might move the section forward.
bfd/
PR 27566
* elfnn-riscv.c (_bfd_riscv_relax_lui): Consider MAXPAGESIZE and
COMMONPAGESIZE if the symbol's value is outside the bounds of the
defined section.
(_bfd_riscv_relax_pc): Likewise.
ld/
PR 27566
* testsuite/ld-riscv-elf/ld-riscv-elf.exp: Updated.
* testsuite/ld-riscv-elf/relax-data-segment-align*: New testcase
for pr27566. Without this patch, the rv32 binutils will hit
relocation truncation errors for this testcase.
|
|
|
|
Fix this:
gdbserver/win32-low.cc: In function ‘void child_delete_thread(DWORD, DWORD)’:
gdbserver/win32-low.cc:192:7: error: ‘all_threads’ was not declared in this scope; did you mean ‘using_threads’?
192 | if (all_threads.size () == 1)
| ^~~~~~~~~~~
| using_threads
Commit 9f77b3aa0bfc ("gdbserver: change 'all_processes' and
'all_threads' list type") changed the type of `all_threads` to an
intrusive_list, without changing this particular use, which broke the
build because an intrusive_list doesn't know its size, so it doesn't
have a `size()` method. The subsequent commit removed `all_threads`,
leading to the error above.
Fix it by using the number of threads of the concerned process instead.
My rationale: as far as I know, GDBserver on Windows only supports one
process at a time, so there's no need to iterate over all processes. If
we made GDBserver for Windows support multiple processes, my intuition
is that we'd want this check to use the number of threads of the
concerned process, not the number of threads overall.
Add the method `process_info::thread_count`, to get the number of
threads of the process.
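A sketch of the resulting check in child_delete_thread (not the exact
patch; find_process_pid is the existing gdbserver lookup helper):
  process_info *process = find_process_pid (pid);
  if (process != nullptr && process->thread_count () == 1)
    {
      /* The last thread of the process is exiting; just return.  */
      return;
    }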
I'm not really sure what this check is for in the first place; Hannes
Domani said that this check didn't seem to trigger on Windows 7 and 11.
Perhaps it was necessary before.
Change-Id: I84d6226532b887d99248cf3be90f5065fb7a074a
Tested-By: Hannes Domani <ssbssa@yahoo.de>
|
|
Add an overload of `process_info::find_thread` that finds a thread by
ptid. Use it in two spots.
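A sketch of what the overload could look like (assuming the ptid-to-thread
map member is called m_ptid_thread_map; the real code may differ):
thread_info *
process_info::find_thread (ptid_t ptid)
{
  auto it = m_ptid_thread_map.find (ptid);
  return it != m_ptid_thread_map.end () ? it->second : nullptr;
}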
Change-Id: I2b7fb819bf4f83f7bd37f8641c38e878119b3814
|
|
global variable.
|
|
In this patch, we add support for the AVX10.2 satcvt instructions. All of
them are new instruction forms. In the current documentation the name is
still VCVTTNEBF162I[,U]BS, but it will change to VCVTTBF162I[,U]BS
eventually.
In the table part, we use a temporary <sign> iterator to reduce redundancy.
The same could be done for the legacy cvt insns, but that is out of this
patch's scope.
gas/ChangeLog:
* testsuite/gas/i386/i386.exp: Add AVX10.2 tests.
* testsuite/gas/i386/x86-64.exp: Ditto.
* testsuite/gas/i386/avx10_2-512-satcvt-intel.d: New test.
* testsuite/gas/i386/avx10_2-512-satcvt.d: Ditto.
* testsuite/gas/i386/avx10_2-512-satcvt.s: Ditto.
* testsuite/gas/i386/avx10_2-256-satcvt-intel.d: Ditto.
* testsuite/gas/i386/avx10_2-256-satcvt.d: Ditto.
* testsuite/gas/i386/avx10_2-256-satcvt.s: Ditto.
* testsuite/gas/i386/x86-64-avx10_2-512-satcvt-intel.d: Ditto.
* testsuite/gas/i386/x86-64-avx10_2-512-satcvt.d: Ditto.
* testsuite/gas/i386/x86-64-avx10_2-512-satcvt.s: Ditto.
* testsuite/gas/i386/x86-64-avx10_2-256-satcvt-intel.d: Ditto.
* testsuite/gas/i386/x86-64-avx10_2-256-satcvt.d: Ditto.
* testsuite/gas/i386/x86-64-avx10_2-256-satcvt.s: Ditto.
opcodes/ChangeLog:
* i386-dis-evex-prefix.h: Add PREFIX_EVEX_MAP5_68, PREFIX_EVEX_MAP5_69,
PREFIX_EVEX_MAP5_6A, PREFIX_EVEX_MAP5_6B, PREFIX_EVEX_MAP5_6C,
PREFIX_EVEX_MAP5_6D.
* i386-dis-evex-w.h: Add EVEX_W_MAP5_6C_P_0, EVEX_W_MAP5_6C_P_2,
EVEX_W_MAP5_6D_P_0, EVEX_W_MAP5_6D_P_2.
* i386-dis-evex.h (prefix_table): Add PREFIX_EVEX_MAP5_68,
* PREFIX_EVEX_MAP5_69, PREFIX_EVEX_MAP5_6A, PREFIX_EVEX_MAP5_6B.
* i386-dis.c: (PREFIX_EVEX_MAP5_68): New.
(PREFIX_EVEX_MAP5_69): Ditto.
(PREFIX_EVEX_MAP5_6A): Ditto.
(PREFIX_EVEX_MAP5_6B): Ditto.
(PREFIX_EVEX_MAP5_6C): Ditto.
(PREFIX_EVEX_MAP5_6D): Ditto.
(EVEX_MAP5_6C_P_0): Ditto.
(EVEX_MAP5_6C_P_2): Ditto.
(EVEX_MAP5_6D_P_0): Ditto.
(EVEX_MAP5_6D_P_2): Ditto.
* i386-opc.tbl: Add AVX10.2 instructions.
* i386-mnem.h: Regenerated.
* i386-tbl.h: Ditto.
Co-authored-by: Zewei Mo <zewei.mo@intel.com>
Co-authored-by: Haochen Jiang <haochen.jiang@intel.com>
Co-authored-by: Levy Hsu <admin@levyhsu.com>
|
|
For several instructions, including vps{l,r}l{d,q,w,dq} and vpsra{d,w},
the VEX forms do not have the following variant:
vpsrlw $0x1f,(%r15,%rcx,4),%xmm0
Thus, the {evex} prefix should not be emitted when the second operand is
memory, while we still need it when the second operand is a register. Add a
new macro %ME to solve this problem.
For vpsraq, there is no VEX version, so the {evex} prefix should always
be eliminated.
gas/ChangeLog:
PR binutils/32403
* testsuite/gas/i386/i386.exp: Run new test.
* testsuite/gas/i386/x86-64.exp: Ditto.
* testsuite/gas/i386/evex-only.d: New test.
* testsuite/gas/i386/evex-only.s: Ditto.
* testsuite/gas/i386/x86-64-evex-only.d: Ditto.
* testsuite/gas/i386/x86-64-evex-only.s: Ditto.
opcodes/ChangeLog:
PR binutils/32403
* i386-dis-evex-reg.h: Use %ME instead of %XE for vps{l,r}l{w,dq}
and vpsraw. Split table for vpsra{d,q}.
* i386-dis-evex-w.h: Use %ME instead of %XE for vps{l,r}l{d,q}
and vpsrad. Eliminate vpsraq {evex} prefix.
* i386-dis-evex.h: Split table for vpsra{d,q}.
* i386-dis.c: (EVEX_W_0F72_R_4): New.
(EVEX_W_0FE2): Ditto.
(struct dis386): Add comment for %ME.
(putop): Handle %ME.
Co-authored-by: Haochen Jiang <haochen.jiang@intel.com>
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
|
|
|
|
After the commit:
commit b9de07a5ff74663ff39bf03632d1b2ea417bf8d5
Date: Thu Oct 10 11:37:34 2024 +0100
gdb: fix handling of DW_AT_entry_pc of inlined subroutines
GDB's buildbot CI testing highlighted this assertion failure:
(gdb) c
Continuing.
../../binutils-gdb/gdb/block.h:203: internal-error: set_entry_pc: Assertion `start >= this->start () && start < this->end ()' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
----- Backtrace -----
FAIL: gdb.base/break-probes.exp: run til our library loads (GDB internal error)
This assertion was in the new function set_entry_pc and is asserting
that the default_entry_pc() value is within the blocks start/end
range.
The default_entry_pc() is the value GDB will use as the entry-pc if
the DWARF doesn't specifically override the entry-pc. This value is
calculated as:
1. The start address of the first sub-range within the block, if the
block has more than 1 range, or
2. The low address (from DW_AT_low_pc) for the block.
If the block only has a single range then this means the block was
defined with low/high pc attributes (case #2 above). These low/high
pc values are what block::start() and block::end() return. This means
that, by definition, if the block is contiguous, the above assert
cannot trigger, as 'start' (the default_entry_pc() value) would be
equal to block::start().
This means that, for the assert to trigger, the block must have
multiple ranges, and the first address of the first range is not
within the block's low/high address range. This seems wrong.
I inspected the state at the time the assert triggered and discovered
the block's start() address. Then I removed the assert and restarted
GDB. I was now able to inspect the blocks at the offending address:
(gdb) maintenance info blocks 0x7ffff7dddaa4
Blocks at 0x7ffff7dddaa4:
from objfile: [(objfile *) 0x44a37f0] /lib64/ld-linux-x86-64.so.2
[(block *) 0x46b30c0] 0x7ffff7ddd5a0..0x7ffff7dde8a6
entry pc: 0x7ffff7ddd5a0
is global block
symbol count: 4
is contiguous
[(block *) 0x46b3020] 0x7ffff7ddd5a0..0x7ffff7dde8a6
entry pc: 0x7ffff7ddd5a0
is static block
symbol count: 9
is contiguous
[(block *) 0x46b2f70] 0x7ffff7ddda00..0x7ffff7dddac3
entry pc: 0x7ffff7ddda00
function: __GI__dl_find_dso_for_object
symbol count: 4
is contiguous
[(block *) 0x46b2e10] 0x7ffff7dddaa4..0x7ffff7dddac3
entry pc: 0x7ffff7dddaa4
inline function: __GI__dl_find_dso_for_object
symbol count: 5
is contiguous
[(block *) 0x46b2a40] 0x7ffff7dddaa4..0x7ffff7dddac3
entry pc: 0x7ffff7dddaa4
symbol count: 1
is contiguous
[(block *) 0x46b2970] 0x7ffff7dddaa4..0x7ffff7dddac3
entry pc: 0x7ffff7dddaa4
symbol count: 2
address ranges:
0x7ffff7ddda0e..0x7ffff7ddda77
0x7ffff7ddda90..0x7ffff7ddda96
I've left everything in for context, but the only really interesting
bit is the very last block; its low/high range is:
0x7ffff7dddaa4..0x7ffff7dddac3
but it has separate ranges:
0x7ffff7ddda0e..0x7ffff7ddda77
0x7ffff7ddda90..0x7ffff7ddda96
which are all outside the low/high range. This is what triggers the
assert. But why does that block exist at all?
What I believe is happening is that we're running into a bug in older
versions of GCC. The buildbot failure was with an 8.5 gcc, and Tom de
Vries also reported seeing failures when using version 7 and 8 gcc,
but not with gcc 9 and onward.
Looking at the DWARF I can see that the problematic block is created
from this DIE:
<4><15efb>: Abbrev Number: 83 (DW_TAG_lexical_block)
<15efc> DW_AT_abstract_origin: <0x15e9f>
<15efe> DW_AT_low_pc : 0x7ffff7dddaa4
<15f06> DW_AT_high_pc : 31
which links via DW_AT_abstract_origin to:
<2><15e9f>: Abbrev Number: 80 (DW_TAG_lexical_block)
<15ea0> DW_AT_ranges : 0x38e0
<15ea4> DW_AT_sibling : <0x15eca>
And so we can see that <15efb> has got both low/high pc attributes and
a ranges attribute.
If I widen my checking to parents of DIE <15efb> then I see that they
also have DW_AT_abstract_origin, however, there is something
interesting going on, the parent DIEs are linking to a different DIE
tree than <15efb>.
What I believe is happening is this, we have an abstract instance
tree, this is rooted at a DW_TAG_subprogram, and contains all the
blocks, variables, parameters, etc, that you would expect. As this is
an abstract instance, then there are no low/high pc attributes, and no
ranges attributes in this tree. This makes sense.
Now elsewhere we have a DW_TAG_subprogram (not
DW_TAG_inlined_subroutine) which links via
DW_AT_abstract_origin to the abstract DW_TAG_subprogram. This case is
documented in the DWARF 5 spec in section 3.3.8.3, and describes an
Out-of-Line Instance of an Inlined Subroutine. Within this out of
line instance many of the DIEs correctly link back, using
DW_AT_abstract_origin, to the abstract instance tree. This tree also
includes the DIE <15e9f>, which is the DIE that our problem DIE references.
Now, to really confuse things, within this out-of-line instance we
have a DW_TAG_inlined_subroutine, which is another instance of the
same abstract instance tree! This would seem to indicate a recursive
call to the inline function, and the compiler, for some reason, needed
to instantiate an out of line instance of this function.
And it is within this nested, inlined subroutine, that the problem DIE
exists. The problem DIE is referencing the corresponding DIE within
the out of line instance tree, but I am convinced this must be a (long
fixed) GCC bug, and that the problem DIE should be referencing the DIE
within the abstract instance tree.
I'm aware that the above is pretty confusing. The actual DWARF would
be around 200 lines long, so I'd like to avoid dumping it in here.
But here's my attempt at representing what's going on in a minimal
example. The numbers down the side represent the section offset, not
the nesting level, and I've removed any attributes that are not
relevant:
<1> DW_TAG_subprogram
<2> DW_TAG_lexical_block
<3> DW_TAG_subprogram
DW_AT_abstract_origin <1>
<4> DW_TAG_lexical_block
DW_AT_ranges ...
<5> DW_TAG_inlined_subroutine
DW_AT_abstract_origin <1>
<6> DW_TAG_lexical_block
DW_AT_abstract_origin <4>
DW_AT_low_pc ...
DW_AT_high_pc ...
The lexical block at <6> is linking to <4> when it should be linking
to <2>.
There is one additional thing that we might wonder about, which is,
when calculating the low/high pc range for a block, why does GDB not
make use of the range information and expand the range beyond the
defined low/high values?
The answer to this is in dwarf_get_pc_bounds_ranges_or_highlow_pc in
dwarf2/read.c. This is where the low/high bounds are calculated. What
we see is that GDB first checks for a low/high attribute pair, and if
that is present, this defines the address range for the block. Only
if there is no DW_AT_low_pc do we check for the DW_AT_ranges, and use
that to define the extent of the block. And this makes sense, section
3.5 of the DWARF-5 spec says:
The lexical block entry may have either a DW_AT_low_pc and DW_AT_high_pc
pair of attributes or a DW_AT_ranges attribute whose values encode the
contiguous or non-contiguous address ranges, respectively, of the machine
instructions generated for the lexical block...
Section 3.5 is specifically about lexical blocks, but the same
wording, about it being either low/high OR ranges is repeated for
other DW_TAG_ types.
So this explains why GDB doesn't use the ranges to expand the problem
block's ranges; as the first DIE has low/high addresses, these are
used, and the ranges are not consulted.
It is only later, in dwarf2_record_block_ranges, that we create a range
based off the low/high pc and then also process the ranges data; this
allows the problem block to exist with ranges that are outside the
low/high range.
To solve this I considered a number of options:
1. Prevent loading certain attributes from an abstract instance.
Section 3.3.8.1 of the DWARF-5 spec talks about which attributes are
appropriate to place in an abstract instance. Any attribute that
might vary between instances should not appear in an abstract
instance. DW_AT_ranges is included as an example in the
non-exhaustive list of attributes that should not appear in an
abstract instance.
Currently in dwarf2_attr (dwarf2/read.c), when we see a
DW_AT_abstract_origin attribute, we always follow this to try and find
the attribute we are looking for. But we could change this function
so that we prevent this following for attributes that we know should
not be looked up in an abstract instance. This would solve the
problem in this case by preventing us from finding the DW_AT_ranges in the
incorrect abstract instance.
2. Filter the ranges.
Having established a block's low/high address range in
dwarf_get_pc_bounds_ranges_or_highlow_pc, we could allow
dwarf2_record_block_ranges to parse the ranges, but we could reject
any range that extends outside the block's defined start and end
addresses.
For well-behaved DWARF, where we have either low/high or ranges, the
block's start/end are defined from the range data, and so, by
definition, every range would be acceptable.
But in our problem case we would reject all of the invalid ranges.
This is my least favourite solution as it feels like rejecting the
ranges is tackling the problem too late on.
3. Don't try to parse ranges when we have low/high attributes.
This option involves updating dwarf2_record_block_ranges to match the
behaviour of dwarf_get_pc_bounds_ranges_or_highlow_pc, and, I believe,
to match the DWARF spec: don't try to read range data from
DW_AT_ranges if we have low/high pc attributes.
In our case this solves the issue because the problematic DIE has the
low/high attributes, and it then links to the wrong DIE which happens
to have DW_AT_ranges. With this change in place we don't even look
for the DW_AT_ranges.
If the problem were reversed, and the initial DIE had DW_AT_ranges,
but the incorrectly referenced DIE had the low/high pc attributes,
we would pick up the wrong addresses, but this wouldn't trigger any
asserts. The reason is that dwarf_get_pc_bounds_ranges_or_highlow_pc
would also find the low/high addresses from the incorrectly referenced
DIE, and so we would just end up with a block which had the wrong
address ranges, but the block would be self consistent, which is
different to the problem we hit here.
In the end, in this commit I went with solution #3, having
dwarf_get_pc_bounds_ranges_or_highlow_pc and
dwarf2_record_block_ranges be consistent seems sensible. However, I
do wonder if in the future we might want to explore solution #1 as an
additional safety feature.
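The shape of solution #3 is roughly the following (a simplified sketch,
not the actual diff): dwarf2_record_block_ranges only consults
DW_AT_ranges when there is no low/high pc pair, matching
dwarf_get_pc_bounds_ranges_or_highlow_pc:
  attribute *attr = dwarf2_attr (die, DW_AT_low_pc, cu);
  if (attr != nullptr)
    {
      /* Record the single [low, high) range from the low/high pc
         attribute pair.  */
    }
  else if ((attr = dwarf2_attr (die, DW_AT_ranges, cu)) != nullptr)
    {
      /* Only now walk the range list and record each sub-range.  */
    }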
With this patch in place I'm able to run the gdb.base/break-probes.exp
without seeing the assert that CI testing highlighted. I see no
regressions when testing on x86-64 GNU/Linux with gcc 9.3.1.
Note: the diff in this commit looks big, but it's really just me
indenting the code.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In commit 18d2988e5da ("gdb, gdbserver, gdbsupport: remove includes of early
headers") all includes of gdbsupport/common-defs.h where removed, but
commit c1cdee0e2c1 ("gdb: LoongArch: Add support for hardware watchpoint")
reintroduced some.
Fix this by removing them.
Tested by doing this on x86_64-linux:
...
$ make \
nat/loongarch-hw-point.o \
nat/loongarch-linux.o \
nat/loongarch-linux-hw-point.o
CXX nat/loongarch-hw-point.o
CXX nat/loongarch-linux.o
CXX nat/loongarch-linux-hw-point.o
...
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
The mingw-w64 build breaks currently:
...
In file included from gdb/cli/cli-cmds.c:58:
gdbsupport/eintr.h: In function ‘pid_t gdb::waitpid(pid_t, int*, int)’:
gdbsupport/eintr.h:77:35: error: ‘::waitpid’ has not been declared; \
did you mean ‘gdb::waitpid’?
77 | return gdb::handle_eintr (-1, ::waitpid, pid, wstatus, options);
| ^~~~~~~
| gdb::waitpid
gdbsupport/eintr.h:75:1: note: ‘gdb::waitpid’ declared here
75 | waitpid (pid_t pid, int *wstatus, int options)
| ^~~~~~~
...
This is a regression since commit 658a03e9e85 ("[gdbsupport] Add
gdb::{waitpid,read,write,close}"), which moved the use of ::waitpid from
run_under_shell, where it was used conditionally:
...
#if defined(CANT_FORK) || \
(!defined(HAVE_WORKING_VFORK) && !defined(HAVE_WORKING_FORK))
...
#else
...
int ret = gdb::handle_eintr (-1, ::waitpid, pid, &status, 0);
...
to gdb::waitpid, where it's used unconditionally:
...
inline pid_t
waitpid (pid_t pid, int *wstatus, int options)
{
return gdb::handle_eintr (-1, ::waitpid, pid, wstatus, options);
}
...
Likewise for ::wait.
Guard these uses with HAVE_WAITPID and HAVE_WAIT.
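Concretely, the guarded form of the waitpid wrapper in gdbsupport/eintr.h
looks roughly like this (the ::wait wrapper gets the same treatment with
HAVE_WAIT):
#ifdef HAVE_WAITPID
inline pid_t
waitpid (pid_t pid, int *wstatus, int options)
{
  return gdb::handle_eintr (-1, ::waitpid, pid, wstatus, options);
}
#endif /* HAVE_WAITPID */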
Reproduced and tested by doing a mingw-w64 cross-build on x86_64-linux.
Reported-By: Simon Marchi <simark@simark.ca>
Co-Authored-By: Tom de Vries <tdevries@suse.de>
|
|
Make the field private, change the free function to be a method.
Change-Id: I05010e7d1bd58ce3895802eb263c029528427758
Approved-By: Tom Tromey <tom@tromey.com>
|
|
thread_info
Make the field private, change the free functions to be methods.
Change-Id: Ifd8ed2775dddefc73a0e00126182e1db02688af4
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Replace the macro with a static inline function.
Change-Id: I1cccf5b38d6a412d251763f0316902b07cc28d16
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I was thinking of changing this macro to a function, but I don't think
it adds much value over just accessing the field directly.
Change-Id: I5dc63e9db0773549c5b55a1279212e2d1213f50c
Approved-By: Tom Tromey <tom@tromey.com>
|
|
A failure of 'runto_main' in 'start_structs_test' results in a TCL
error. The return value of the 'start_structs_test' function is evaluated
inside an if conditional clause, which expects a boolean value. Return
'-1' on failure to avoid the error.
Reviewed-By: Keith Seitz <keiths@redhat.com>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
In commit 922ab963e1c ("[gdb/python] Handle empty PYTHONDONTWRITEBYTECODE") I
added a test in gdb.python/py-startup-opt.exp that checks the
"show python dont-write-bytecode" output.
Then in commit 348290c7ef4 ("[gdb/python] Warn and ignore ineffective python
settings") I changed the output of "show python dont-write-bytecode" after
python initialization.
I tested these changes individually and found no problems, but after
committing both, the test started failing, which the Linaro CI reported.
Fix this by updating the expected output.
While we're at it, make the test a bit more generic by testing
"show python $setting" in all cases.
Tested on x86_64-linux, using:
- PYTHONDONTWRITEBYTECODE=
- PYTHONDONTWRITEBYTECODE=1
- unset PYTHONDONTWRITEBYTECODE
|
|
While working on an earlier patch, I noticed that all the
register-related "maint print" commands used the wrong command name in
an error message. This fixes them.
Reviewed-by: Christina Schimpe <christina.schimpe@intel.com>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This changes the "maint print reggroups" command to use a ui-out table
rather than printf.
It also fixes a typo I noticed in a related test case name; and lets
us finally remove the leading \s from the regexp in completion.exp.
Reviewed-by: Christina Schimpe <christina.schimpe@intel.com>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
This changes various "maint print" register commands to use ui-out
tables rather than the current printf approach.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
|
|
With test-case gdb.arch/pr25124.exp, I run into:
...
PASS: gdb.arch/pr25124.exp: disassemble thumb instruction (1st try)
PASS: gdb.arch/pr25124.exp: disassemble thumb instruction (2nd try)
DUPLICATE: gdb.arch/pr25124.exp: disassemble thumb instruction (2nd try)
...
Fix this by using a comma instead of parentheses.
Tested on arm-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
A common problem is that python may fail to initialize if PYTHONHOME is
set incorrectly, or points to incompatible default libraries.
Likewise if PYTHONPATH points to incompatible modules.
For instance, say PYTHONHOME is foo, then we get:
...
$ gdb -q
Python path configuration:
PYTHONHOME = 'foo'
PYTHONPATH = (not set)
program name = '/usr/bin/python'
isolated = 0
environment = 1
user site = 1
safe_path = 0
import site = 1
is in build tree = 0
stdlib dir = 'foo/lib64/python3.12'
sys._base_executable = '/usr/bin/python'
sys.base_prefix = 'foo'
sys.base_exec_prefix = 'foo'
sys.platlibdir = 'lib64'
sys.executable = '/usr/bin/python'
sys.prefix = 'foo'
sys.exec_prefix = 'foo'
sys.path = [
'foo/lib64/python312.zip',
'foo/lib64/python3.12',
'foo/lib64/python3.12/lib-dynload',
]
Python Exception <class 'ModuleNotFoundError'>: No module named 'encodings'
Python not initialized
$
...
In this case, it might be easy to figure out what went wrong because of the
obviously incorrect pathnames, but that might not be the case if PYTHONHOME
points to an incompatible python installation.
Fix this by adding a warning with a description of the possible cause and what
to do about it:
...
Python initialization failed: \
failed to get the Python codec of the filesystem encoding
gdb: warning: Python failed to initialize with PYTHONHOME set. Maybe because \
it is set incorrectly? Maybe because it points to incompatible standard \
libraries? Consider changing or unsetting it, or ignoring it using "set \
python ignore-environment on" at early initialization.
...
Likewise for PYTHONPATH:
...
Python initialization failed: \
failed to get the Python codec of the filesystem encoding
gdb: warning: Python failed to initialize with PYTHONPATH set. Maybe because \
it points to incompatible modules? Consider changing or unsetting it, or \
ignoring it using "set python ignore-environment on" at early \
initialization.
...
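A rough sketch of the check behind that warning (not the actual patch;
the real message is the fuller text quoted above):
  if (getenv ("PYTHONHOME") != nullptr)
    warning (_("Python failed to initialize with PYTHONHOME set.  "
               "Consider changing or unsetting it, or ignoring it using "
               "\"set python ignore-environment on\" at early "
               "initialization."));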
Tested on aarch64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR python/32379
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32379
|
|
When using PYTHONDONTWRITEBYTECODE with an empty string we get:
...
$ PYTHONDONTWRITEBYTECODE= gdb -q -batch -ex "show python dont-write-bytecode"
Python's dont-write-bytecode setting is auto (currently on).
...
This is incorrect, it should be off.
The actual setting is correct, that was already fixed in commit 24d2cbc42cc
("set/show python dont-write-bytecode fixes"), in function
python_write_bytecode.
Fix this by:
- factoring a new function, env_python_dont_write_bytecode, out of
python_write_bytecode (sketched below), and
- using it in show_python_dont_write_bytecode.
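A minimal sketch of that helper (the body is an assumption based on the
documented semantics that the variable only matters when set to a
non-empty string):
static bool
env_python_dont_write_bytecode ()
{
  const char *v = getenv ("PYTHONDONTWRITEBYTECODE");
  return v != nullptr && v[0] != '\0';
}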
Tested on x86_64-linux, using test-case gdb.python/py-startup-opt.exp and:
- PYTHONDONTWRITEBYTECODE=
- PYTHONDONTWRITEBYTECODE=1
- unset PYTHONDONTWRITEBYTECODE
Approved-By: Tom Tromey <tom@tromey.com>
PR python/32389
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32389
|
|
PYTHONDONTWRITEBYTECODE
When running test-case gdb.python/py-startup-opt.exp with empty
PYTHONDONTWRITEBYTECODE:
...
$ cd build/gdb/testsuite
$ PYTHONDONTWRITEBYTECODE= make check \
RUNTESTFLAGS=gdb.python/py-startup-opt.exp
...
I get:
...
end^M
dont_write_bytecode is off^M
(gdb) FAIL: $exp: attr=dont_write_bytecode: testname: input 6: end
...
The problem is that the test-case expects dont_write_bytecode to be
on, which is incorrect because PYTHONDONTWRITEBYTECODE only has effect if set
to a non-empty string [1].
Fix this by correctly setting expectations in the test-case.
Tested on x86_64-linux, with:
- PYTHONDONTWRITEBYTECODE=
- PYTHONDONTWRITEBYTECODE=1
- unset PYTHONDONTWRITEBYTECODE
Approved-By: Tom Tromey <tom@tromey.com>
[1] https://docs.python.org/3/using/cmdline.html#envvar-PYTHONDONTWRITEBYTECODE
|
|
Configuration flags "python dont-write-bytecode" and
"python ignore-environment" have effect only at Python initialization.
For instance, setting "python dont-write-bytecode" here has no effect:
...
$ gdb -q
(gdb) show python dont-write-bytecode
Python's dont-write-bytecode setting is auto (currently off).
(gdb) python import sys
(gdb) python print (sys.dont_write_bytecode)
False
(gdb) set python dont-write-bytecode on
(gdb) python print (sys.dont_write_bytecode)
False
...
This is not clear in the code: we set Py_DontWriteBytecodeFlag and
Py_IgnoreEnvironmentFlag in set_python_ignore_environment and
set_python_dont_write_bytecode. Fix this by moving the setting of those
variables to py_initialization.
Furthermore, this is not clear to the user: after Python initialization, the
user can still modify the configuration flags, and observe the changed setting:
...
$ gdb -q
(gdb) show python ignore-environment
Python's ignore-environment setting is off.
(gdb) set python ignore-environment on
(gdb) show python ignore-environment
Python's ignore-environment setting is on.
(gdb)
...
Fix this by emitting a warning when trying to set these configuration flags
after Python initialization:
...
$ gdb -q
(gdb) set python ignore-environment on
warning: Setting python ignore-environment after Python initialization has \
no effect, try setting this during early initialization
(gdb) set python dont-write-bytecode on
warning: Setting python dont-write-bytecode after Python initialization has \
no effect, try setting this during early initialization, or try setting \
sys.dont_write_bytecode
...
and by keeping the values constant after Python initialization.
Since the auto setting for python dont-write-bytecode depends on the current
value of environment variable PYTHONDONTWRITEBYTECODE, we simply avoid it
after Python initialization:
...
$ gdb -q -batch \
-eiex "show python dont-write-bytecode" \
-iex "show python dont-write-bytecode"
Python's dont-write-bytecode setting is auto (currently off).
Python's dont-write-bytecode setting is off.
...
Tested on aarch64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR python/32388
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32388
|
|
I added ATTRIBUTE_UNUSED to py_initialize_catch_abort as a quick fix to deal
with it being unused for PY_VERSION_HEX >= 0x030a0000, but forgot to fix this
before committing.
Fix this now, by removing the attribute and using
'#if PY_VERSION_HEX < 0x030a0000' instead.
Tested on aarch64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Function do_start_initialization has a large part dedicated to initializing
the python interpreter, as opposed to the rest of the function where
gdb-specific python support is initialized.
Factor out this part, as new function py_initialize, and rename the existing
py_initialize to py_initialize_catch_abort.
Refactor the new function py_initialize by getting rid of the nested:
...
#ifdef WITH_PYTHON_PATH
#if PY_VERSION_HEX < 0x030a0000
#else
#endif
#else
#endif
...
In particular, this changes behaviour for the "!defined (WITH_PYTHON_PATH)"
case.
For the "defined (WITH_PYTHON_PATH)" case, we've started using
Py_InitializeFromConfig () for PY_VERSION_HEX >= 0x030a0000 to deal with the
deprecation of Py_SetProgramName in 3.11.
For the "!defined (WITH_PYTHON_PATH)" case, we don't use Py_SetProgramName so
we stuck with Py_Initialize ().
However, in 3.12 Py_DontWriteBytecodeFlag and Py_IgnoreEnvironmentFlag got
deprecated and also here we need Py_InitializeFromConfig () to deal with this,
but the "!defined (WITH_PYTHON_PATH)" case didn't get updated.
This should be taken care of, now that we have this behavior (sketched
below):
- for PY_VERSION_HEX < 0x030a0000 we use Py_Initialize
- for PY_VERSION_HEX >= 0x030a0000 we use Py_InitializeFromConfig
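A simplified sketch of that behaviour (the real py_initialize also handles
the program name and the dont-write-bytecode / ignore-environment flags):
static void
py_initialize ()
{
#if PY_VERSION_HEX < 0x030a0000
  Py_Initialize ();
#else
  PyConfig config;
  PyConfig_InitPythonConfig (&config);
  PyStatus status = Py_InitializeFromConfig (&config);
  PyConfig_Clear (&config);
  if (PyStatus_Exception (status))
    {
      /* Report the failure; the real code surfaces the PyStatus
         details.  */
    }
#endif
}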
I'm not sure how to test the "!defined (WITH_PYTHON_PATH)" case, though.
Tested on aarch64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Commit de2b4ab50de ("Convert dwarf2_cu::call_site_htab to new hash
table") removed this nullptr check for no good reason. This causes a
crash if `m_call_site_htab` is not set, as shown in PR 32410. My guess
is that when doing this change, I tried to make `m_call_site_htab` not a
pointer, removed this check, then realized it wasn't so obvious, and
forgot to re-add the check.
Change-Id: I455e00cdc0519dfb412dc7826d17a839b77aae69
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32410
Approved-By: Tom Tromey <tom@tromey.com>
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
The test gdb.reverse/i386-avx-reverse.exp was assuming that if the CPU
was like x86, it would have AVX instructions because I didn't know how
to check for AVX instruction support explicitly. This commit updates
that to use the pre-existing TCL proc have_avx.
Also update the comment at the top of the test, since it was a copy of a
different test.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
The function stabsect_build_psymtabs, defined in the dbxread file, is no
longer used in any parts of GDB, so this commit just removes it.
Tested by rebuilding.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
When building gdb with --with-expat=no and running test-case
gdb.base/reset-catchpoint-cond.exp we get:
...
(gdb) catch syscall write^M
warning: Can not parse XML syscalls information; \
XML support was disabled at compile time.^M
Unknown syscall name 'write'.^M
(gdb) FAIL: $exp: mode=syscall: catch syscall write
...
Fix this by skipping the test for --with-expat=no.
Tested on x86_64-linux.
|
|
When building gdb with --disable-tui, we run into:
...
(gdb) python print(type(gdb.TuiWindow))^M
Python Exception <class 'AttributeError'>: \
module 'gdb' has no attribute 'TuiWindow'^M
Error occurred in Python: module 'gdb' has no attribute 'TuiWindow'^M
(gdb) FAIL: gdb.python/python.exp: gdb.TuiWindow is registered
...
Fix this by skipping the test for --disable-tui.
Tested on x86_64-linux.
|
|
If a user starts an inferior composed of objfiles that GDB is unable to
read, an error is thrown in find_sym_fns, printing the famous "I'm
sorry, Dave, I can't do that", and the objfile stops being read. However,
the objfile will already have been linked to the program space, and
future interactions with the objfile will assume that it is readable.
Relevant to this commit, if GDB tries to find out the section that
contains a PC, and this section happens to land in the unreadable
objfile, GDB will try to create a section mapping, eventually calling
update_section_map. Since that function uses bfd to calculate the
sections, it'll think there are sections to be ordered, but when trying
to access the objfile::section_offsets, it'll be indexing a size 0
std::vector, which will end up segfaulting.
Currently, it isn't easy to trigger this crash, but the upcoming
possibility to disable support for some file formats would make the
crash very easy to reproduce, by attempting to debug an unsupported
inferior and using "break *<instruction>" command, or simply connecting
to a gdbserver loaded with an unsupported inferior.
The struct objfile_up seems to have been created to catch these kinds of
errors and unlink the partially-read objfile from the program space, as
the objfile isn't useful to GDB anymore, but it seems to have been added
before find_sym_fns would throw errors for unreadable objfiles, as the
instance in syms_from_objfile_1 (that could save GDB from this crash) is
declared well after find_sym_fns, too late to guard us. This commit
moves the declaration up to the top of the function, so it works as
intended.
Further discussion on the mailing list also agreed that the name
"objfile_up" implies some level of ownership of the pointer, which this
struct doesn't have. So this commit renames the struct to
scoped_objfile_unlinker, which is more descriptive of what the struct is
actually meant to do.
The final change this commit makes is to add an assertion to
objfile::section_offset and objfile::set_section_offset, which ensures
that the section_offsets vector is large enough to return the desired
offset. This ensures that we won't mysteriously segfault or, worse,
continue running with garbage data.
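A minimal sketch of such an assertion (assuming the accessor looks up the
section's BFD index; the real code may differ):
CORE_ADDR
objfile::section_offset (bfd_section *section) const
{
  int idx = gdb_bfd_section_index (this->obfd.get (), section);
  gdb_assert (idx >= 0 && idx < (int) this->section_offsets.size ());
  return this->section_offsets[idx];
}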
Reported-By: Andrew Burgess <aburgess@redhat.com>
Approved-By: Andrew Burgess <aburgess@redhat.com>
|