This changes value_bitpos to be a method of value. Much of this patch
was written by script.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
This changes value_bitsize to be a method of value. Much of this patch
was written by script.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
This changes value_arch to be a method of value. Much of this patch
was written by script.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
This changes deprecated_set_value_type to be a method of value. Much
of this patch was written by script.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
This changes value_type to be a method of value. Much of this patch
was written by script.
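For illustration, the mechanical change at each call site in this
series looks like this (a sketch; method names assumed):
  struct type *t = value_type (val);   /* before */
  struct type *t = val->type ();       /* after */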
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
This moves struct value to value.h. For now, all members remain
public, but this is a temporary state -- by the end of the series
we'll add 'private'.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
struct value is going to move to value.h, but to avoid having
excessive code there, first move the destructor body out-of-line.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
This renames all the fields of struct value, in preparation for the
coming changes.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
---
This commit introduces the idea of loading only part of an array in
order to print it, which I call "limited length" arrays.
The motivation behind this work is to make it possible to print slices
of very large arrays, where very large means bigger than
`max-value-size'.
Consider this GDB session with the current GDB:
(gdb) set max-value-size 100
(gdb) p large_1d_array
value requires 400 bytes, which is more than max-value-size
(gdb) p -elements 10 -- large_1d_array
value requires 400 bytes, which is more than max-value-size
Notice that the request to print 10 elements still fails, even though 10
elements should be less than the max-value-size. With a patched version
of GDB:
(gdb) p -elements 10 -- large_1d_array
$1 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9...}
So now the print succeeds. It also loads `max-value-size' worth of
data into the value history, so the recorded value can be accessed
consistently:
(gdb) p -elements 10 -- $1
$2 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9...}
(gdb) p $1
$3 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, <unavailable> <repeats 75 times>}
(gdb)
Accesses with other languages work similarly, although for Ada only
C-style [] array element/dimension accesses use the history. For both
Ada and Fortran, () array element/dimension accesses go straight to the
inferior, bypassing the value history, just as with C pointers.
Co-Authored-By: Maciej W. Rozycki <macro@embecosm.com>
---
While it makes sense to allow accessing out-of-bounds elements in the
debuggee and seeing whatever happens to be there in memory (we are a
debugger and not a programming-rules enforcement facility, and we want
to make people's lives easier in chasing bugs), e.g.:
(gdb) print one_hundred[-1]
$1 = 0
(gdb) print one_hundred[100]
$2 = 0
(gdb)
we shouldn't really pretend that we have any meaningful data around
values recorded in the history (what these commands actually retrieve
are current debuggee memory contents outside the original data
accessed, which is really confusing in my opinion). Mark values
recorded in the history as such, then, and verify that accesses to
them are in range:
(gdb) print one_hundred[-1]
$1 = <unavailable>
(gdb) print one_hundred[100]
$2 = <unavailable>
Add a suitable test case, which also covers integer overflows in data
location calculation.
Approved-By: Tom Tromey <tom@tromey.com>
---
Consistently use the LONGEST and ULONGEST types for value byte/bit
offsets and lengths respectively, avoiding silent truncation for ranges
exceeding the 32-bit span, which may cause incorrect matching. Also
report a conversion overflow on byte ranges that cannot be expressed in
terms of bits with these data types, e.g.:
(gdb) print one_hundred[1LL << 58]
Integer overflow in data location calculation
(gdb) print one_hundred[(-1LL << 58) - 1]
Integer overflow in data location calculation
(gdb)
Previously such accesses were let through, producing unpredictable
results.
---
We have an inconsistency in value history accesses where array element
accesses cause an error for entries exceeding the currently selected
`max-value-size' setting even where such accesses successfully complete
for elements located in the inferior, e.g.:
(gdb) p/d one
$1 = 0
(gdb) p/d one_hundred
$2 = {0 <repeats 100 times>}
(gdb) p/d one_hundred[99]
$3 = 0
(gdb) set max-value-size 25
(gdb) p/d one_hundred
value requires 100 bytes, which is more than max-value-size
(gdb) p/d one_hundred[99]
$7 = 0
(gdb) p/d $2
value requires 100 bytes, which is more than max-value-size
(gdb) p/d $2[99]
value requires 100 bytes, which is more than max-value-size
(gdb)
According to our documentation the `max-value-size' setting is a safety
guard against allocating an overly large amount of memory. Moreover, a
statement in the documentation says, concerning this setting: "Setting
this variable does not affect values that have already been allocated
within GDB, only future allocations." While in implementer-speak the
sentence may be unambiguous, I think an outside user may well infer
that the setting does not apply to values previously printed.
Therefore, rather than just fixing this inconsistency, it seems
reasonable to lift the setting for value history accesses, on the
understanding that, having been retrieved from the debuggee, they have
already passed the safety check. Do that by suppressing the value size
check in `value_copy' -- observing that if the original value has
already been loaded (i.e. it's not lazy), then it must have previously
passed said check -- making the last two commands succeed:
(gdb) p/d $2
$8 = {0 <repeats 100 times>}
(gdb) p/d $2 [99]
$9 = 0
(gdb)
Expand the testsuite accordingly, covering both value history handling
and the use of `value_copy' by `make_cv_value', used by Python code.
---
The gdbarch "return_value" can't correctly handle variably-sized
types. The problem here is that the TYPE_LENGTH of such a type is 0,
until the type is resolved, which requires reading memory. However,
gdbarch_return_value only accepts a buffer as an out parameter.
Fixing this requires letting the implementation of the gdbarch method
resolve the type and return a value -- that is, both the contents and
the new type.
After an attempt at this, I realized I wouldn't be able to correctly
update all implementations (there are ~80) of this method. So,
instead, this patch adds a new method that falls back to the current
method, and it updates gdb to only call the new method. This way it's
possible to incrementally convert the architectures that I am able to
test.
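For illustration, a sketch of the fallback (all names here are
assumptions, not necessarily the actual API):
  /* New-style hook: return the result via *READ_VALUE as a full
     struct value, so an implementation can resolve a variably-sized
     type.  The default defers to the old buffer-based hook.  */
  enum return_value_convention
  default_return_value_as_value (struct gdbarch *gdbarch,
                                 struct value *function,
                                 struct type *valtype,
                                 struct regcache *regcache,
                                 struct value **read_value,
                                 const gdb_byte *writebuf)
  {
    gdb_byte *readbuf = nullptr;
    if (read_value != nullptr)
      {
        *read_value = allocate_value (valtype);
        readbuf = value_contents_raw (*read_value).data ();
      }
    return gdbarch_return_value (gdbarch, function, valtype, regcache,
                                 readbuf, writebuf);
  }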
---
This commit is the result of running the gdb/copyright.py script,
which automates the update of the copyright year range in all source
files managed by the GDB project, so as to include year 2023.
---
A user noticed that gdb would crash when printing a packed array after
doing "set lang c". Packed arrays don't exist in C, but it's
occasionally useful to print things in C mode when working in a non-C
language -- this lets you see under the hood a little bit.
The bug here is that generic value printing does not handle packed
arrays at all. This patch fixes the bug by introducing a new function
to extract a value from a bit offset and width.
The new function includes a hack to avoid problems with some existing
test cases when using -fgnat-encodings=all. Cleaning up this code
looked difficult, and since "all" is effectively deprecated, I thought
it made sense to simply work around the problems.
---
Currently, every internal_error call must be passed __FILE__/__LINE__
explicitly, like:
internal_error (__FILE__, __LINE__, "foo %d", var);
The need to pass in explicit __FILE__/__LINE__ probably exists because
the function predates the widespread availability of portable variadic
macros.
already use them in several places, including the related
gdb_assert_not_reached.
So this patch renames the internal_error function to something else,
and then reimplements internal_error as a variadic macro that expands
__FILE__/__LINE__ itself.
The result is that we now should call internal_error like so:
internal_error ("foo %d", var);
Likewise for internal_warning.
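For illustration, the reimplementation can look roughly like this
(a sketch; the renamed function is assumed to be internal_error_loc):
  #define internal_error(fmt, ...) \
    internal_error_loc (__FILE__, __LINE__, fmt, ##__VA_ARGS__)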
The patch adjusts all call sites. 99% of the adjustments were done
with a perl/sed script.
The non-mechanical changes are in gdbsupport/errors.h,
gdbsupport/gdb_assert.h, and gdb/gdbarch.py.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
Change-Id: Ia6f372c11550ca876829e8fd85048f4502bdcf06
---
This changes GDB to use frame_info_ptr instead of frame_info *
The substitution was done with multiple sequential `sed` commands:
sed 's/^struct frame_info;/class frame_info_ptr;/'
sed 's/struct frame_info \*/frame_info_ptr /g' - which left some
issues in a few files, that were manually fixed.
sed 's/\<frame_info \*/frame_info_ptr /g'
sed 's/frame_info_ptr $/frame_info_ptr/g' - used to remove whitespace
problems.
The changed files were then manually checked and some 'sed' changes
undone, some constructors and some gets were added, according to what
made sense, and what Tromey originally did.
Co-Authored-By: Bruno Larsen <blarsen@redhat.com>
Approved-by: Tom Tromey <tom@tromey.com>
---
This replaces frame_id_eq with operator== and operator!=. I wrote
this for a version of this series that I later abandoned; but since it
simplifies the code, I left this patch in.
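Call sites change mechanically, e.g. (illustrative):
  if (frame_id_eq (get_frame_id (fi), this_id))   /* before */
  if (get_frame_id (fi) == this_id)               /* after */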
Approved-by: Tom Tromey <tom@tromey.com>
---
Remove the macro, replace all uses with calls to type::length.
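That is (illustrative):
  len = TYPE_LENGTH (type);   /* before */
  len = type->length ();      /* after */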
Change-Id: Ib9bdc954576860b21190886534c99103d6a47afb
---
Remove the macro, replace all uses by calls to type::target_type.
Change-Id: Ie51d3e1e22f94130176d6abd723255282bb6d1ed
---
When an objfile is destroyed, types that are still in use and
allocated on that objfile are copied. A temporary hash map is created
during this process, and it is allocated on the destroyed objfile's
obstack -- which normally is fine, as that is going to be destroyed
shortly anyway.
However, this approach requires that the objfile be passed to registry
destruction, and this won't be possible in the rewritten registry.
This patch changes the copied type hash table to simply use the heap
instead. It also removes the 'objfile' parameter from
copy_type_recursive, to make this all more clear.
This patch also fixes an apparent bug in copy_type_recursive.
Previously it was copying the dynamic property list to the dying
objfile's obstack:
- = copy_dynamic_prop_list (&objfile->objfile_obstack,
However I think this is incorrect -- that obstack is about to be
destroyed.
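A sketch of the resulting usage (exact names assumed):
  /* The hash table now lives on the heap, managed by htab_up, instead
     of on the dying objfile's obstack.  */
  htab_up copied_types = create_copied_types_hash ();
  struct type *copy = copy_type_recursive (type, copied_types.get ());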
---
A varobj object contains references to types, variables (i.e. struct
variable) and expressions. All of those can reference data on an
objfile's obstack. It is possible for this objfile to be deleted (and
the obstack to be freed) while the varobj remains valid. Later, if the
user uses the varobj, this will result in a use-after-free error. With
an address sanitizer build, this leads to a plain error. For non
address sanitizer builds we might see undefined behaviour, which
manifests itself as assertion failures when accessing data backed by
freed memory.
This can be observed if we create a varobj that refers to a symbol in a
shared library, and then either the objfile gets reloaded (using the
`file` command) or the shared library is unloaded (with a call to
dlclose for example).
This patch fixes those issues by:
- Adding a cleanup procedure to the free_objfile observable. When
activated, this observer clears expressions referencing the objfile
being freed, and removes references to blocks belonging to this
objfile.
- Adding varobj support in `preserve_values` (gdb/value.c). This
ensures that before the objfile is unloaded, any type owned by the
objfile and referenced by the varobj is replaced by an equivalent type
not owned by the objfile. This is done here instead of in the
free_objfile observer in order to reuse the type hash table already
used for a similar purpose when replacing types of values kept in the
value history.
This patch also makes sure to keep a reference to the expression's
gdbarch and language_defn members when the varobj->root->exp is
initialized. Those structures outlive the objfile, so this is safe.
This is done because those references might be used to initialize a
Python context even after exp is invalidated. Another approach could
have been to initialize the Python context with default gdbarch and
language_defn (i.e. nullptr) if exp is NULL, but since we might still
try to display the value which was obtained by evaluating exp while it
was still valid, keeping track of the context which was used at that
time seems reasonable.
Tested on x86_64-Linux.
Co-Authored-By: Pedro Alves <pedro@palves.net>
---
GDB currently fails to build with libc++, because libc++ is stricter
about which headers "leak" entities they're not guaranteed to support.
The following headers have been added:
* `<iterator>`, to support `std::back_inserter`
* `<utility>`, to support `std::move` and `std::swap`
* `<vector>`, to support `std::vector`
Change-Id: Iaeb15057c5fbb43217df77ce34d4e54446dbcf3d
---
Replace with equivalent method.
Change-Id: I0e033095e7358799930775e61028b48246971a7d
---
Remove all macros related to getting and setting some symbol value:
#define SYMBOL_VALUE(symbol) (symbol)->value.ivalue
#define SYMBOL_VALUE_ADDRESS(symbol) \
#define SET_SYMBOL_VALUE_ADDRESS(symbol, new_value) \
#define SYMBOL_VALUE_BYTES(symbol) (symbol)->value.bytes
#define SYMBOL_VALUE_COMMON_BLOCK(symbol) (symbol)->value.common_block
#define SYMBOL_BLOCK_VALUE(symbol) (symbol)->value.block
#define SYMBOL_VALUE_CHAIN(symbol) (symbol)->value.chain
#define MSYMBOL_VALUE(symbol) (symbol)->value.ivalue
#define MSYMBOL_VALUE_RAW_ADDRESS(symbol) ((symbol)->value.address + 0)
#define MSYMBOL_VALUE_ADDRESS(objfile, symbol) \
#define BMSYMBOL_VALUE_ADDRESS(symbol) \
#define SET_MSYMBOL_VALUE_ADDRESS(symbol, new_value) \
#define MSYMBOL_VALUE_BYTES(symbol) (symbol)->value.bytes
#define MSYMBOL_BLOCK_VALUE(symbol) (symbol)->value.block
Replace them with equivalent methods on the appropriate objects.
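The replacement pattern looks like this (method name assumed):
  CORE_ADDR addr = SYMBOL_VALUE_ADDRESS (sym);   /* before */
  CORE_ADDR addr = sym->value_address ();        /* after */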
Change-Id: Iafdab3b8eefc6dc2fd895aa955bf64fafc59ed50
---
Bug 28980 shows that trying to value_copy an entirely optimized out
value causes an internal error. The original bug report involves MI and
some Python pretty printer, and is quite difficult to reproduce, but
another easy way to reproduce (that is believed to be equivalent) was
proposed:
$ ./gdb -q -nx --data-directory=data-directory -ex "py print(gdb.Value(gdb.Value(5).type.optimized_out()))"
/home/smarchi/src/binutils-gdb/gdb/value.c:1731: internal-error: value_copy: Assertion `arg->contents != nullptr' failed.
This is caused by 5f8ab46bc691 ("gdb: constify parameter of
value_copy"). It added an assertion that the contents buffer is
allocated if the value is not lazy:
if (!value_lazy (val))
{
gdb_assert (arg->contents != nullptr);
This was based on the comment on value::contents, which suggest that
this is the case:
/* Actual contents of the value. Target byte-order. NULL or not
valid if lazy is nonzero. */
gdb::unique_xmalloc_ptr<gdb_byte> contents;
However, it turns out that it can also be nullptr if the value is
entirely optimized out, for example on exit from
allocate_optimized_out_value. That function creates a lazy value, marks
the entire value as optimized out, and then clears the lazy flag. But
contents remains nullptr.
This wasn't a problem for value_copy before, because it called
value_contents_all_raw on the input value, which caused contents to be
allocated before doing the copy. This means that the input value to
value_copy did not have its contents allocated on entry, but had them
allocated on exit. The result value had them allocated on exit too.
It also means that we copied bytes for an entirely optimized out value
(i.e. meaningless bytes).
From here I see two choices:
1. respect the documented invariant that contents is nullptr only and
only if the value is lazy, which means making
allocate_optimized_out_value allocate contents
2. extend the cases where contents can be nullptr to also include
values that are entirely optimized out (note that you could still
have some entirely optimized out values that do have contents
allocated, it depends on how they were created) and adjust
value_copy accordingly
Choice #1 is safe, but less efficient: it's not very useful to allocate
a buffer for an entirely optimized out value. It's even a bit less
efficient than what we had initially, because values coming out of
allocate_optimized_out_value would now always get their contents
allocated.
Choice #2 would be more efficient than what we had before: giving an
optimized out value without allocated contents to value_copy would
result in an optimized out value without allocated contents (and the
input value would still be without allocated contents on exit). But
it's more risky, since it's difficult to ensure that all users of the
contents (through the various value_contents* accessors) are fine with
that new invariant.
In this patch, I opt for choice #2, since I think it is a better
direction than choice #1. #1 would be a pessimization, and if we go
this way, I doubt that it will ever be revisited; it will just stay
that way forever.
Add a selftest to test this. I initially started to write it as a
Python test (since the reproducer is in Python), but a selftest is more
straightforward.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28980
Change-Id: I6e2f5c0ea804fafa041fcc4345d47064b5900ed7
---
Now that filtered and unfiltered output can be treated identically, we
can unify the printf family of functions. This is done under the name
"gdb_printf". Most of this patch was written by script.
---
In a following patch, I have a const value I want to copy using
value_copy. However, value_copy takes a non-const source value at the
moment. Change the parameter to be const.
If the source value is not lazy, we currently call
value_contents_all_raw, which calls allocate_value_contents, to get a
view on the contents. They both take a non-const value, which is a
problem. My first attempt at solving it was to add a const version of
value_contents_all_raw, make allocate_value_contents take a const value,
and either:
- make value::contents mutable
- make allocate_value_contents cast away the const
The idea being that allocating the value contents buffer does modify the
value at the bit level, but logically that doesn't change its state.
That was getting a bit complicated, so what I ended up doing is making
value_copy not call value_contents_all_raw. We know at this point that
the value is not lazy, so value::contents must have been allocated
already.
Change-Id: I3741ab362bce14315f712ec24064ccc17e3578d4
---
No kind of internal var uses it, so remove it. This makes the
transition to using a variant easier, since we don't need to think
about where this should be called (in a destructor or not), whether it
can throw, etc.
Change-Id: Iebbc867d1ce6716480450d9790410d6684cbe4dd
---
This adds initializers to bound_minimal_symbol, allowing for the
removal of some calls to memset.
---
Add a new function gdb.history_count to the Python API; this function
returns an integer, the number of items in GDB's value history.
This is useful if you want to pull items from the history by their
absolute number, for example, if you wanted to show a complete history
list. Previously we could figure out how many items are in the
history list by trying to fetch the items, and then catching the
exception when the item is not available, but having this function
seems nicer.
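For example (an illustrative session):
  (gdb) python print(gdb.history_count())
  0
  (gdb) print 42
  $1 = 42
  (gdb) python print(gdb.history_count())
  1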
---
Many otherwise ordinary commands choose to use unfiltered output
rather than filtered. I don't think there's any reason for this, so
this changes many such commands to use filtered output instead.
Note that complete_command is not touched due to a comment there
explaining why unfiltered output is believed to be used.
---
This commit brings all the changes made by running gdb/copyright.py
as per GDB's Start of New Year Procedure.
For the avoidance of doubt, all changes in this commit were
performed by the script.
---
Change a few relatively obvious spots using value contents to propagate
the use of array_view a bit more.
Change-Id: I5338a60986f06d5969fec803d04f8423c9288a15
---
An assertion was recently added to array_view::operator[] to ensure we
don't do out of bounds accesses. However, when the array_view is copied
to or from using memcpy, it bypasses that safety.
To address this, add a `copy` free function that copies data from an
array view to another, ensuring that the destination and source array
views have the same size. When copying to or from parts of an
array_view, we are expected to use gdb::array_view::slice, which does
its own bounds check. With all that, any copy operation that goes out
of bounds should be caught by an assertion at runtime.
copy is implemented using std::copy and std::copy_backward, which, at
least on libstdc++, appears to pick memmove when copying trivial data.
So in the end there shouldn't be much difference vs using a bare memcpy,
as we do right now. When copying non-trivial data, std::copy and
std::copy_backward assign each element in a loop.
To properly support overlapping ranges, we must use std::copy or
std::copy_backward, depending on whether the destination is before the
source or vice-versa. std::copy and std::copy_backward don't support
copying exactly overlapping ranges (where the source range is equal to
the destination range). But in this case, no copy is needed anyway, so
we do nothing.
The order of parameters of the new copy function is based on std::copy
and std::copy_backward, where the source comes before the destination.
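Putting the above together, a minimal sketch of the new function
(assuming gdb::array_view's data ()/size () interface and <algorithm>):
  /* Copy SRC onto DEST, which must have the same size, handling
     overlapping ranges.  */
  template <typename U, typename T>
  void
  copy (gdb::array_view<U> src, gdb::array_view<T> dest)
  {
    gdb_assert (dest.size () == src.size ());
    if (dest.data () < src.data ())
      std::copy (src.begin (), src.end (), dest.begin ());
    else if (dest.data () > src.data ())
      std::copy_backward (src.begin (), src.end (), dest.end ());
    /* Otherwise the ranges overlap exactly, so there is nothing to
       do.  */
  }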
Change a few randomly selected spots to use the new function, to show
how it can be used.
Add a test for the new function, testing both with arrays of a trivial
type (int) and of a non-trivial type (foo). Test non-overlapping
ranges as well as three kinds of overlapping ranges: source before dest,
dest before source, and dest == source.
Change-Id: Ibeaca04e0028410fd44ce82f72e60058d6230a03
---
In commit 50888e42dcd3 ("gdb: change functions returning value contents
to use gdb::array_view"), I believe I made a mistake with the length of
the array views returned by some functions. All functions return a view
of `TYPE_LENGTH (value_type (value))` length. This is not correct when
the value's enclosing type is larger than the value's type. In that
case, the value's contents buffer is of the size of the enclosing type,
and the value's actual contents is a slice of that (as returned by
value_contents). So, functions value_contents_all_raw,
value_contents_for_printing and value_contents_for_printing_const are
not correct. Since they are meant to return the value's contents buffer
as a whole, they should have the size of the enclosing type.
There is nothing that uses the returned array view size at the moment,
so this didn't cause a problem. But it became apparent when trying to
adjust some callers.
Change-Id: Ib4e8837e1069111d2b2784d3253d5f3002419e68
---
While reviewing this patch:
https://sourceware.org/pipermail/gdb-patches/2021-November/183227.html
I spotted that the patch could be improved if we threw
OPTIMIZED_OUT_ERROR rather than GENERIC_ERROR in a few places.
This commit updates error_value_optimized_out and
require_not_optimized_out to throw OPTIMIZED_OUT_ERROR.
I ran the testsuite and saw no regressions. This doesn't really
surprise me, we don't usually write code like:
catch (const gdb_exception_error &ex)
  {
    if (ex.error == GENERIC_ERROR)
      ...
    else
      ...
  }
There are three places where we write something like:
catch (const gdb_exception_error &ex)
  {
    if (ex.error == OPTIMIZED_OUT_ERROR)
      ...
  }
In frame.c:unwind_pc, stack.c:info_frame_command_core, and
value.c:value_optimized_out, but if we are hitting these cases then
it's not significantly changing GDB's behaviour.
---
Remove TYPE_FIELD_STATIC_PHYSADDR replace with type::field +
field::loc_physaddr.
Change-Id: Ica9bc4a48f34750ec82ec86c298d3ecece81bcbd
---
Remove TYPE_FIELD_STATIC_PHYSNAME, replace with type::field +
field::loc_physname.
Change-Id: Ie35d446b67dd1d02f39998b406001bdb7e6d5abb
---
Remove TYPE_FIELD_BITPOS, replace its uses with type::field +
field::loc_bitpos.
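As with the neighboring commits, call sites change like so
(illustrative):
  bitpos = TYPE_FIELD_BITPOS (type, i);     /* before */
  bitpos = type->field (i).loc_bitpos ();   /* after */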
Change-Id: Iccd8d5a77e5352843a837babaa6bd284162e0320
---
Remove TYPE_FIELD_LOC_KIND, replace its uses with type::field +
field::loc_kind.
Change-Id: Ib124a26365df82ac1d23df7962d954192913bd90
---
When building on ARM (32-bit), we get errors like this:
/home/smarchi/src/binutils-gdb/gdb/value.c: In function 'gdb::array_view<const unsigned char> value_contents_for_printing(value*)':
/home/smarchi/src/binutils-gdb/gdb/value.c:1252:35: error: narrowing conversion of 'length' from 'ULONGEST' {aka 'long long unsigned int'} to 'size_t' {aka 'unsigned int'} [-Werror=narrowing]
1252 | return {value->contents.get (), length};
| ^~~~~~
Fix that by using gdb::make_array_view, which does the appropriate
conversion.
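That is, the return statement becomes something like:
  return gdb::make_array_view (value->contents.get (), length);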
Change-Id: I7d6f2e75d7440d248b8fb18f8272ee92954b404d
---
The bug fixed by this [1] patch was caused by an out-of-bounds access to
a value's contents. The code gets the value's contents (just a pointer)
and then indexes it with a nonsensical index.
This made me think of changing functions that return value contents to
return array_views instead of a plain pointer. This has the advantage
that when GDB is built with _GLIBCXX_DEBUG, accesses to the array_view
are checked, making bugs more apparent / easier to find.
This patch changes the return types of these functions, and updates
callers to call .data() on the result, meaning it's not changing
anything in practice. Additional work will be needed (which can be done
little by little) to make callers propagate the use of array_view and
reap the benefits.
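A typical caller update looks like (illustrative):
  memcpy (buf, value_contents (val), len);          /* before */
  memcpy (buf, value_contents (val).data (), len);  /* after */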
[1] https://sourceware.org/pipermail/gdb-patches/2021-September/182306.html
Change-Id: I5151f888f169e1c36abe2cbc57620110673816f3
---
This makes the Ada-specific "varsize-limit" a synonym for
"max-value-size", and removes the Ada-specific checks of the limit.
I am not certain of the history here, but it seems to me that this
code is fully obsolete now. And, removing this makes it possible to
index large Ada arrays without triggering an error. A new test case
is included to demonstrate this.
---
This changes value_zero to create a lazy value. In many cases,
value_zero is called in expression evaluation to wrap a type in a
non-eval context. It seems senseless to allocate a buffer in these
cases.
A new 'is_zero' flag is added so we can preserve the existing
assertions in value_fetch_lazy.
A subsequent patch will add a test where creating a zero value would
fail, due to the variable size check. However, the contents of this
value are never needed, and so creating a lazy value avoids the error
case.
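A sketch of the change (details assumed):
  struct value *
  value_zero (struct type *type, enum lval_type lv)
  {
    /* Lazy: no contents buffer is allocated up front.  */
    struct value *val = allocate_value_lazy (type);
    VALUE_LVAL (val) = lv;
    val->is_zero = true;
    return val;
  }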
---
This adds an is_optimized_out function pointer to lval_funcs, and
changes value_optimized_out to call it. This new function lets gdb
determine if a value is optimized out without necessarily fetching the
value. This is needed for a subsequent patch, where an attempt to
access a lazy value would fail due to the value size limit -- however,
the access was only needed to determine the optimized-out state.
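Sketched, the new hook is an optional predicate in lval_funcs:
  struct lval_funcs
  {
    /* ... existing read/write/etc. callbacks ...  */

    /* If non-NULL, return whether VALUE is optimized out, without
       needing to fetch its contents.  */
    bool (*is_optimized_out) (struct value *value);
  };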
---
Remove the `TYPE_FIELD_NAME` and `FIELD_NAME` macros, changing all the
call sites to use field::name directly.
Change-Id: I6900ae4e1ffab1396e24fb3298e94bf123826ca6
---
I noticed that pointer_type is declared in language.h and defined in
language.c. However, it really has to do with types, so it should
have been in gdbtypes.h all along.
This patch changes it to be a method on struct type. And, I went
through uses of TYPE_IS_REFERENCE and updated many spots to use the
new method as well. (I didn't update ones that were in arch-specific
code, as I couldn't readily test that.)
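Call sites then read, e.g. (the method name is an assumption):
  if (pointer_type (type))                /* before */
  if (type->is_pointer_or_reference ())   /* after */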
---
Fixing a bug where the value_copy function did not copy the stack data
and initialized members of the struct value. This is needed for the
next patch where the DWARF expression evaluator is changed to return a
single struct value object.
* value.c (value_copy): Change to also copy the stack data
and initialized members.
---
This commit was originally part of this patch series:
(v1): https://sourceware.org/pipermail/gdb-patches/2021-May/179357.html
(v2): https://sourceware.org/pipermail/gdb-patches/2021-June/180208.html
(v3): https://sourceware.org/pipermail/gdb-patches/2021-July/181028.html
However, that series is being held up in review, so I wanted to break
out some of the non-related fixes in order to get these merged.
This commit addresses two semi-related issues, both of which are
problems exposed by using 'set debug frame on'.
The first issue is in frame.c in get_prev_frame_always_1, and was
introduced by this commit:
commit a05a883fbaba69d0f80806e46a9457727fcbe74c
Date: Tue Jun 29 12:03:50 2021 -0400
gdb: introduce frame_debug_printf
This commit replaced fprint_frame with frame_info::to_string.
However, the former could handle taking a nullptr while the latter, a
member function, obviously requires a non-null pointer in order to make
the function call. In one place we are not guaranteed to have a
non-null pointer, and so there is the possibility of triggering
undefined behaviour.
The second issue addressed in this commit has existed for a while in
GDB, and would cause this assertion:
gdb/frame.c:622: internal-error: frame_id get_frame_id(frame_info*): Assertion `fi->this_id.p != frame_id_status::COMPUTING' failed.
We attempt to get the frame_id for a frame while we are computing the
frame_id for that same frame.
What happens is that when GDB stops we create a frame_info object for
the sentinel frame (frame #-1) and then we attempt to unwind this
frame to create a frame_info object for frame #0.
In the test case used here to expose the issue we have created a
Python frame unwinder. In the Python unwinder we attempt to read the
program counter register.
Reading this register will initially create a lazy register value.
The frame-id stored in the lazy register value will be for the
sentinel frame (lazy register values hold the frame-id for the frame
from which the register will be unwound).
However, the Python unwinder does actually want to examine the value
of the program counter, and so the lazy register value is resolved
into a non-lazy value. This sends GDB into value_fetch_lazy_register
in value.c.
Now, inside this function, if 'set debug frame on' is in effect, then
we want to print something like:
frame=%d, regnum=%d(%s), ....
Where 'frame=%d' will be the relative frame level of the frame for
which the register is being fetched, so, in this case we would expect
to see 'frame=0', i.e. we are reading a register as it would be in
frame #0. But, remember, the lazy register value actually holds the
frame-id for frame #-1 (the sentinel frame).
So, to get the frame_info for frame #0 we used to call:
frame = frame_find_by_id (VALUE_FRAME_ID (val));
Where VALUE_FRAME_ID is:
#define VALUE_FRAME_ID(val) (get_prev_frame_id_by_id (VALUE_NEXT_FRAME_ID (val)))
That is, we start with the frame-id for the next frame as obtained by
VALUE_NEXT_FRAME_ID, then call get_prev_frame_id_by_id to get the
frame-id of the previous frame.
The get_prev_frame_id_by_id function finds the frame_info for the
given frame-id (in this case frame #-1), calls get_prev_frame to get
the previous frame, and then calls get_frame_id.
The problem here is that calling get_frame_id requires that we know
the frame unwinder, so we then have to try each frame unwinder in turn,
which would include the Python unwinder... which is where we started,
and thus we have a loop!
To prevent this loop GDB has an assertion in place, which is what
actually triggers.
Solving the assertion failure is pretty easy, if we consider the code
in value_fetch_lazy_register and get_prev_frame_id_by_id then what we
do is:
1. Start with a frame_id taken from a value,
2. Lookup the corresponding frame,
3. Find the previous frame,
4. Get the frame_id for that frame, and
5. Lookup the corresponding frame
6. Print the frame's level
Notice that steps 3 and 5 give us the exact same result; step 4 is
just wasted effort. We could shorten this process by dropping
steps 4 and 5, thus:
1. Start with a frame_id taken from a value,
2. Lookup the corresponding frame,
3. Find the previous frame,
6. Print the frame's level
This will give the exact same frame as a result, and this is what I
have done in this patch by removing the use of VALUE_FRAME_ID from
value_fetch_lazy_register.
Out of curiosity I looked to see how widely VALUE_FRAME_ID was used,
and saw it was only used in one other place in valops.c:value_assign,
where, once again, we take the result of VALUE_FRAME_ID and pass it to
frame_find_by_id, thus introducing a redundant frame_id lookup.
I don't think the value_assign case risks triggering the assertion
though, as we are unlikely to call value_assign while computing the
frame_id for a frame, however, we could make value_assign slightly
more efficient, with no real additional complexity, by removing the
use of VALUE_FRAME_ID.
So, in this commit, I completely remove VALUE_FRAME_ID, and replace it
with a use of VALUE_NEXT_FRAME_ID, followed by a direct call to
get_prev_frame_always, this should make no difference in either case,
and resolves the assertion issue from value.c.
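Concretely, in value_fetch_lazy_register the lookup becomes
(illustrative):
  /* Before:  */
  frame = frame_find_by_id (VALUE_FRAME_ID (val));
  /* After:  */
  frame = get_prev_frame_always
    (frame_find_by_id (VALUE_NEXT_FRAME_ID (val)));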
As I said, this patch was originally part of another series, the
original test relied on the fixes in that original series. However, I
was able to create an alternative test for this issue by enabling
frame debug within an existing test script.
This commit probably fixes bug PR gdb/27938, though the bug doesn't
have a reproducer attached so it is not possible to know for sure.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=27938