|
This commit builds on the previous series of commits to share the
target description caching code between GDB and gdbserver for
x86/Linux targets.
The objective of this commit is to move the four functions (two each of
i386_linux_read_description and amd64_linux_read_description) into the
gdb/arch/ directory and combine them so we have just a single copy of
each. Then GDB, gdbserver, and the in-process-agent (IPA) will link
against these shared functions.
One curiosity with this patch is the function
x86_linux_post_init_tdesc. On the gdbserver side the two functions
amd64_linux_read_description and i386_linux_read_description have some
functionality that is not present on the GDB side: some additional
configuration is performed as each target description is created, to
set up the expedited registers.
To support this I've added the function x86_linux_post_init_tdesc.
This function is called from the two *_linux_read_description
functions, but is implemented separately for GDB and gdbserver.
An alternative approach that avoids adding x86_linux_post_init_tdesc
would be to have x86_linux_tdesc_for_tid return a non-const target
description, then in x86_target::low_arch_setup we could inspect the
target description to figure out if it is 64-bit or not, and modify
the target description as needed. In the end I think that adding the
x86_linux_post_init_tdesc function is the simpler solution.
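To make the shape of this concrete, here is a minimal sketch of what
such a hook could look like; the signature, the target_desc type usage,
the init_target_desc helper, and the register names are assumptions for
illustration, not copied from the patch:

  /* Shared declaration (e.g. in a gdb/arch/ header); GDB and gdbserver
     each provide their own definition.  */
  void x86_linux_post_init_tdesc (target_desc *tdesc, bool is_64bit);

  /* GDB's definition: nothing extra is needed.  */
  void
  x86_linux_post_init_tdesc (target_desc *tdesc, bool is_64bit)
  { /* Nothing.  */ }

  /* gdbserver's definition: the extra per-tdesc configuration, shown
     here as recording the expedited registers (illustrative names).  */
  void
  x86_linux_post_init_tdesc (target_desc *tdesc, bool is_64bit)
  {
    static const char *amd64_expedite[] = { "rbp", "rsp", "rip", nullptr };
    static const char *i386_expedite[] = { "ebp", "esp", "eip", nullptr };
    init_target_desc (tdesc, is_64bit ? amd64_expedite : i386_expedite);
  }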
The contents of gdbserver/linux-x86-low.cc have moved to
gdb/arch/x86-linux-tdesc-features.c, and gdbserver/linux-x86-tdesc.h
has moved to gdb/arch/x86-linux-tdesc-features.h; this change leads to
some updates to the #includes in the gdbserver/ directory.
This commit also changes how target descriptions are cached.
Previously both GDB and gdbserver used static C-style arrays to act as
the tdesc cache. This was fine, except for two problems: either the
C-style arrays would need to be placed in x86-linux-tdesc-features.c,
which would allow us to use the x86_linux_*_tdesc_count_1() functions
to size the arrays for us, or we'd need to hard-code the array sizes
using separate #defines, which we'd then have to keep in sync with the
rest of the code in x86-linux-tdesc-features.c.
Given both of these problems I decided a better solution would be to
just switch to using a std::unordered_map to act as the cache. This
will resize automatically, and we can use the xcr0 value as the key.
At first inspection, using xcr0 might seem to be a problem; after all
the {i386,amd64}_create_target_description functions take more than
just the xcr0 value. However, this patch is only for x86/Linux
targets, and for x86/Linux all of the other flags passed to the tdesc
creation functions have constant values and so are irrelevant when we
consider tdesc caching.
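As a rough sketch (assuming GDB's target_desc type and the shared
create function; the extra arguments shown are only for illustration),
the caching then looks something like this:

  #include <cstdint>
  #include <unordered_map>

  const target_desc *
  amd64_linux_read_description (uint64_t xcr0)
  {
    /* Keyed on the raw xcr0 value; grows on demand, so there is no
       array size to keep in sync with the feature list.  */
    static std::unordered_map<uint64_t, const target_desc *> cache;

    if (auto it = cache.find (xcr0); it != cache.end ())
      return it->second;

    /* On x86/Linux the remaining arguments are effectively constant,
       so xcr0 alone is a sufficient cache key.  */
    const target_desc *tdesc
      = amd64_create_target_description (xcr0, /* is_x32 */ false,
                                         /* is_linux */ true,
                                         /* segments */ true);
    cache.emplace (xcr0, tdesc);
    return tdesc;
  }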
For testing I've done the following:
- Built on x86-64 GNU/Linux for all targets, and just for the native
target,
- Built on i386 GNU/Linux for all targets, and just for the native
target,
- Built on a 64-bit, non-x86 GNU/Linux for all targets, just for the
native target, and for targets x86_64-*-linux and i386-*-linux.
Approved-By: Felix Willgerodt <felix.willgerodt@intel.com>
|
|
This commit is part of a series which aims to share more of the target
description creation between GDB and gdbserver for x86/Linux.
After some refactoring earlier in this series the shared
x86_linux_tdesc_for_tid function was added into nat/x86-linux-tdesc.c.
However, this function still relies on amd64_linux_read_description
and i386_linux_read_description which are implemented separately for
both gdbserver and GDB. Given that, at their core, all that these
functions do is:
1. take an xcr0 value as input,
2. mask out some feature bits,
3. look for a cached pre-generated target description and return it
if found,
4. if no cached target description is found then call either
amd64_create_target_description or
i386_create_target_description to create a new target
description, which is then added to the cache. Return the newly
created target description.
The inner functions amd64_create_target_description and
i386_create_target_description are already shared between GDB and
gdbserver (in the gdb/arch/ directory), so the only thing that
the *_read_description functions really do is add the caching layer,
and it feels like this really could be shared.
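For the masking in step 2 above, the idea is just to drop any xcr0
bits that do not influence the target description for the architecture
in question. A sketch, assuming the usual X86_XSTATE_* constants; the
exact set of bits kept here is illustrative, not the project's real
mask:

  /* Reduce XCR0 to the bits that matter for an i386 tdesc; two xcr0
     values that agree on these bits must produce the same tdesc.  */
  uint64_t
  i386_linux_mask_xcr0 (uint64_t xcr0)
  {
    constexpr uint64_t relevant
      = (X86_XSTATE_X87 | X86_XSTATE_SSE | X86_XSTATE_AVX
         | X86_XSTATE_BNDREGS | X86_XSTATE_BNDCFG
         | X86_XSTATE_K | X86_XSTATE_ZMM_H | X86_XSTATE_ZMM
         | X86_XSTATE_PKRU);
    return xcr0 & relevant;
  }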
However, we have a small problem.
Despite using the same {amd64,i386}_create_target_description
functions in both GDB and gdbserver to create the target descriptions,
on the gdbserver side we cache target descriptions based on a reduced
set of xcr0 feature bits.
What this means is that, in theory, different xcr0 values can map to
the same cache entry, which could result in the wrong target
description being used.
However, I'm not sure if this can actually happen in reality. Within
gdbserver we already split the target description cache based on i386,
amd64, and x32. I suspect within a given gdbserver session we'll only
see at most one target description for each of these.
The cache-conflict problem is caused by xcr0_to_tdesc_idx, which
maps an xcr0 value to an enum x86_linux_tdesc value, and there are only
7 usable values in enum x86_linux_tdesc.
In contrast, on the GDB side there are 64, 32, and 16 cache slots for
i386, amd64, and x32 respectively.
On the GDB side it is much more important to cache things correctly as
a single GDB session might connect to multiple different remote
targets, each of which might have slightly different x86
architectures.
And so, if we want to merge the target description caching between GDB
and gdbserver, then we need to first update gdbserver so that it
caches in the same way as GDB, that is, it needs to adopt a mechanism
that allows for the same number of cache slots for each of i386, amd64,
and x32. In this way, when the caching is shared, GDB's behaviour
will not change.
Unfortunately it's a little more complex than that due to the
in-process agent (IPA).
When the IPA is in use, gdbserver sends a target description index to
the IPA, and the IPA uses this to find the correct target description
to use, the IPA having first generated every possible target
description.
Interestingly, there is certainly a bug here which results from only
having 7 values in enum x86_linux_tdesc. As multiple possible target
descriptions in gdbserver map to the same enum x86_linux_tdesc value,
when that value is sent to the IPA there is no way for gdbserver to
know that the IPA will select the correct target description. This
bug will get fixed by this commit.
** START OF AN ASIDE **
Back in the day I suspect this approach of sending a target
description index made perfect sense. However since this commit:
commit a8806230241d201f808d856eaae4d44088117b0c
Date: Thu Dec 7 17:07:01 2017 +0000
Initialize target description early in IPA
I think that passing an index was probably a bad idea.
We used to pass the index, and then use that index to look up which
target description to instantiate and use; the target description was
not generated until the index arrived.
However, the above commit fixed an issue where we can't call malloc()
within (certain parts of) the IPA (apparently), so instead we now
pre-compute _every_ possible target description within the IPA. The
index is only used to look up which of the (many) pre-computed target
descriptions to use.
It would (I think) have been easier all around if the IPA just
self-inspected, figured out its own xcr0 value, and used that to
create the one target description that is required. So long as the
xcr0 to target description code is shared (at compile time) with
gdbserver, then we can be sure that the IPA will derive the same
target description as gdbserver, and we would avoid all this index
passing business, which has made this commit so very, very painful.
I did look at how a process might derive its own xcr0 value, but I
don't believe this is actually that simple, so for now I've just
doubled down on the index passing approach.
While reviewing earlier iterations of this patch there has been
discussion about the possibility of removing the IPA from GDB. That
would certainly make all of the code touched in this patch much
simpler, but I don't really want to do that as part of this series.
** END OF AN ASIDE **
Currently, then, for x86/Linux, gdbserver sends a number between 0 and 7
to the IPA, and the IPA uses this to create a target description.
However, I am proposing that gdbserver should now create one of (up
to) 64 different target descriptions for i386, so this 0 to 7 index
isn't going to be good enough any more (amd64 and x32 have slightly
fewer possible target descriptions, but still more than 8, so the
problem is the same).
For a while I wondered if I was going to have to try and find some
backward-compatible solution for this mess. But after seeing how
lightly the IPA is actually documented, I suspect there is in fact a
tight coupling between a version of gdbserver and a version of the
IPA. At least I'm hoping so, and that's what I've
assumed in this commit.
In this commit I have thrown out the old IPA target description index
numbering scheme, and switched to a completely new numbering scheme.
Instead of the index that is passed being arbitrary, it is now
calculated from the set of xcr0 features that are present on
the target. Within the IPA we can then reverse this logic to recreate
the xcr0 value based on the index, and from the xcr0 value we can
choose the correct target description.
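The essence of the new scheme is that each relevant xcr0 feature
contributes one bit of the index, which makes the mapping trivially
reversible on the IPA side. A self-contained sketch (the feature list
here is an illustrative subset, not the real table):

  #include <cstdint>

  /* xcr0 bits (per the XSAVE spec) for a few illustrative features.  */
  constexpr uint64_t relevant_bits[] =
    { 0x04 /* AVX */, 0x18 /* MPX */, 0xe0 /* AVX512 */, 0x200 /* PKRU */ };
  constexpr int num_bits = 4;

  int
  xcr0_to_index (uint64_t xcr0)
  {
    int idx = 0;
    for (int i = 0; i < num_bits; ++i)
      if ((xcr0 & relevant_bits[i]) == relevant_bits[i])
        idx |= 1 << i;
    return idx;
  }

  uint64_t
  index_to_xcr0 (int idx)
  {
    uint64_t xcr0 = 0x3;   /* x87 and SSE assumed always present.  */
    for (int i = 0; i < num_bits; ++i)
      if ((idx & (1 << i)) != 0)
        xcr0 |= relevant_bits[i];
    return xcr0;
  }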
With the gdbserver-to-IPA numbering scheme issue resolved I have then
updated the gdbserver versions of amd64_linux_read_description and
i386_linux_read_description so that they cache target descriptions
using the same set of xcr0 features as GDB itself.
After this gdbserver should now always come up with the same target
description as GDB does on any x86/Linux target.
This commit does not introduce any new code sharing between GDB and
gdbserver as previous commits in this series have done. Instead this
commit is all about bringing GDB and gdbserver into alignment
functionally so that the next commit(s) can merge the GDB and
gdbserver versions of these functions.
Notes On The Implementation
---------------------------
Previously, within gdbserver, target descriptions were cached within
arrays. These arrays were sized based on enum x86_linux_tdesc and
xcr0_to_tdesc_idx returned the array (cache) index.
Now we need different array lengths for each of i386, amd64, and x32.
And the index to use within each array is calculated based on which
xcr0 bits are set and valid for a particular target type.
I really wanted to avoid having fixed array sizes, or having the set
of relevant xcr0 bits encoded in multiple places.
The solution I came up with was to create a single data structure
which would contain a list of xcr0 bits along with flags to indicate
which of the i386, amd64, and x32 targets the bit is relevant for. By
making the table constexpr, and adding some constexpr helper
functions, it is possible to calculate the sizes for the cache arrays
at compile time, as well as the bit masks needed for each target type.
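A compile-time sketch of that structure (the field names, flag names,
and the subset of features shown are illustrative assumptions):

  #include <cstddef>
  #include <cstdint>

  enum { TDESC_I386 = 1, TDESC_AMD64 = 2, TDESC_X32 = 4 };

  struct x86_tdesc_feature
  {
    uint64_t xcr0_bits;   /* xcr0 bit(s) for this feature.  */
    int targets;          /* Targets the feature is relevant for.  */
  };

  constexpr x86_tdesc_feature x86_linux_tdesc_features[] =
  {
    { 0x04,  TDESC_I386 | TDESC_AMD64 | TDESC_X32 },  /* AVX */
    { 0x18,  TDESC_I386 | TDESC_AMD64 },              /* MPX */
    { 0xe0,  TDESC_I386 | TDESC_AMD64 | TDESC_X32 },  /* AVX512 */
    { 0x200, TDESC_I386 | TDESC_AMD64 | TDESC_X32 },  /* PKRU */
  };

  /* xcr0 bits that matter when caching a tdesc for TARGET.  */
  constexpr uint64_t
  x86_linux_tdesc_mask (int target)
  {
    uint64_t mask = 0;
    for (const auto &f : x86_linux_tdesc_features)
      if ((f.targets & target) != 0)
        mask |= f.xcr0_bits;
    return mask;
  }

  /* Number of cache slots needed for TARGET, one per combination.  */
  constexpr std::size_t
  x86_linux_tdesc_count (int target)
  {
    std::size_t count = 1;
    for (const auto &f : x86_linux_tdesc_features)
      if ((f.targets & target) != 0)
        count *= 2;
    return count;
  }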
During review it was pointed out [1] that the failure to check the
SSE and X87 bits for amd64/x32 targets might be an error; however, if
that is the case then it is an issue that existed long before this
patch. I'd really like to keep this patch focused on reworking
the existing code and try to avoid changing how target descriptions
are actually created, mostly out of fear that I'll break something.
[1] https://inbox.sourceware.org/gdb-patches/MN2PR11MB4566070607318EE7E669A5E28E1B2@MN2PR11MB4566.namprd11.prod.outlook.com
Approved-By: John Baldwin <jhb@FreeBSD.org>
Approved-By: Felix Willgerodt <felix.willgerodt@intel.com>
|
|
This commit is part of a series to share more of the x86 target
description creation code between GDB and gdbserver.
Unlike previous commits which were mostly refactoring, this commit is
the first that makes a real change, though that change should mostly
be for gdbserver; I've largely adopted the "GDB" way of doing things
for gdbserver, and this fixes a real gdbserver bug.
On an x86-64 Linux target, running the test:
gdb.server/connect-with-no-symbol-file.exp
results in two core files being created. Both of these core files are
from the inferior process, created after gdbserver has detached.
In this test a gdbserver process is started and then, after gdbserver
has started, but before GDB attaches, we either delete the inferior
executable, or change its permissions so it can't be read. Only after
doing this do we attempt to connect with GDB.
As GDB connects to gdbserver, gdbserver attempts to figure out the
target description so that it can send the description to GDB; this
involves a call to x86_linux_read_description.
In x86_linux_read_description one of the first things we do is try to
figure out if the process is 32-bit or 64-bit. To do this we look up
the executable via the thread-id, and then attempt to read the
architecture size from the executable. This isn't going to work if
the executable has been deleted, or is no longer readable.
And so, as we can't read the executable, we default to an i386 target
and use an i386 target description.
A consequence of using an i386 target description is that addresses
are assumed to be 32 bits. Here's an example session that shows the
problems this causes. This is run on an x86-64 machine, and the test
binary (xx.x) is a standard 64-bit x86-64 binary:
shell_1$ gdbserver --once localhost :54321 /tmp/xx.x
shell_2$ gdb -q
(gdb) set sysroot
(gdb) shell chmod 000 /tmp/xx.x
(gdb) target remote :54321
Remote debugging using :54321
warning: /tmp/xx.x: Permission denied.
0xf7fd3110 in ?? ()
(gdb) show architecture
The target architecture is set to "auto" (currently "i386").
(gdb) p/x $pc
$1 = 0xf7fd3110
(gdb) info proc mappings
process 2412639
Mapped address spaces:
  Start Addr    End Addr      Size    Offset  Perms  objfile
    0x400000    0x401000    0x1000       0x0  r--p   /tmp/xx.x
    0x401000    0x402000    0x1000    0x1000  r-xp   /tmp/xx.x
    0x402000    0x403000    0x1000    0x2000  r--p   /tmp/xx.x
    0x403000    0x405000    0x2000    0x2000  rw-p   /tmp/xx.x
  0xf7fcb000  0xf7fcf000    0x4000       0x0  r--p   [vvar]
  0xf7fcf000  0xf7fd1000    0x2000       0x0  r-xp   [vdso]
  0xf7fd1000  0xf7fd3000    0x2000       0x0  r--p   /usr/lib64/ld-2.30.so
  0xf7fd3000  0xf7ff3000   0x20000    0x2000  r-xp   /usr/lib64/ld-2.30.so
  0xf7ff3000  0xf7ffb000    0x8000   0x22000  r--p   /usr/lib64/ld-2.30.so
  0xf7ffc000  0xf7ffe000    0x2000   0x2a000  rw-p   /usr/lib64/ld-2.30.so
  0xf7ffe000  0xf7fff000    0x1000       0x0  rw-p
  0xfffda000  0xfffff000   0x25000       0x0  rw-p   [stack]
  0xff600000  0xff601000    0x1000       0x0  r-xp   [vsyscall]
(gdb) info inferiors
Num Description Connection Executable
* 1 process 2412639 1 (remote :54321)
(gdb) shell cat /proc/2412639/maps
00400000-00401000 r--p 00000000 fd:03 45907133 /tmp/xx.x
00401000-00402000 r-xp 00001000 fd:03 45907133 /tmp/xx.x
00402000-00403000 r--p 00002000 fd:03 45907133 /tmp/xx.x
00403000-00405000 rw-p 00002000 fd:03 45907133 /tmp/xx.x
7ffff7fcb000-7ffff7fcf000 r--p 00000000 00:00 0 [vvar]
7ffff7fcf000-7ffff7fd1000 r-xp 00000000 00:00 0 [vdso]
7ffff7fd1000-7ffff7fd3000 r--p 00000000 fd:00 143904 /usr/lib64/ld-2.30.so
7ffff7fd3000-7ffff7ff3000 r-xp 00002000 fd:00 143904 /usr/lib64/ld-2.30.so
7ffff7ff3000-7ffff7ffb000 r--p 00022000 fd:00 143904 /usr/lib64/ld-2.30.so
7ffff7ffc000-7ffff7ffe000 rw-p 0002a000 fd:00 143904 /usr/lib64/ld-2.30.so
7ffff7ffe000-7ffff7fff000 rw-p 00000000 00:00 0
7ffffffda000-7ffffffff000 rw-p 00000000 00:00 0 [stack]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
(gdb)
Notice the difference between the mappings reported via GDB and those
reported directly from the kernel via /proc/PID/maps: the address of
every mapping is clamped to 32 bits for GDB, while the kernel reports
real 64-bit addresses.
Notice also that the $pc value is a 32-bit value. It appears to be
within one of the mappings reported by GDB, but is outside any of the
mappings reported from the kernel.
And this is where the problem arises. When gdbserver detaches from
the inferior we pass the inferior the address from which it should
resume. Due to the 32/64 bit confusion we tell the inferior to resume
from the 32-bit $pc value, which is not within any valid mapping, and
so, as soon as the inferior resumes, it segfaults.
If we look at how GDB (not gdbserver) figures out its target
description then we see an interesting difference. GDB doesn't try to
read the executable. Instead GDB uses ptrace to query the thread's
state, and uses this to figure out if the thread is 32-bit or 64-bit.
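For reference, one self-contained way to make that kind of check on
x86-64 Linux is to read the stopped thread's CS selector via
PTRACE_PEEKUSER and compare it with the well-known 64-bit and 32-bit
code segment selectors; this illustrates the technique, it is not code
lifted from GDB:

  #include <errno.h>
  #include <stddef.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/user.h>

  /* TID must already be ptrace-stopped.  Returns 1 for a 64-bit
     thread, 0 for a 32-bit thread, -1 on error.  */
  static int
  thread_is_64bit (pid_t tid)
  {
    errno = 0;
    long cs = ptrace (PTRACE_PEEKUSER, tid,
                      offsetof (struct user_regs_struct, cs), 0);
    if (cs == -1 && errno != 0)
      return -1;
    return cs == 0x33 ? 1 : 0;   /* 0x33 = 64-bit CS, 0x23 = 32-bit.  */
  }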
If we update gdbserver to do it the "GDB" way then the above problem
is resolved: gdbserver now sees the process as 64-bit, and when we
detach from the inferior we give it the correct 64-bit address, and
the inferior no longer segfaults.
Now, I could just update the gdbserver code, but better, I think, to
share one copy of the code between GDB and gdbserver in gdb/nat/.
That is what this commit does.
The cores of x86_linux_read_description from gdbserver and
x86_linux_nat_target::read_description from GDB are moved into a new
file gdb/nat/x86-linux-tdesc.c and combined into a single function
x86_linux_tdesc_for_tid which is called from each location.
This new function does things mostly the GDB way; some changes are
needed to allow for the sharing: we now take some pointers for where
the shared code can cache the xcr0 and xsave layout values.
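The resulting entry point therefore looks roughly like this (the
parameter list is inferred from the description above, not quoted from
the source); each caller owns the storage, so GDB and gdbserver keep
their own cached copies:

  /* Return the target description for thread TID, reading and caching
     the xcr0 value and xsave layout through the caller's storage.  */
  const target_desc *
  x86_linux_tdesc_for_tid (int tid, uint64_t *xcr0_storage,
                           x86_xsave_layout *xsave_layout_storage);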
Another thing to note about this commit is how the functions
i386_linux_read_description and amd64_linux_read_description are
handled. For now I've left these functions implemented separately
in GDB and gdbserver. I've moved the declarations of these functions
into gdb/arch/{i386,amd64}-linux-tdesc.h, but the implementations are
left where they are.
A later commit in this series will make these functions shared too,
but doing this is not trivial, so I've left that for a separate
commit. Merging the declarations as I've done here ensures that
everyone implements the function to the same API, and once these
functions are shared (in a later commit) we'll want a shared
declaration anyway.
Reviewed-By: Felix Willgerodt <felix.willgerodt@intel.com>
Acked-By: John Baldwin <jhb@FreeBSD.org>
|
|
Spotted some declarations in gdbserver/linux-amd64-ipa.cc that are no
longer needed. These are:
1. 'init_registers_amd64_linux' - the comment claims this function
is auto-generated, but I don't believe that this is still the case.
Also the function is not used in this file,
2. 'tdesc_amd64_linux' - this variable doesn't seem to exist any
more; I suspect it was renamed to 'tdesc_amd64_linux_no_xml', but
neither is used in this file, so let's remove the declaration.
The amd64 in-process-agent still builds fine after this commit.
There should be no user visible changes after this commit.
Approved-By: Felix Willgerodt <felix.willgerodt@intel.com>
|
|
Now that defs.h, server.h and common-defs.h are included via the
`-include` option, it is no longer necessary for source files to include
them. Remove all the inclusions of these files I could find. Update
the generation scripts where relevant.
Change-Id: Ia026cff269c1b7ae7386dd3619bc9bb6a5332837
Approved-By: Pedro Alves <pedro@palves.net>
|
|
This reverts commit cd9b374ffe372dcaf7e4c15548cf53a301d8dcdd.
|
|
This reverts commit 61bb321605fc74703adc994fd7a78e9d2495ca7a.
|
|
This reverts commit 198ff6ff819c240545f9fc68b39636fd376d4ba9.
|
|
This commit builds on the previous series of commits to share the
target description caching code between GDB and gdbserver for
x86/Linux targets.
The objective of this commit is to move the four functions (two each of
i386_linux_read_description and amd64_linux_read_description) into
gdb/nat/x86-linux-tdesc.c and combine them so we have just a single
copy of each. Then both GDB and gdbserver will link against these
shared functions.
It is worth reading the description of the previous commit to see why
this merging is not as simple as it seems: on the gdbserver side we
actually have two users of these functions, gdbserver itself, and the
in process agent (IPA).
However, the previous commit streamlined the gdbserver code, and so
now it is simple to move the two functions along with all their
support functions from the gdbserver directory into the gdb/nat/
directory, and then GDB is fine to call these functions.
One small curiosity with this patch is the function
x86_linux_post_init_tdesc. On the gdbserver side the two functions
amd64_linux_read_description and i386_linux_read_description have some
functionality that is not present on the GDB side, that is, some
additional configuration that is performed as each target description
is created to set up the expedited registers.
To support this I've added the function x86_linux_post_init_tdesc.
This function is called from the two *_linux_read_description
functions, but is implemented separately for GDB and gdbserver.
This does mean adding back some non-shared code when this whole series
has been about sharing code, but now the only non-shared bit is the
single line that is actually different between GDB and gdbserver; all
the rest, which is identical, is now shared.
I did need to add a new rule to the gdbserver Makefile; this is to
allow the nat/x86-linux-tdesc.c file to be compiled for the IPA.
Approved-By: John Baldwin <jhb@FreeBSD.org>
|
|
This commit is part of a series which aims to share more of the target
description creation between GDB and gdbserver for x86/Linux.
After some refactoring, the previous commit actually started to share
some code; we added the shared x86_linux_tdesc_for_tid function into
nat/x86-linux-tdesc.c. However, this function still relies on
amd64_linux_read_description and i386_linux_read_description which are
implemented separately for both gdbserver and GDB. Given that, at
their core, all that these functions do is:
1. take an xcr0 value as input,
2. mask out some feature bits,
3. look for a cached pre-generated target description and return it
if found,
4. if no cached target description is found then call either
amd64_create_target_description or
i386_create_target_description to create a new target
description, which is then added to the cache. Return the newly
created target description.
The inner functions amd64_create_target_description and
i386_create_target_description are already shared between GDB and
gdbserver (in the gdb/arch/ directory), so the only thing that
the *_read_description functions really do is add the caching layer,
and it feels like this really could be shared.
However, we have a small problem.
On the GDB side we create target descriptions using a different set of
cpu features than on the gdbserver side! This means that for the
exact same target, we might get a different target description when
using native GDB vs using gdbserver. This surely feels like a
mistake; I would expect to get the same target description on each.
The table below shows the number of possible different target
descriptions that we can create on the GDB side vs on the gdbserver
side for each target type:
      | GDB | gdbserver
------|-----|----------
i386  |  64 |         7
amd64 |  32 |         7
x32   |  16 |         7
So in theory, all I want to do is move the GDB version
of *_read_description into the nat/ directory and have gdbserver use
that; then both GDB and gdbserver would be able to create any of the
possible target descriptions.
Unfortunately it's a little more complex than that due to the in
process agent (IPA).
When the IPA is in use, gdbserver sends a target description index to
the IPA, and the IPA uses this to find the correct target description
to use.
** START OF AN ASIDE **
Back in the day I suspect this approach made perfect sense. However
since this commit:
commit a8806230241d201f808d856eaae4d44088117b0c
Date: Thu Dec 7 17:07:01 2017 +0000
Initialize target description early in IPA
I think passing the index is now more trouble than it's worth.
We used to pass the index, and then use that index to look up which
target description to instantiate and use. However, the above commit
fixed an issue where we can't call malloc() within (certain parts of)
the IPA (apparently), so instead we now pre-compute _every_ possible
target description within the IPA. The index is now only used to
look up which of the (many) pre-computed target descriptions to use.
It would (I think) have been easier all around if the IPA just
self-inspected, figured out its own xcr0 value, and used that to
create the one target description that is required. So long as the
xcr0 to target description code is shared (at compile time) with
gdbserver, then we can be sure that the IPA will derive the same
target description as gdbserver, and we would avoid all this index
passing business, which has made this commit so very, very painful.
** END OF AN ASIDE **
Currently, then, for x86/Linux, gdbserver sends a number between 0 and 7
to the IPA, and the IPA uses this to create a target description.
However, I am proposing that gdbserver should now create one of (up
to) 64 different target descriptions for i386, so this 0 to 7 index
isn't going to be good enough any more (amd64 and x32 have slightly
fewer possible target descriptions, but still more than 8, so the
problem is the same).
For a while I wondered if I was going to have to try and find some
backward-compatible solution for this mess. But after seeing how
lightly the IPA is actually documented, I suspect there is in fact a
tight coupling between a version of gdbserver and a version of the
IPA. At least I'm hoping so.
In this commit I have thrown out the old IPA target description index
numbering scheme, and switched to a completely new numbering scheme.
Instead of the index that is passed being arbitrary, it is now
calculated from the set of cpu features that are present on
the target. Within the IPA we can then reverse this logic to recreate
the xcr0 value based on the index, and from the xcr0 value we can
create the correct target description.
With the gdbserver-to-IPA numbering scheme issue resolved I have then
updated the gdbserver versions of amd64_linux_read_description and
i386_linux_read_description so that they create target descriptions
using the same set of cpu features as GDB itself.
After this gdbserver should now always come up with the same target
description as GDB does on any x86/Linux target.
This commit does not introduce any new code sharing between GDB and
gdbserver as previous commits in this series have done. Instead this
commit is all about bringing GDB and gdbserver into alignment
functionally so that the next commit can merge the GDB and gdbserver
versions of these functions.
Approved-By: John Baldwin <jhb@FreeBSD.org>
|
|
This commit is part of a series to share more of the x86 target
description creation code between GDB and gdbserver.
Unlike previous commits which were mostly refactoring, this commit is
the first that makes a real change, though that change should mostly
be for gdbserver; I've largely adopted the "GDB" way of doing things
for gdbserver, and this fixes a real gdbserver bug.
On an x86-64 Linux target, running the test:
gdb.server/connect-with-no-symbol-file.exp
results in two core files being created. Both of these core files are
from the inferior process, created after gdbserver has detached.
In this test a gdbserver process is started and then, after gdbserver
has started, but before GDB attaches, we either delete the inferior
executable, or change its permissions so it can't be read. Only after
doing this do we attempt to connect with GDB.
As GDB connects to gdbserver, gdbserver attempts to figure out the
target description so that it can send the description to GDB; this
involves a call to x86_linux_read_description.
In x86_linux_read_description one of the first things we do is try to
figure out if the process is 32-bit or 64-bit. To do this we look up
the executable via the thread-id, and then attempt to read the
architecture size from the executable. This isn't going to work if
the executable has been deleted, or is no longer readable.
And so, as we can't read the executable, we default to an i386 target
and use an i386 target description.
A consequence of using an i386 target description is that addresses
are assumed to be 32 bits. Here's an example session that shows the
problems this causes. This is run on an x86-64 machine, and the test
binary (xx.x) is a standard 64-bit x86-64 binary:
shell_1$ gdbserver --once localhost :54321 /tmp/xx.x
shell_2$ gdb -q
(gdb) set sysroot
(gdb) shell chmod 000 /tmp/xx.x
(gdb) target remote :54321
Remote debugging using :54321
warning: /tmp/xx.x: Permission denied.
0xf7fd3110 in ?? ()
(gdb) show architecture
The target architecture is set to "auto" (currently "i386").
(gdb) p/x $pc
$1 = 0xf7fd3110
(gdb) info proc mappings
process 2412639
Mapped address spaces:
  Start Addr    End Addr      Size    Offset  Perms  objfile
    0x400000    0x401000    0x1000       0x0  r--p   /tmp/xx.x
    0x401000    0x402000    0x1000    0x1000  r-xp   /tmp/xx.x
    0x402000    0x403000    0x1000    0x2000  r--p   /tmp/xx.x
    0x403000    0x405000    0x2000    0x2000  rw-p   /tmp/xx.x
  0xf7fcb000  0xf7fcf000    0x4000       0x0  r--p   [vvar]
  0xf7fcf000  0xf7fd1000    0x2000       0x0  r-xp   [vdso]
  0xf7fd1000  0xf7fd3000    0x2000       0x0  r--p   /usr/lib64/ld-2.30.so
  0xf7fd3000  0xf7ff3000   0x20000    0x2000  r-xp   /usr/lib64/ld-2.30.so
  0xf7ff3000  0xf7ffb000    0x8000   0x22000  r--p   /usr/lib64/ld-2.30.so
  0xf7ffc000  0xf7ffe000    0x2000   0x2a000  rw-p   /usr/lib64/ld-2.30.so
  0xf7ffe000  0xf7fff000    0x1000       0x0  rw-p
  0xfffda000  0xfffff000   0x25000       0x0  rw-p   [stack]
  0xff600000  0xff601000    0x1000       0x0  r-xp   [vsyscall]
(gdb) info inferiors
Num Description Connection Executable
* 1 process 2412639 1 (remote :54321)
(gdb) shell cat /proc/2412639/maps
00400000-00401000 r--p 00000000 fd:03 45907133 /tmp/xx.x
00401000-00402000 r-xp 00001000 fd:03 45907133 /tmp/xx.x
00402000-00403000 r--p 00002000 fd:03 45907133 /tmp/xx.x
00403000-00405000 rw-p 00002000 fd:03 45907133 /tmp/xx.x
7ffff7fcb000-7ffff7fcf000 r--p 00000000 00:00 0 [vvar]
7ffff7fcf000-7ffff7fd1000 r-xp 00000000 00:00 0 [vdso]
7ffff7fd1000-7ffff7fd3000 r--p 00000000 fd:00 143904 /usr/lib64/ld-2.30.so
7ffff7fd3000-7ffff7ff3000 r-xp 00002000 fd:00 143904 /usr/lib64/ld-2.30.so
7ffff7ff3000-7ffff7ffb000 r--p 00022000 fd:00 143904 /usr/lib64/ld-2.30.so
7ffff7ffc000-7ffff7ffe000 rw-p 0002a000 fd:00 143904 /usr/lib64/ld-2.30.so
7ffff7ffe000-7ffff7fff000 rw-p 00000000 00:00 0
7ffffffda000-7ffffffff000 rw-p 00000000 00:00 0 [stack]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
(gdb)
Notice the difference between the mappings reported via GDB and those
reported directly from the kernel via /proc/PID/maps: the address of
every mapping is clamped to 32 bits for GDB, while the kernel reports
real 64-bit addresses.
Notice also that the $pc value is a 32-bit value. It appears to be
within one of the mappings reported by GDB, but is outside any of the
mappings reported from the kernel.
And this is where the problem arises. When gdbserver detaches from
the inferior we pass the inferior the address from which it should
resume. Due to the 32/64 bit confusion we tell the inferior to resume
from the 32-bit $pc value, which is not within any valid mapping, and
so, as soon as the inferior resumes, it segfaults.
If we look at how GDB (not gdbserver) figures out its target
description then we see an interesting difference. GDB doesn't try to
read the executable. Instead GDB uses ptrace to query the thread's
state, and uses this to figure out if the thread is 32-bit or 64-bit.
If we update gdbserver to do it the "GDB" way then the above problem
is resolved: gdbserver now sees the process as 64-bit, and when we
detach from the inferior we give it the correct 64-bit address, and
the inferior no longer segfaults.
Now, I could just update the gdbserver code, but better, I think, to
share one copy of the code between GDB and gdbserver in gdb/nat/.
That is what this commit does.
The cores of x86_linux_read_description from gdbserver and
x86_linux_nat_target::read_description from GDB are moved into a new
file gdb/nat/x86-linux-tdesc.c and combined into a single function
x86_linux_tdesc_for_tid which is called from each location.
This new function does things the GDB way; the only changes are to
allow for the sharing. We now have a callback function that is called
the first time the xcr0 state is read, which allows GDB and gdbserver
to perform their own initialisation as needed. Additionally, the new
function takes a pointer for where to cache the xcr0 value; this isn't
needed for this commit, but will be useful in a later commit where
gdbserver will want to read this cached xcr0 value.
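In rough terms the interface described here is something like the
following (a sketch inferred from this description, assuming GDB's
target_desc type; the real parameter list may differ):

  /* Return the target description for thread TID.  The first time the
     xcr0 state is read, FIRST_READ_CB is invoked so GDB or gdbserver
     can do its own one-time initialisation; *XCR0_STORAGE is where the
     shared code caches the xcr0 value for later use.  */
  const target_desc *
  x86_linux_tdesc_for_tid (int tid, uint64_t *xcr0_storage,
                           void (*first_read_cb) (uint64_t xcr0));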
Another thing to note about this commit is how the functions
i386_linux_read_description and amd64_linux_read_description are
handled. For now I've left these functions implemented separately
in GDB and gdbserver. I've moved the declarations of these functions
into gdb/nat/x86-linux-tdesc.h, but the implementations are left
separate.
A later commit in this series will make these functions shared too,
but doing this is not trivial, so I've left that for a separate
commit. Merging the declarations as I've done here ensures that
everyone implements the function to the same API, and once these
functions are shared (in a later commit) we'll want a shared
declaration anyway.
Approved-By: John Baldwin <jhb@FreeBSD.org>
|
|
This commit is the result of the following actions:
- Running gdb/copyright.py to update all of the copyright headers to
include 2024,
- Manually updating a few files the copyright.py script told me to
update, these files had copyright headers embedded within the
file,
- Regenerating gdbsupport/Makefile.in to refresh its copyright
date,
- Using grep to find other files that still mentioned 2023. If
these files were updated last year from 2022 to 2023 then I've
updated them this year to 2024.
I'm sure I've probably missed some dates. Feel free to fix them up as
you spot them.
|
|
This commit is the result of running the gdb/copyright.py script,
which automated the update of the copyright year range for all
source files managed by the GDB project to be updated to include
year 2023.
|
|
Currently, every internal_error call must be passed __FILE__/__LINE__
explicitly, like:
internal_error (__FILE__, __LINE__, "foo %d", var);
The need to pass in explicit __FILE__/__LINE__ probably exists
because the function predates the widespread availability of portable
variadic macros. We can use variadic macros nowadays, and in fact, we
already use them in several places, including the related
gdb_assert_not_reached.
So this patch renames the internal_error function to something else,
and then reimplements internal_error as a variadic macro that expands
__FILE__/__LINE__ itself.
The result is that we now should call internal_error like so:
internal_error ("foo %d", var);
Likewise for internal_warning.
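A minimal, self-contained sketch of the technique (the details of
GDB's real implementation differ, and the helper name here is just
illustrative):

  #include <cstdarg>
  #include <cstdio>
  #include <cstdlib>

  /* The renamed function still takes an explicit file and line.  */
  [[noreturn]] inline void
  internal_error_loc (const char *file, int line, const char *fmt, ...)
  {
    va_list ap;
    va_start (ap, fmt);
    std::fprintf (stderr, "%s:%d: internal error: ", file, line);
    std::vfprintf (stderr, fmt, ap);
    std::fputc ('\n', stderr);
    va_end (ap);
    std::abort ();
  }

  /* The variadic macro supplies __FILE__/__LINE__, so call sites only
     pass the format string and its arguments.  */
  #define internal_error(fmt, ...) \
    internal_error_loc (__FILE__, __LINE__, fmt, ##__VA_ARGS__)

  /* Usage: internal_error ("foo %d", var);  */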
The patch adjusts all call sites. 99% of the adjustments were done
with a perl/sed script.
The non-mechanical changes are in gdbsupport/errors.h,
gdbsupport/gdb_assert.h, and gdb/gdbarch.py.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
Change-Id: Ia6f372c11550ca876829e8fd85048f4502bdcf06
|
|
This commit brings all the changes made by running gdb/copyright.py
as per GDB's Start of New Year Procedure.
For the avoidance of doubt, all changes in this commits were
performed by the script.
|
|
This commits the result of running gdb/copyright.py as per our Start
of New Year procedure...
gdb/ChangeLog
Update copyright year range in copyright header of all GDB files.
|
|
For the same reasons outlined in the previous patch, this patch renames
gdbserver source files to .cc.
I have moved the "-x c++" switch to only those rules that require it.
gdbserver/ChangeLog:
* Makefile.in: Rename source files from .c to .cc.
* %.c: Rename to %.cc.
* configure.ac: Rename server.c to server.cc.
* configure: Re-generate.
|