| author | Szabolcs Nagy <szabolcs.nagy@arm.com> | 2023-10-16 13:18:13 +0100 |
|---|---|---|
| committer | Szabolcs Nagy <szabolcs.nagy@arm.com> | 2023-11-09 14:44:37 +0000 |
| commit | d3a8dfdef0797244d0f2f3a8ec5db8f1dcf1337b (patch) | |
| tree | e5cc3acc9f0a8ef812ee5c7713dad82a8caa41b0 /bfd | |
| parent | 98b94ebb3ffe715fddde762bb3ee7fd6d972f233 (diff) | |
bfd: aarch64: Fix broken BTI stub PR30930
Input sections that can use the same stub area (i.e. that are within
reach of it) are grouped together, and these groups have a stable id.

Stubs have a name generated from the stub group id and the target
symbol.  When a relocation requires a stub with a name that already
exists, the existing stub is reused instead of adding a new one.
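
As a rough illustration of that reuse-by-name scheme, here is a minimal,
self-contained sketch.  The helper names (make_stub_name, get_or_add_stub)
and the data layout are hypothetical simplifications, not the actual bfd
code; the real logic lives in elfNN_aarch64_stub_name and
_bfd_aarch64_add_call_stub_entries.

```c
/* Hypothetical, simplified model of name-based stub reuse.  */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct stub
{
  char *name;           /* derived from stub group id + target symbol */
  int group_id;         /* stable id of the stub group (stub area)    */
  char target[32];      /* symbol the stub eventually branches to     */
  struct stub *next;
};

static struct stub *stub_list;

/* The stub name encodes the stub group id and the target symbol.  */
static char *
make_stub_name (int group_id, const char *target)
{
  char *name = malloc (strlen (target) + 32);
  sprintf (name, "__%d_%s_veneer", group_id, target);
  return name;
}

/* Reuse an existing stub with the same name, otherwise add a new one.  */
static struct stub *
get_or_add_stub (int group_id, const char *target)
{
  char *name = make_stub_name (group_id, target);

  for (struct stub *s = stub_list; s != NULL; s = s->next)
    if (strcmp (s->name, name) == 0)
      {
        free (name);
        return s;        /* name collision: the existing stub is reused */
      }

  struct stub *s = calloc (1, sizeof *s);
  s->name = name;
  s->group_id = group_id;
  snprintf (s->target, sizeof s->target, "%s", target);
  s->next = stub_list;
  stub_list = s;
  return s;
}
```
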
For an indirect branch stub, another BTI stub may be inserted near the
target to provide a BTI landing pad.  The BTI stub can end up with the
same stub group id, and thus the same name, as the indirect branch
stub.  This happens when the target symbol is within reach of the
indirect branch stub.  Then, due to the name collision, only a single
stub was emitted, and it branched to itself, causing an infinite loop
at runtime.
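
In terms of the sketch above, the failure mode can be shown in a few
lines (the group id 42 and the symbol "foo" are made up for
illustration):

```c
/* Continuation of the hypothetical sketch above.  */
int
main (void)
{
  /* Indirect branch stub towards "foo", placed in stub group 42.  */
  struct stub *indirect = get_or_add_stub (42, "foo");

  /* BTI landing-pad stub for the same target.  When "foo" is within
     reach of the indirect stub, the BTI stub falls into the same stub
     group, gets the same name, and the existing stub is returned.  */
  struct stub *bti_pad = get_or_add_stub (42, "foo");

  /* Only one stub is emitted, and the indirect stub's branch target
     resolves to the BTI stub, i.e. to itself: an infinite loop.  */
  printf ("same stub reused: %s\n", indirect == bti_pad ? "yes" : "no");
  return 0;
}
```
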
A possible solution is to simply name the BTI stubs differently, but
since in the problematic case the indirect branch stub and the BTI
stub end up in the same stub area, a better solution is to emit a
single stub with a direct branch.  The stub is still needed because
the caller cannot reach the target directly, and we also want a BTI
landing pad in the stub in case other indirect stubs target the same
symbol and therefore need a BTI stub.
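
The emitted stub is then just a BTI landing pad followed by a direct
branch.  As a hedged illustration only (the instruction encodings
follow the AArch64 ISA, but the array below is not copied from the
actual stub template in elfnn-aarch64.c):

```c
#include <stdint.h>

/* Sketch of what a BTI direct-branch stub contains.  */
static const uint32_t bti_direct_branch_stub_sketch[] =
{
  0xd503245f,  /* bti c      -- landing pad for indirect branches          */
  0x14000000,  /* b <target> -- direct branch, offset filled in by the linker */
};
```
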
In short, we convert an indirect branch stub into a BTI stub when the
target is within reach and has no BTI.  It is a hassle to change the
symbol of the stub, so after the conversion a BTI stub may end up
named *_veneer instead of *_bti_veneer, but this should not matter
much.  (Refactoring some of _bfd_aarch64_add_call_stub_entries would
be useful, but that is too much for this bug fix patch.)
The same conversion to a direct branch could be done even if the
target did not need a BTI.  The stub groups are fixed in the current
logic, so linking can fail if too many stubs are inserted and the
section layout changes too much, but this only happens in extreme
cases that can reasonably be ignored.  Because the groups are fixed,
the target cannot go out of reach during stub insertion, so the
optimization remains valid; this patch simply does not implement it
for the non-BTI case.
Fixes bug 30930.
Diffstat (limited to 'bfd')
-rw-r--r-- | bfd/elfnn-aarch64.c | 17 |
1 file changed, 15 insertions, 2 deletions
```diff
diff --git a/bfd/elfnn-aarch64.c b/bfd/elfnn-aarch64.c
index a0dd17f..798643a 100644
--- a/bfd/elfnn-aarch64.c
+++ b/bfd/elfnn-aarch64.c
@@ -4638,9 +4638,22 @@ _bfd_aarch64_add_call_stub_entries (bool *stub_changed, bfd *output_bfd,
 	     insert another stub with direct jump near the target then.  */
 	  if (need_bti && !aarch64_bti_stub_p (stub_entry))
 	    {
+	      id_sec_bti = htab->stub_group[sym_sec->id].link_sec;
+
+	      /* If the stub with indirect jump and the BTI stub are in
+		 the same stub group: change the indirect jump stub into
+		 a BTI stub since a direct branch can reach the target.
+		 The BTI landing pad is still needed in case another
+		 stub indirectly jumps to it.  */
+	      if (id_sec_bti == id_sec)
+		{
+		  stub_entry->stub_type = aarch64_stub_bti_direct_branch;
+		  goto skip_double_stub;
+		}
+
 	      stub_entry->double_stub = true;
 	      htab->has_double_stub = true;
-	      id_sec_bti = htab->stub_group[sym_sec->id].link_sec;
+
 	      stub_name_bti =
 		elfNN_aarch64_stub_name (id_sec_bti, sym_sec, hash, irela);
 	      if (!stub_name_bti)
@@ -4687,7 +4700,7 @@ _bfd_aarch64_add_call_stub_entries (bool *stub_changed, bfd *output_bfd,
 	      stub_entry->h = NULL;
 	      stub_entry->st_type = STT_FUNC;
 	    }
-
+skip_double_stub:
 	  *stub_changed = true;
 	}
```