author | H.J. Lu <hjl.tools@gmail.com> | 2013-11-20 09:01:04 -0800 |
---|---|---|
committer | H.J. Lu <hjl.tools@gmail.com> | 2014-02-19 11:48:23 -0800 |
commit | 0ff2b86e7c14177ec7f9e1257f8e697814794017 (patch) | |
tree | 237cd7c1c31e6546b7494d2fd8fa40ba5274d299 /bfd | |
parent | c658158d936f25dca30cc89260168a5c0be3a24c (diff) | |
Create the second PLT for BND relocations
Intel MPX introduces 4 bound registers, which will be used for parameter
passing in x86-64. Bound registers are cleared by branch instructions.
Branch instructions with BND prefix will keep bound register contents.
This leads to 2 requirements for the 64-bit MPX run-time:
1. Dynamic linker (ld.so) should save and restore bound registers during
symbol lookup.
2. Change the current 16-byte PLT0:
ff 35 08 00 00 00 pushq GOT+8(%rip)
ff 25 10 00 00 00     jmpq   *GOT+16(%rip)
0f 1f 40 00 nopl 0x0(%rax)
and 16-byte PLT1:
ff 25 00 00 00 00 jmpq *name@GOTPCREL(%rip)
68 00 00 00 00 pushq $index
e9 00 00 00 00 jmpq PLT0
which clear bound registers, to preserve bound registers.
We use 2 new relocations, R_X86_64_PC32_BND and R_X86_64_PLT32_BND,
to mark branch instructions with BND prefix.
When linker sees any R_X86_64_PC32_BND or R_X86_64_PLT32_BND relocations,
it switches to a different PLT0:
ff 35 08 00 00 00 pushq GOT+8(%rip)
f2 ff 25 10 00 00 00  bnd jmpq *GOT+16(%rip)
0f 1f 00 nopl (%rax)
to preserve bound registers for symbol lookup, and it also creates a
second PLT section, .plt.bnd. The linker will create a BND PLT1 entry
in .plt:
68 00 00 00 00 pushq $index
f2 e9 00 00 00 00 bnd jmpq PLT0
0f 1f 44 00 00 nopl 0(%rax,%rax,1)
and an 8-byte BND PLT entry in .plt.bnd:
f2 ff 25 00 00 00 00 bnd jmpq *name@GOTPCREL(%rip)
90 nop
Otherwise, linker will create a legacy PLT1 entry in .plt:
68 00 00 00 00 pushq $index
e9 00 00 00 00 jmpq PLT0
66 0f 1f 44 00 00 nopw 0(%rax,%rax,1)
and an 8-byte legacy PLT entry in .plt.bnd:
ff 25 00 00 00 00 jmpq *name@GOTPCREL(%rip)
66 90 xchg %ax,%ax
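The template selection above can be sketched as byte arrays keyed on whether the symbol has a BND relocation. This is a minimal illustrative sketch; `pick_plt2_entry` is a hypothetical helper, not a bfd function, though the byte encodings match the listings above:

```c
#include <stddef.h>

/* 8-byte templates for the second PLT (.plt.bnd), matching the
   encodings listed above.  */
static const unsigned char legacy_plt2_entry[8] = {
  0xff, 0x25, 0, 0, 0, 0,        /* jmpq *name@GOTPCREL(%rip) */
  0x66, 0x90                     /* xchg %ax,%ax (2-byte nop pad) */
};

static const unsigned char bnd_plt2_entry[8] = {
  0xf2, 0xff, 0x25, 0, 0, 0, 0,  /* bnd jmpq *name@GOTPCREL(%rip) */
  0x90                           /* nop pad */
};

/* A symbol referenced by any R_X86_64_PC32_BND/R_X86_64_PLT32_BND
   relocation keeps its bound registers via the BND template; any other
   symbol gets the legacy one.  Both templates are 8 bytes, so .plt.bnd
   stays uniformly indexable.  */
const unsigned char *
pick_plt2_entry (int has_bnd_reloc)
{
  return has_bnd_reloc ? bnd_plt2_entry : legacy_plt2_entry;
}
```

Keeping the two templates the same size is what lets each .plt entry map to a fixed 8-byte slot in .plt.bnd regardless of which prefix the symbol needs.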
The initial value of the GOT entry for "name" will be set to the
"pushq" instruction in the corresponding entry in .plt. Linker will
resolve reference of symbol "name" to the entry in the second PLT,
.plt.bnd.
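Under this layout (16-byte entries in .plt, 8-byte entries in .plt.bnd), the two addresses involved can be sketched as follows. The helper names and example addresses are illustrative, not bfd internals:

```c
#include <stdint.h>

/* Initial GOT value for PLT slot i: the "pushq" that starts the
   16-byte entry in .plt (PLT0 occupies the first 16 bytes).  */
uint64_t
initial_got_value (uint64_t plt_base, uint64_t i)
{
  return plt_base + (i + 1) * 16;
}

/* Address the linker resolves symbol "name" to: its 8-byte entry in
   the second PLT, .plt.bnd.  */
uint64_t
resolved_plt_address (uint64_t plt_bnd_base, uint64_t i)
{
  return plt_bnd_base + i * 8;
}
```

So a lazy first call enters through .plt.bnd, jumps via the GOT to the pushq in .plt, and reaches the dynamic linker; after resolution the GOT slot points at the real function and .plt.bnd jumps there directly.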
Prelink stores the offset of pushq of PLT1 (plt_base + 0x10) in GOT[1]
and GOT[1] is stored in GOT[3]. We can undo prelink in GOT by computing
the corresponding pushq offset with
GOT[1] + (GOT offset - &GOT[3]) * 2
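The undo computation quoted above works because consecutive GOT slots are 8 bytes apart while consecutive .plt entries are 16 bytes apart, hence the factor of 2. A sketch, with an illustrative function name and hypothetical addresses in the test values:

```c
#include <stdint.h>

/* Undo prelink for one GOT slot: GOT[1] holds the pushq offset of the
   first PLT1 (plt_base + 0x10), and the slot's distance from &GOT[3]
   (8 bytes per slot) is scaled by 2 to get the matching distance in
   .plt (16 bytes per entry).  */
uint64_t
undo_prelink_pushq (uint64_t got1, uint64_t got3_addr,
                    uint64_t got_slot_addr)
{
  return got1 + (got_slot_addr - got3_addr) * 2;
}
```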
Since for each entry in .plt except for PLT0 we create an 8-byte entry
in .plt.bnd, this adds an extra 8 bytes per PLT symbol.
We also investigated a 16-byte entry for .plt.bnd. We compared the
8-byte entry vs. the 16-byte entry for .plt.bnd on Sandy Bridge.
There are no performance differences in SPEC CPU 2000/2006 as well as
micro benchmarks.
Pros:
No change needed in the dynamic linker to undo prelink.
Only 8-byte memory overhead for each PLT symbol.
Cons:
Extra .plt.bnd section is needed.
Extra 8 bytes for legacy branches to PLT.
GDB is unaware of the new layout of .plt and .plt.bnd.
bfd/
* elf64-x86-64.c (elf_x86_64_bnd_plt0_entry): New.
(elf_x86_64_legacy_plt_entry): Likewise.
(elf_x86_64_bnd_plt_entry): Likewise.
(elf_x86_64_legacy_plt2_entry): Likewise.
(elf_x86_64_bnd_plt2_entry): Likewise.
(elf_x86_64_bnd_arch_bed): Likewise.
(elf_x86_64_link_hash_entry): Add has_bnd_reloc and plt_bnd.
(elf_x86_64_link_hash_table): Add plt_bnd.
(elf_x86_64_link_hash_newfunc): Initialize has_bnd_reloc and
plt_bnd.
(elf_x86_64_copy_indirect_symbol): Also copy has_bnd_reloc.
(elf_x86_64_check_relocs): Create the second PLT for Intel MPX
in 64-bit mode.
(elf_x86_64_allocate_dynrelocs): Handle the second PLT for IFUNC
symbols. Resolve call to the second PLT if it is created.
(elf_x86_64_size_dynamic_sections): Keep the second PLT section.
(elf_x86_64_relocate_section): Resolve PLT references to the
second PLT if it is created.
(elf_x86_64_finish_dynamic_symbol): Use BND PLT0 and fill the
second PLT entry for BND relocation.
(elf_x86_64_finish_dynamic_sections): Use MPX backend data if
the second PLT is created.
(elf_x86_64_get_synthetic_symtab): New.
(bfd_elf64_get_synthetic_symtab): Likewise. Undefine for NaCl.
ld/
* emulparams/elf_x86_64.sh (TINY_READONLY_SECTION): New.
ld/testsuite/
* ld-x86-64/mpx.exp: Run bnd-ifunc-1 and bnd-plt-1.
* ld-x86-64/bnd-ifunc-1.d: New file.
* ld-x86-64/bnd-ifunc-1.s: Likewise.
* ld-x86-64/bnd-plt-1.d: Likewise.
Diffstat (limited to 'bfd')
-rw-r--r-- | bfd/ChangeLog | 28
-rw-r--r-- | bfd/elf64-x86-64.c | 453 |
2 files changed, 444 insertions, 37 deletions
diff --git a/bfd/ChangeLog b/bfd/ChangeLog index 1414107..cbeec9b 100644 --- a/bfd/ChangeLog +++ b/bfd/ChangeLog @@ -1,3 +1,31 @@ +2014-02-19 Igor Zamyatin <igor.zamyatin@intel.com> + H.J. Lu <hongjiu.lu@intel.com> + + * elf64-x86-64.c (elf_x86_64_bnd_plt0_entry): New. + (elf_x86_64_legacy_plt_entry): Likewise. + (elf_x86_64_bnd_plt_entry): Likewise. + (elf_x86_64_legacy_plt2_entry): Likewise. + (elf_x86_64_bnd_plt2_entry): Likewise. + (elf_x86_64_bnd_arch_bed): Likewise. + (elf_x86_64_link_hash_entry): Add has_bnd_reloc and plt_bnd. + (elf_x86_64_link_hash_table): Add plt_bnd. + (elf_x86_64_link_hash_newfunc): Initialize has_bnd_reloc and + plt_bnd. + (elf_x86_64_copy_indirect_symbol): Also copy has_bnd_reloc. + (elf_x86_64_check_relocs): Create the second PLT for Intel MPX + in 64-bit mode. + (elf_x86_64_allocate_dynrelocs): Handle the second PLT for IFUNC + symbols. Resolve call to the second PLT if it is created. + (elf_x86_64_size_dynamic_sections): Keep the second PLT section. + (elf_x86_64_relocate_section): Resolve PLT references to the + second PLT if it is created. + (elf_x86_64_finish_dynamic_symbol): Use BND PLT0 and fill the + second PLT entry for BND relocation. + (elf_x86_64_finish_dynamic_sections): Use MPX backend data if + the second PLT is created. + (elf_x86_64_get_synthetic_symtab): New. + (bfd_elf64_get_synthetic_symtab): Likewise. Undefine for NaCl. + 2014-02-19 Alan Modra <amodra@gmail.com> * elf64-ppc.h (struct ppc64_elf_params): Add save_restore_funcs. diff --git a/bfd/elf64-x86-64.c b/bfd/elf64-x86-64.c index edee8ec..aa6fd80 100644 --- a/bfd/elf64-x86-64.c +++ b/bfd/elf64-x86-64.c @@ -562,6 +562,56 @@ static const bfd_byte elf_x86_64_plt_entry[PLT_ENTRY_SIZE] = 0, 0, 0, 0 /* replaced with offset to start of .plt0. */ }; +/* The first entry in a procedure linkage table with BND relocations + like this. 
*/ + +static const bfd_byte elf_x86_64_bnd_plt0_entry[PLT_ENTRY_SIZE] = +{ + 0xff, 0x35, 8, 0, 0, 0, /* pushq GOT+8(%rip) */ + 0xf2, 0xff, 0x25, 16, 0, 0, 0, /* bnd jmpq *GOT+16(%rip) */ + 0x0f, 0x1f, 0 /* nopl (%rax) */ +}; + +/* Subsequent entries for legacy branches in a procedure linkage table + with BND relocations look like this. */ + +static const bfd_byte elf_x86_64_legacy_plt_entry[PLT_ENTRY_SIZE] = +{ + 0x68, 0, 0, 0, 0, /* pushq immediate */ + 0xe9, 0, 0, 0, 0, /* jmpq relative */ + 0x66, 0x0f, 0x1f, 0x44, 0, 0 /* nopw (%rax,%rax,1) */ +}; + +/* Subsequent entries for branches with BND prefx in a procedure linkage + table with BND relocations look like this. */ + +static const bfd_byte elf_x86_64_bnd_plt_entry[PLT_ENTRY_SIZE] = +{ + 0x68, 0, 0, 0, 0, /* pushq immediate */ + 0xf2, 0xe9, 0, 0, 0, 0, /* bnd jmpq relative */ + 0x0f, 0x1f, 0x44, 0, 0 /* nopl 0(%rax,%rax,1) */ +}; + +/* Entries for legacy branches in the second procedure linkage table + look like this. */ + +static const bfd_byte elf_x86_64_legacy_plt2_entry[8] = +{ + 0xff, 0x25, /* jmpq *name@GOTPC(%rip) */ + 0, 0, 0, 0, /* replaced with offset to this symbol in .got. */ + 0x66, 0x90 /* xchg %ax,%ax */ +}; + +/* Entries for branches with BND prefix in the second procedure linkage + table look like this. */ + +static const bfd_byte elf_x86_64_bnd_plt2_entry[8] = +{ + 0xf2, 0xff, 0x25, /* bnd jmpq *name@GOTPC(%rip) */ + 0, 0, 0, 0, /* replaced with offset to this symbol in .got. */ + 0x90 /* nop */ +}; + /* .eh_frame covering the .plt section. 
*/ static const bfd_byte elf_x86_64_eh_frame_plt[] = @@ -665,6 +715,24 @@ static const struct elf_x86_64_backend_data elf_x86_64_arch_bed = sizeof (elf_x86_64_eh_frame_plt), /* eh_frame_plt_size */ }; +static const struct elf_x86_64_backend_data elf_x86_64_bnd_arch_bed = + { + elf_x86_64_bnd_plt0_entry, /* plt0_entry */ + elf_x86_64_bnd_plt_entry, /* plt_entry */ + sizeof (elf_x86_64_bnd_plt_entry), /* plt_entry_size */ + 2, /* plt0_got1_offset */ + 1+8, /* plt0_got2_offset */ + 1+12, /* plt0_got2_insn_end */ + 1+2, /* plt_got_offset */ + 1, /* plt_reloc_offset */ + 7, /* plt_plt_offset */ + 1+6, /* plt_got_insn_size */ + 11, /* plt_plt_insn_end */ + 0, /* plt_lazy_offset */ + elf_x86_64_eh_frame_plt, /* eh_frame_plt */ + sizeof (elf_x86_64_eh_frame_plt), /* eh_frame_plt_size */ + }; + #define elf_backend_arch_data &elf_x86_64_arch_bed /* x86-64 ELF linker hash entry. */ @@ -691,6 +759,13 @@ struct elf_x86_64_link_hash_entry (GOT_TLS_GD_P (type) || GOT_TLS_GDESC_P (type)) unsigned char tls_type; + /* TRUE if symbol has at least one BND relocation. */ + bfd_boolean has_bnd_reloc; + + /* Information about the second PLT entry. Filled when has_bnd_reloc is + set. */ + union gotplt_union plt_bnd; + /* Offset of the GOTPLT entry reserved for the TLS descriptor, starting at the end of the jump table. 
*/ bfd_vma tlsdesc_got; @@ -741,6 +816,7 @@ struct elf_x86_64_link_hash_table asection *sdynbss; asection *srelbss; asection *plt_eh_frame; + asection *plt_bnd; union { @@ -818,6 +894,8 @@ elf_x86_64_link_hash_newfunc (struct bfd_hash_entry *entry, eh = (struct elf_x86_64_link_hash_entry *) entry; eh->dyn_relocs = NULL; eh->tls_type = GOT_UNKNOWN; + eh->has_bnd_reloc = FALSE; + eh->plt_bnd.offset = (bfd_vma) -1; eh->tlsdesc_got = (bfd_vma) -1; } @@ -1011,6 +1089,9 @@ elf_x86_64_copy_indirect_symbol (struct bfd_link_info *info, edir = (struct elf_x86_64_link_hash_entry *) dir; eind = (struct elf_x86_64_link_hash_entry *) ind; + if (!edir->has_bnd_reloc) + edir->has_bnd_reloc = eind->has_bnd_reloc; + if (eind->dyn_relocs != NULL) { if (edir->dyn_relocs != NULL) @@ -1547,14 +1628,59 @@ elf_x86_64_check_relocs (bfd *abfd, struct bfd_link_info *info, default: break; + case R_X86_64_PC32_BND: + case R_X86_64_PLT32_BND: + /* MPX PLT is supported only if elf_x86_64_arch_bed + is used in 64-bit mode. */ + if (ABI_64_P (abfd) + && (get_elf_x86_64_backend_data (abfd) + == &elf_x86_64_arch_bed)) + { + elf_x86_64_hash_entry (h)->has_bnd_reloc = TRUE; + + /* Create the second PLT for Intel MPX support. 
*/ + if (htab->plt_bnd == NULL) + { + unsigned int plt_bnd_align; + const struct elf_backend_data *bed; + + bed = get_elf_backend_data (info->output_bfd); + switch (sizeof (elf_x86_64_bnd_plt2_entry)) + { + case 8: + plt_bnd_align = 3; + break; + case 16: + plt_bnd_align = 4; + break; + default: + abort (); + } + + if (htab->elf.dynobj == NULL) + htab->elf.dynobj = abfd; + htab->plt_bnd + = bfd_make_section_anyway_with_flags (htab->elf.dynobj, + ".plt.bnd", + (bed->dynamic_sec_flags + | SEC_ALLOC + | SEC_CODE + | SEC_LOAD + | SEC_READONLY)); + if (htab->plt_bnd == NULL + || !bfd_set_section_alignment (htab->elf.dynobj, + htab->plt_bnd, + plt_bnd_align)) + return FALSE; + } + } + case R_X86_64_32S: case R_X86_64_32: case R_X86_64_64: case R_X86_64_PC32: - case R_X86_64_PC32_BND: case R_X86_64_PC64: case R_X86_64_PLT32: - case R_X86_64_PLT32_BND: case R_X86_64_GOTPCREL: case R_X86_64_GOTPCREL64: if (htab->elf.dynobj == NULL) @@ -2310,11 +2436,28 @@ elf_x86_64_allocate_dynrelocs (struct elf_link_hash_entry *h, void * inf) here if it is defined and referenced in a non-shared object. */ if (h->type == STT_GNU_IFUNC && h->def_regular) - return _bfd_elf_allocate_ifunc_dyn_relocs (info, h, - &eh->dyn_relocs, - plt_entry_size, - plt_entry_size, - GOT_ENTRY_SIZE); + { + if (_bfd_elf_allocate_ifunc_dyn_relocs (info, h, + &eh->dyn_relocs, + plt_entry_size, + plt_entry_size, + GOT_ENTRY_SIZE)) + { + asection *s = htab->plt_bnd; + if (h->plt.offset != (bfd_vma) -1 && s != NULL) + { + /* Use the .plt.bnd section if it is created. */ + eh->plt_bnd.offset = s->size; + + /* Make room for this entry in the .plt.bnd section. 
*/ + s->size += sizeof (elf_x86_64_legacy_plt2_entry); + } + + return TRUE; + } + else + return FALSE; + } else if (htab->elf.dynamic_sections_created && h->plt.refcount > 0) { @@ -2331,6 +2474,7 @@ elf_x86_64_allocate_dynrelocs (struct elf_link_hash_entry *h, void * inf) || WILL_CALL_FINISH_DYNAMIC_SYMBOL (1, 0, h)) { asection *s = htab->elf.splt; + asection *bnd_s = htab->plt_bnd; /* If this is the first .plt entry, make room for the special first entry. */ @@ -2338,6 +2482,8 @@ elf_x86_64_allocate_dynrelocs (struct elf_link_hash_entry *h, void * inf) s->size = plt_entry_size; h->plt.offset = s->size; + if (bnd_s) + eh->plt_bnd.offset = bnd_s->size; /* If this symbol is not defined in a regular file, and we are not generating a shared library, then set the symbol to this @@ -2347,12 +2493,28 @@ elf_x86_64_allocate_dynrelocs (struct elf_link_hash_entry *h, void * inf) if (! info->shared && !h->def_regular) { - h->root.u.def.section = s; - h->root.u.def.value = h->plt.offset; + if (bnd_s) + { + /* We need to make a call to the entry of the second + PLT instead of regular PLT entry. */ + h->root.u.def.section = bnd_s; + h->root.u.def.value = eh->plt_bnd.offset; + } + else + { + h->root.u.def.section = s; + h->root.u.def.value = h->plt.offset; + } } /* Make room for this entry. */ s->size += plt_entry_size; + if (bnd_s) + { + BFD_ASSERT (sizeof (elf_x86_64_bnd_plt2_entry) + == sizeof (elf_x86_64_legacy_plt2_entry)); + bnd_s->size += sizeof (elf_x86_64_legacy_plt2_entry); + } /* We also need to make an entry in the .got.plt section, which will be placed in the .got section by the linker script. 
*/ @@ -2976,6 +3138,7 @@ elf_x86_64_size_dynamic_sections (bfd *output_bfd, || s == htab->elf.sgotplt || s == htab->elf.iplt || s == htab->elf.igotplt + || s == htab->plt_bnd || s == htab->plt_eh_frame || s == htab->sdynbss) { @@ -3254,14 +3417,15 @@ elf_x86_64_relocate_section (bfd *output_bfd, reloc_howto_type *howto; unsigned long r_symndx; struct elf_link_hash_entry *h; + struct elf_x86_64_link_hash_entry *eh; Elf_Internal_Sym *sym; asection *sec; - bfd_vma off, offplt; + bfd_vma off, offplt, plt_offset; bfd_vma relocation; bfd_boolean unresolved_reloc; bfd_reloc_status_type r; int tls_type; - asection *base_got; + asection *base_got, *resolved_plt; bfd_vma st_size; r_type = ELF32_R_TYPE (rel->r_info); @@ -3349,13 +3513,14 @@ elf_x86_64_relocate_section (bfd *output_bfd, } } + eh = (struct elf_x86_64_link_hash_entry *) h; + /* Since STT_GNU_IFUNC symbol must go through PLT, we handle it here if it is defined in a non-shared object. */ if (h != NULL && h->type == STT_GNU_IFUNC && h->def_regular) { - asection *plt; bfd_vma plt_index; const char *name; @@ -3364,9 +3529,27 @@ elf_x86_64_relocate_section (bfd *output_bfd, abort (); /* STT_GNU_IFUNC symbol must go through PLT. */ - plt = htab->elf.splt ? 
htab->elf.splt : htab->elf.iplt; - relocation = (plt->output_section->vma - + plt->output_offset + h->plt.offset); + if (htab->elf.splt != NULL) + { + if (htab->plt_bnd != NULL) + { + resolved_plt = htab->plt_bnd; + plt_offset = eh->plt_bnd.offset; + } + else + { + resolved_plt = htab->elf.splt; + plt_offset = h->plt.offset; + } + } + else + { + resolved_plt = htab->elf.iplt; + plt_offset = h->plt.offset; + } + + relocation = (resolved_plt->output_section->vma + + resolved_plt->output_offset + plt_offset); switch (r_type) { @@ -3695,9 +3878,20 @@ elf_x86_64_relocate_section (bfd *output_bfd, && h->plt.offset != (bfd_vma) -1 && htab->elf.splt != NULL) { - relocation = (htab->elf.splt->output_section->vma - + htab->elf.splt->output_offset - + h->plt.offset); + if (htab->plt_bnd != NULL) + { + resolved_plt = htab->plt_bnd; + plt_offset = eh->plt_bnd.offset; + } + else + { + resolved_plt = htab->elf.splt; + plt_offset = h->plt.offset; + } + + relocation = (resolved_plt->output_section->vma + + resolved_plt->output_offset + + plt_offset); unresolved_reloc = FALSE; } @@ -3724,9 +3918,20 @@ elf_x86_64_relocate_section (bfd *output_bfd, break; } - relocation = (htab->elf.splt->output_section->vma - + htab->elf.splt->output_offset - + h->plt.offset); + if (htab->plt_bnd != NULL) + { + resolved_plt = htab->plt_bnd; + plt_offset = eh->plt_bnd.offset; + } + else + { + resolved_plt = htab->elf.splt; + plt_offset = h->plt.offset; + } + + relocation = (resolved_plt->output_section->vma + + resolved_plt->output_offset + + plt_offset); unresolved_reloc = FALSE; break; @@ -4524,20 +4729,28 @@ elf_x86_64_finish_dynamic_symbol (bfd *output_bfd, Elf_Internal_Sym *sym ATTRIBUTE_UNUSED) { struct elf_x86_64_link_hash_table *htab; - const struct elf_x86_64_backend_data *const abed - = get_elf_x86_64_backend_data (output_bfd); + const struct elf_x86_64_backend_data *abed; + bfd_boolean use_plt_bnd; htab = elf_x86_64_hash_table (info); if (htab == NULL) return FALSE; + /* Use MPX backend 
data in case of BND relocation. Use .plt_bnd + section only if there is .plt section. */ + use_plt_bnd = htab->elf.splt != NULL && htab->plt_bnd != NULL; + abed = (use_plt_bnd + ? &elf_x86_64_bnd_arch_bed + : get_elf_x86_64_backend_data (output_bfd)); + if (h->plt.offset != (bfd_vma) -1) { bfd_vma plt_index; - bfd_vma got_offset; + bfd_vma got_offset, plt_offset, plt_plt_offset, plt_got_offset; + bfd_vma plt_plt_insn_end, plt_got_insn_size; Elf_Internal_Rela rela; bfd_byte *loc; - asection *plt, *gotplt, *relplt; + asection *plt, *gotplt, *relplt, *resolved_plt; const struct elf_backend_data *bed; /* When building a static executable, use .iplt, .igot.plt and @@ -4588,9 +4801,56 @@ elf_x86_64_finish_dynamic_symbol (bfd *output_bfd, got_offset = got_offset * GOT_ENTRY_SIZE; } - /* Fill in the entry in the procedure linkage table. */ - memcpy (plt->contents + h->plt.offset, abed->plt_entry, - abed->plt_entry_size); + plt_plt_insn_end = abed->plt_plt_insn_end; + plt_plt_offset = abed->plt_plt_offset; + plt_got_insn_size = abed->plt_got_insn_size; + plt_got_offset = abed->plt_got_offset; + if (use_plt_bnd) + { + /* Use the second PLT with BND relocations. */ + const bfd_byte *plt_entry, *plt2_entry; + struct elf_x86_64_link_hash_entry *eh + = (struct elf_x86_64_link_hash_entry *) h; + + if (eh->has_bnd_reloc) + { + plt_entry = elf_x86_64_bnd_plt_entry; + plt2_entry = elf_x86_64_bnd_plt2_entry; + } + else + { + plt_entry = elf_x86_64_legacy_plt_entry; + plt2_entry = elf_x86_64_legacy_plt2_entry; + + /* Subtract 1 since there is no BND prefix. */ + plt_plt_insn_end -= 1; + plt_plt_offset -= 1; + plt_got_insn_size -= 1; + plt_got_offset -= 1; + } + + BFD_ASSERT (sizeof (elf_x86_64_bnd_plt_entry) + == sizeof (elf_x86_64_legacy_plt_entry)); + + /* Fill in the entry in the procedure linkage table. */ + memcpy (plt->contents + h->plt.offset, + plt_entry, sizeof (elf_x86_64_legacy_plt_entry)); + /* Fill in the entry in the second PLT. 
*/ + memcpy (htab->plt_bnd->contents + eh->plt_bnd.offset, + plt2_entry, sizeof (elf_x86_64_legacy_plt2_entry)); + + resolved_plt = htab->plt_bnd; + plt_offset = eh->plt_bnd.offset; + } + else + { + /* Fill in the entry in the procedure linkage table. */ + memcpy (plt->contents + h->plt.offset, abed->plt_entry, + abed->plt_entry_size); + + resolved_plt = plt; + plt_offset = h->plt.offset; + } /* Insert the relocation positions of the plt section. */ @@ -4600,11 +4860,11 @@ elf_x86_64_finish_dynamic_symbol (bfd *output_bfd, (gotplt->output_section->vma + gotplt->output_offset + got_offset - - plt->output_section->vma - - plt->output_offset - - h->plt.offset - - abed->plt_got_insn_size), - plt->contents + h->plt.offset + abed->plt_got_offset); + - resolved_plt->output_section->vma + - resolved_plt->output_offset + - plt_offset + - plt_got_insn_size), + resolved_plt->contents + plt_offset + plt_got_offset); /* Fill in the entry in the global offset table, initially this points to the second part of the PLT entry. */ @@ -4646,8 +4906,8 @@ elf_x86_64_finish_dynamic_symbol (bfd *output_bfd, bfd_put_32 (output_bfd, plt_index, plt->contents + h->plt.offset + abed->plt_reloc_offset); /* Put offset for jmp .PLT0. */ - bfd_put_32 (output_bfd, - (h->plt.offset + abed->plt_plt_insn_end), - plt->contents + h->plt.offset + abed->plt_plt_offset); + bfd_put_32 (output_bfd, - (h->plt.offset + plt_plt_insn_end), + plt->contents + h->plt.offset + plt_plt_offset); } bed = get_elf_backend_data (output_bfd); @@ -4809,13 +5069,18 @@ elf_x86_64_finish_dynamic_sections (bfd *output_bfd, struct elf_x86_64_link_hash_table *htab; bfd *dynobj; asection *sdyn; - const struct elf_x86_64_backend_data *const abed - = get_elf_x86_64_backend_data (output_bfd); + const struct elf_x86_64_backend_data *abed; htab = elf_x86_64_hash_table (info); if (htab == NULL) return FALSE; + /* Use MPX backend data in case of BND relocation. Use .plt_bnd + section only if there is .plt section. 
*/ + abed = (htab->elf.splt != NULL && htab->plt_bnd != NULL + ? &elf_x86_64_bnd_arch_bed + : get_elf_x86_64_backend_data (output_bfd)); + dynobj = htab->elf.dynobj; sdyn = bfd_get_linker_section (dynobj, ".dynamic"); @@ -4956,6 +5221,10 @@ elf_x86_64_finish_dynamic_sections (bfd *output_bfd, } } + if (htab->plt_bnd != NULL) + elf_section_data (htab->plt_bnd->output_section) + ->this_hdr.sh_entsize = sizeof (elf_x86_64_bnd_plt2_entry); + if (htab->elf.sgotplt) { if (bfd_is_abs_section (htab->elf.sgotplt->output_section)) @@ -5034,6 +5303,113 @@ elf_x86_64_plt_sym_val (bfd_vma i, const asection *plt, return plt->vma + (i + 1) * GET_PLT_ENTRY_SIZE (plt->owner); } +/* Similar to _bfd_elf_get_synthetic_symtab, with .plt.bnd section + support. */ + +static long +elf_x86_64_get_synthetic_symtab (bfd *abfd, + long symcount, + asymbol **syms, + long dynsymcount, + asymbol **dynsyms, + asymbol **ret) +{ + const struct elf_backend_data *bed = get_elf_backend_data (abfd); + asection *relplt; + asymbol *s; + bfd_boolean (*slurp_relocs) (bfd *, asection *, asymbol **, bfd_boolean); + arelent *p; + long count, i, n; + size_t size; + Elf_Internal_Shdr *hdr; + char *names; + asection *plt; + bfd_vma addr; + + plt = bfd_get_section_by_name (abfd, ".plt.bnd"); + /* Use the generic ELF version if there is no .plt.bnd section. */ + if (plt == NULL) + return _bfd_elf_get_synthetic_symtab (abfd, symcount, syms, + dynsymcount, dynsyms, ret); + + *ret = NULL; + + if ((abfd->flags & (DYNAMIC | EXEC_P)) == 0) + return 0; + + if (dynsymcount <= 0) + return 0; + + relplt = bfd_get_section_by_name (abfd, ".rela.plt"); + if (relplt == NULL) + return 0; + + hdr = &elf_section_data (relplt)->this_hdr; + if (hdr->sh_link != elf_dynsymtab (abfd) + || (hdr->sh_type != SHT_REL && hdr->sh_type != SHT_RELA)) + return 0; + + slurp_relocs = get_elf_backend_data (abfd)->s->slurp_reloc_table; + if (! 
(*slurp_relocs) (abfd, relplt, dynsyms, TRUE)) + return -1; + + count = relplt->size / hdr->sh_entsize; + size = count * sizeof (asymbol); + p = relplt->relocation; + for (i = 0; i < count; i++, p += bed->s->int_rels_per_ext_rel) + { + size += strlen ((*p->sym_ptr_ptr)->name) + sizeof ("@plt"); + if (p->addend != 0) + size += sizeof ("+0x") - 1 + 8 + 8; + } + + s = *ret = (asymbol *) bfd_malloc (size); + if (s == NULL) + return -1; + + names = (char *) (s + count); + p = relplt->relocation; + n = 0; + addr = 0; + for (i = 0; i < count; i++, p++) + { + size_t len; + + *s = **p->sym_ptr_ptr; + /* Undefined syms won't have BSF_LOCAL or BSF_GLOBAL set. Since + we are defining a symbol, ensure one of them is set. */ + if ((s->flags & BSF_LOCAL) == 0) + s->flags |= BSF_GLOBAL; + s->flags |= BSF_SYNTHETIC; + s->section = plt; + s->value = addr; + s->name = names; + s->udata.p = NULL; + len = strlen ((*p->sym_ptr_ptr)->name); + memcpy (names, (*p->sym_ptr_ptr)->name, len); + names += len; + if (p->addend != 0) + { + char buf[30], *a; + + memcpy (names, "+0x", sizeof ("+0x") - 1); + names += sizeof ("+0x") - 1; + bfd_sprintf_vma (abfd, buf, p->addend); + for (a = buf; *a == '0'; ++a) + ; + len = strlen (a); + memcpy (names, a, len); + names += len; + } + memcpy (names, "@plt", sizeof ("@plt")); + names += sizeof ("@plt"); + ++s, ++n; + addr += sizeof (elf_x86_64_legacy_plt2_entry); + } + + return n; +} + /* Handle an x86-64 specific section when reading an object file. This is called when elfcode.h finds a section with an unknown type. 
*/ @@ -5294,6 +5670,7 @@ static const struct bfd_elf_special_section #define elf_backend_plt_sym_val elf_x86_64_plt_sym_val #define elf_backend_object_p elf64_x86_64_elf_object_p #define bfd_elf64_mkobject elf_x86_64_mkobject +#define bfd_elf64_get_synthetic_symtab elf_x86_64_get_synthetic_symtab #define elf_backend_section_from_shdr \ elf_x86_64_section_from_shdr @@ -5364,6 +5741,8 @@ static const struct bfd_elf_special_section #include "elf64-target.h" +#undef bfd_elf64_get_synthetic_symtab + /* Native Client support. */ static bfd_boolean |