author     Alexandre Oliva <aoliva@redhat.com>    2006-01-18 21:07:51 +0000
committer  Alexandre Oliva <aoliva@redhat.com>    2006-01-18 21:07:51 +0000
commit     67a4f2b710581acc83afecff55424af285ecbc28 (patch)
tree       2348b4780388dad65c840f222d372edc83a2088e /bfd
parent     dd942754f0afab07734deed09d168afbc9ffb597 (diff)
include/elf/ChangeLog:
Introduce TLS descriptors for i386 and x86_64.
* common.h (DT_TLSDESC_GOT, DT_TLSDESC_PLT): New.
* i386.h (R_386_TLS_GOTDESC, R_386_TLS_DESC_CALL, R_386_TLS_DESC):
New.
* x86-64.h (R_X86_64_GOTPC32_TLSDESC, R_X86_64_TLSDESC_CALL,
R_X86_64_TLSDESC): New.
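For orientation, the new relocations annotate the TLS-descriptor access
sequence itself. A sketch, based on the instruction patterns quoted in the
patch below (x stands for an arbitrary TLS variable; the registers follow
the patch's examples):

        # i386 general-dynamic access through a TLS descriptor
        leal    x@tlsdesc(%ebx), %eax   # R_386_TLS_GOTDESC on the displacement
        call    *x@tlscall(%eax)        # R_386_TLS_DESC_CALL on the call
        # %eax now holds the offset used to reach x from the thread pointer;
        # the descriptor slot reserved in .got.plt carries an R_386_TLS_DESC
        # dynamic relocation.

        # x86-64 equivalent
        leaq    x@tlsdesc(%rip), %rax   # R_X86_64_GOTPC32_TLSDESC
        call    *x@tlscall(%rax)        # R_X86_64_TLSDESC_CALL
        # the descriptor slot carries an R_X86_64_TLSDESC dynamic relocation.

DT_TLSDESC_PLT and DT_TLSDESC_GOT tell the dynamic linker where the lazy
resolver trampoline and the GOT word it jumps through were placed; the
x86-64 backend below emits both entries.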
bfd/ChangeLog:
Introduce TLS descriptors for i386 and x86_64.
* reloc.c (BFD_RELOC_386_TLS_GOTDESC, BFD_RELOC_386_TLS_DESC,
BFD_RELOC_386_TLS_DESC_CALL, BFD_RELOC_X86_64_GOTPC32_TLSDESC,
BFD_RELOC_X86_64_TLSDESC, BFD_RELOC_X86_64_TLSDESC_CALL): New.
* libbfd.h, bfd-in2.h: Rebuilt.
* elf32-i386.c (elf_howto_table): New relocations.
(R_386_tls): Adjust.
(elf_i386_reloc_type_lookup): Map new relocations.
(GOT_TLS_GDESC, GOT_TLS_GD_BOTH_P): New macros.
(GOT_TLS_GD_P, GOT_TLS_GDESC_P, GOT_TLS_GD_ANY_P): New macros.
(struct elf_i386_link_hash_entry): Add tlsdesc_got field.
(struct elf_i386_obj_tdata): Add local_tlsdesc_gotent field.
(elf_i386_local_tlsdesc_gotent): New macro.
(struct elf_i386_link_hash_table): Add sgotplt_jump_table_size.
(elf_i386_compute_jump_table_size): New macro.
(link_hash_newfunc): Initialize tlsdesc_got.
(elf_i386_link_hash_table_create): Set sgotplt_jump_table_size.
(elf_i386_tls_transition): Handle R_386_TLS_GOTDESC and
R_386_TLS_DESC_CALL.
(elf_i386_check_relocs): Likewise. Allocate space for
local_tlsdesc_gotent.
(elf_i386_gc_sweep_hook): Handle R_386_TLS_GOTDESC and
R_386_TLS_DESC_CALL.
(allocate_dynrelocs): Count function PLT relocations. Reserve
space for TLS descriptors and relocations.
(elf_i386_size_dynamic_sections): Reserve space for TLS
descriptors and relocations. Set up sgotplt_jump_table_size.
Don't zero reloc_count in srelplt.
(elf_i386_always_size_sections): New. Set up _TLS_MODULE_BASE_.
(elf_i386_relocate_section): Handle R_386_TLS_GOTDESC and
R_386_TLS_DESC_CALL.
(elf_i386_finish_dynamic_symbol): Use GOT_TLS_GD_ANY_P.
(elf_backend_always_size_sections): Define.
* elf64-x86-64.c (x86_64_elf_howto): Add R_X86_64_GOTPC32_TLSDESC,
R_X86_64_TLSDESC, R_X86_64_TLSDESC_CALL.
(R_X86_64_standard): Adjust.
(x86_64_reloc_map): Map new relocs.
(elf64_x86_64_rtype_to_howto): New, split out of...
(elf64_x86_64_info_to_howto): ... this function, and...
(elf64_x86_64_reloc_type_lookup): ... use it to map elf_reloc_val.
(GOT_TLS_GDESC, GOT_TLS_GD_BOTH_P): New macros.
(GOT_TLS_GD_P, GOT_TLS_GDESC_P, GOT_TLS_GD_ANY_P): New macros.
(struct elf64_x86_64_link_hash_entry): Add tlsdesc_got field.
(struct elf64_x86_64_obj_tdata): Add local_tlsdesc_gotent field.
(elf64_x86_64_local_tlsdesc_gotent): New macro.
(struct elf64_x86_64_link_hash_table): Add tlsdesc_plt,
tlsdesc_got and sgotplt_jump_table_size fields.
(elf64_x86_64_compute_jump_table_size): New macro.
(link_hash_newfunc): Initialize tlsdesc_got.
(elf64_x86_64_link_hash_table_create): Initialize new fields.
(elf64_x86_64_tls_transition): Handle R_X86_64_GOTPC32_TLSDESC and
R_X86_64_TLSDESC_CALL.
(elf64_x86_64_check_relocs): Likewise. Allocate space for
local_tlsdesc_gotent.
(elf64_x86_64_gc_sweep_hook): Handle R_X86_64_GOTPC32_TLSDESC and
R_X86_64_TLSDESC_CALL.
(allocate_dynrelocs): Count function PLT relocations. Reserve
space for TLS descriptors and relocations.
(elf64_x86_64_size_dynamic_sections): Reserve space for TLS
descriptors and relocations. Set up sgotplt_jump_table_size,
tlsdesc_plt and tlsdesc_got. Make room for them. Don't zero
reloc_count in srelplt. Add dynamic entries for DT_TLSDESC_PLT
and DT_TLSDESC_GOT.
(elf64_x86_64_always_size_sections): New. Set up
_TLS_MODULE_BASE_.
(elf64_x86_64_relocate_section): Handle R_X86_64_GOTPC32_TLSDESC and
R_X86_64_TLSDESC_CALL.
(elf64_x86_64_finish_dynamic_symbol): Use GOT_TLS_GD_ANY_P.
(elf64_x86_64_finish_dynamic_sections): Set DT_TLSDESC_PLT and
DT_TLSDESC_GOT. Set up TLS descriptor lazy resolver PLT entry.
(elf_backend_always_size_sections): Define.
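Concretely, the GD->LE and GD->IE transitions performed by the two
relocate_section routines above rewrite the descriptor sequence in place.
The pairs below restate the before/after comments from the patch (x is a
placeholder TLS symbol; which rewrite fires depends on whether the output
is an executable and whether the symbol binds locally):

        i386, GDesc -> LE:
          leal  x@tlsdesc(%ebx), %eax   becomes   leal  x@ntpoff, %eax
          call  *(%eax)                 becomes   nop; nop

        i386, GDesc -> IE:
          leal  x@tlsdesc(%ebx), %eax   becomes   movl  x@gotntpoff(%ebx), %eax
          call  *(%eax)                 becomes   nop; nop
          (or movl x@gottpoff(%ebx), %eax paired with negl %eax for the
          negated IE form)

        x86-64, GDesc -> LE:
          leaq  x@tlsdesc(%rip), %rax   becomes   movl  $x@tpoff, %rax
          call  *(%rax)                 becomes   nop; nop

        x86-64, GDesc -> IE:
          leaq  x@tlsdesc(%rip), %rax   becomes   movq  x@gottpoff(%rip), %rax
          call  *(%rax)                 becomes   nop; nop

When no transition applies, the sequence is left untouched and the reserved
.got.plt slot gets an R_386_TLS_DESC or R_X86_64_TLSDESC dynamic relocation
for the dynamic linker to resolve, eagerly or, on x86-64, through the lazy
resolver advertised by DT_TLSDESC_PLT.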
binutils/ChangeLog:
Introduce TLS descriptors for i386 and x86_64.
* readelf.c (get_dynamic_type): Handle DT_TLSDESC_GOT and
DT_TLSDESC_PLT.
gas/ChangeLog:
Introduce TLS descriptors for i386 and x86_64.
* config/tc-i386.c (tc_i386_fix_adjustable): Handle
BFD_RELOC_386_TLS_GOTDESC, BFD_RELOC_386_TLS_DESC_CALL,
BFD_RELOC_X86_64_GOTPC32_TLSDESC, BFD_RELOC_X86_64_TLSDESC_CALL.
(optimize_disp): Emit fix up for BFD_RELOC_386_TLS_DESC_CALL and
BFD_RELOC_X86_64_TLSDESC_CALL immediately, and clear the
displacement bits.
(build_modrm_byte): Set up zero modrm for TLS desc calls.
(lex_got): Handle @tlsdesc and @tlscall.
(md_apply_fix, tc_gen_reloc): Handle the new relocations.
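The two operators mark the descriptor load and the indirect call; on the
call, only the relocation matters, since the displacement is cleared and a
zero modrm is encoded, per the optimize_disp and build_modrm_byte changes
above. A minimal sketch, assuming the operand syntax handled by lex_got
(x is a hypothetical TLS symbol):

        call    *x@tlscall(%rax)        # x86-64: same bytes as "call *(%rax)",
                                        # plus a BFD_RELOC_X86_64_TLSDESC_CALL fixup
        call    *x@tlscall(%eax)        # i386: same bytes as "call *(%eax)",
                                        # plus a BFD_RELOC_386_TLS_DESC_CALL fixup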
ld/testsuite/ChangeLog:
Introduce TLS descriptors for i386 and x86_64.
* ld-i386/i386.exp: Run on x86_64-*-linux* and amd64-*-linux*.
Add new tests.
* ld-i386/pcrel16.d: Add -melf_i386.
* ld-i386/pcrel8.d: Likewise.
* ld-i386/tlsbindesc.dd: New.
* ld-i386/tlsbindesc.rd: New.
* ld-i386/tlsbindesc.s: New.
* ld-i386/tlsbindesc.sd: New.
* ld-i386/tlsbindesc.td: New.
* ld-i386/tlsdesc.dd: New.
* ld-i386/tlsdesc.rd: New.
* ld-i386/tlsdesc.s: New.
* ld-i386/tlsdesc.sd: New.
* ld-i386/tlsdesc.td: New.
* ld-i386/tlsgdesc.dd: New.
* ld-i386/tlsgdesc.rd: New.
* ld-i386/tlsgdesc.s: New.
* ld-x86-64/x86-64.exp: Run new tests.
* ld-x86-64/tlsbindesc.dd: New.
* ld-x86-64/tlsbindesc.rd: New.
* ld-x86-64/tlsbindesc.s: New.
* ld-x86-64/tlsbindesc.sd: New.
* ld-x86-64/tlsbindesc.td: New.
* ld-x86-64/tlsdesc.dd: New.
* ld-x86-64/tlsdesc.pd: New.
* ld-x86-64/tlsdesc.rd: New.
* ld-x86-64/tlsdesc.s: New.
* ld-x86-64/tlsdesc.sd: New.
* ld-x86-64/tlsdesc.td: New.
* ld-x86-64/tlsgdesc.dd: New.
* ld-x86-64/tlsgdesc.rd: New.
* ld-x86-64/tlsgdesc.s: New.
Diffstat (limited to 'bfd')
-rw-r--r--   bfd/ChangeLog        74
-rw-r--r--   bfd/bfd-in2.h         9
-rw-r--r--   bfd/elf32-i386.c    434
-rw-r--r--   bfd/elf64-x86-64.c  540
-rw-r--r--   bfd/libbfd.h          8
-rw-r--r--   bfd/reloc.c          14
6 files changed, 978 insertions, 101 deletions
diff --git a/bfd/ChangeLog b/bfd/ChangeLog index 3012ba7..fade6a7 100644 --- a/bfd/ChangeLog +++ b/bfd/ChangeLog @@ -1,3 +1,77 @@ +2006-01-18 Alexandre Oliva <aoliva@redhat.com> + + Introduce TLS descriptors for i386 and x86_64. + * reloc.c (BFD_RELOC_386_TLS_GOTDESC, BFD_RELOC_386_TLS_DESC, + BFD_RELOC_386_TLS_DESC_CALL, BFD_RELOC_X86_64_GOTPC32_TLSDESC, + BFD_RELOC_X86_64_TLSDESC, BFD_RELOC_X86_64_TLSDESC_CALL): New. + * libbfd.h, bfd-in2.h: Rebuilt. + * elf32-i386.c (elf_howto_table): New relocations. + (R_386_tls): Adjust. + (elf_i386_reloc_type_lookup): Map new relocations. + (GOT_TLS_GDESC, GOT_TLS_GD_BOTH_P): New macros. + (GOT_TLS_GD_P, GOT_TLS_GDESC_P, GOT_TLS_GD_ANY_P): New macros. + (struct elf_i386_link_hash_entry): Add tlsdesc_got field. + (struct elf_i386_obj_tdata): Add local_tlsdesc_gotent field. + (elf_i386_local_tlsdesc_gotent): New macro. + (struct elf_i386_link_hash_table): Add sgotplt_jump_table_size. + (elf_i386_compute_jump_table_size): New macro. + (link_hash_newfunc): Initialize tlsdesc_got. + (elf_i386_link_hash_table_create): Set sgotplt_jump_table_size. + (elf_i386_tls_transition): Handle R_386_TLS_GOTDESC and + R_386_TLS_DESC_CALL. + (elf_i386_check_relocs): Likewise. Allocate space for + local_tlsdesc_gotent. + (elf_i386_gc_sweep_hook): Handle R_386_TLS_GOTDESC and + R_386_TLS_DESC_CALL. + (allocate_dynrelocs): Count function PLT relocations. Reserve + space for TLS descriptors and relocations. + (elf_i386_size_dynamic_sections): Reserve space for TLS + descriptors and relocations. Set up sgotplt_jump_table_size. + Don't zero reloc_count in srelplt. + (elf_i386_always_size_sections): New. Set up _TLS_MODULE_BASE_. + (elf_i386_relocate_section): Handle R_386_TLS_GOTDESC and + R_386_TLS_DESC_CALL. + (elf_i386_finish_dynamic_symbol): Use GOT_TLS_GD_ANY_P. + (elf_backend_always_size_sections): Define. + * elf64-x86-64.c (x86_64_elf_howto): Add R_X86_64_GOTPC32_TLSDESC, + R_X86_64_TLSDESC, R_X86_64_TLSDESC_CALL. + (R_X86_64_standard): Adjust. + (x86_64_reloc_map): Map new relocs. + (elf64_x86_64_rtype_to_howto): New, split out of... + (elf64_x86_64_info_to_howto): ... this function, and... + (elf64_x86_64_reloc_type_lookup): ... use it to map elf_reloc_val. + (GOT_TLS_GDESC, GOT_TLS_GD_BOTH_P): New macros. + (GOT_TLS_GD_P, GOT_TLS_GDESC_P, GOT_TLS_GD_ANY_P): New macros. + (struct elf64_x86_64_link_hash_entry): Add tlsdesc_got field. + (struct elf64_x86_64_obj_tdata): Add local_tlsdesc_gotent field. + (elf64_x86_64_local_tlsdesc_gotent): New macro. + (struct elf64_x86_64_link_hash_table): Add tlsdesc_plt, + tlsdesc_got and sgotplt_jump_table_size fields. + (elf64_x86_64_compute_jump_table_size): New macro. + (link_hash_newfunc): Initialize tlsdesc_got. + (elf64_x86_64_link_hash_table_create): Initialize new fields. + (elf64_x86_64_tls_transition): Handle R_X86_64_GOTPC32_TLSDESC and + R_X86_64_TLSDESC_CALL. + (elf64_x86_64_check_relocs): Likewise. Allocate space for + local_tlsdesc_gotent. + (elf64_x86_64_gc_sweep_hook): Handle R_X86_64_GOTPC32_TLSDESC and + R_X86_64_TLSDESC_CALL. + (allocate_dynrelocs): Count function PLT relocations. Reserve + space for TLS descriptors and relocations. + (elf64_x86_64_size_dynamic_sections): Reserve space for TLS + descriptors and relocations. Set up sgotplt_jump_table_size, + tlsdesc_plt and tlsdesc_got. Make room for them. Don't zero + reloc_count in srelplt. Add dynamic entries for DT_TLSDESC_PLT + and DT_TLSDESC_GOT. + (elf64_x86_64_always_size_sections): New. Set up + _TLS_MODULE_BASE_. 
+ (elf64_x86_64_relocate_section): Handle R_386_TLS_GOTDESC and + R_386_TLS_DESC_CALL. + (elf64_x86_64_finish_dynamic_symbol): Use GOT_TLS_GD_ANY_P. + (elf64_x86_64_finish_dynamic_sections): Set DT_TLSDESC_PLT and + DT_TLSDESC_GOT. Set up TLS descriptor lazy resolver PLT entry. + (elf_backend_always_size_sections): Define. + 2006-01-17 H.J. Lu <hongjiu.lu@intel.com> PR binutils/2096 diff --git a/bfd/bfd-in2.h b/bfd/bfd-in2.h index 3cf72f3..cae4ede 100644 --- a/bfd/bfd-in2.h +++ b/bfd/bfd-in2.h @@ -8,7 +8,8 @@ /* Main header file for the bfd library -- portable access to object files. Copyright 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, - 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc. + 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 + Free Software Foundation, Inc. Contributed by Cygnus Support. @@ -2661,6 +2662,9 @@ in the instruction. */ BFD_RELOC_386_TLS_DTPMOD32, BFD_RELOC_386_TLS_DTPOFF32, BFD_RELOC_386_TLS_TPOFF32, + BFD_RELOC_386_TLS_GOTDESC, + BFD_RELOC_386_TLS_DESC_CALL, + BFD_RELOC_386_TLS_DESC, /* x86-64/elf relocations */ BFD_RELOC_X86_64_GOT32, @@ -2681,6 +2685,9 @@ in the instruction. */ BFD_RELOC_X86_64_TPOFF32, BFD_RELOC_X86_64_GOTOFF64, BFD_RELOC_X86_64_GOTPC32, + BFD_RELOC_X86_64_GOTPC32_TLSDESC, + BFD_RELOC_X86_64_TLSDESC_CALL, + BFD_RELOC_X86_64_TLSDESC, /* ns32k relocations */ BFD_RELOC_NS32K_IMM_8, diff --git a/bfd/elf32-i386.c b/bfd/elf32-i386.c index 061a9cb..b8e8790 100644 --- a/bfd/elf32-i386.c +++ b/bfd/elf32-i386.c @@ -1,6 +1,6 @@ /* Intel 80386/80486-specific support for 32-bit ELF Copyright 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, - 2003, 2004, 2005 Free Software Foundation, Inc. + 2003, 2004, 2005, 2006 Free Software Foundation, Inc. This file is part of BFD, the Binary File Descriptor library. @@ -126,9 +126,19 @@ static reloc_howto_type elf_howto_table[]= HOWTO(R_386_TLS_TPOFF32, 0, 2, 32, FALSE, 0, complain_overflow_bitfield, bfd_elf_generic_reloc, "R_386_TLS_TPOFF32", TRUE, 0xffffffff, 0xffffffff, FALSE), + EMPTY_HOWTO (38), + HOWTO(R_386_TLS_GOTDESC, 0, 2, 32, FALSE, 0, complain_overflow_bitfield, + bfd_elf_generic_reloc, "R_386_TLS_GOTDESC", + TRUE, 0xffffffff, 0xffffffff, FALSE), + HOWTO(R_386_TLS_DESC_CALL, 0, 0, 0, FALSE, 0, complain_overflow_dont, + bfd_elf_generic_reloc, "R_386_TLS_DESC_CALL", + FALSE, 0, 0, FALSE), + HOWTO(R_386_TLS_DESC, 0, 2, 32, FALSE, 0, complain_overflow_bitfield, + bfd_elf_generic_reloc, "R_386_TLS_DESC", + TRUE, 0xffffffff, 0xffffffff, FALSE), /* Another gap. */ -#define R_386_tls (R_386_TLS_TPOFF32 + 1 - R_386_tls_offset) +#define R_386_tls (R_386_TLS_DESC + 1 - R_386_tls_offset) #define R_386_vt_offset (R_386_GNU_VTINHERIT - R_386_tls) /* GNU extension to record C++ vtable hierarchy. 
*/ @@ -292,6 +302,18 @@ elf_i386_reloc_type_lookup (bfd *abfd ATTRIBUTE_UNUSED, TRACE ("BFD_RELOC_386_TLS_TPOFF32"); return &elf_howto_table[R_386_TLS_TPOFF32 - R_386_tls_offset]; + case BFD_RELOC_386_TLS_GOTDESC: + TRACE ("BFD_RELOC_386_TLS_GOTDESC"); + return &elf_howto_table[R_386_TLS_GOTDESC - R_386_tls_offset]; + + case BFD_RELOC_386_TLS_DESC_CALL: + TRACE ("BFD_RELOC_386_TLS_DESC_CALL"); + return &elf_howto_table[R_386_TLS_DESC_CALL - R_386_tls_offset]; + + case BFD_RELOC_386_TLS_DESC: + TRACE ("BFD_RELOC_386_TLS_DESC"); + return &elf_howto_table[R_386_TLS_DESC - R_386_tls_offset]; + case BFD_RELOC_VTABLE_INHERIT: TRACE ("BFD_RELOC_VTABLE_INHERIT"); return &elf_howto_table[R_386_GNU_VTINHERIT - R_386_vt_offset]; @@ -559,7 +581,20 @@ struct elf_i386_link_hash_entry #define GOT_TLS_IE_POS 5 #define GOT_TLS_IE_NEG 6 #define GOT_TLS_IE_BOTH 7 +#define GOT_TLS_GDESC 8 +#define GOT_TLS_GD_BOTH_P(type) \ + ((type) == (GOT_TLS_GD | GOT_TLS_GDESC)) +#define GOT_TLS_GD_P(type) \ + ((type) == GOT_TLS_GD || GOT_TLS_GD_BOTH_P (type)) +#define GOT_TLS_GDESC_P(type) \ + ((type) == GOT_TLS_GDESC || GOT_TLS_GD_BOTH_P (type)) +#define GOT_TLS_GD_ANY_P(type) \ + (GOT_TLS_GD_P (type) || GOT_TLS_GDESC_P (type)) unsigned char tls_type; + + /* Offset of the GOTPLT entry reserved for the TLS descriptor, + starting at the end of the jump table. */ + bfd_vma tlsdesc_got; }; #define elf_i386_hash_entry(ent) ((struct elf_i386_link_hash_entry *)(ent)) @@ -570,6 +605,9 @@ struct elf_i386_obj_tdata /* tls_type for each local got entry. */ char *local_got_tls_type; + + /* GOTPLT entries for TLS descriptors. */ + bfd_vma *local_tlsdesc_gotent; }; #define elf_i386_tdata(abfd) \ @@ -578,6 +616,9 @@ struct elf_i386_obj_tdata #define elf_i386_local_got_tls_type(abfd) \ (elf_i386_tdata (abfd)->local_got_tls_type) +#define elf_i386_local_tlsdesc_gotent(abfd) \ + (elf_i386_tdata (abfd)->local_tlsdesc_gotent) + static bfd_boolean elf_i386_mkobject (bfd *abfd) { @@ -620,6 +661,10 @@ struct elf_i386_link_hash_table bfd_vma offset; } tls_ldm_got; + /* The amount of space used by the reserved portion of the sgotplt + section, plus whatever space is used by the jump slots. */ + bfd_vma sgotplt_jump_table_size; + /* Small local sym to section mapping cache. */ struct sym_sec_cache sym_sec; }; @@ -629,6 +674,9 @@ struct elf_i386_link_hash_table #define elf_i386_hash_table(p) \ ((struct elf_i386_link_hash_table *) ((p)->hash)) +#define elf_i386_compute_jump_table_size(htab) \ + ((htab)->srelplt->reloc_count * 4) + /* Create an entry in an i386 ELF linker hash table. */ static struct bfd_hash_entry * @@ -655,6 +703,7 @@ link_hash_newfunc (struct bfd_hash_entry *entry, eh = (struct elf_i386_link_hash_entry *) entry; eh->dyn_relocs = NULL; eh->tls_type = GOT_UNKNOWN; + eh->tlsdesc_got = (bfd_vma) -1; } return entry; @@ -686,6 +735,7 @@ elf_i386_link_hash_table_create (bfd *abfd) ret->sdynbss = NULL; ret->srelbss = NULL; ret->tls_ldm_got.refcount = 0; + ret->sgotplt_jump_table_size = 0; ret->sym_sec.abfd = NULL; ret->is_vxworks = 0; ret->srelplt2 = NULL; @@ -845,6 +895,8 @@ elf_i386_tls_transition (struct bfd_link_info *info, int r_type, int is_local) switch (r_type) { case R_386_TLS_GD: + case R_386_TLS_GOTDESC: + case R_386_TLS_DESC_CALL: case R_386_TLS_IE_32: if (is_local) return R_386_TLS_LE_32; @@ -949,6 +1001,8 @@ elf_i386_check_relocs (bfd *abfd, case R_386_GOT32: case R_386_TLS_GD: + case R_386_TLS_GOTDESC: + case R_386_TLS_DESC_CALL: /* This symbol requires a global offset table entry. 
*/ { int tls_type, old_tls_type; @@ -958,6 +1012,9 @@ elf_i386_check_relocs (bfd *abfd, default: case R_386_GOT32: tls_type = GOT_NORMAL; break; case R_386_TLS_GD: tls_type = GOT_TLS_GD; break; + case R_386_TLS_GOTDESC: + case R_386_TLS_DESC_CALL: + tls_type = GOT_TLS_GDESC; break; case R_386_TLS_IE_32: if (ELF32_R_TYPE (rel->r_info) == r_type) tls_type = GOT_TLS_IE_NEG; @@ -987,13 +1044,16 @@ elf_i386_check_relocs (bfd *abfd, bfd_size_type size; size = symtab_hdr->sh_info; - size *= (sizeof (bfd_signed_vma) + sizeof(char)); + size *= (sizeof (bfd_signed_vma) + + sizeof (bfd_vma) + sizeof(char)); local_got_refcounts = bfd_zalloc (abfd, size); if (local_got_refcounts == NULL) return FALSE; elf_local_got_refcounts (abfd) = local_got_refcounts; + elf_i386_local_tlsdesc_gotent (abfd) + = (bfd_vma *) (local_got_refcounts + symtab_hdr->sh_info); elf_i386_local_got_tls_type (abfd) - = (char *) (local_got_refcounts + symtab_hdr->sh_info); + = (char *) (local_got_refcounts + 2 * symtab_hdr->sh_info); } local_got_refcounts[r_symndx] += 1; old_tls_type = elf_i386_local_got_tls_type (abfd) [r_symndx]; @@ -1004,11 +1064,14 @@ elf_i386_check_relocs (bfd *abfd, /* If a TLS symbol is accessed using IE at least once, there is no point to use dynamic model for it. */ else if (old_tls_type != tls_type && old_tls_type != GOT_UNKNOWN - && (old_tls_type != GOT_TLS_GD + && (! GOT_TLS_GD_ANY_P (old_tls_type) || (tls_type & GOT_TLS_IE) == 0)) { - if ((old_tls_type & GOT_TLS_IE) && tls_type == GOT_TLS_GD) + if ((old_tls_type & GOT_TLS_IE) && GOT_TLS_GD_ANY_P (tls_type)) tls_type = old_tls_type; + else if (GOT_TLS_GD_ANY_P (old_tls_type) + && GOT_TLS_GD_ANY_P (tls_type)) + tls_type |= old_tls_type; else { (*_bfd_error_handler) @@ -1316,6 +1379,8 @@ elf_i386_gc_sweep_hook (bfd *abfd, break; case R_386_TLS_GD: + case R_386_TLS_GOTDESC: + case R_386_TLS_DESC_CALL: case R_386_TLS_IE_32: case R_386_TLS_IE: case R_386_TLS_GOTIE: @@ -1579,6 +1644,7 @@ allocate_dynrelocs (struct elf_link_hash_entry *h, void *inf) /* We also need to make an entry in the .rel.plt section. */ htab->srelplt->size += sizeof (Elf32_External_Rel); + htab->srelplt->reloc_count++; if (htab->is_vxworks && !info->shared) { @@ -1612,6 +1678,9 @@ allocate_dynrelocs (struct elf_link_hash_entry *h, void *inf) h->needs_plt = 0; } + eh = (struct elf_i386_link_hash_entry *) h; + eh->tlsdesc_got = (bfd_vma) -1; + /* If R_386_TLS_{IE_32,IE,GOTIE} symbol is now local to the binary, make it a R_386_TLS_LE_32 requiring no TLS entry. */ if (h->got.refcount > 0 @@ -1635,11 +1704,22 @@ allocate_dynrelocs (struct elf_link_hash_entry *h, void *inf) } s = htab->sgot; - h->got.offset = s->size; - s->size += 4; - /* R_386_TLS_GD needs 2 consecutive GOT slots. */ - if (tls_type == GOT_TLS_GD || tls_type == GOT_TLS_IE_BOTH) - s->size += 4; + if (GOT_TLS_GDESC_P (tls_type)) + { + eh->tlsdesc_got = htab->sgotplt->size + - elf_i386_compute_jump_table_size (htab); + htab->sgotplt->size += 8; + h->got.offset = (bfd_vma) -2; + } + if (! GOT_TLS_GDESC_P (tls_type) + || GOT_TLS_GD_P (tls_type)) + { + h->got.offset = s->size; + s->size += 4; + /* R_386_TLS_GD needs 2 consecutive GOT slots. */ + if (GOT_TLS_GD_P (tls_type) || tls_type == GOT_TLS_IE_BOTH) + s->size += 4; + } dyn = htab->elf.dynamic_sections_created; /* R_386_TLS_IE_32 needs one dynamic relocation, R_386_TLS_IE resp. R_386_TLS_GOTIE needs one dynamic relocation, @@ -1648,21 +1728,23 @@ allocate_dynrelocs (struct elf_link_hash_entry *h, void *inf) global. 
*/ if (tls_type == GOT_TLS_IE_BOTH) htab->srelgot->size += 2 * sizeof (Elf32_External_Rel); - else if ((tls_type == GOT_TLS_GD && h->dynindx == -1) + else if ((GOT_TLS_GD_P (tls_type) && h->dynindx == -1) || (tls_type & GOT_TLS_IE)) htab->srelgot->size += sizeof (Elf32_External_Rel); - else if (tls_type == GOT_TLS_GD) + else if (GOT_TLS_GD_P (tls_type)) htab->srelgot->size += 2 * sizeof (Elf32_External_Rel); - else if ((ELF_ST_VISIBILITY (h->other) == STV_DEFAULT - || h->root.type != bfd_link_hash_undefweak) + else if (! GOT_TLS_GDESC_P (tls_type) + && (ELF_ST_VISIBILITY (h->other) == STV_DEFAULT + || h->root.type != bfd_link_hash_undefweak) && (info->shared || WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, 0, h))) htab->srelgot->size += sizeof (Elf32_External_Rel); + if (GOT_TLS_GDESC_P (tls_type)) + htab->srelplt->size += sizeof (Elf32_External_Rel); } else h->got.offset = (bfd_vma) -1; - eh = (struct elf_i386_link_hash_entry *) h; if (eh->dyn_relocs == NULL) return TRUE; @@ -1810,6 +1892,7 @@ elf_i386_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, bfd_signed_vma *local_got; bfd_signed_vma *end_local_got; char *local_tls_type; + bfd_vma *local_tlsdesc_gotent; bfd_size_type locsymcount; Elf_Internal_Shdr *symtab_hdr; asection *srel; @@ -1852,25 +1935,42 @@ elf_i386_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, locsymcount = symtab_hdr->sh_info; end_local_got = local_got + locsymcount; local_tls_type = elf_i386_local_got_tls_type (ibfd); + local_tlsdesc_gotent = elf_i386_local_tlsdesc_gotent (ibfd); s = htab->sgot; srel = htab->srelgot; - for (; local_got < end_local_got; ++local_got, ++local_tls_type) + for (; local_got < end_local_got; + ++local_got, ++local_tls_type, ++local_tlsdesc_gotent) { + *local_tlsdesc_gotent = (bfd_vma) -1; if (*local_got > 0) { - *local_got = s->size; - s->size += 4; - if (*local_tls_type == GOT_TLS_GD - || *local_tls_type == GOT_TLS_IE_BOTH) - s->size += 4; + if (GOT_TLS_GDESC_P (*local_tls_type)) + { + *local_tlsdesc_gotent = htab->sgotplt->size + - elf_i386_compute_jump_table_size (htab); + htab->sgotplt->size += 8; + *local_got = (bfd_vma) -2; + } + if (! GOT_TLS_GDESC_P (*local_tls_type) + || GOT_TLS_GD_P (*local_tls_type)) + { + *local_got = s->size; + s->size += 4; + if (GOT_TLS_GD_P (*local_tls_type) + || *local_tls_type == GOT_TLS_IE_BOTH) + s->size += 4; + } if (info->shared - || *local_tls_type == GOT_TLS_GD + || GOT_TLS_GD_ANY_P (*local_tls_type) || (*local_tls_type & GOT_TLS_IE)) { if (*local_tls_type == GOT_TLS_IE_BOTH) srel->size += 2 * sizeof (Elf32_External_Rel); - else + else if (GOT_TLS_GD_P (*local_tls_type) + || ! GOT_TLS_GDESC_P (*local_tls_type)) srel->size += sizeof (Elf32_External_Rel); + if (GOT_TLS_GDESC_P (*local_tls_type)) + htab->srelplt->size += sizeof (Elf32_External_Rel); } } else @@ -1914,6 +2014,14 @@ elf_i386_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, sym dynamic relocs. */ elf_link_hash_traverse (&htab->elf, allocate_dynrelocs, (PTR) info); + /* For every jump slot reserved in the sgotplt, reloc_count is + incremented. However, when we reserve space for TLS descriptors, + it's not incremented, so in order to compute the space reserved + for them, it suffices to multiply the reloc count by the jump + slot size. */ + if (htab->srelplt) + htab->sgotplt_jump_table_size = htab->srelplt->reloc_count * 4; + /* We now have determined the sizes of the various dynamic sections. Allocate memory for them. 
*/ relocs = FALSE; @@ -1945,7 +2053,8 @@ elf_i386_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, /* We use the reloc_count field as a counter if we need to copy relocs into the output file. */ - s->reloc_count = 0; + if (s != htab->srelplt) + s->reloc_count = 0; } else { @@ -2032,6 +2141,41 @@ elf_i386_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, return TRUE; } +static bfd_boolean +elf_i386_always_size_sections (bfd *output_bfd, + struct bfd_link_info *info) +{ + asection *tls_sec = elf_hash_table (info)->tls_sec; + + if (tls_sec) + { + struct elf_link_hash_entry *tlsbase; + + tlsbase = elf_link_hash_lookup (elf_hash_table (info), + "_TLS_MODULE_BASE_", + FALSE, FALSE, FALSE); + + if (tlsbase && tlsbase->type == STT_TLS) + { + struct bfd_link_hash_entry *bh = NULL; + const struct elf_backend_data *bed + = get_elf_backend_data (output_bfd); + + if (!(_bfd_generic_link_add_one_symbol + (info, output_bfd, "_TLS_MODULE_BASE_", BSF_LOCAL, + tls_sec, 0, NULL, FALSE, + bed->collect, &bh))) + return FALSE; + tlsbase = (struct elf_link_hash_entry *)bh; + tlsbase->def_regular = 1; + tlsbase->other = STV_HIDDEN; + (*bed->elf_backend_hide_symbol) (info, tlsbase, TRUE); + } + } + + return TRUE; +} + /* Set the correct type for an x86 ELF section. We do this by the section name, which is a hack, but ought to work. */ @@ -2109,6 +2253,7 @@ elf_i386_relocate_section (bfd *output_bfd, Elf_Internal_Shdr *symtab_hdr; struct elf_link_hash_entry **sym_hashes; bfd_vma *local_got_offsets; + bfd_vma *local_tlsdesc_gotents; Elf_Internal_Rela *rel; Elf_Internal_Rela *relend; @@ -2116,6 +2261,7 @@ elf_i386_relocate_section (bfd *output_bfd, symtab_hdr = &elf_tdata (input_bfd)->symtab_hdr; sym_hashes = elf_sym_hashes (input_bfd); local_got_offsets = elf_local_got_offsets (input_bfd); + local_tlsdesc_gotents = elf_i386_local_tlsdesc_gotent (input_bfd); rel = relocs; relend = relocs + input_section->reloc_count; @@ -2127,7 +2273,7 @@ elf_i386_relocate_section (bfd *output_bfd, struct elf_link_hash_entry *h; Elf_Internal_Sym *sym; asection *sec; - bfd_vma off; + bfd_vma off, offplt; bfd_vma relocation; bfd_boolean unresolved_reloc; bfd_reloc_status_type r; @@ -2549,6 +2695,8 @@ elf_i386_relocate_section (bfd *output_bfd, /* Fall through */ case R_386_TLS_GD: + case R_386_TLS_GOTDESC: + case R_386_TLS_DESC_CALL: case R_386_TLS_IE_32: case R_386_TLS_GOTIE: r_type = elf_i386_tls_transition (info, r_type, h == NULL); @@ -2563,7 +2711,9 @@ elf_i386_relocate_section (bfd *output_bfd, } if (tls_type == GOT_TLS_IE) tls_type = GOT_TLS_IE_NEG; - if (r_type == R_386_TLS_GD) + if (r_type == R_386_TLS_GD + || r_type == R_386_TLS_GOTDESC + || r_type == R_386_TLS_DESC_CALL) { if (tls_type == GOT_TLS_IE_POS) r_type = R_386_TLS_GOTIE; @@ -2637,6 +2787,63 @@ elf_i386_relocate_section (bfd *output_bfd, rel++; continue; } + else if (ELF32_R_TYPE (rel->r_info) == R_386_TLS_GOTDESC) + { + /* GDesc -> LE transition. + It's originally something like: + leal x@tlsdesc(%ebx), %eax + + leal x@ntpoff, %eax + + Registers other than %eax may be set up here. */ + + unsigned int val, type; + bfd_vma roff; + + /* First, make sure it's a leal adding ebx to a + 32-bit offset into any register, although it's + probably almost always going to be eax. 
*/ + roff = rel->r_offset; + BFD_ASSERT (roff >= 2); + type = bfd_get_8 (input_bfd, contents + roff - 2); + BFD_ASSERT (type == 0x8d); + val = bfd_get_8 (input_bfd, contents + roff - 1); + BFD_ASSERT ((val & 0xc7) == 0x83); + BFD_ASSERT (roff + 4 <= input_section->size); + + /* Now modify the instruction as appropriate. */ + /* aoliva FIXME: remove the above and xor the byte + below with 0x86. */ + bfd_put_8 (output_bfd, val ^ 0x86, + contents + roff - 1); + bfd_put_32 (output_bfd, -tpoff (info, relocation), + contents + roff); + continue; + } + else if (ELF32_R_TYPE (rel->r_info) == R_386_TLS_DESC_CALL) + { + /* GDesc -> LE transition. + It's originally: + call *(%eax) + Turn it into: + nop; nop */ + + unsigned int val, type; + bfd_vma roff; + + /* First, make sure it's a call *(%eax). */ + roff = rel->r_offset; + BFD_ASSERT (roff + 2 <= input_section->size); + type = bfd_get_8 (input_bfd, contents + roff); + BFD_ASSERT (type == 0xff); + val = bfd_get_8 (input_bfd, contents + roff + 1); + BFD_ASSERT (val == 0x10); + + /* Now modify the instruction as appropriate. */ + bfd_put_8 (output_bfd, 0x90, contents + roff); + bfd_put_8 (output_bfd, 0x90, contents + roff + 1); + continue; + } else if (ELF32_R_TYPE (rel->r_info) == R_386_TLS_IE) { unsigned int val, type; @@ -2751,13 +2958,17 @@ elf_i386_relocate_section (bfd *output_bfd, abort (); if (h != NULL) - off = h->got.offset; + { + off = h->got.offset; + offplt = elf_i386_hash_entry (h)->tlsdesc_got; + } else { if (local_got_offsets == NULL) abort (); off = local_got_offsets[r_symndx]; + offplt = local_tlsdesc_gotents[r_symndx]; } if ((off & 1) != 0) @@ -2767,35 +2978,77 @@ elf_i386_relocate_section (bfd *output_bfd, Elf_Internal_Rela outrel; bfd_byte *loc; int dr_type, indx; + asection *sreloc; if (htab->srelgot == NULL) abort (); + indx = h && h->dynindx != -1 ? h->dynindx : 0; + + if (GOT_TLS_GDESC_P (tls_type)) + { + outrel.r_info = ELF32_R_INFO (indx, R_386_TLS_DESC); + BFD_ASSERT (htab->sgotplt_jump_table_size + offplt + 8 + <= htab->sgotplt->size); + outrel.r_offset = (htab->sgotplt->output_section->vma + + htab->sgotplt->output_offset + + offplt + + htab->sgotplt_jump_table_size); + sreloc = htab->srelplt; + loc = sreloc->contents; + loc += sreloc->reloc_count++ + * sizeof (Elf32_External_Rel); + BFD_ASSERT (loc + sizeof (Elf32_External_Rel) + <= sreloc->contents + sreloc->size); + bfd_elf32_swap_reloc_out (output_bfd, &outrel, loc); + if (indx == 0) + { + BFD_ASSERT (! unresolved_reloc); + bfd_put_32 (output_bfd, + relocation - dtpoff_base (info), + htab->sgotplt->contents + offplt + + htab->sgotplt_jump_table_size + 4); + } + else + { + bfd_put_32 (output_bfd, 0, + htab->sgotplt->contents + offplt + + htab->sgotplt_jump_table_size + 4); + } + } + + sreloc = htab->srelgot; + outrel.r_offset = (htab->sgot->output_section->vma + htab->sgot->output_offset + off); - indx = h && h->dynindx != -1 ? 
h->dynindx : 0; - if (r_type == R_386_TLS_GD) + if (GOT_TLS_GD_P (tls_type)) dr_type = R_386_TLS_DTPMOD32; + else if (GOT_TLS_GDESC_P (tls_type)) + goto dr_done; else if (tls_type == GOT_TLS_IE_POS) dr_type = R_386_TLS_TPOFF; else dr_type = R_386_TLS_TPOFF32; + if (dr_type == R_386_TLS_TPOFF && indx == 0) bfd_put_32 (output_bfd, relocation - dtpoff_base (info), htab->sgot->contents + off); else if (dr_type == R_386_TLS_TPOFF32 && indx == 0) bfd_put_32 (output_bfd, dtpoff_base (info) - relocation, htab->sgot->contents + off); - else + else if (dr_type != R_386_TLS_DESC) bfd_put_32 (output_bfd, 0, htab->sgot->contents + off); outrel.r_info = ELF32_R_INFO (indx, dr_type); - loc = htab->srelgot->contents; - loc += htab->srelgot->reloc_count++ * sizeof (Elf32_External_Rel); + + loc = sreloc->contents; + loc += sreloc->reloc_count++ * sizeof (Elf32_External_Rel); + BFD_ASSERT (loc + sizeof (Elf32_External_Rel) + <= sreloc->contents + sreloc->size); bfd_elf32_swap_reloc_out (output_bfd, &outrel, loc); - if (r_type == R_386_TLS_GD) + if (GOT_TLS_GD_P (tls_type)) { if (indx == 0) { @@ -2811,8 +3064,10 @@ elf_i386_relocate_section (bfd *output_bfd, outrel.r_info = ELF32_R_INFO (indx, R_386_TLS_DTPOFF32); outrel.r_offset += 4; - htab->srelgot->reloc_count++; + sreloc->reloc_count++; loc += sizeof (Elf32_External_Rel); + BFD_ASSERT (loc + sizeof (Elf32_External_Rel) + <= sreloc->contents + sreloc->size); bfd_elf32_swap_reloc_out (output_bfd, &outrel, loc); } } @@ -2823,25 +3078,33 @@ elf_i386_relocate_section (bfd *output_bfd, htab->sgot->contents + off + 4); outrel.r_info = ELF32_R_INFO (indx, R_386_TLS_TPOFF); outrel.r_offset += 4; - htab->srelgot->reloc_count++; + sreloc->reloc_count++; loc += sizeof (Elf32_External_Rel); bfd_elf32_swap_reloc_out (output_bfd, &outrel, loc); } + dr_done: if (h != NULL) h->got.offset |= 1; else local_got_offsets[r_symndx] |= 1; } - if (off >= (bfd_vma) -2) + if (off >= (bfd_vma) -2 + && ! GOT_TLS_GDESC_P (tls_type)) abort (); - if (r_type == ELF32_R_TYPE (rel->r_info)) + if (r_type == R_386_TLS_GOTDESC + || r_type == R_386_TLS_DESC_CALL) + { + relocation = htab->sgotplt_jump_table_size + offplt; + unresolved_reloc = FALSE; + } + else if (r_type == ELF32_R_TYPE (rel->r_info)) { bfd_vma g_o_t = htab->sgotplt->output_section->vma + htab->sgotplt->output_offset; relocation = htab->sgot->output_section->vma - + htab->sgot->output_offset + off - g_o_t; + + htab->sgot->output_offset + off - g_o_t; if ((r_type == R_386_TLS_IE || r_type == R_386_TLS_GOTIE) && tls_type == GOT_TLS_IE_BOTH) relocation += 4; @@ -2849,7 +3112,7 @@ elf_i386_relocate_section (bfd *output_bfd, relocation += g_o_t; unresolved_reloc = FALSE; } - else + else if (ELF32_R_TYPE (rel->r_info) == R_386_TLS_GD) { unsigned int val, type; bfd_vma roff; @@ -2913,6 +3176,94 @@ elf_i386_relocate_section (bfd *output_bfd, rel++; continue; } + else if (ELF32_R_TYPE (rel->r_info) == R_386_TLS_GOTDESC) + { + /* GDesc -> IE transition. + It's originally something like: + leal x@tlsdesc(%ebx), %eax + + Change it to: + movl x@gotntpoff(%ebx), %eax # before nop; nop + or: + movl x@gottpoff(%ebx), %eax # before negl %eax + + Registers other than %eax may be set up here. */ + + unsigned int val, type; + bfd_vma roff; + + /* First, make sure it's a leal adding ebx to a 32-bit + offset into any register, although it's probably + almost always going to be eax. 
*/ + roff = rel->r_offset; + BFD_ASSERT (roff >= 2); + type = bfd_get_8 (input_bfd, contents + roff - 2); + BFD_ASSERT (type == 0x8d); + val = bfd_get_8 (input_bfd, contents + roff - 1); + BFD_ASSERT ((val & 0xc7) == 0x83); + BFD_ASSERT (roff + 4 <= input_section->size); + + /* Now modify the instruction as appropriate. */ + /* To turn a leal into a movl in the form we use it, it + suffices to change the first byte from 0x8d to 0x8b. + aoliva FIXME: should we decide to keep the leal, all + we have to do is remove the statement below, and + adjust the relaxation of R_386_TLS_DESC_CALL. */ + bfd_put_8 (output_bfd, 0x8b, contents + roff - 2); + + if (tls_type == GOT_TLS_IE_BOTH) + off += 4; + + bfd_put_32 (output_bfd, + htab->sgot->output_section->vma + + htab->sgot->output_offset + off + - htab->sgotplt->output_section->vma + - htab->sgotplt->output_offset, + contents + roff); + continue; + } + else if (ELF32_R_TYPE (rel->r_info) == R_386_TLS_DESC_CALL) + { + /* GDesc -> IE transition. + It's originally: + call *(%eax) + + Change it to: + nop; nop + or + negl %eax + depending on how we transformed the TLS_GOTDESC above. + */ + + unsigned int val, type; + bfd_vma roff; + + /* First, make sure it's a call *(%eax). */ + roff = rel->r_offset; + BFD_ASSERT (roff + 2 <= input_section->size); + type = bfd_get_8 (input_bfd, contents + roff); + BFD_ASSERT (type == 0xff); + val = bfd_get_8 (input_bfd, contents + roff + 1); + BFD_ASSERT (val == 0x10); + + /* Now modify the instruction as appropriate. */ + if (tls_type != GOT_TLS_IE_NEG) + { + /* nop; nop */ + bfd_put_8 (output_bfd, 0x90, contents + roff); + bfd_put_8 (output_bfd, 0x90, contents + roff + 1); + } + else + { + /* negl %eax */ + bfd_put_8 (output_bfd, 0xf7, contents + roff); + bfd_put_8 (output_bfd, 0xd8, contents + roff + 1); + } + + continue; + } + else + BFD_ASSERT (FALSE); break; case R_386_TLS_LDM: @@ -3220,7 +3571,7 @@ elf_i386_finish_dynamic_symbol (bfd *output_bfd, } if (h->got.offset != (bfd_vma) -1 - && elf_i386_hash_entry(h)->tls_type != GOT_TLS_GD + && ! GOT_TLS_GD_ANY_P (elf_i386_hash_entry(h)->tls_type) && (elf_i386_hash_entry(h)->tls_type & GOT_TLS_IE) == 0) { Elf_Internal_Rela rel; @@ -3555,6 +3906,7 @@ elf_i386_plt_sym_val (bfd_vma i, const asection *plt, #define elf_backend_reloc_type_class elf_i386_reloc_type_class #define elf_backend_relocate_section elf_i386_relocate_section #define elf_backend_size_dynamic_sections elf_i386_size_dynamic_sections +#define elf_backend_always_size_sections elf_i386_always_size_sections #define elf_backend_plt_sym_val elf_i386_plt_sym_val #include "elf32-target.h" diff --git a/bfd/elf64-x86-64.c b/bfd/elf64-x86-64.c index 54914ba..5433771 100644 --- a/bfd/elf64-x86-64.c +++ b/bfd/elf64-x86-64.c @@ -1,5 +1,5 @@ /* X86-64 specific support for 64-bit ELF - Copyright 2000, 2001, 2002, 2003, 2004, 2005 + Copyright 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. Contributed by Jan Hubicka <jh@suse.cz>. 
@@ -112,12 +112,31 @@ static reloc_howto_type x86_64_elf_howto_table[] = HOWTO(R_X86_64_GOTPC32, 0, 2, 32, TRUE, 0, complain_overflow_signed, bfd_elf_generic_reloc, "R_X86_64_GOTPC32", FALSE, 0xffffffff, 0xffffffff, TRUE), + EMPTY_HOWTO (27), + EMPTY_HOWTO (28), + EMPTY_HOWTO (29), + EMPTY_HOWTO (30), + EMPTY_HOWTO (31), + EMPTY_HOWTO (32), + EMPTY_HOWTO (33), + HOWTO(R_X86_64_GOTPC32_TLSDESC, 0, 2, 32, TRUE, 0, + complain_overflow_bitfield, bfd_elf_generic_reloc, + "R_X86_64_GOTPC32_TLSDESC", + FALSE, 0xffffffff, 0xffffffff, TRUE), + HOWTO(R_X86_64_TLSDESC_CALL, 0, 0, 0, FALSE, 0, + complain_overflow_dont, bfd_elf_generic_reloc, + "R_X86_64_TLSDESC_CALL", + FALSE, 0, 0, FALSE), + HOWTO(R_X86_64_TLSDESC, 0, 4, 64, FALSE, 0, + complain_overflow_bitfield, bfd_elf_generic_reloc, + "R_X86_64_TLSDESC", + FALSE, MINUS_ONE, MINUS_ONE, FALSE), /* We have a gap in the reloc numbers here. R_X86_64_standard counts the number up to this point, and R_X86_64_vt_offset is the value to subtract from a reloc type of R_X86_64_GNU_VT* to form an index into this table. */ -#define R_X86_64_standard (R_X86_64_GOTPC32 + 1) +#define R_X86_64_standard (R_X86_64_TLSDESC + 1) #define R_X86_64_vt_offset (R_X86_64_GNU_VTINHERIT - R_X86_64_standard) /* GNU extension to record C++ vtable hierarchy. */ @@ -166,14 +185,38 @@ static const struct elf_reloc_map x86_64_reloc_map[] = { BFD_RELOC_64_PCREL, R_X86_64_PC64, }, { BFD_RELOC_X86_64_GOTOFF64, R_X86_64_GOTOFF64, }, { BFD_RELOC_X86_64_GOTPC32, R_X86_64_GOTPC32, }, + { BFD_RELOC_X86_64_GOTPC32_TLSDESC, R_X86_64_GOTPC32_TLSDESC, }, + { BFD_RELOC_X86_64_TLSDESC_CALL, R_X86_64_TLSDESC_CALL, }, + { BFD_RELOC_X86_64_TLSDESC, R_X86_64_TLSDESC, }, { BFD_RELOC_VTABLE_INHERIT, R_X86_64_GNU_VTINHERIT, }, { BFD_RELOC_VTABLE_ENTRY, R_X86_64_GNU_VTENTRY, }, }; +static reloc_howto_type * +elf64_x86_64_rtype_to_howto (bfd *abfd, unsigned r_type) +{ + unsigned i; + + if (r_type < (unsigned int) R_X86_64_GNU_VTINHERIT + || r_type >= (unsigned int) R_X86_64_max) + { + if (r_type >= (unsigned int) R_X86_64_standard) + { + (*_bfd_error_handler) (_("%B: invalid relocation type %d"), + abfd, (int) r_type); + r_type = R_X86_64_NONE; + } + i = r_type; + } + else + i = r_type - (unsigned int) R_X86_64_vt_offset; + BFD_ASSERT (x86_64_elf_howto_table[i].type == r_type); + return &x86_64_elf_howto_table[i]; +} /* Given a BFD reloc type, return a HOWTO structure. 
*/ static reloc_howto_type * -elf64_x86_64_reloc_type_lookup (bfd *abfd ATTRIBUTE_UNUSED, +elf64_x86_64_reloc_type_lookup (bfd *abfd, bfd_reloc_code_real_type code) { unsigned int i; @@ -182,7 +225,8 @@ elf64_x86_64_reloc_type_lookup (bfd *abfd ATTRIBUTE_UNUSED, i++) { if (x86_64_reloc_map[i].bfd_reloc_val == code) - return &x86_64_elf_howto_table[i]; + return elf64_x86_64_rtype_to_howto (abfd, + x86_64_reloc_map[i].elf_reloc_val); } return 0; } @@ -193,23 +237,10 @@ static void elf64_x86_64_info_to_howto (bfd *abfd ATTRIBUTE_UNUSED, arelent *cache_ptr, Elf_Internal_Rela *dst) { - unsigned r_type, i; + unsigned r_type; r_type = ELF64_R_TYPE (dst->r_info); - if (r_type < (unsigned int) R_X86_64_GNU_VTINHERIT - || r_type >= (unsigned int) R_X86_64_max) - { - if (r_type >= (unsigned int) R_X86_64_standard) - { - (*_bfd_error_handler) (_("%B: invalid relocation type %d"), - abfd, (int) r_type); - r_type = R_X86_64_NONE; - } - i = r_type; - } - else - i = r_type - (unsigned int) R_X86_64_vt_offset; - cache_ptr->howto = &x86_64_elf_howto_table[i]; + cache_ptr->howto = elf64_x86_64_rtype_to_howto (abfd, r_type); BFD_ASSERT (r_type == cache_ptr->howto->type); } @@ -353,7 +384,20 @@ struct elf64_x86_64_link_hash_entry #define GOT_NORMAL 1 #define GOT_TLS_GD 2 #define GOT_TLS_IE 3 +#define GOT_TLS_GDESC 4 +#define GOT_TLS_GD_BOTH_P(type) \ + ((type) == (GOT_TLS_GD | GOT_TLS_GDESC)) +#define GOT_TLS_GD_P(type) \ + ((type) == GOT_TLS_GD || GOT_TLS_GD_BOTH_P (type)) +#define GOT_TLS_GDESC_P(type) \ + ((type) == GOT_TLS_GDESC || GOT_TLS_GD_BOTH_P (type)) +#define GOT_TLS_GD_ANY_P(type) \ + (GOT_TLS_GD_P (type) || GOT_TLS_GDESC_P (type)) unsigned char tls_type; + + /* Offset of the GOTPLT entry reserved for the TLS descriptor, + starting at the end of the jump table. */ + bfd_vma tlsdesc_got; }; #define elf64_x86_64_hash_entry(ent) \ @@ -365,6 +409,9 @@ struct elf64_x86_64_obj_tdata /* tls_type for each local got entry. */ char *local_got_tls_type; + + /* GOTPLT entries for TLS descriptors. */ + bfd_vma *local_tlsdesc_gotent; }; #define elf64_x86_64_tdata(abfd) \ @@ -373,6 +420,8 @@ struct elf64_x86_64_obj_tdata #define elf64_x86_64_local_got_tls_type(abfd) \ (elf64_x86_64_tdata (abfd)->local_got_tls_type) +#define elf64_x86_64_local_tlsdesc_gotent(abfd) \ + (elf64_x86_64_tdata (abfd)->local_tlsdesc_gotent) /* x86-64 ELF linker hash table. */ @@ -389,11 +438,23 @@ struct elf64_x86_64_link_hash_table asection *sdynbss; asection *srelbss; + /* The offset into splt of the PLT entry for the TLS descriptor + resolver. Special values are 0, if not necessary (or not found + to be necessary yet), and -1 if needed but not determined + yet. */ + bfd_vma tlsdesc_plt; + /* The offset into sgot of the GOT entry used by the PLT entry + above. */ + bfd_vma tlsdesc_got; + union { bfd_signed_vma refcount; bfd_vma offset; } tls_ld_got; + /* The amount of space used by the jump slots in the GOT. */ + bfd_vma sgotplt_jump_table_size; + /* Small local sym to section mapping cache. */ struct sym_sec_cache sym_sec; }; @@ -403,6 +464,9 @@ struct elf64_x86_64_link_hash_table #define elf64_x86_64_hash_table(p) \ ((struct elf64_x86_64_link_hash_table *) ((p)->hash)) +#define elf64_x86_64_compute_jump_table_size(htab) \ + ((htab)->srelplt->reloc_count * GOT_ENTRY_SIZE) + /* Create an entry in an x86-64 ELF linker hash table. 
*/ static struct bfd_hash_entry * @@ -428,6 +492,7 @@ link_hash_newfunc (struct bfd_hash_entry *entry, struct bfd_hash_table *table, eh = (struct elf64_x86_64_link_hash_entry *) entry; eh->dyn_relocs = NULL; eh->tls_type = GOT_UNKNOWN; + eh->tlsdesc_got = (bfd_vma) -1; } return entry; @@ -459,7 +524,10 @@ elf64_x86_64_link_hash_table_create (bfd *abfd) ret->sdynbss = NULL; ret->srelbss = NULL; ret->sym_sec.abfd = NULL; + ret->tlsdesc_plt = 0; + ret->tlsdesc_got = 0; ret->tls_ld_got.refcount = 0; + ret->sgotplt_jump_table_size = 0; return &ret->elf.root; } @@ -616,6 +684,8 @@ elf64_x86_64_tls_transition (struct bfd_link_info *info, int r_type, int is_loca switch (r_type) { case R_X86_64_TLSGD: + case R_X86_64_GOTPC32_TLSDESC: + case R_X86_64_TLSDESC_CALL: case R_X86_64_GOTTPOFF: if (is_local) return R_X86_64_TPOFF32; @@ -706,6 +776,8 @@ elf64_x86_64_check_relocs (bfd *abfd, struct bfd_link_info *info, asection *sec, case R_X86_64_GOT32: case R_X86_64_GOTPCREL: case R_X86_64_TLSGD: + case R_X86_64_GOTPC32_TLSDESC: + case R_X86_64_TLSDESC_CALL: /* This symbol requires a global offset table entry. */ { int tls_type, old_tls_type; @@ -715,6 +787,9 @@ elf64_x86_64_check_relocs (bfd *abfd, struct bfd_link_info *info, asection *sec, default: tls_type = GOT_NORMAL; break; case R_X86_64_TLSGD: tls_type = GOT_TLS_GD; break; case R_X86_64_GOTTPOFF: tls_type = GOT_TLS_IE; break; + case R_X86_64_GOTPC32_TLSDESC: + case R_X86_64_TLSDESC_CALL: + tls_type = GOT_TLS_GDESC; break; } if (h != NULL) @@ -733,14 +808,17 @@ elf64_x86_64_check_relocs (bfd *abfd, struct bfd_link_info *info, asection *sec, bfd_size_type size; size = symtab_hdr->sh_info; - size *= sizeof (bfd_signed_vma) + sizeof (char); + size *= sizeof (bfd_signed_vma) + + sizeof (bfd_vma) + sizeof (char); local_got_refcounts = ((bfd_signed_vma *) bfd_zalloc (abfd, size)); if (local_got_refcounts == NULL) return FALSE; elf_local_got_refcounts (abfd) = local_got_refcounts; + elf64_x86_64_local_tlsdesc_gotent (abfd) + = (bfd_vma *) (local_got_refcounts + symtab_hdr->sh_info); elf64_x86_64_local_got_tls_type (abfd) - = (char *) (local_got_refcounts + symtab_hdr->sh_info); + = (char *) (local_got_refcounts + 2 * symtab_hdr->sh_info); } local_got_refcounts[r_symndx] += 1; old_tls_type @@ -750,10 +828,14 @@ elf64_x86_64_check_relocs (bfd *abfd, struct bfd_link_info *info, asection *sec, /* If a TLS symbol is accessed using IE at least once, there is no point to use dynamic model for it. */ if (old_tls_type != tls_type && old_tls_type != GOT_UNKNOWN - && (old_tls_type != GOT_TLS_GD || tls_type != GOT_TLS_IE)) + && (! GOT_TLS_GD_ANY_P (old_tls_type) + || tls_type != GOT_TLS_IE)) { - if (old_tls_type == GOT_TLS_IE && tls_type == GOT_TLS_GD) + if (old_tls_type == GOT_TLS_IE && GOT_TLS_GD_ANY_P (tls_type)) tls_type = old_tls_type; + else if (GOT_TLS_GD_ANY_P (old_tls_type) + && GOT_TLS_GD_ANY_P (tls_type)) + tls_type |= old_tls_type; else { (*_bfd_error_handler) @@ -1101,6 +1183,8 @@ elf64_x86_64_gc_sweep_hook (bfd *abfd, struct bfd_link_info *info, break; case R_X86_64_TLSGD: + case R_X86_64_GOTPC32_TLSDESC: + case R_X86_64_TLSDESC_CALL: case R_X86_64_GOTTPOFF: case R_X86_64_GOT32: case R_X86_64_GOTPCREL: @@ -1368,6 +1452,7 @@ allocate_dynrelocs (struct elf_link_hash_entry *h, void * inf) /* We also need to make an entry in the .rela.plt section. 
*/ htab->srelplt->size += sizeof (Elf64_External_Rela); + htab->srelplt->reloc_count++; } else { @@ -1381,6 +1466,9 @@ allocate_dynrelocs (struct elf_link_hash_entry *h, void * inf) h->needs_plt = 0; } + eh = (struct elf64_x86_64_link_hash_entry *) h; + eh->tlsdesc_got = (bfd_vma) -1; + /* If R_X86_64_GOTTPOFF symbol is now local to the binary, make it a R_X86_64_TPOFF32 requiring no GOT entry. */ if (h->got.refcount > 0 @@ -1403,31 +1491,46 @@ allocate_dynrelocs (struct elf_link_hash_entry *h, void * inf) return FALSE; } - s = htab->sgot; - h->got.offset = s->size; - s->size += GOT_ENTRY_SIZE; - /* R_X86_64_TLSGD needs 2 consecutive GOT slots. */ - if (tls_type == GOT_TLS_GD) - s->size += GOT_ENTRY_SIZE; + if (GOT_TLS_GDESC_P (tls_type)) + { + eh->tlsdesc_got = htab->sgotplt->size + - elf64_x86_64_compute_jump_table_size (htab); + htab->sgotplt->size += 2 * GOT_ENTRY_SIZE; + h->got.offset = (bfd_vma) -2; + } + if (! GOT_TLS_GDESC_P (tls_type) + || GOT_TLS_GD_P (tls_type)) + { + s = htab->sgot; + h->got.offset = s->size; + s->size += GOT_ENTRY_SIZE; + if (GOT_TLS_GD_P (tls_type)) + s->size += GOT_ENTRY_SIZE; + } dyn = htab->elf.dynamic_sections_created; /* R_X86_64_TLSGD needs one dynamic relocation if local symbol and two if global. R_X86_64_GOTTPOFF needs one dynamic relocation. */ - if ((tls_type == GOT_TLS_GD && h->dynindx == -1) + if ((GOT_TLS_GD_P (tls_type) && h->dynindx == -1) || tls_type == GOT_TLS_IE) htab->srelgot->size += sizeof (Elf64_External_Rela); - else if (tls_type == GOT_TLS_GD) + else if (GOT_TLS_GD_P (tls_type)) htab->srelgot->size += 2 * sizeof (Elf64_External_Rela); - else if ((ELF_ST_VISIBILITY (h->other) == STV_DEFAULT - || h->root.type != bfd_link_hash_undefweak) + else if (! GOT_TLS_GDESC_P (tls_type) + && (ELF_ST_VISIBILITY (h->other) == STV_DEFAULT + || h->root.type != bfd_link_hash_undefweak) && (info->shared || WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, 0, h))) htab->srelgot->size += sizeof (Elf64_External_Rela); + if (GOT_TLS_GDESC_P (tls_type)) + { + htab->srelplt->size += sizeof (Elf64_External_Rela); + htab->tlsdesc_plt = (bfd_vma) -1; + } } else h->got.offset = (bfd_vma) -1; - eh = (struct elf64_x86_64_link_hash_entry *) h; if (eh->dyn_relocs == NULL) return TRUE; @@ -1575,6 +1678,7 @@ elf64_x86_64_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, bfd_signed_vma *local_got; bfd_signed_vma *end_local_got; char *local_tls_type; + bfd_vma *local_tlsdesc_gotent; bfd_size_type locsymcount; Elf_Internal_Shdr *symtab_hdr; asection *srel; @@ -1618,20 +1722,43 @@ elf64_x86_64_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, locsymcount = symtab_hdr->sh_info; end_local_got = local_got + locsymcount; local_tls_type = elf64_x86_64_local_got_tls_type (ibfd); + local_tlsdesc_gotent = elf64_x86_64_local_tlsdesc_gotent (ibfd); s = htab->sgot; srel = htab->srelgot; - for (; local_got < end_local_got; ++local_got, ++local_tls_type) + for (; local_got < end_local_got; + ++local_got, ++local_tls_type, ++local_tlsdesc_gotent) { + *local_tlsdesc_gotent = (bfd_vma) -1; if (*local_got > 0) { - *local_got = s->size; - s->size += GOT_ENTRY_SIZE; - if (*local_tls_type == GOT_TLS_GD) - s->size += GOT_ENTRY_SIZE; + if (GOT_TLS_GDESC_P (*local_tls_type)) + { + *local_tlsdesc_gotent = htab->sgotplt->size + - elf64_x86_64_compute_jump_table_size (htab); + htab->sgotplt->size += 2 * GOT_ENTRY_SIZE; + *local_got = (bfd_vma) -2; + } + if (! 
GOT_TLS_GDESC_P (*local_tls_type) + || GOT_TLS_GD_P (*local_tls_type)) + { + *local_got = s->size; + s->size += GOT_ENTRY_SIZE; + if (GOT_TLS_GD_P (*local_tls_type)) + s->size += GOT_ENTRY_SIZE; + } if (info->shared - || *local_tls_type == GOT_TLS_GD + || GOT_TLS_GD_ANY_P (*local_tls_type) || *local_tls_type == GOT_TLS_IE) - srel->size += sizeof (Elf64_External_Rela); + { + if (GOT_TLS_GDESC_P (*local_tls_type)) + { + htab->srelplt->size += sizeof (Elf64_External_Rela); + htab->tlsdesc_plt = (bfd_vma) -1; + } + if (! GOT_TLS_GDESC_P (*local_tls_type) + || GOT_TLS_GD_P (*local_tls_type)) + srel->size += sizeof (Elf64_External_Rela); + } } else *local_got = (bfd_vma) -1; @@ -1653,6 +1780,34 @@ elf64_x86_64_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, sym dynamic relocs. */ elf_link_hash_traverse (&htab->elf, allocate_dynrelocs, (PTR) info); + /* For every jump slot reserved in the sgotplt, reloc_count is + incremented. However, when we reserve space for TLS descriptors, + it's not incremented, so in order to compute the space reserved + for them, it suffices to multiply the reloc count by the jump + slot size. */ + if (htab->srelplt) + htab->sgotplt_jump_table_size + = elf64_x86_64_compute_jump_table_size (htab); + + if (htab->tlsdesc_plt) + { + /* If we're not using lazy TLS relocations, don't generate the + PLT and GOT entries they require. */ + if ((info->flags & DF_BIND_NOW)) + htab->tlsdesc_plt = 0; + else + { + htab->tlsdesc_got = htab->sgot->size; + htab->sgot->size += GOT_ENTRY_SIZE; + /* Reserve room for the initial entry. + FIXME: we could probably do away with it in this case. */ + if (htab->splt->size == 0) + htab->splt->size += PLT_ENTRY_SIZE; + htab->tlsdesc_plt = htab->splt->size; + htab->splt->size += PLT_ENTRY_SIZE; + } + } + /* We now have determined the sizes of the various dynamic sections. Allocate memory for them. */ relocs = FALSE; @@ -1676,7 +1831,8 @@ elf64_x86_64_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, /* We use the reloc_count field as a counter if we need to copy relocs into the output file. 
*/ - s->reloc_count = 0; + if (s != htab->srelplt) + s->reloc_count = 0; } else { @@ -1736,6 +1892,11 @@ elf64_x86_64_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, || !add_dynamic_entry (DT_PLTREL, DT_RELA) || !add_dynamic_entry (DT_JMPREL, 0)) return FALSE; + + if (htab->tlsdesc_plt + && (!add_dynamic_entry (DT_TLSDESC_PLT, 0) + || !add_dynamic_entry (DT_TLSDESC_GOT, 0))) + return FALSE; } if (relocs) @@ -1763,6 +1924,41 @@ elf64_x86_64_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED, return TRUE; } +static bfd_boolean +elf64_x86_64_always_size_sections (bfd *output_bfd, + struct bfd_link_info *info) +{ + asection *tls_sec = elf_hash_table (info)->tls_sec; + + if (tls_sec) + { + struct elf_link_hash_entry *tlsbase; + + tlsbase = elf_link_hash_lookup (elf_hash_table (info), + "_TLS_MODULE_BASE_", + FALSE, FALSE, FALSE); + + if (tlsbase && tlsbase->type == STT_TLS) + { + struct bfd_link_hash_entry *bh = NULL; + const struct elf_backend_data *bed + = get_elf_backend_data (output_bfd); + + if (!(_bfd_generic_link_add_one_symbol + (info, output_bfd, "_TLS_MODULE_BASE_", BSF_LOCAL, + tls_sec, 0, NULL, FALSE, + bed->collect, &bh))) + return FALSE; + tlsbase = (struct elf_link_hash_entry *)bh; + tlsbase->def_regular = 1; + tlsbase->other = STV_HIDDEN; + (*bed->elf_backend_hide_symbol) (info, tlsbase, TRUE); + } + } + + return TRUE; +} + /* Return the base VMA address which should be subtracted from real addresses when resolving @dtpoff relocation. This is PT_TLS segment p_vaddr. */ @@ -1821,6 +2017,7 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info, Elf_Internal_Shdr *symtab_hdr; struct elf_link_hash_entry **sym_hashes; bfd_vma *local_got_offsets; + bfd_vma *local_tlsdesc_gotents; Elf_Internal_Rela *rel; Elf_Internal_Rela *relend; @@ -1831,6 +2028,7 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info, symtab_hdr = &elf_tdata (input_bfd)->symtab_hdr; sym_hashes = elf_sym_hashes (input_bfd); local_got_offsets = elf_local_got_offsets (input_bfd); + local_tlsdesc_gotents = elf64_x86_64_local_tlsdesc_gotent (input_bfd); rel = relocs; relend = relocs + input_section->reloc_count; @@ -1842,7 +2040,7 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info, struct elf_link_hash_entry *h; Elf_Internal_Sym *sym; asection *sec; - bfd_vma off; + bfd_vma off, offplt; bfd_vma relocation; bfd_boolean unresolved_reloc; bfd_reloc_status_type r; @@ -2201,6 +2399,8 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info, break; case R_X86_64_TLSGD: + case R_X86_64_GOTPC32_TLSDESC: + case R_X86_64_TLSDESC_CALL: case R_X86_64_GOTTPOFF: r_type = elf64_x86_64_tls_transition (info, r_type, h == NULL); tls_type = GOT_UNKNOWN; @@ -2212,7 +2412,9 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info, if (!info->shared && h->dynindx == -1 && tls_type == GOT_TLS_IE) r_type = R_X86_64_TPOFF32; } - if (r_type == R_X86_64_TLSGD) + if (r_type == R_X86_64_TLSGD + || r_type == R_X86_64_GOTPC32_TLSDESC + || r_type == R_X86_64_TLSDESC_CALL) { if (tls_type == GOT_TLS_IE) r_type = R_X86_64_GOTTPOFF; @@ -2254,6 +2456,67 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info, rel++; continue; } + else if (ELF64_R_TYPE (rel->r_info) == R_X86_64_GOTPC32_TLSDESC) + { + /* GDesc -> LE transition. + It's originally something like: + leaq x@tlsdesc(%rip), %rax + + Change it to: + movl $x@tpoff, %rax + + Registers other than %rax may be set up here. 
*/
+
+              unsigned int val, type, type2;
+              bfd_vma roff;
+
+              /* First, make sure it's a leaq adding rip to a
+                 32-bit offset into any register, although it's
+                 probably almost always going to be rax.  */
+              roff = rel->r_offset;
+              BFD_ASSERT (roff >= 3);
+              type = bfd_get_8 (input_bfd, contents + roff - 3);
+              BFD_ASSERT ((type & 0xfb) == 0x48);
+              type2 = bfd_get_8 (input_bfd, contents + roff - 2);
+              BFD_ASSERT (type2 == 0x8d);
+              val = bfd_get_8 (input_bfd, contents + roff - 1);
+              BFD_ASSERT ((val & 0xc7) == 0x05);
+              BFD_ASSERT (roff + 4 <= input_section->size);
+
+              /* Now modify the instruction as appropriate.  */
+              bfd_put_8 (output_bfd, 0x48 | ((type >> 2) & 1),
+                         contents + roff - 3);
+              bfd_put_8 (output_bfd, 0xc7, contents + roff - 2);
+              bfd_put_8 (output_bfd, 0xc0 | ((val >> 3) & 7),
+                         contents + roff - 1);
+              bfd_put_32 (output_bfd, tpoff (info, relocation),
+                          contents + roff);
+              continue;
+            }
+          else if (ELF64_R_TYPE (rel->r_info) == R_X86_64_TLSDESC_CALL)
+            {
+              /* GDesc -> LE transition.
+                 It's originally:
+                 call *(%rax)
+                 Turn it into:
+                 nop; nop.  */
+
+              unsigned int val, type;
+              bfd_vma roff;
+
+              /* First, make sure it's a call *(%rax).  */
+              roff = rel->r_offset;
+              BFD_ASSERT (roff + 2 <= input_section->size);
+              type = bfd_get_8 (input_bfd, contents + roff);
+              BFD_ASSERT (type == 0xff);
+              val = bfd_get_8 (input_bfd, contents + roff + 1);
+              BFD_ASSERT (val == 0x10);
+
+              /* Now modify the instruction as appropriate.  */
+              bfd_put_8 (output_bfd, 0x90, contents + roff);
+              bfd_put_8 (output_bfd, 0x90, contents + roff + 1);
+              continue;
+            }
           else
             {
               unsigned int val, type, reg;
@@ -2319,13 +2582,17 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info,
             abort ();
 
           if (h != NULL)
-            off = h->got.offset;
+            {
+              off = h->got.offset;
+              offplt = elf64_x86_64_hash_entry (h)->tlsdesc_got;
+            }
           else
             {
               if (local_got_offsets == NULL)
                 abort ();
 
               off = local_got_offsets[r_symndx];
+              offplt = local_tlsdesc_gotents[r_symndx];
             }
 
           if ((off & 1) != 0)
@@ -2335,30 +2602,61 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info,
               Elf_Internal_Rela outrel;
               bfd_byte *loc;
               int dr_type, indx;
+              asection *sreloc;
 
               if (htab->srelgot == NULL)
                 abort ();
 
+              indx = h && h->dynindx != -1 ? h->dynindx : 0;
+
+              if (GOT_TLS_GDESC_P (tls_type))
+                {
+                  outrel.r_info = ELF64_R_INFO (indx, R_X86_64_TLSDESC);
+                  BFD_ASSERT (htab->sgotplt_jump_table_size + offplt
+                              + 2 * GOT_ENTRY_SIZE <= htab->sgotplt->size);
+                  outrel.r_offset = (htab->sgotplt->output_section->vma
+                                     + htab->sgotplt->output_offset
+                                     + offplt
+                                     + htab->sgotplt_jump_table_size);
+                  sreloc = htab->srelplt;
+                  loc = sreloc->contents;
+                  loc += sreloc->reloc_count++
+                    * sizeof (Elf64_External_Rela);
+                  BFD_ASSERT (loc + sizeof (Elf64_External_Rela)
+                              <= sreloc->contents + sreloc->size);
+                  if (indx == 0)
+                    outrel.r_addend = relocation - dtpoff_base (info);
+                  else
+                    outrel.r_addend = 0;
+                  bfd_elf64_swap_reloca_out (output_bfd, &outrel, loc);
+                }
+
+              sreloc = htab->srelgot;
+
               outrel.r_offset = (htab->sgot->output_section->vma
                                  + htab->sgot->output_offset + off);
 
-              indx = h && h->dynindx != -1 ? h->dynindx : 0;
-              if (r_type == R_X86_64_TLSGD)
+              if (GOT_TLS_GD_P (tls_type))
                 dr_type = R_X86_64_DTPMOD64;
+              else if (GOT_TLS_GDESC_P (tls_type))
+                goto dr_done;
               else
                 dr_type = R_X86_64_TPOFF64;
 
               bfd_put_64 (output_bfd, 0, htab->sgot->contents + off);
               outrel.r_addend = 0;
-              if (dr_type == R_X86_64_TPOFF64 && indx == 0)
+              if ((dr_type == R_X86_64_TPOFF64
+                   || dr_type == R_X86_64_TLSDESC) && indx == 0)
                 outrel.r_addend = relocation - dtpoff_base (info);
               outrel.r_info = ELF64_R_INFO (indx, dr_type);
 
-              loc = htab->srelgot->contents;
-              loc += htab->srelgot->reloc_count++ * sizeof (Elf64_External_Rela);
+              loc = sreloc->contents;
+              loc += sreloc->reloc_count++ * sizeof (Elf64_External_Rela);
+              BFD_ASSERT (loc + sizeof (Elf64_External_Rela)
+                          <= sreloc->contents + sreloc->size);
               bfd_elf64_swap_reloca_out (output_bfd, &outrel, loc);
 
-              if (r_type == R_X86_64_TLSGD)
+              if (GOT_TLS_GD_P (tls_type))
                 {
                   if (indx == 0)
                     {
@@ -2374,27 +2672,37 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info,
                       outrel.r_info = ELF64_R_INFO (indx, R_X86_64_DTPOFF64);
                       outrel.r_offset += GOT_ENTRY_SIZE;
-                      htab->srelgot->reloc_count++;
+                      sreloc->reloc_count++;
                       loc += sizeof (Elf64_External_Rela);
+                      BFD_ASSERT (loc + sizeof (Elf64_External_Rela)
+                                  <= sreloc->contents + sreloc->size);
                       bfd_elf64_swap_reloca_out (output_bfd, &outrel, loc);
                     }
                 }
 
+            dr_done:
               if (h != NULL)
                 h->got.offset |= 1;
               else
                 local_got_offsets[r_symndx] |= 1;
             }
 
-          if (off >= (bfd_vma) -2)
+          if (off >= (bfd_vma) -2
+              && ! GOT_TLS_GDESC_P (tls_type))
             abort ();
           if (r_type == ELF64_R_TYPE (rel->r_info))
             {
-              relocation = htab->sgot->output_section->vma
-                + htab->sgot->output_offset + off;
+              if (r_type == R_X86_64_GOTPC32_TLSDESC
+                  || r_type == R_X86_64_TLSDESC_CALL)
+                relocation = htab->sgotplt->output_section->vma
+                  + htab->sgotplt->output_offset
+                  + offplt + htab->sgotplt_jump_table_size;
+              else
+                relocation = htab->sgot->output_section->vma
+                  + htab->sgot->output_offset + off;
               unresolved_reloc = FALSE;
             }
-          else
+          else if (ELF64_R_TYPE (rel->r_info) == R_X86_64_TLSGD)
             {
               unsigned int i;
               static unsigned char tlsgd[8]
@@ -2434,6 +2742,77 @@ elf64_x86_64_relocate_section (bfd *output_bfd, struct bfd_link_info *info,
               rel++;
               continue;
             }
+          else if (ELF64_R_TYPE (rel->r_info) == R_X86_64_GOTPC32_TLSDESC)
+            {
+              /* GDesc -> IE transition.
+                 It's originally something like:
+                 leaq x@tlsdesc(%rip), %rax
+
+                 Change it to:
+                 movq x@gottpoff(%rip), %rax # before nop; nop
+
+                 Registers other than %rax may be set up here.  */
+
+              unsigned int val, type, type2;
+              bfd_vma roff;
+
+              /* First, make sure it's a leaq adding rip to a 32-bit
+                 offset into any register, although it's probably
+                 almost always going to be rax.  */
+              roff = rel->r_offset;
+              BFD_ASSERT (roff >= 3);
+              type = bfd_get_8 (input_bfd, contents + roff - 3);
+              BFD_ASSERT ((type & 0xfb) == 0x48);
+              type2 = bfd_get_8 (input_bfd, contents + roff - 2);
+              BFD_ASSERT (type2 == 0x8d);
+              val = bfd_get_8 (input_bfd, contents + roff - 1);
+              BFD_ASSERT ((val & 0xc7) == 0x05);
+              BFD_ASSERT (roff + 4 <= input_section->size);
+
+              /* Now modify the instruction as appropriate.  */
+              /* To turn a leaq into a movq in the form we use it, it
+                 suffices to change the second byte from 0x8d to
+                 0x8b.  */
+              bfd_put_8 (output_bfd, 0x8b, contents + roff - 2);
+
+              bfd_put_32 (output_bfd,
+                          htab->sgot->output_section->vma
+                          + htab->sgot->output_offset + off
+                          - rel->r_offset
+                          - input_section->output_section->vma
+                          - input_section->output_offset
+                          - 4,
+                          contents + roff);
+              continue;
+            }
+          else if (ELF64_R_TYPE (rel->r_info) == R_X86_64_TLSDESC_CALL)
+            {
+              /* GDesc -> IE transition.
+                 It's originally:
+                 call *(%rax)
+
+                 Change it to:
+                 nop; nop.  */
+
+              unsigned int val, type;
+              bfd_vma roff;
+
+              /* First, make sure it's a call *(%rax).  */
+              roff = rel->r_offset;
+              BFD_ASSERT (roff + 2 <= input_section->size);
+              type = bfd_get_8 (input_bfd, contents + roff);
+              BFD_ASSERT (type == 0xff);
+              val = bfd_get_8 (input_bfd, contents + roff + 1);
+              BFD_ASSERT (val == 0x10);
+
+              /* Now modify the instruction as appropriate.  */
+              bfd_put_8 (output_bfd, 0x90, contents + roff);
+              bfd_put_8 (output_bfd, 0x90, contents + roff + 1);
+
+              continue;
+            }
+          else
+            BFD_ASSERT (FALSE);
           break;
 
         case R_X86_64_TLSLD:
@@ -2672,7 +3051,7 @@ elf64_x86_64_finish_dynamic_symbol (bfd *output_bfd,
     }
 
   if (h->got.offset != (bfd_vma) -1
-      && elf64_x86_64_hash_entry (h)->tls_type != GOT_TLS_GD
+      && ! GOT_TLS_GD_ANY_P (elf64_x86_64_hash_entry (h)->tls_type)
       && elf64_x86_64_hash_entry (h)->tls_type != GOT_TLS_IE)
     {
       Elf_Internal_Rela rela;
@@ -2827,6 +3206,18 @@ elf64_x86_64_finish_dynamic_sections (bfd *output_bfd, struct bfd_link_info *inf
                 dyn.d_un.d_val -= s->size;
             }
           break;
+
+        case DT_TLSDESC_PLT:
+          s = htab->splt;
+          dyn.d_un.d_ptr = s->output_section->vma + s->output_offset
+            + htab->tlsdesc_plt;
+          break;
+
+        case DT_TLSDESC_GOT:
+          s = htab->sgot;
+          dyn.d_un.d_ptr = s->output_section->vma + s->output_offset
+            + htab->tlsdesc_got;
+          break;
         }
 
       bfd_elf64_swap_dyn_out (output_bfd, &dyn, dyncon);
@@ -2861,6 +3252,40 @@ elf64_x86_64_finish_dynamic_sections (bfd *output_bfd, struct bfd_link_info *inf
           elf_section_data (htab->splt->output_section)->this_hdr.sh_entsize
             = PLT_ENTRY_SIZE;
+
+          if (htab->tlsdesc_plt)
+            {
+              bfd_put_64 (output_bfd, (bfd_vma) 0,
+                          htab->sgot->contents + htab->tlsdesc_got);
+
+              memcpy (htab->splt->contents + htab->tlsdesc_plt,
+                      elf64_x86_64_plt0_entry,
+                      PLT_ENTRY_SIZE);
+
+              /* Add offset for pushq GOT+8(%rip); since the
+                 instruction uses 6 bytes, subtract this value.  */
+              bfd_put_32 (output_bfd,
+                          (htab->sgotplt->output_section->vma
+                           + htab->sgotplt->output_offset
+                           + 8
+                           - htab->splt->output_section->vma
+                           - htab->splt->output_offset
+                           - htab->tlsdesc_plt
+                           - 6),
+                          htab->splt->contents + htab->tlsdesc_plt + 2);
+              /* Add offset for jmp *GOT+TDG(%rip), where TDG stands for
+                 htab->tlsdesc_got.  The 12 is the offset to the end of
+                 the instruction.  */
+              bfd_put_32 (output_bfd,
+                          (htab->sgot->output_section->vma
+                           + htab->sgot->output_offset
+                           + htab->tlsdesc_got
+                           - htab->splt->output_section->vma
+                           - htab->splt->output_offset
+                           - htab->tlsdesc_plt
+                           - 12),
+                          htab->splt->contents + htab->tlsdesc_plt + 8);
+            }
         }
     }
 
@@ -3132,6 +3557,7 @@ static const struct bfd_elf_special_section
 #define elf_backend_reloc_type_class        elf64_x86_64_reloc_type_class
 #define elf_backend_relocate_section        elf64_x86_64_relocate_section
 #define elf_backend_size_dynamic_sections   elf64_x86_64_size_dynamic_sections
+#define elf_backend_always_size_sections    elf64_x86_64_always_size_sections
 #define elf_backend_plt_sym_val             elf64_x86_64_plt_sym_val
 #define elf_backend_object_p                elf64_x86_64_elf_object_p
 #define bfd_elf64_mkobject                  elf64_x86_64_mkobject
diff --git a/bfd/libbfd.h b/bfd/libbfd.h
index 5a8c216..984ade3 100644
--- a/bfd/libbfd.h
+++ b/bfd/libbfd.h
@@ -7,7 +7,7 @@
    (This include file is not for users of the library.)
 
    Copyright 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998,
-   1999, 2000, 2001, 2002, 2003, 2004, 2005
+   1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
    Free Software Foundation, Inc.
 
    Written by Cygnus Support.
@@ -1034,6 +1034,9 @@ static const char *const bfd_reloc_code_real_names[] = { "@@uninitialized@@",
   "BFD_RELOC_386_TLS_DTPMOD32",
   "BFD_RELOC_386_TLS_DTPOFF32",
   "BFD_RELOC_386_TLS_TPOFF32",
+  "BFD_RELOC_386_TLS_GOTDESC",
+  "BFD_RELOC_386_TLS_DESC_CALL",
+  "BFD_RELOC_386_TLS_DESC",
   "BFD_RELOC_X86_64_GOT32",
   "BFD_RELOC_X86_64_PLT32",
   "BFD_RELOC_X86_64_COPY",
@@ -1052,6 +1055,9 @@ static const char *const bfd_reloc_code_real_names[] = { "@@uninitialized@@",
   "BFD_RELOC_X86_64_TPOFF32",
   "BFD_RELOC_X86_64_GOTOFF64",
   "BFD_RELOC_X86_64_GOTPC32",
+  "BFD_RELOC_X86_64_GOTPC32_TLSDESC",
+  "BFD_RELOC_X86_64_TLSDESC_CALL",
+  "BFD_RELOC_X86_64_TLSDESC",
   "BFD_RELOC_NS32K_IMM_8",
   "BFD_RELOC_NS32K_IMM_16",
   "BFD_RELOC_NS32K_IMM_32",
diff --git a/bfd/reloc.c b/bfd/reloc.c
index 14c3392..98246c8 100644
--- a/bfd/reloc.c
+++ b/bfd/reloc.c
@@ -1,6 +1,6 @@
 /* BFD support for handling relocation entries.
    Copyright 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999,
-   2000, 2001, 2002, 2003, 2004, 2005
+   2000, 2001, 2002, 2003, 2004, 2005, 2006
    Free Software Foundation, Inc.
    Written by Cygnus Support.
 
@@ -2298,6 +2298,12 @@ ENUMX
   BFD_RELOC_386_TLS_DTPOFF32
 ENUMX
   BFD_RELOC_386_TLS_TPOFF32
+ENUMX
+  BFD_RELOC_386_TLS_GOTDESC
+ENUMX
+  BFD_RELOC_386_TLS_DESC_CALL
+ENUMX
+  BFD_RELOC_386_TLS_DESC
 ENUMDOC
   i386/elf relocations
 
@@ -2337,6 +2343,12 @@ ENUMX
   BFD_RELOC_X86_64_GOTOFF64
 ENUMX
   BFD_RELOC_X86_64_GOTPC32
+ENUMX
+  BFD_RELOC_X86_64_GOTPC32_TLSDESC
+ENUMX
+  BFD_RELOC_X86_64_TLSDESC_CALL
+ENUMX
+  BFD_RELOC_X86_64_TLSDESC
 ENUMDOC
   x86-64/elf relocations
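Note on the GDesc relaxations above: the hunks in elf64_x86_64_relocate_section rewrite instruction bytes in place. "leaq x@tlsdesc(%rip), %reg" (REX.W, opcode 0x8d, ModRM 0x05) becomes "movq $x@tpoff, %reg" for the LE case (opcode 0xc7, ModRM 0xc0|reg) or "movq x@gottpoff(%rip), %reg" for the IE case (only the opcode byte changes to 0x8b), and the paired "call *(%rax)" (0xff 0x10) becomes two nops, so code size never changes. The following is a minimal standalone sketch of the LE rewrite on a plain byte buffer; the helper names are hypothetical, it assumes a little-endian host for the imm32 store (the byte order bfd_put_32 produces for an x86-64 output BFD), and it is an illustration of the transformation rather than the BFD code itself.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* GDesc -> LE: rewrite leaq x@tlsdesc(%rip), %reg into movq $tpoff, %reg.
   `roff` is the r_offset of R_X86_64_GOTPC32_TLSDESC, i.e. the start of
   the leaq's disp32 field.  */
static void
relax_gotpc32_tlsdesc_to_le (uint8_t *code, size_t roff, uint32_t tpoff)
{
  uint8_t rex = code[roff - 3];         /* REX.W, possibly with REX.R  */
  uint8_t op = code[roff - 2];          /* 0x8d = lea                  */
  uint8_t modrm = code[roff - 1];       /* mod=00, r/m=101 (RIP-rel)   */
  assert ((rex & 0xfb) == 0x48 && op == 0x8d && (modrm & 0xc7) == 0x05);

  /* REX.R of the lea becomes REX.B, opcode becomes 0xc7 /0, and the
     lea's reg field moves into the r/m field of the mov.  */
  code[roff - 3] = 0x48 | ((rex >> 2) & 1);
  code[roff - 2] = 0xc7;
  code[roff - 1] = 0xc0 | ((modrm >> 3) & 7);
  memcpy (code + roff, &tpoff, 4);      /* imm32, little-endian host assumed  */
}

/* GDesc -> LE/IE: the descriptor call "call *(%rax)" becomes nop; nop.
   `roff` is the r_offset of R_X86_64_TLSDESC_CALL.  */
static void
relax_tlsdesc_call_to_nop (uint8_t *code, size_t roff)
{
  assert (code[roff] == 0xff && code[roff + 1] == 0x10);
  code[roff] = 0x90;
  code[roff + 1] = 0x90;
}

int
main (void)
{
  /* leaq x@tlsdesc(%rip), %rax; call *(%rax)  */
  uint8_t code[] = { 0x48, 0x8d, 0x05, 0, 0, 0, 0, 0xff, 0x10 };

  relax_gotpc32_tlsdesc_to_le (code, 3, 0xfffffff8);  /* made-up tpoff of -8  */
  relax_tlsdesc_call_to_nop (code, 7);

  for (size_t i = 0; i < sizeof code; i++)
    printf ("%02x ", code[i]);
  printf ("\n");   /* 48 c7 c0 f8 ff ff ff 90 90 = movq $-8, %rax; nop; nop  */
  return 0;
}

The IE variant in the hunk above is even less invasive: only the 0x8d byte is replaced by 0x8b (a memory load) and the disp32 is pointed at the GOT entry holding the tpoff value, which is why the linker can perform these transitions without moving any code.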
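Note on the DT_TLSDESC_PLT fragment in elf64_x86_64_finish_dynamic_sections above: it copies the PLT0 template (elf64_x86_64_plt0_entry, i.e. a 6-byte "pushq GOT+8(%rip)" followed by a 6-byte "jmp *disp(%rip)") into the reserved slot and then patches the two RIP-relative disp32 fields. The "- 6" and "- 12" terms are simply "minus the address of the next instruction": the pushq ends 6 bytes into the entry and the jmp ends 12 bytes in, with their disp32 fields at offsets 2 and 8. A small sketch of that arithmetic follows, using illustrative addresses and variable names rather than the real hash-table fields.

#include <stdint.h>
#include <stdio.h>

/* A RIP-relative disp32 is the target minus the address of the byte
   that follows the instruction using it.  */
static int32_t
riprel_disp32 (uint64_t target, uint64_t insn_vma, unsigned insn_len)
{
  return (int32_t) (target - (insn_vma + insn_len));
}

int
main (void)
{
  /* Illustrative addresses, not taken from a real link.  */
  uint64_t splt_vma = 0x4003e0, tlsdesc_plt = 0x40;
  uint64_t sgotplt_vma = 0x600ff0, sgot_vma = 0x600fc0, tlsdesc_got = 0x18;
  uint64_t entry = splt_vma + tlsdesc_plt;   /* start of the extra PLT entry  */

  /* pushq GOT+8(%rip): bytes 0..5 of the entry, disp32 stored at entry + 2,
     matching the "- 6" adjustment above.  */
  int32_t disp_push = riprel_disp32 (sgotplt_vma + 8, entry, 6);

  /* jmp *tlsdesc_got(%rip): bytes 6..11, disp32 stored at entry + 8,
     matching the "- 12" adjustment above.  */
  int32_t disp_jmp = riprel_disp32 (sgot_vma + tlsdesc_got, entry + 6, 6);

  printf ("pushq disp32 = %d, jmp disp32 = %d\n", disp_push, disp_jmp);
  return 0;
}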