path: root/gold/powerpc.cc
Commit history (newest first): each entry shows the date, commit subject, author, and files/lines changed in this file, followed by the full commit message.
2019-09-20  PowerPC64, error on unsupported dynamic relocation  (Alan Modra; 1 file changed, -1/+1)
This patch corrects the set of dynamic relocations recognised by gold as supported by glibc, and teaches ld.bfd to report an error similar to the gold error. Note that ld --noinhibit-exec can be used to produce an output, supporting older ld with newer glibc if the set of supported glibc dynamic relocations changes. bfd/ * elf64-ppc.c (ppc64_glibc_dynamic_reloc): New function. (ppc64_elf_relocate_section): Error if emitting unsupported dynamic relocations. gold/ * powerpc.cc (Target_powerpc::Scan::check_non_pic): Move REL24 to 32-bit supported.
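For reference, the shape of such a check in C++ (a sketch only; the reloc list below is an illustrative subset, not the authoritative set, which lives in gold's check_non_pic and bfd's ppc64_glibc_dynamic_reloc):

    // Sketch: vet a dynamic relocation type against relocations glibc's
    // ld.so is known to apply.  The list here is a plausible subset only.
    #include <elf.h>

    static bool
    glibc_handles_dyn_reloc(unsigned int r_type)
    {
      switch (r_type)
        {
        case R_PPC64_RELATIVE:
        case R_PPC64_GLOB_DAT:
        case R_PPC64_JMP_SLOT:
        case R_PPC64_ADDR64:
        case R_PPC64_COPY:
        case R_PPC64_DTPMOD64:
        case R_PPC64_DTPREL64:
        case R_PPC64_TPREL64:
          return true;
        default:
          return false;   // anything else gets the new error
        }
    }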
2019-08-02  [GOLD] PowerPC64 pc-relative TLS support  (Alan Modra; 1 file changed, -68/+338)
Gold version of git commit c213164ad2. elfcpp/ * powerpc.h (R_PPC64_TPREL34, R_PPC64_DTPREL34), (R_PPC64_GOT_TLSGD34, R_PPC64_GOT_TLSLD34), (R_PPC64_GOT_TPREL34, R_PPC64_GOT_DTPREL34): Define. gold/ * powerpc.cc (Target_powerpc::Scan::get_reference_flags): Set flags for new relocations, and some missing older relocs. (Target_powerpc::Scan::local): Handle new pcrel tls relocs. Call set_has_static_tls for tprel relocs. (Target_powerpc::Scan::global): Likewise. (Target_powerpc::Relocate::relocate): Handle new pcrel tls relocs.
2019-08-02  [GOLD] PowerPC relocation signed overflow check  (Alan Modra; 1 file changed, -4/+12)
Relocations with right shifts were calculating wrong overflow status. Since the addr34 split-field reloc is implemented as an 18-bit high part with value shifted right by 16 and a 16-bit low part, most of the pc-relative relocs were affected. * powerpc.cc (Powerpc_relocate_functions::rela, rela_ua): Perform signed right shift for signed overflow check.
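As a rough C++ illustration of the bug class (my sketch, not gold's code; assumes the 34-bit displacement split described above):

    // The high part of a signed displacement must be formed with an
    // arithmetic right shift; a logical shift turns a negative value into
    // a huge positive one and the overflow check goes wrong.
    #include <cassert>
    #include <cstdint>

    static bool
    fits_signed(int64_t value, unsigned bits)
    {
      int64_t lo = -(int64_t(1) << (bits - 1));
      int64_t hi = (int64_t(1) << (bits - 1)) - 1;
      return value >= lo && value <= hi;
    }

    int main()
    {
      int64_t disp = -0x12345;                            // negative pc-relative offset
      int64_t high_ok = disp >> 16;                       // arithmetic shift: -2
      int64_t high_bad = int64_t(uint64_t(disp) >> 16);   // logical shift: huge value
      assert(fits_signed(high_ok, 18));                   // real value fits the 18-bit field
      assert(!fits_signed(high_bad, 18));                 // wrong shift reports spurious overflow
      return 0;
    }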
2019-07-13  [GOLD] PowerPC R_PPC64_PCREL_OPT support  (Alan Modra; 1 file changed, -12/+171)
* powerpc.cc (xlate_pcrel_opt): New function. (Target_powerpc::Relocate::relocate): Optimise PCREL34 and GOT_PCREL34 sequences marked with PCREL_OPT.
2019-07-13  [GOLD] PowerPC got reloc optimisation  (Alan Modra; 1 file changed, -11/+97)
Note that gold won't remove unused GOT entries, in contrast to ld.bfd which will. * powerpc.cc (Powerpc_relobj::make_got_relative): New function. (relative_value_is_known): New functions. (Target_powerpc::Relocate::relocate): Edit code using GOT16_HA, GOT16_LO_DS, and GOT_PCREL34 relocs.
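A rough sketch of the kind of instruction edit involved (illustration only, not gold's routine; PowerPC primary opcodes: ld = 58, addi = 14, addis = 15):

    // Once the symbol's toc-relative value is known, the GOT load
    // "ld rt,sym@got@l(rb)" can become "addi rt,rb,lo(value)"; the
    // matching addis (GOT16_HA) just takes the high-adjusted part.
    #include <cstdint>

    static void
    got_load_to_addi(uint32_t& insn, int64_t value)
    {
      const uint32_t rt_ra = insn & 0x03ff0000u;   // keep the RT and RA fields
      insn = (14u << 26) | rt_ra | (uint32_t)(value & 0xffff);
    }

    static uint16_t
    ha_part(int64_t value)
    {
      return (uint16_t)((value + 0x8000) >> 16);   // @ha: accounts for sign of @l
    }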
2019-07-13  [GOLD] PowerPC relocations for prefix insns  (Alan Modra; 1 file changed, -12/+432)
Also use pc-relative instructions for notoc stubs. elfcpp/ * powerpc.h (R_PPC64_PCREL_OPT, R_PPC64_D34, R_PPC64_D34_LO), (R_PPC64_D34_HI30, R_PPC64_D34_HA30, R_PPC64_PCREL34), (R_PPC64_GOT_PCREL34, R_PPC64_PLT_PCREL34, R_PPC64_PLT_PCREL34_NOTOC), (R_PPC64_ADDR16_HIGHER34, R_PPC64_ADDR16_HIGHERA34), (R_PPC64_ADDR16_HIGHEST34, R_PPC64_ADDR16_HIGHESTA34), (R_PPC64_REL16_HIGHER34, R_PPC64_REL16_HIGHERA34), (R_PPC64_REL16_HIGHEST34, R_PPC64_REL16_HIGHESTA34), (R_PPC64_D28, R_PPC64_PCREL28): Define. gold/ * powerpc.cc (Target_powerpc): Add powerxx_stubs_ and accessor functions. (Target_powerpc::maybe_skip_tls_get_addr_call): Handle PLT_PCREL34 and PLT_PCREL34_NOTOC relocs. (Powerpc_relocate_functions): Add addr34, addr34_hi, addr34_ha, addr28, addr16_higher34, addr16_highera34, addr16_highest34, addr16_highest34a functions. (li_11_0, ori_11_11_0, sldi_11_11_34): Define. (paddi_12_pc, pld_12_pc, pnop): Define. (d34, ha34): New inline functions. (Stub_table::add_plt_call_entry): Handle powerxx_stubs. (Stub_table::add_eh_frame): Likewise. (build_powerxx_offset): New function. (Stub_table::plt_call_size): Handle powerxx_stubs. (Stub_table::branch_stub_size): Likewise. (Stub_table::do_write): Likewise. (Target_powerpc::Scan::get_reference_flags): Handle new relocs. (Target_powerpc::Scan::reloc_needs_plt_for_ifunc: Likewise. (Target_powerpc::Scan::local, global, relocate): Likewise.
2019-07-13  [GOLD] PowerPC notoc eh_frame  (Alan Modra; 1 file changed, -65/+132)
When generating notoc call and branch stubs without the benefit of pc-relative insns, the stubs need to use LR to access the run time PC. All LR changes must be described in .eh_frame if we're to support unwinding through asynchronous exceptions. That's what this patch does. The patch has gone through way too many iterations. At first I attempted to add multiple FDEs, one for each stub. That ran into difficulties with do_plt_fde_location which is only capable of setting the address of a single FDE per Output_data section, and with removing any FDEs added on a previous do_relax pass. Removing FDEs (git commit be897fb774) went overboard in matching the FDE contents. That means either stashing the contents created for add_eh_frame_for_plt to use when calling remove_eh_frame_for_plt, or recreating contents on the fly (*) just to remove FDEs. In fact, FDE content matching is quite unnecesary. FDEs added by a previous do_relax pass are those with u_.from_linker.post_map set. So they can easily be recognised just by looking at that flag. This patch keeps that part of the multiple FDE changes. In the end I went for just one FDE per stub group to describe the call stubs. That's reasonably efficient for the common case of only needing to describe the __tls_get_addr_opt call stub. We don't expect to be making many calls using notoc stubs without pc-relative insns. *) Which has it's own set of problems. The contents must be recreated using the old stub layout, but .eh_frame size can affect stub requirements so you need to temporarily keep the old .eh_frame size when creating new stubs, then reset .eh_frame size before adding new FDEs. * ehframe.cc (Fde::operator==): Delete. (Cie::remove_fde): Delete. (Eh_frame::remove_ehframe_for_plt): Delete fde_data and fde_length parameters. Remove all post-map plt FDEs. * ehframe.h (Fde:post_map): Make const, add variant to compare plt. (Fde::operator==): Delete. (Cie::remove_fde): Implement here. (Cie::last_fde): New accessor. (Eh_frame::remove_ehframe_for_plt): Update prototype. * layout.cc (Layout::remove_eh_frame_for_plt): Delete fde_data and fde_length parameters. * layout.h (Layout::remove_eh_frame_for_plt): Update prototype. * powerpc.cc (Stub_table::tls_get_addr_opt_bctrl_): Delete. (Stub_table::plt_fde_len_, plt_fde_, init_plt_fde): Delete. (Stub_table::add_plt_call_entry): Don't set tls_get_addr_opt_bctrl_. (eh_advance): New function. (stub_sort): New function. (Stub_table::add_eh_frame): Emit eh_frame for notoc plt calls and branches as well as __tls_get_addr_opt plt call stub. (Stub_table::remove_eh_frame): Update to suit.
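For context, an eh_advance-style helper only needs to emit the smallest DWARF "advance location" instruction for a given code delta; a sketch under that assumption (not gold's ehframe.cc; the larger forms and their byte order are elided):

    // Append the smallest DW_CFA_advance_loc* form for a delta measured in
    // code-alignment units.
    #include <cstdint>
    #include <vector>

    static void
    eh_advance(std::vector<unsigned char>& cfi, uint64_t delta)
    {
      if (delta < 0x40)
        cfi.push_back(0x40 | (unsigned char)delta);   // DW_CFA_advance_loc
      else if (delta < 0x100)
        {
          cfi.push_back(0x02);                        // DW_CFA_advance_loc1
          cfi.push_back((unsigned char)delta);
        }
      // DW_CFA_advance_loc2 (0x03) and _loc4 (0x04) cover larger deltas,
      // with multi-byte operands in target byte order.
    }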
2019-07-13  [GOLD] PowerPC64 ELFv2 notoc support  (Alan Modra; 1 file changed, -261/+709)
Calls from notoc functions via the PLT need different stubs. Even calls to local functions requiring a valid toc pointer must go via a stub. This patch provides the support in gold. elfcpp/ * powerpc.h (R_PPC64_PLTSEQ_NOTOC, R_PPC64_PLTCALL_NOTOC): Define. gold/ * powerpc.cc (Target_powerpc::maybe_skip_tls_get_addr_call): Handle notoc calls. (is_branch_reloc): Template on size. Return true for REL24_NOTOC. Update all callers. (max_branch_delta): Likewise. (Target_powerpc::Branch_info::make_stub): Add a stub for notoc calls to functions needing a valid toc pointer. (Target_powerpc::do_relax): Layout stubs again if any need resize. (add_12_11_12, addi_12_11, addis_12_11, ldx_12_11_12, ori_12_12_0), (oris_12_12_0, sldi_12_12_32): Define. (Stub_table::Plt_stub_ent): Add notoc_ and iter_ fields. (Stub_table::Branch_stub_key, Branch_stub_key_hash): Rename from Branch_stub_ent and Branch_stub_ent hash. Remove save_res_ from key. (Stub_table::Branch_stub_ent): New struct. (class Stub_table): Add need_resize and resizing vars. (Stub_table::need_resize, branch_size): New accessors. (Stub_table::set_resizing): New function. (Stub_table::add_plt_call_entry): Handle notoc calls and resizing on seeing such or a tocsave stubs after a normal stub using the same sym. (Stub_table::add_long_branch_entry): Similarly. (Stub_table::find_long_branch_entry): Return a Branch_stub_ent*. (Stub_table::define_stub_syms): Adjust (Stub_table::build_tls_opt_head, build_tls_opt_tail): New functions. (build_notoc_offset): New function. (Stub_table::plt_call_size): Move out of line. Handle notoc calls. (Stub_table::branch_stub_size): Similarly. (Stub_table::do_write): Separate loop for ELFv2 stubs, handling notoc calls. Simplify ELFv1 loop. Output notoc branch stubs. Use build_tls_opt_head and build_tls_opt_tail. (Target_powerpc::Scan::get_reference_flags): Handle REL24_NOTOC. (Target_powerpc::Scan::reloc_needs_plt_for_ifunc): Likewise, and PLTSEQ_NOTOC and PLTCALL_NOTOC. (Target_powerpc::Scan::local, global, relocate): Likewise.
2019-06-28  [GOLD] PowerPC tweak relnum tests  (Alan Modra; 1 file changed, -2/+2)
There is a call of relocate() to perform a single relocation. In that case the "relnum" parameter is -1U and of course it isn't appropriate to consider any of the PowerPC code sequence optimisations triggered by a following relocation. * powerpc.cc (Target_powerpc::Relocate::relocate): Don't look at next/previous reloc when relnum is -1.
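The guard amounts to something like this (assumed shape, not gold's exact code):

    // A standalone relocate() call passes relnum == -1U, so there is no
    // neighbouring relocation to base a sequence optimisation on.
    #include <cstddef>

    static inline bool
    may_peek_at_next_reloc(unsigned int relnum, size_t reloc_count)
    {
      return relnum != (unsigned int)-1 && relnum + 1 < reloc_count;
    }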
2019-06-28  [GOLD] PowerPC linkage table error  (Alan Modra; 1 file changed, -5/+18)
This fixes a segfault when attempting to output a "linkage table error". "object" is only non-NULL in the local symbol case. * powerpc.cc (Stub_table::plt_error): New function. (Stub_table::do_write): Use it. (Output_data_glink::do_write): Don't segfault emitting linkage table error.
2019-06-28  [GOLD] R_PPC64_REL16_HIGH relocs  (Alan Modra; 1 file changed, -0/+30)
These relocs have been around for quite a while. It's past time gold supported them. elfcpp/ * powerpc.h (R_PPC64_REL16_HIGH, R_PPC64_REL16_HIGHA), (R_PPC64_REL16_HIGHER, R_PPC64_REL16_HIGHERA), (R_PPC64_REL16_HIGHEST, R_PPC64_REL16_HIGHESTA): Define. gold/ * powerpc.cc (Target_powerpc::Scan::get_reference_flags): Handle REL16_HIGH* relocs. (Target_powerpc::Scan::local): Likewise. (Target_powerpc::Scan::global): Likewise. (Target_powerpc::Relocate::relocate): Likewise.
2019-01-01  Update year range in copyright notice of binutils files  (Alan Modra; 1 file changed, -1/+1)
2018-07-06  [GOLD] PowerPC .gnu.attributes support  (Alan Modra; 1 file changed, -7/+347)
elfcpp/ * powerpc.h (Tag_GNU_Power_ABI_FP): Define. (Tag_GNU_Power_ABI_Vector, Tag_GNU_Power_ABI_Struct_Return): Define. gold/ * powerpc.cc: Include attributes.h. (Powerpc_relobj::attributes_section_data_): New variable, with accessor and associated constructor and destructor support. (Powerpc_dynobj::attributes_section_data_): Likewise. (Powerpc_relobj::do_read_symbols): Stash SHT_GNU_ATTRIBUTES section contents in attributes_section_data_. (Powerpc_dynobj::do_read_symbols): Likewise. (Target_powerpc): Add attributes_section_data_, last_fp_, last_ld_, last_vec_, and last_struct_ vars. (Target_powerpc::merge_object_attributes): New function. (Target_powerpc::do_finalize_sections): Iterate over input objects merging attributes. Create output attributes section.
2018-04-09  PowerPC inline PLT call support  (Alan Modra; 1 file changed, -8/+57)
In addition to the existing relocs we need two more to mark all instructions in the call sequence, PLTCALL on the call itself (plus the toc restore insn for ppc64), and PLTSEQ on others. All relocations in a particular sequence have the same symbol. Example ppc64 ELFv2 assembly: .reloc .,R_PPC64_PLTSEQ,puts std 2,24(1) addis 12,2,puts@plt@ha # .reloc .,R_PPC64_PLT16_HA,puts ld 12,puts@plt@l(12) # .reloc .,R_PPC64_PLT16_LO_DS,puts .reloc .,R_PPC64_PLTSEQ,puts mtctr 12 .reloc .,R_PPC64_PLTCALL,puts bctrl ld 2,24(1) Example ppc32 -fPIC assembly: addis 12,30,puts+32768@plt@ha # .reloc .,R_PPC_PLT16_HA,puts+0x8000 lwz 12,12,puts+32768@plt@l # .reloc .,R_PPC_PLT16_LO,puts+0x8000 .reloc .,R_PPC_PLTSEQ,puts+32768 mtctr 12 .reloc .,R_PPC_PLTCALL,puts+32768 bctrl Marking sequences like this allows the linker to convert them to nops and a direct call if the target symbol turns out to be local. When the call is __tls_get_addr, each relocation shown above is paired with an R_PPC*_TLSLD or R_PPC*_TLSGD reloc to additionally mark the sequence for possible TLS optimization. The TLSLD or TLSGD relocs are emitted first. include/ * elf/ppc.h (R_PPC_PLTSEQ, R_PPC_PLTCALL): Define. * elf/ppc64.h (R_PPC64_PLTSEQ, R_PPC64_PLTCALL): Define. bfd/ * elf32-ppc.c (ppc_elf_howto_raw): Add PLTSEQ and PLTCALL howtos. (is_plt_seq_reloc): New function. (ppc_elf_check_relocs): Handle PLTSEQ and PLTCALL relocs. (ppc_elf_tls_optimize): Handle inline plt call sequence. (ppc_elf_relax_section): Handle PLTCALL reloc. (ppc_elf_relocate_section): Nop out inline plt call sequence when resolving locally. * elf64-ppc.c (ppc64_elf_howto_raw): Add R_PPC64_PLTSEQ and R_PPC64_PLTCALL entries. Comment R_PPC64_TOCSAVE. (has_tls_get_addr_call): Correct comment. (is_branch_reloc): Add PLTCALL. (is_plt_seq_reloc): New function. (ppc64_elf_check_relocs): Handle PLT16_LO_DS reloc. Set has_tls_reloc for R_PPC64_TLSGD and R_PPC64_TLSLD. Create plt entry for R_PPC64_PLTCALL. (ppc64_elf_tls_optimize): Handle inline plt call sequence. (ppc_type_of_stub): Handle PLTCALL reloc. (toc_adjusting_stub_needed): Likewise. (ppc64_elf_relocate_section): Set "can_plt_call" for PLTCALL reloc insn. Nop out inline plt call sequence when resolving locally. Handle __tls_get_addr inline plt call optimization. elfcpp/ * powerpc.h (R_POWERPC_PLTSEQ, R_POWERPC_PLTCALL): Define. gold/ * powerpc.cc (Target_powerpc::Track_tls::maybe_skip_tls_get_addr_call): Handle inline plt sequence relocs. (Stub_table::Plt_stub_key::Plt_stub_key): Likewise. (Target_powerpc::Scan::reloc_needs_plt_for_ifunc): Likewise. (Target_powerpc::Relocate::relocate): Likewise.
2018-04-09  Support PLT16 relocs against local symbols  (Alan Modra; 1 file changed, -5/+144)
Necessary if gcc is to use PLT16 relocs to implement -mlongcall, and there isn't a good technical reason why local symbols should be excluded from PLT16 support. Non-ifunc local symbol PLT entries go in a separate section to other PLT entries. In a fixed position executable they won't need to be relocated, and in a PIE or shared library I chose to not implement lazy relocation. bfd/ * elf64-ppc.c (LOCAL_PLT_ENTRY_SIZE): Define. (struct ppc_stub_hash_entry): Add symtype field. (PLT_KEEP): Define. (struct ppc_link_hash_table): Add pltlocal and relpltlocal. (create_linkage_sections): Create pltlocal and relpltlocal. (ppc64_elf_check_relocs): Allow PLT relocs on local symbols. Set PLT_KEEP. (ppc64_elf_adjust_dynamic_symbol): Keep PLT entries for inline calls. (allocate_dynrelocs): Allocate pltlocal and relpltlocal. (ppc64_elf_size_dynamic_sections): Size pltlocal and relpltlocal. Keep PLT entries for inline calls against locals. (ppc_build_one_stub): Use pltlocal as appropriate. (ppc_size_one_stub): Likewise. (ppc64_elf_size_stubs): Set symtype. (build_global_entry_stubs_and_plt): Init pltlocal and write relpltlocal for globals. (write_plt_relocs_for_local_syms): Likewise for local syms. (ppc64_elf_relocate_section): Support PLT for local syms. * elf32-ppc.c (PLT_KEEP): Define. (struct ppc_elf_link_hash_table): Add pltlocal and relpltlocal. (ppc_elf_create_glink): Create pltlocal and relpltlocal. (ppc_elf_check_relocs): Allow PLT relocs on local symbols. Set PLT_KEEP. Adjust update_local_sym_info call. (ppc_elf_adjust_dynamic_symbol): Keep PLT entries for inline calls. (allocate_dynrelocs): Allocate pltlocal and relpltlocal. (ppc_elf_size_dynamic_sections): Size pltlocal and relpltlocal. (ppc_elf_relocate_section): Support PLT16 relocs for local syms. (write_global_sym_plt): Init pltlocal and write relpltlocal. (ppc_finish_symbols): Likewise for locals. ld/ * emulparams/elf32ppc.sh (OTHER_RELRO_SECTIONS_2): Add .branch_lt. (OTHER_GOT_RELOC_SECTIONS): Add .rela.branch_lt. * testsuite/ld-powerpc/elfv2so.d: Update for symbol/stub reordering. * testsuite/ld-powerpc/relbrlt.d: Likewise. * testsuite/ld-powerpc/relbrlt.s: Likewise. * testsuite/ld-powerpc/tlsso.r: Likewise. * testsuite/ld-powerpc/tlstocso.r: Likewise. gold/ * powerpc.cc (Target_powerpc::lplt_): New variable. (Target_powerpc::lplt_section): Associated accessor. (Target_powerpc::plt_off): Handle local non-ifunc symbols. (Target_powerpc::make_lplt_section): New function. (Target_powerpc::make_local_plt_entry): New function. (Powerpc_relobj::do_relocate_sections): Write out lplt. (Output_data_plt_powerpc::first_plt_entry_offset): Zero for lplt. (Output_data_plt_powerpc::add_local_entry): New function. (Output_data_plt_powerpc::do_write): Ignore lplt. (Target_powerpc::make_iplt_section): Make lplt first. (Target_powerpc::make_brlt_section): Make .branch_lt relro. (Target_powerpc::Scan::local): Handle PLT16 relocs.
2018-04-09  PowerPC PLT16 relocations  (Alan Modra; 1 file changed, -56/+135)
The PowerPC64 ELFv2 ABI and the PowerPC SysV ABI support a number of relocations that can be used to create and access a PLT entry. However, the relocs are not well defined. The PLT16 family of relocs talk about "the section offset or address of the procedure linkage table entry". It's plain that we do need a relative address when PIC as otherwise we'd have dynamic text relocations, but "section offset" doesn't specify which section. The most obvious one, ".plt", isn't that useful because there is no readily available way of addressing the start of the ".plt" section. Much more useful would be "the GOT/TOC-pointer relative offset of the procedure linkage table entry", and I suppose you could argue that is a "section offset" of sorts. For PowerPC64 it is better to use the same TOC-pointer relative addressing even when non-PIC, since ".plt" may be located outside the range of a 32-bit address. However, for ppc32 we do want an absolute address when non-PIC as a GOT pointer may not be set up. Also, for ppc32 PIC we have a similar situation to R_PPC_PLTREL24 in that the GOT pointer is set to a location in the .got2 section and we need to specify the .got2 offset in the PLT16 reloc addend. This patch supports PLT16 relocations using these semantics. This is not an ABI change for ppc32 since the relocations were not previously supported by GNU ld, but is for ppc64 where some of the PLT16 relocs were supported. I'm not particularly concerned since the old ppc64 PLT16 reloc semantics made them almost completely useless. bfd/ * elf32-ppc.c (ppc_elf_check_relocs): Handle PLT16 relocs. (ppc_elf_relocate_section): Likewise. * elf64-ppc.c (ppc64_elf_check_relocs): Handle PLT16_LO_DS. (ppc64_elf_relocate_section): Likewise. Correct PLT16 resolution to plt entry relative to toc pointer. gold/ * powerpc.cc (Target_powerpc::plt_off): New functions. (is_plt16_reloc): New function. (Stub_table::plt_off): Use Target_powerpc::plt_off. (Stub_table::plt_call_size): Use plt_off. (Stub_table::do_write): Likewise. (Target_powerpc::Scan::get_reference_flags): Return RELATIVE_REF for PLT16 relocations. (Target_powerpc::Scan::reloc_needs_plt_for_ifunc): Return true for PLT16 relocations. (Target_powerpc::Scan::global): Make a PLT entry for PLT16 relocations. (Target_powerpc::Relocate::relocate): Support PLT16 relocations. (Powerpc_scan_relocatable_reloc::global_strategy): Return RELOC_SPECIAL for ppc32 plt16 relocs.
2018-04-06  Further improve warnings for relocations referring to discarded sections.  (Cary Coutant; 1 file changed, -1/+1)
Relocations referring to discarded sections are now treated as errors instead of warnings. Also with this patch, we will now print the section group signature and the object file with the prevailing definition of that group along with the name of the symbol that the relocation is referring to. This additional information should be much more useful to anyone trying to track down the source of such errors. To do so, we now map each discarded section to the Kept_section info in the Layout class, and defer the logic that maps a discarded section to its counterpart in the kept group. This gives us the information we need to identify the signature symbol given the discarded section, and the name of the object file that provided the prevailing (i.e., first) definition of that group. gold/ * object.cc (Sized_relobj_file::include_section_group): Store reference to Kept_section info for discarded comdat sections regardless of size. Move size checking to map_to_kept_section. (Sized_relobj_file::include_linkonce_section): Likewise. (Sized_relobj_file::map_to_kept_section): Add section name parameter. Insert size checking logic from above functions. (Sized_relobj_file::find_kept_section_object): New method. (Sized_relobj_file::get_symbol_name): New method. * object.h (Sized_relobj_file::map_to_kept_section): Add section_name parameter. Adjust all callers. (Sized_relobj_file::find_kept_section_object): New method. (Sized_relobj_file::get_symbol_name): New method. (Sized_relobj_file::Kept_comdat_section): Replace object and shndx fields with sh_size, kept_section, symndx, and is_comdat fields. (Sized_relobj_file::set_kept_comdat_section): Replace kept_object and kept_shndx parameters with is_comdat, symndx, sh_size, and kept_section. (Sized_relobj_file::get_kept_comdat_section): Likewise. * target-reloc.h (enum Comdat_behavior): Change CB_WARNING to CB_ERROR. Adjust all references. (issue_undefined_symbol_error): New function template. (relocate_section): Pass section name to map_to_kept_section. Move discarded section code to new function above. * aarch64.cc (Target_aarch64::scan_reloc_section_for_stubs): Move declaration for gsym out one level. Call issue_discarded_error. * arm.cc (Target_arm::scan_reloc_section_for_stubs): Likewise. * powerpc.cc (Relocate_comdat_behavior): Change CB_WARNING to CB_ERROR.
2018-04-05  [GOLD] Make powerpc64 .branch_lt relro  (Alan Modra; 1 file changed, -8/+5)
Better security beats better placement for code optimization. * powerpc.cc (Target_powerpc::make_brlt_section): Make .branch_lt relro.
2018-04-02  Fix problem where mixed section types can cause internal error during a -r link.  (Cary Coutant; 1 file changed, -0/+4)
During a -r (or --emit-relocs) link, if two sections had the same name but different section types, gold would put relocations for both sections into the same relocation section even though the data sections remained separate. For .eh_frame sections, when one section is PROGBITS and another is X86_64_UNWIND, we really should be using the UNWIND section type and combining the sections anyway. For other sections, we should be creating one relocation section for each output data section. gold/ PR gold/23016 * incremental.cc (can_incremental_update): Check for unwind section type. * layout.h (Layout::layout): Add sh_type parameter. * layout.cc (Layout::layout): Likewise. (Layout::layout_reloc): Create new output reloc section if data section does not already have one. (Layout::layout_eh_frame): Check for unwind section type. (Layout::make_eh_frame_section): Use unwind section type for .eh_frame and .eh_frame_hdr. * object.h (Sized_relobj_file::Shdr_write): New typedef. (Sized_relobj_file::layout_section): Add sh_type parameter. (Sized_relobj_file::Deferred_layout::Deferred_layout): Add sh_type parameter. * object.cc (Sized_relobj_file::check_eh_frame_flags): Check for unwind section type. (Sized_relobj_file::layout_section): Add sh_type parameter; pass it to Layout::layout. (Sized_relobj_file::do_layout): Make local copy of sh_type. Force .eh_frame sections to unwind section type. Pass sh_type to layout_section. (Sized_relobj_file<size, big_endian>::do_layout_deferred_sections): Pass sh_type to layout_section. * output.cc (Output_section::Output_section): Initialize reloc_section_. * output.h (Output_section::reloc_section): New method. (Output_section::set_reloc_section): New method. (Output_section::reloc_section_): New data member. * target.h (Target::unwind_section_type): New method. (Target::Target_info::unwind_section_type): New data member. * aarch64.cc (aarch64_info): Add unwind_section_type. * arm.cc (arm_info, arm_nacl_info): Likewise. * i386.cc (i386_info, i386_nacl_info, iamcu_info): Likewise. * mips.cc (mips_info, mips_nacl_info): Likewise. * powerpc.cc (powerpc_info): Likewise. * s390.cc (s390_info): Likewise. * sparc.cc (sparc_info): Likewise. * tilegx.cc (tilegx_info): Likewise. * x86_64.cc (x86_64_info, x86_64_nacl_info): Likewise. * testsuite/Makefile.am (pr23016_1, pr23016_2): New test cases. * testsuite/Makefile.in: Regenerate. * testsuite/testfile.cc: Add unwind_section_type. * testsuite/pr23016_1.sh: New test script. * testsuite/pr23016_1a.s: New source file. * testsuite/pr23016_1b.s: New source file. * testsuite/pr23016_2.sh: New test script. * testsuite/pr23016_2a.s: New source file. * testsuite/pr23016_2b.s: New source file.
2018-02-07  Revert "PowerPC PLT speculative execution barriers"  (Alan Modra; 1 file changed, -50/+13)
This reverts most of commit 1be5d8d3bb. Left in place are addition of --no-plt-align to some ppc32 ld tests and the ld.texinfo --no-plt-thread-safe fix.
2018-01-18  PowerPC PLT stub alignment fixes  (Alan Modra; 1 file changed, -7/+12)
Asking for ppc32 plt call stubs to be aligned at 32 byte boundaries didn't quite work. For ld.bfd they were spaced 32 bytes apart, but only started on a 16 byte boundary. ld.gold also didn't get it right. Finding that bug made me check over the ppc64 plt stub alignment, where I found that negative values for alignment (meaning align to minimize boundary crossing) were not accepted. Since no one has complained about that, I guess I could have removed the feature from ld.bfd documentation, but I've opted instead to correct the code. I've also added an optional alignment parameter for ppc32 --plt-align, for some consistency with gold and ppc64 ld.bfd. bfd/ * elf32-ppc.c (ppc_elf_create_glink): Correct alignment of .glink. * elf64-ppc.c (ppc64_elf_size_stubs): Handle negative plt_stub_align. (ppc64_elf_build_stubs): Likewise. gold/ * powerpc.cc (param_plt_align): New function supplying default --plt-align values. Use it.. (Stub_table::plt_call_align): ..here, and.. (Output_data_glink::global_entry_align): ..here. (Stub_table::stub_align): Correct 32-bit minimum alignment. ld/ * emultempl/ppc32elf.em: Support optional --plt-align arg. * emultempl/ppc64elf.em: Support negative --plt-align arg.
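The "align to minimize boundary crossing" behaviour reads roughly like this (my sketch of the documented meaning, not gold's or ld.bfd's code):

    // For a negative --plt-align value -n: don't force every stub onto a
    // 2^n byte boundary, only pad when a stub would otherwise straddle one.
    #include <cstdint>

    static uint64_t
    place_stub(uint64_t off, uint64_t stub_size, unsigned n)
    {
      uint64_t boundary = uint64_t(1) << n;
      uint64_t room = boundary - (off & (boundary - 1));
      if (stub_size > room && stub_size <= boundary)
        off += room;                 // start the stub at the next boundary
      return off;
    }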
2018-01-17  PowerPC PLT speculative execution barriers  (Alan Modra; 1 file changed, -13/+50)
Spectre variant 2 mitigation for PowerPC and PowerPC64. bfd/ * elf32-ppc.c (GLINK_ENTRY_SIZE): Handle speculation barrier. (CRSETEQ, BEQCTRM): Define. (is_nonpic_glink_stub): Don't check bctr. (ppc_elf_link_hash_table_create): Init new ppc_elf_params field. (ppc_elf_relax_section): Size speculation barrier. (output_bctr): New function. (write_glink_stub): Use output_bctr. (ppc_elf_relocate_section): Use output_bctr for long branch stub. (ppc_elf_finish_dynamic_symbol): Likewise. (ppc_elf_finish_dynamic_sections): Use output_bctr. * elf32-ppc.h (struct ppc_elf_params): Add speculate_indirect_jumps. * elf64-ppc.c (CRSETEQ, BEQCTRM, BEQCTRLM): Define. (GLINK_PLTRESOLVE_SIZE): Size speculation barrier. (size_global_entry_stubs): Handle speculation barrier sizing. (plt_stub_size): Likewise. (output_bctr): New function. (build_plt_stub, build_tls_get_addr_stub): Output speculation barrier. (ppc_build_one_stub): Likewise for ppc_stub_plt_branch. (ppc_size_one_stub): Size speculation barrier in ppc_stub_plt_branch. (build_global_entry_stubs): Output speculation barrier. (ppc64_elf_build_stubs): Likewise in __glink_PLTresolve stub. * elf64-ppc.h (struct ppc64_elf_params): Add speculate_indirect_jumps. gold/ * options.h (speculate_indirect_jumps): New option. * powerpc.cc (beqctrm, beqctrlm, crseteq): New insn constants. (output_bctr): New function. (Stub_table::plt_call_size): Add space for speculation barrier. (Stub_table::branch_stub_size): Likewise. (Output_data_glink::pltresolve_size): Likewise. (Stub_table::do_write): Output speculation barriers. ld/ * emultempl/ppc32elf.em (params): Init new field. (OPTION_SPECULATE_INDIRECT_JUMPS): Define. (OPTION_NO_SPECULATE_INDIRECT_JUMPS): Define. (PARSE_AND_LIST_LONGOPTS): Handle new options. (PARSE_AND_LIST_ARGS_CASES): Likewise. (PARSE_AND_LIST_OPTIONS): Likewise. * emultempl/ppc64elf.em (params): Init new field. (OPTION_SPECULATE_INDIRECT_JUMPS): Define. (OPTION_NO_SPECULATE_INDIRECT_JUMPS): Define. (PARSE_AND_LIST_LONGOPTS): Handle --speculate-indirect-jumps. (PARSE_AND_LIST_OPTIONS): Likewise. (PARSE_AND_LIST_ARGS_CASES): Likewise. * ld.texinfo (--no-plt-thread-safe): Correct itemx. (--speculate-indirect-jumps): Document. * testsuite/ld-powerpc/elfv2exe.d, * testsuite/ld-powerpc/elfv2so.d, * testsuite/ld-powerpc/relbrlt.d, * testsuite/ld-powerpc/powerpc.exp: Disable plt alignment and speculation barriers on various tests.
2018-01-17  PowerPC PLT stub tidy  (Alan Modra; 1 file changed, -93/+132)
This is in preparation for the next patch adding Spectre variant 2 mitigation for PowerPC and PowerPC64. Besides tidying code involved in stub output (to reduce the number of places where bctr is output), the patch adds some user visible features: 1) PowerPC64 ELFv2 global entry stubs now are aligned under the control of --plt-align, with a default alignment of 32 bytes. 2) PowerPC64 __glink_PLTresolve is no longer padded out with nops. 3) PowerPC32 PLT stubs are aligned under the control of --plt-align, with the default alignment being 16 bytes as before. 4) The PowerPC32 branch/nop table emitted before __glink_PLTresolve is now smaller in many cases. It was sized incorrectly when the __tls_get_addr_opt stub was used, and unnecessarily included space for local ifuncs. bfd/ * elf32-ppc.c (GLINK_ENTRY_SIZE): Add parameters, handle __tls_get_addr_opt, and alignment sizing. (TLS_GET_ADDR_GLINK_SIZE): Delete. (is_nonpic_glink_stub): Don't use GLINK_ENTRY_SIZE. (ppc_elf_get_synthetic_symtab): Recognize stubs spaced at 4, 6, or 8 insns. (ppc_elf_link_hash_table_create): Init new ppc_elf_params field. (allocate_dynrelocs): Use new GLINK_ENTRY_SIZE. (ppc_elf_size_dynamic_sections): Likewise. Size branch table by PLT reloc count. (write_glink_stub): Handle __tls_get_addr_opt stub. Pad out to size given by GLINK_ENTRY_SIZE. (ppc_elf_relocate_section): Adjust write_glink_stub call. (ppc_elf_finish_dynamic_symbol): Likewise. (ppc_elf_finish_dynamic_sections): Write PLTresolve without using insn array since so many need rewriting. * elf32-ppc.h (struct ppc_elf_params): Add plt_stub_align. * elf64-ppc.c (GLINK_PLTRESOLVE_SIZE): Rename from GLINK_CALL_STUB_SIZE. Add htab param and evaluate to size without nops. Adjust all uses. (ppc64_elf_get_synthetic_symtab): Don't use GLINK_CALL_STUB_SIZE in glink_vma calculation. (struct ppc_link_hash_table): Add global_entry section pointer. (create_linkage_sections): Create separate section for global entry stubs. (PPC_LO, PPC_HI, PPC_HA): Move earlier. (size_global_entry_stubs): Handle sizing for aligned stubs. (ppc64_elf_size_dynamic_sections): Handle global_entry alloc, and don't stash end of glink branch table in rawsize. (ppc_build_one_stub): Rewrite stub size calculations. (build_global_entry_stubs): Use new section. (ppc64_elf_build_stubs): Don't pad __glink_PLTresolve with nops. Build lazy link stubs out to end of section. Build global entry stubs in new section. gold/ * options.h (plt_align): Support for PowerPC32 too. * powerpc.cc (Stub_table::stub_align): Heed --plt-align for 32-bit. (Stub_table::plt_call_size, branch_stub_size): Tidy. (Stub_table::plt_call_align): Implement using stub_align. (Output_data_glink::global_entry_align): New function. (Output_data_glink::global_entry_off): New function. (Output_data_glink::global_entry_address): Use global_entry_off. (Output_data_glink::pltresolve_size): New function, replacing pltresolve_size_ constant. Update all uses. (Output_data_glink::add_global_entry): Align offset. (Output_data_glink::set_final_data_size): Use global_entry_align. (Stub_table::do_write): Don't pad __glink_PLTrelsolve with nops. Tidy stub output. Use global_entry_off. ld/ * emultempl/ppc32elf.em (params): Init new field. (enum ppc32_opt): New enum to define OPTION_* values. Add OPTION_PLT_ALIGN and OPTION_NO_PLT_ALIGN. (PARSE_AND_LIST_LONGOPTS): Handle new options. (PARSE_AND_LIST_ARGS_CASES): Likewise. (PARSE_AND_LIST_OPTIONS): Likewise. Break up help output. 
* emultempl/ppc64elf.em (ppc_add_stub_section): Init alignment correctly for negative --plt-stub-align. * testsuite/ld-powerpc/elfv2exe.d, * testsuite/ld-powerpc/elfv2so.d, * testsuite/ld-powerpc/relbrlt.d, * testsuite/ld-powerpc/relbrlt.s, * testsuite/ld-powerpc/tlsexe.d, * testsuite/ld-powerpc/tlsexe.r, * testsuite/ld-powerpc/tlsexe32.d, * testsuite/ld-powerpc/tlsexe32.g, * testsuite/ld-powerpc/tlsexe32.r, * testsuite/ld-powerpc/tlsexetoc.d, * testsuite/ld-powerpc/tlsexetoc.r, * testsuite/ld-powerpc/tlsopt5_32.d, * testsuite/ld-powerpc/tlsso.d, * testsuite/ld-powerpc/tlstocso.d: Update for changed stub order.
2018-01-03  Update year range in copyright notice of binutils files  (Alan Modra; 1 file changed, -1/+1)
2017-12-15  [GOLD] PR22602, handle __tls_get_addr forwarders properly  (Alan Modra; 1 file changed, -4/+4)
We never need to resolve_forwards() a symbol found by hash table lookup such as target->tls_get_addr_opt() but we do potentially need to do so for random symbols seen on relocs. So these calls were in the wrong order, resulting in missing stubs and an assertion failure. PR 22602 * powerpc.cc (Target_powerpc::Branch_info::mark_pltcall): Resolve forwards before replacing __tls_get_addr. (Target_powerpc::Branch_info::make_stub): Likewise.
2017-11-27  Fix symbol values and relocation addends for relocatable links.  (Cary Coutant; 1 file changed, -4/+13)
The fix for PR 19291 broke some other cases where -r is used with scripts, as reported in PR 22266. The original fix for PR 22266 ended up breaking many cases for REL targets, where the addends are stored in the section data, and are not being adjusted properly. The problem was basically that in a relocatable output file (ET_REL), symbol values are supposed to be relative to the start address of their section. Usually in a relocatable file, all sections start at 0, so the failure to get this right is often irrelevant, but with a linker script, we occasionally see an output section whose starting address is not 0, and gold would occasionally write a symbol with its relocated value instead of its section-relative value. This patch reverts the recent fix for PR 22266 as well as my original fix for PR 19291. The original fix moved the symbol value adjustment to write_local_symbols, but neglected to undo a few places where the adjustment was also being applied, resulting in an occasional double adjustment. The more recent fix removed those other adjustments, but then failed to re-account for the adjustment when rewriting the relocations on REL targets. With the old attempts reverted, we now apply the symbol value adjustment to the one case that had been missed (non-section symbols in merge sections). But now we also need to account for the adjustment when rewriting the addends for RELA relocations. gold/ PR gold/19291 PR gold/22266 * object.cc (Sized_relobj_file::compute_final_local_value_internal): Revert changes from 2017-11-08 patch. Adjust symbol value in relocatable links for non-section symbols. (Sized_relobj_file::compute_final_local_value): Revert changes from 2017-11-08 patch. (Sized_relobj_file::do_finalize_local_symbols): Likewise. (Sized_relobj_file::write_local_symbols): Revert changes from 2015-11-25 patch. * object.h (Sized_relobj_file::compute_final_local_value_internal): Revert changes from 2017-11-08 patch. * powerpc.cc (Target_powerpc::relocate_relocs): Adjust addend for relocatable links. * target-reloc.h (relocate_relocs): Adjust addend for relocatable links. * testsuite/pr22266_a.c (hello): New function. * testsuite/pr22266_main.c (main): Add test for merge sections. * testsuite/pr22266_script.t: Add rule for .rodata.
2017-10-18  [GOLD] Fix powerpc64 optimization of TOC accesses  (Alan Modra; 1 file changed, -2/+2)
Fixes a thinko. Given code that puts variables into the TOC (a bad idea, but some see the TOC as a small data section) this bug could result in an attempt to optimize a sequence that should not be optimized. * powerpc.cc (Target_powerpc::Scan::local): Correct dst_off calculation for TOC16 relocs. (Target_powerpc::Scan::global): Likewise.
2017-09-22  [GOLD] Set non-exec stack for ppc64  (Alan Modra; 1 file changed, -2/+2)
gcc doesn't emit stack notes for ELFv1, since ELFv1 never needs an executable stack. Note that ELFv1 is usually big-endian and ELFv2 little-endian, but the ABI is really orthogonal to endianness. * powerpc.cc (Target_powerpc<64,*>::powerpc_info): Set is_default_stack_executable false.
2017-09-20  [GOLD] PowerPC function address in non-PIC  (Alan Modra; 1 file changed, -5/+20)
ppc32, like many targets, defines the address of a function as the PLT call stub code for functions referenced but not defined in a non-PIC executable. ppc32 gold, unlike other targets, inherits the ppc64 multiple stub capability for dealing with very large binaries where one set of stubs can't be reached from all code locations. This means there can be multiple choices of address for a function, which might cause function pointer comparison failures. So for ppc32, make non-branch references always use the first stub group. (PowerPC64 ELFv1 is always PIC so doesn't need to define the address of an external function as the PLT stub. PowerPC64 ELFv2 needs a special set of global entry stubs to serve as the address of external functions, so it too is not affected by this bug.) * powerpc.cc (Target_powerpc::Branch_info::make_stub): Put stubs for ppc32 non-branch relocs in first stub table. (Target_powerpc::Relocate::relocate): Resolve similarly.
2017-08-30  PowerPC TPREL16_HA/LO reloc optimization  (Alan Modra; 1 file changed, -0/+32)
In the TLS GD/LD to LE optimization, ld replaces a sequence like addi 3,2,x@got@tlsgd R_PPC64_GOT_TLSGD16 x bl __tls_get_addr(x@tlsgd) R_PPC64_TLSGD x R_PPC64_REL24 __tls_get_addr nop with addis 3,13,x@tprel@ha R_PPC64_TPREL16_HA x addi 3,3,x@tprel@l R_PPC64_TPREL16_LO x nop When the tprel offset is small, this can be further optimized to nop addi 3,13,x@tprel nop bfd/ * elf64-ppc.c (struct ppc_link_hash_table): Add do_tls_opt. (ppc64_elf_tls_optimize): Set it. (ppc64_elf_relocate_section): Nop addis on TPREL16_HA, and convert insn on TPREL16_LO and TPREL16_LO_DS relocs to use r13 when addis would add zero. * elf32-ppc.c (struct ppc_elf_link_hash_table): Add do_tls_opt. (ppc_elf_tls_optimize): Set it. (ppc_elf_relocate_section): Nop addis on TPREL16_HA, and convert insn on TPREL16_LO relocs to use r2 when addis would add zero. gold/ * powerpc.cc (Target_powerpc::Relocate::relocate): Nop addis on TPREL16_HA, and convert insn on TPREL16_LO and TPREL16_LO_DS relocs to use r2/r13 when addis would add zero. ld/ * testsuite/ld-powerpc/tls.s: Add calls with tls markers. * testsuite/ld-powerpc/tls32.s: Likewise. * testsuite/ld-powerpc/powerpc.exp: Run tls marker tests. * testsuite/ld-powerpc/tls.d: Adjust for TPREL16_HA/LO optimization. * testsuite/ld-powerpc/tlsexe.d: Likewise. * testsuite/ld-powerpc/tlsexetoc.d: Likewise. * testsuite/ld-powerpc/tlsld.d: Likewise. * testsuite/ld-powerpc/tlsmark.d: Likewise. * testsuite/ld-powerpc/tlsopt4.d: Likewise. * testsuite/ld-powerpc/tlstoc.d: Likewise.
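The "when addis would add zero" condition is simply that the high-adjusted part of the tprel offset vanishes, i.e. the offset fits a signed 16-bit field; for example:

    // Sketch: @ha of the offset is zero exactly for offsets in
    // [-0x8000, 0x7fff], so the addis can be replaced by a nop.
    #include <cstdint>

    static inline bool
    tprel_ha_is_zero(int64_t tprel)
    {
      return ((tprel + 0x8000) >> 16) == 0;
    }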
2017-08-29  [GOLD] PowerPC tls_get_addr_optimize  (Alan Modra; 1 file changed, -57/+306)
This implements the special __tls_get_addr_opt call stub for powerpc gold that returns __thread variable addresses without actually making a call to __tls_get_addr in most cases. Shared libraries that are loaded at program load time (ie. dlopen is not used) have a known layout for their __thread variables, and thus DTPMOD64/DPTREL64 pairs describing those variables can be set up by ld.so for the __tls_get_addr_opt call stub fast exit. Ref https://sourceware.org/ml/libc-alpha/2015-03/msg00626.html I really, really wish I'd used a differently versioned __tls_get_addr symbol than the base symbol to indicate glibc support for the optimized call, rather than having glibc export __tls_get_addr_opt. A lot of the messing around here, flipping symbols from __tls_get_addr to __tls_get_addr_opt, is caused by that decision. About the only benefit is that a user can see at a glance that their disassembled code is calling __tls_get_addr via the fancy call stub.. Anyway, we need references to __tls_get_addr to seem like they were to __tls_get_addr_opt, and in cases like the tsan interceptor, a definition of __tls_get_addr to seem like one of __tls_get_addr_opt as well. That's the reason for Symbol::clear_in_reg and Symbol_table::clone, and why symbols are substituted in Scan::global and other places dealing with dynamic linking. elfcpp/ * elfcpp.h (DT_PPC_OPT): Define. * powerpc.h (PPC_OPT_TLS): Define. gold/ * options.h (tls_get_addr_optimize): New option. * symtab.h (Symbol::clear_in_reg, clone): New functions. (Sized_symbol::clone): New function. (Symbol_table::clone): New function. * resolve.cc (Symbol::clone, Sized_symbol::clone): New functions. * powerpc.cc (Target_powerpc::has_tls_get_addr_opt_, tls_get_addr_, tls_get_addr_opt_): New vars. (Target_powerpc::tls_get_addr_opt, tls_get_addr, is_tls_get_addr_opt, replace_tls_get_addr, set_has_tls_get_addr_opt, stk_linker): New functions. (Target_powerpc::Track_tls::maybe_skip_tls_get_addr_call): Add target param. Update callers. Compare symbols rather than names. (Target_powerpc::do_define_standard_symbols): Init tls_get_addr_ and tls_get_addr_opt_. (Target_powerpc::Branch_info::mark_pltcall): Translate tls_get_addr sym to tls_get_addr_opt. (Target_powerpc::Branch_info::make_stub): Likewise. (Stub_table::define_stub_syms): Likewise. (Target_powerpc::Scan::global): Likewise. (Target_powerpc::Relocate::relocate): Likewise. (add_3_12_2, add_3_12_13, bctrl, beqlr, cmpdi_11_0, cmpwi_11_0, ld_11_1, ld_11_3, ld_12_3, lwz_11_3, lwz_12_3, mr_0_3, mr_3_0, mtlr_11, std_11_1): New constants. (Stub_table::eh_frame_added_): Delete. (Stub_table::tls_get_addr_opt_bctrl_, plt_fde_len_, plt_fde_): New vars. (Stub_table::init_plt_fde): New functions. (Stub_table::add_eh_frame, replace_eh_frame): Move definition out of line. Init and use plt_fde_. (Stub_table::plt_call_size): Return size for tls_get_addr stub. Extract alignment code to.. (Stub_table::plt_call_align): ..this new function. Adjust all callers. (Stub_table::add_plt_call_entry): Set has_tls_get_addr_opt and tls_get_addr_opt_bctrl, and align after that. (Stub_table::do_write): Write out tls_get_addr stub. (Target_powerpc::do_finalize_sections): Emit DT_PPC_OPT PPC_OPT_TLS/PPC64_OPT_TLS bit. (Target_powerpc::Relocate::relocate): Don't check for or modify nop following bl for tls_get_addr stub.
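Conceptually the stub's fast exit looks like this (a C++ rendering of the idea only; the real stub is a handful of instructions, and the exact GOT-slot convention belongs to glibc and ld, so the zero-dtpmod marker below is an assumption):

    // ld.so pre-resolves the DTPMOD64/DTPREL64 pair for modules loaded at
    // program start, letting the stub hand back a thread-pointer-relative
    // address without making the call.
    #include <cstdint>

    struct tls_index { uint64_t dtpmod; uint64_t dtprel; };

    extern "C" void* __tls_get_addr(tls_index*);

    static void*
    tls_get_addr_opt(tls_index* ti, char* thread_pointer)
    {
      if (ti->dtpmod == 0)                      // assumed marker: already resolved
        return thread_pointer + ti->dtprel;     // fast exit, no function call
      return __tls_get_addr(ti);                // fall back to the real call
    }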
2017-08-28  [GOLD] Symbol flag for PowerPC64 localentry:0 tracking  (Alan Modra; 1 file changed, -3/+20)
This patch provides a flag for PowerPC64 ELFv2 use in class Symbol, and modifies Sized_target::resolve to return whether the symbol has been resolved. If not, normal processing continues. I use this for PowerPC64 ELFv2 to keep track of whether a symbol has any definition with non-zero localentry, in order to disable --plt-localentry for that symbol. PR 21847 * powerpc.cc (Target_powerpc::is_elfv2_localentry0): Test non_zero_localentry. (Target_powerpc::resolve): New function. (powerpc_info): Set has_resolve for 64-bit. * target.h (Sized_target::resolve): Return bool. * resolve.cc (Symbol_table::resolve): Continue with normal processing when target resolve returns false. * symtab.h (Symbol::non_zero_localentry, set_non_zero_localentry): New accessors. (Symbol::non_zero_localentry_): New flag bit. * symtab.cc (Symbol::init_fields): Init non_zero_localentry_.
2017-08-01  [GOLD] PowerPC recreate eh_frame for stubs on each relax pass  (Alan Modra; 1 file changed, -18/+41)
There is a very small but non-zero probability that a stub group contains stubs on one relax pass, but does not on the next. In that case we would get an FDE covering a zero length address range. (Actually, it's even worse. Alignment padding for stubs can mean the address for the non-existent stubs is past the end of the original section to which stubs are attached, and due to the way do_plt_fde_location calculates the length we can get a negative length.) Fixing this properly requires removing the FDE. Also, I have been implementing the __tls_get_addr_opt support for gold, and that stub needs something other than the default FDE. The necessary FDE will depend on the offset to the __tls_get_addr_opt stub, which of course can change during relaxation. That means at the very least, rewriting the FDE on each pass, possibly changing the FDE size. I think that is better done by completely recreating PLT eh_frame FDEs. * ehframe.cc (Fde::operator==): New. (Cie::remove_fde, Eh_frame::remove_ehframe_for_plt): New. * ehframe.h (Fde::operator==): Declare. (Cie::remove_fde, Eh_frame::remove_ehframe_for_plt): Likewise. * layout.cc (Layout::remove_eh_frame_for_plt): New. * layout.h (Layout::remove_eh_frame_for_plt): Declare. * powerpc.cc (Target_powerpc::do_relax): Remove old eh_frame FDEs. (Stub_table::add_eh_frame): Delete eh_frame_added_ condition. Don't add eh_frame for empty stub section. (Stub_table::remove_eh_frame): New.
2017-07-31  [GOLD] PowerPC --no-tls-optimize  (Alan Modra; 1 file changed, -15/+25)
This adds a --no-tls-optimize option for people who want to keep __tls_get_addr calls in an executable rather than optimizing such code sequences to IE/LE. Also tidy some formatting errors, rename a variable to better reflect its use, and tweak two functions that create pairs of GOT entries to first check whether the GOT entry already exists before potentially inserting the header via reserve(2). Without the check it is possible to waste one GOT entry. * options.h (no_tls_optimize): New powerpc option. * powerpc.cc (Target_powerpc::abiversion, set_abiversion): Formatting. (Target_powerpc::stk_toc): Formatting, fix comment. (Target_powerpc::Track_tls::tls_get_addr_state): Rename from tls_get_addr. (Target_powerpc::optimize_tls_gd, optimize_tls_ld, optimize_tls_ie): Return TLSOPT_NONE when !tls_optimize. (Target_powerpc::add_global_pair_with_rel): Check for existing reloc before reserving. (Target_powerpc::add_local_tls_pair): Likewise.
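The GOT tweak is the usual check-before-reserve pattern; a generic sketch with a toy type (not gold's API):

    // Only reserve a pair of GOT slots for a symbol the first time it is seen.
    #include <cstdint>
    #include <map>

    class Toy_got
    {
     public:
      uint64_t add_tls_pair(unsigned int symndx)
      {
        std::map<unsigned int, uint64_t>::iterator p = entries_.find(symndx);
        if (p != entries_.end())
          return p->second;          // already present: don't waste two slots
        uint64_t off = next_off_;
        next_off_ += 16;             // a pair of 8-byte GOT entries
        entries_[symndx] = off;
        return off;
      }
     private:
      std::map<unsigned int, uint64_t> entries_;
      uint64_t next_off_ = 0;
    };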
2017-07-31  PR 21847, PowerPC64 --plt-localentry again  (Alan Modra; 1 file changed, -0/+4)
This makes ld warn about --plt-localentry if a version of glibc without the necessary ld.so checks is detected, and revises the documentation. bfd/ * elf64-ppc.c (ppc64_elf_tls_setup): Warn on --plt-localentry without ld.so checks. gold/ * powerpc.cc (Target_powerpc::scan_relocs): Warn on --plt-localentry without ld.so checks. ld/ * ld.texinfo (plt-localentry): Revise.
2017-07-29  PR 21847, Don't default PowerPC64 to --plt-localentry  (Alan Modra; 1 file changed, -2/+0)
The big comment in ppc64_elf_tls_setup says why. I've also added some code to the bfd linker that catches the -lpthread -lc symbol differences and disable generation of optimized call stubs even when --plt-localentry is activated. Gold doesn't yet have that. PR 21847 bfd/ * elf64-ppc.c (struct ppc_link_hash_entry): Add non_zero_localentry. (ppc64_elf_merge_symbol): Set non_zero_localentry. (is_elfv2_localentry0): Test non_zero_localentry. (ppc64_elf_tls_setup): Default to --no-plt-localentry. gold/ * powerpc.cc (Target_powerpc::scan_relocs): Default to --no-plt-localentry. ld/ * ld.texinfo (plt-localentry): Document.
2017-07-23  Correct eh_frame info for __glink_PLTresolve  (Alan Modra; 1 file changed, -2/+2)
My PPC64_OPT_LOCALENTRY patch of June 1, git commit f378ab099d, and the later gold change, git commit 7ee7ff7015, added an insn in __glink_PLTresolve which needs a corresponding adjustment in the eh_frame info for asynchronous exceptions to unwind correctly. It would have been OK for both ABIs to use +5 for the advance before restore of LR, since we can put the DW_CFA_restore_extended on any insn after the actual restore and before the r12/r0 copy is clobbered, but it's slightly better to delay as much as possible. There are then more addresses where fewer CFA program insns are executed. bfd/ * elf64-ppc.c (ppc64_elf_size_stubs): Correct advance to restore of LR. gold/ * powerpc.cc (glink_eh_frame_fde_64v2): Correct advance to restore of LR. (glink_eh_frame_fde_64v1): Advance to restore of LR at latest possible insn.
2017-07-18  Fix spelling typos.  (Yuri Chornoivan; 1 file changed, -1/+1)
2017-06-23  [GOLD] PowerPC64 localentry:0 plt call optimization  (Alan Modra; 1 file changed, -11/+109)
elfcpp/ * elfcpp.h (DT_PPC64_OPT): Define. * powerpc.h (PPC64_OPT_TLS, PPC64_OPT_MULTI_TOC, PPC64_OPT_LOCALENTRY): Define. gold/ * options.h (General_options): Add plt_localentry. * powerpc.cc (Target_powerpc::st_other): New function. (Target_powerpc::plt_localentry0_, plt_localentry0_init_, has_localentry0_): New vars. (Target_powerpc::plt_localentry0, set_has_localentry0, is_elfv2_localentry0): New functions. (Target_powerpc::Branch_info::mark_pltcall): Don't set tocsave or return true for localentry:0 calls. (Stub_table::Plt_stub_ent::localentry0_): New var. (Stub_table::add_plt_call_entry): Set localentry0_ and has_localentry0_. Don't set r2save_ for localentry:0 calls. (Output_data_glink::do_write): Save r2 in __glink_PLTresolve for elfv2. (Target_powerpc::scan_relocs): Default plt_localentry0_. (Target_powerpc::do_finalize_sections): Set DT_PPC64_OPT. (Target_powerpc::Relocate::relocate): Don't require nop following calls for localentry:0 plt calls, and don't change nop.
2017-06-23  [GOLD] PowerPC64 tocsave  (Alan Modra; 1 file changed, -55/+224)
This adds support to gold for the tocsave relocs already supported by ld.bfd. R_PPC64_TOCSAVE relocs are part of a scheme to move r2 saves to the prologue of a function rather than in each plt call stub. We don't want a compiler to always emit the r2 save, as this would be wasted if the calls turned out to be local. See the tocsave*.s in ld/testsuite/ld-powerpc/. * powerpc.cc (Target_powerpc::tocsave_loc_): New var. (Target_powerpc::mark_pltcall, add_tocsave, tocsave_loc): New functions. (Target_powerpc::Branch_info::tocsave_): New var. (Target_powerpc::Branch_info::mark_pltcall): New function. (Target_powerpc::Branch_info::make_stub): Pass tocsave_ to add_plt_call_entry. (Stub_table::Plt_stub_ent): Make public. Add r2save_. (Stub_table::add_plt_call_entry): Add bool tocsave_ param. Set r2save_. (Stub_table::find_plt_call_entry): Return Plt_stub_ent*. Adjust use throughout. (Stub_table::do_write): Conditionally output r2 save in plt stubs. (Target_powerpc::Scan::local): Handle R_PPC64_TOCSAVE. (Target_powerpc::Scan::global): Likewise. (Target_powerpc::Relocate::relocate): Skip r2 save in plt call stub with tocsave reloc. Replace header tocsave nop with r2 save. * symtab.h (struct Symbol_location_hash): Make public.
2017-06-21  [GOLD] PowerPC move plt indx_ out of unordered map key  (Alan Modra; 1 file changed, -42/+53)
I was lazy when adding indx_ to Plt_stub_ent. The field isn't part of the key, so ought to be part of the mapped type. Make it so. * powerpc.cc (Plt_stub_key): Rename from Plt_stub_ent. Remove indx_. (Plt_stub_key_hash): Rename from Plt_stub_ent_hash. (struct Plt_stub_ent): New. (Plt_stub_entries): Map from Plt_stub_key to Plt_stub_ent. Adjust use throughout file.
2017-06-20  [GOLD] Avoid duplicate PLT stub symbols on ppc32  (James Clarke; 1 file changed, -6/+12)
If two objects are compiled with -fPIC or -fPIE and call the same function, two different PLT entries are created, one for each object, but the same stub symbol name is used for both. * powerpc.cc (Stub_table::define_stub_syms): Always include object's uniq_ value.
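For illustration, the fix amounts to always folding a per-object id into the generated stub symbol name (the format below is made up, not gold's exact one):

    // Two objects calling the same function then get distinct stub symbols.
    #include <cstdio>
    #include <string>

    static std::string
    stub_sym_name(unsigned int obj_uniq, const std::string& target)
    {
      char prefix[32];
      std::snprintf(prefix, sizeof prefix, "%08x.plt_call.", obj_uniq);
      return prefix + target;
    }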
2017-05-23  PR21503, Gold doesn't create linker stub symbols on ppc64  (Alan Modra; 1 file changed, -27/+165)
PR 21503 * options.h: Add --emit-stub-syms option. * powerpc.cc (object_id): New. (Powerpc_relobj): Add uniq_ and accessor. Sort variables for better packing. (Powerpc_dynobj): Sort variables for better packing. (Target_powerpc::define_local): New function. (Target_powerpc::group_sections): Pass stub table size to Stub_table constructor. (Target_powerpc::do_relax): Define stub and glink symbols. (Stub_table): Add uniq_ variable, and id param to constructor. (Stub_table::Plt_stub_ent): Add indx_ variable. (Stub_table::Branch_stub_entries): Move typedef earlier. (Stub_table::branch_stub_size): Replace "to" parameter with a Branch_stub_entries iterator. (Stub_table::add_long_branch_entry): Adjust to suit. (Stub_table::add_plt_call_entry): Set indx_. (Stub_table::define_stub_syms): New function.
2017-02-22  PowerPC ld segfault on script discarding dynamic sections  (Alan Modra; 1 file changed, -5/+9)
bfd/ * elf64-ppc.c (ppc64_elf_finish_dynamic_sections): Don't segfault on .got or .plt output section being discarded by script. * elf32-ppc.c (ppc_elf_finish_dynamic_sections): Likewise. Move vxworks splt temp. gold/ * powerpc.cc (Target_powerpc::make_iplt_section): Check that output_section exists before attempting add_output_section_data. (Target_powerpc::make_brlt_section): Likewise.
2017-02-03  [GOLD] PowerPC64 TOC indirect to TOC relative segfault  (Alan Modra; 1 file changed, -0/+6)
* powerpc.cc (Powerpc_relobj::make_toc_relative): Don't crash when no .toc section exists.
2017-01-13  Gold: Fix build with GCC 4.2  (H.J. Lu; 1 file changed, -1/+1)
PR gold/21040 * powerpc.cc (Powerpc_relobj<size, big_endian>::make_toc_relative): Cast 0x80008000 to uint64_t.
2017-01-11  [GOLD] PowerPC64 TOC indirect to TOC relative code editing  (Alan Modra; 1 file changed, -44/+517)
Doesn't yet trim off the unused TOC entries. * powerpc.cc (class Powerpc_copy_relocs): New. (Powerpc_copy_relocs::emit): New function. (Powerpc_relobj::relatoc_, toc_, no_toc_opt_): New variables. (Powerpc_relobj::toc_shndx, set_no_toc_opt, no_toc_opt): New inlines. (Powerpc_relobj::do_relocate_sections): New function. (Powerpc_relobj::make_toc_relative): Likewise. (Powerpc_relobj::do_find_special_sections): Stash away .rela.toc and .toc too. (ok_lo_toc_insn): Move earlier, and handle more insns. (Target_powerpc::Scan::local): If optimizing toc accesses, set no_toc_opt for entries we can't edit. Check insn validity. Emit "toc optimization is not supported" warning, downgraded from error. (Target_powerpc::Scan::global): Likewise. (Target_powerpc::Relocate::relocate): Edit TOC indirect code to TOC relative. Don't emit "toc optimization is not supported" error here.
2017-01-10  [GOLD] Add --secure-plt option for ppc32  (Alan Modra; 1 file changed, -0/+38)
Added just to accept, and ignore. gcc since 2015-10-21, when configured with --enable-secureplt passes this option to the linker. As powerpc gold cannot link --bss-plt code successfully, gold needs to accept the option or the gcc specs file needs to be changed. The patch also make gold detect --bss-plt code and error out rather than producing a binary that crashes. * options.h: Add --secure-plt option. * powerpc.cc (Target_powerpc::Scan::local): Detect and error on -fPIC -mbss-plt code. (Target_powerpc::Scan::global): Likewise.
2017-01-09  [GOLD] Set sh_info of .rela.plt for powerpc  (Alan Modra; 1 file changed, -0/+3)
* powerpc.cc (Target_powerpc::make_plt_section): Point sh_info of ".rela.plt" at ".plt".
2017-01-07  [GOLD] powerpc.cc tidies  (Alan Modra; 1 file changed, -38/+28)
Plus some paranoia in symval_for_branch. We shouldn't get there with dynamic symbols, but if we ever did the static_cast to Powerpc_relobj would be wrong. * powerpc.cc: Use shorter equivalent elfcpp typedef for Reltype and reloc_size throughout. (Target_powerpc::symval_for_branch): Exclude dynamic symbols. (Target_powerpc::Scan::local): Use local var r_sym. (Target_powerpc::Scan::global): Likewise. (Target_powerpc::Relocate::relocate): Delete shadowing r_sym.