author | H.J. Lu <hjl.tools@gmail.com> | 2015-12-04 08:43:45 -0800 |
---|---|---|
committer | H.J. Lu <hjl.tools@gmail.com> | 2015-12-04 09:03:04 -0800 |
commit | 02e2aef89bb58e8e0cc4390db41c5f775e1b7c3e (patch) | |
tree | 323891844d56806214ca3851131d40cea1d8fc68 /ld/testsuite/ld-i386/mov2.s | |
parent | f27c5390b2fcff06b1e2199a4f051d543670aa03 (diff) | |
Optimize R_386_GOT32/R_386_GOT32X only if addend is 0
The linker can't optimize R_386_GOT32 and R_386_GOT32X relocations if the
addend isn't 0. It isn't valid to convert

	movl foo@GOT+1(%ecx), %eax

to

	leal foo@GOTOFF+1(%ecx), %eax

nor, for x86-64, to convert

	movq foo@GOTPCREL+1(%rip), %rax

to

	leaq foo(%rip), %rax

We should check that the addend is 0 before optimizing R_386_GOT32 and
R_386_GOT32X relocations. Test cases are added for i386 and x86-64.
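For contrast, the zero-addend form is the case the optimization is meant to handle. A minimal i386 sketch (illustrative only, not part of the patch; the register and symbol names are arbitrary):

	.text
	.globl	_start
	.type	_start, @function
_start:
	# Addend 0: when foo is defined locally, the linker may turn this
	# GOT load into an address computation such as
	# "leal foo@GOTOFF(%ebx), %eax".
	movl	foo@GOT(%ebx), %eax
	# Addend 1: this loads from the GOT slot's address plus 1, whereas
	# "leal foo@GOTOFF+1(%ebx), %eax" would compute foo's address plus 1,
	# so the conversion must be skipped.
	movl	foo@GOT+1(%ebx), %eax
	.size	_start, .-_start

With this patch, elf_i386_convert_load only considers the first instruction and leaves the second one alone.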
bfd/
* elf32-i386.c (elf_i386_convert_load): Skip if addend isn't 0.
(elf_i386_relocate_section): Skip R_386_GOT32X optimization if
addend isn't 0.
ld/testsuite/
* ld-i386/i386.exp: Run mov2a, mov2b and mov3.
* ld-i386/mov2.s: New file.
* ld-i386/mov2a.d: Likewise.
* ld-i386/mov2b.d: Likewise.
* ld-i386/mov3.d: Likewise.
* ld-i386/mov3.s: Likewise.
* ld-x86-64/mov2.s: Likewise.
* ld-x86-64/mov2a.d: Likewise.
* ld-x86-64/mov2b.d: Likewise.
* ld-x86-64/mov2c.d: Likewise.
* ld-x86-64/mov2d.d: Likewise.
* ld-x86-64/x86-64.exp: Run mov2a, mov2b, mov2c and mov2d.
Diffstat (limited to 'ld/testsuite/ld-i386/mov2.s')
-rw-r--r-- | ld/testsuite/ld-i386/mov2.s | 15
1 file changed, 15 insertions(+), 0 deletions(-)
diff --git a/ld/testsuite/ld-i386/mov2.s b/ld/testsuite/ld-i386/mov2.s
new file mode 100644
index 0000000..3fa06ce
--- /dev/null
+++ b/ld/testsuite/ld-i386/mov2.s
@@ -0,0 +1,15 @@
+	.section my_section,"aw",@progbits
+	.long 0x12345678
+	.text
+	.globl foo
+	.type foo, @function
+foo:
+	ret
+	.size foo, .-foo
+	.globl _start
+	.type _start, @function
+_start:
+	movl foo@GOT+1(%ecx), %eax
+	movl __start_my_section@GOT+1(%ecx), %eax
+	movl __stop_my_section@GOT+1(%ecx), %eax
+	.size _start, .-_start
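The x86-64 counterparts (ld-x86-64/mov2.s and the mov2a.d through mov2d.d dump files) are not shown here because this view is limited to ld/testsuite/ld-i386/mov2.s. By analogy with the file above, a hypothetical input exercising the same non-zero-addend case on x86-64 could look like this (a sketch only, not the actual test file):

	.text
	.globl	foo
	.type	foo, @function
foo:
	ret
	.size	foo, .-foo
	.globl	_start
	.type	_start, @function
_start:
	# Non-zero addend: must stay a load through the GOT and cannot be
	# relaxed to "leaq foo(%rip), %rax".
	movq	foo@GOTPCREL+1(%rip), %rax
	.size	_start, .-_start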