author     Richard Sandiford <richard.sandiford@arm.com>  2020-09-24 10:06:11 +0100
committer  Richard Sandiford <richard.sandiford@arm.com>  2020-09-24 10:06:11 +0100
commit     e94797250b403d66cb3624a594e41faf0dd76617
tree       b194646ca7dd5acb0bc0333d86814438f44e5a4b
parent     10843f8303509fcba880c6c05c08e4b4ccd24f36

arm: Fix canary address calculation for non-PIC
For non-PIC, the stack protector patterns did:
      rtx mem = XEXP (force_const_mem (SImode, operands[1]), 0);
      emit_move_insn (operands[2], mem);
Here, operands[1] is the address of the canary (&__stack_chk_guard)
and operands[2] is the register that we want to move that address into.
However, the code above instead sets operands[2] to the address of a
constant pool entry that contains &__stack_chk_guard, rather than to
&__stack_chk_guard itself. The sequence therefore did one less
pointer indirection than it should.
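
The ChangeLog below describes the fix as loading the address of the
canary rather than the address of the constant pool entry, which in
this expander amounts to keeping the MEM returned by force_const_mem
instead of taking its address.  A minimal sketch of the two variants,
written against the usual RTL helpers (an illustration of the
description above, not the verbatim arm.md hunk):

      /* Buggy: XEXP (..., 0) strips the MEM wrapper, so operands[2]
         receives the address of the constant pool slot that holds
         &__stack_chk_guard.  */
      rtx pool_addr = XEXP (force_const_mem (SImode, operands[1]), 0);
      emit_move_insn (operands[2], pool_addr);

      /* Intended: load the slot itself, so operands[2] receives
         &__stack_chk_guard, the address of the canary.  */
      rtx pool_mem = force_const_mem (SImode, operands[1]);
      emit_move_insn (operands[2], pool_mem);
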
The net effect was to use &__stack_chk_guard for stack-smash detection,
instead of using __stack_chk_guard itself.
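
Put differently, the buggy sequence stops one pointer level too high.
A small self-contained C model of the two sequences (the variable
names here are stand-ins invented for this illustration, not symbols
from the patch):

      #include <stdio.h>
      #include <stdint.h>

      static uintptr_t canary = 0x1234;             /* plays __stack_chk_guard */
      static uintptr_t *const pool_slot = &canary;  /* literal pool: &canary   */

      int
      main (void)
      {
        /* Intended: the register gets the slot's contents (&canary),
           so one further load yields the canary value.  */
        uintptr_t *reg_good = pool_slot;
        uintptr_t val_good = *reg_good;

        /* Buggy: the register gets the slot's address, so the same
           load yields &canary rather than the canary value.  */
        uintptr_t *const *reg_bad = &pool_slot;
        uintptr_t val_bad = (uintptr_t) *reg_bad;

        printf ("intended %#lx, buggy %#lx\n",
                (unsigned long) val_good, (unsigned long) val_bad);
        return 0;
      }
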
gcc/
        * config/arm/arm.md (*stack_protect_combined_set_insn): For non-PIC,
        load the address of the canary rather than the address of the
        constant pool entry that points to it.
        (*stack_protect_combined_test_insn): Likewise.

gcc/testsuite/
        * gcc.target/arm/stack-protector-3.c: New test.
        * gcc.target/arm/stack-protector-4.c: Likewise.
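
For reference, a hypothetical sketch of the kind of check such a
target test might contain (illustrative only; this is not the actual
content of the new files):

      /* { dg-do compile } */
      /* { dg-options "-O2 -fstack-protector-all" } */

      void consume (char *);

      void
      foo (void)
      {
        char buf[64];
        consume (buf);
      }

      /* The generated code should reference the canary symbol.  */
      /* { dg-final { scan-assembler "__stack_chk_guard" } } */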