author     Andrew Waterman <andrew@sifive.com>    2017-11-05 00:39:01 +0000
committer  Palmer Dabbelt <palmer@gcc.gnu.org>    2017-11-05 00:39:01 +0000
commit     caf1c1cd1253a847644744e3d6df3f98051ef024 (patch)
tree       ea3a913a9901cb4877862b088ee4ebc768ac76eb
parent     ecc82a8d0551e02afc9bb4d9dff450f6f0098b4e (diff)
download   gcc-caf1c1cd1253a847644744e3d6df3f98051ef024.zip
           gcc-caf1c1cd1253a847644744e3d6df3f98051ef024.tar.gz
           gcc-caf1c1cd1253a847644744e3d6df3f98051ef024.tar.bz2
RISC-V: If -m[no-]strict-align is not passed, assume its value from -mtune
2017-11-04 Andrew Waterman <andrew@sifive.com>
* config/riscv/riscv.c (riscv_option_override): Conditionally set
TARGET_STRICT_ALIGN based upon -mtune argument.
From-SVN: r254417
-rw-r--r--   gcc/ChangeLog            | 5
-rw-r--r--   gcc/config/riscv/riscv.c | 6
2 files changed, 10 insertions, 1 deletion
diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 285ac20..09b0cd7 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,5 +1,10 @@
 2017-11-04  Andrew Waterman  <andrew@sifive.com>
 
+	* config/riscv/riscv.c (riscv_option_override): Conditionally set
+	TARGET_STRICT_ALIGN based upon -mtune argument.
+
+2017-11-04  Andrew Waterman  <andrew@sifive.com>
+
 	* config/riscv/riscv.h (SLOW_BYTE_ACCESS): Change to 1.
 
 2017-11-04  Daniel Santos  <daniel.santos@pobox.com>
diff --git a/gcc/config/riscv/riscv.c b/gcc/config/riscv/riscv.c
index b81a2d2..52bbc25 100644
--- a/gcc/config/riscv/riscv.c
+++ b/gcc/config/riscv/riscv.c
@@ -3772,9 +3772,13 @@ riscv_option_override (void)
 
   /* Use -mtune's setting for slow_unaligned_access, even when optimizing
      for size.  For architectures that trap and emulate unaligned accesses,
-     the performance cost is too great, even for -Os.  */
+     the performance cost is too great, even for -Os.  Similarly, if
+     -m[no-]strict-align is left unspecified, heed -mtune's advice.  */
   riscv_slow_unaligned_access_p = (cpu->tune_info->slow_unaligned_access
 				   || TARGET_STRICT_ALIGN);
+  if ((target_flags_explicit & MASK_STRICT_ALIGN) == 0
+      && cpu->tune_info->slow_unaligned_access)
+    target_flags |= MASK_STRICT_ALIGN;
 
   /* If the user hasn't specified a branch cost, use the processor's
      default.  */
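For readers skimming the patch, the new hunk's effect is: when neither -mstrict-align nor -mno-strict-align was given on the command line (MASK_STRICT_ALIGN is absent from target_flags_explicit) and the -mtune CPU's tuning table marks unaligned accesses as slow, the compiler turns strict alignment on. Below is a minimal, self-contained C sketch of that decision logic, assuming simplified stand-ins for GCC's target_flags machinery; the struct, the MASK_STRICT_ALIGN value, and the main() harness are illustrative, not GCC's actual interfaces.

/* Standalone sketch (not GCC source) of the option-override behaviour
   this commit adds: inherit strict alignment from the tuning model when
   the user did not choose it explicitly.  */
#include <stdbool.h>
#include <stdio.h>

#define MASK_STRICT_ALIGN 0x1   /* illustrative flag bit */

struct tune_info
{
  bool slow_unaligned_access;   /* set per -mtune CPU model */
};

static void
option_override (unsigned *target_flags, unsigned target_flags_explicit,
                 const struct tune_info *tune)
{
  /* If -m[no-]strict-align was not passed, heed -mtune's advice.  */
  if ((target_flags_explicit & MASK_STRICT_ALIGN) == 0
      && tune->slow_unaligned_access)
    *target_flags |= MASK_STRICT_ALIGN;
}

int
main (void)
{
  struct tune_info slow_tune = { .slow_unaligned_access = true };
  unsigned flags;

  /* Case 1: no explicit -m[no-]strict-align; the tuning decides.  */
  flags = 0;
  option_override (&flags, 0, &slow_tune);
  printf ("implicit: strict_align=%d\n", (flags & MASK_STRICT_ALIGN) != 0);

  /* Case 2: the user chose explicitly; the tuning is ignored.  */
  flags = 0;
  option_override (&flags, MASK_STRICT_ALIGN, &slow_tune);
  printf ("explicit: strict_align=%d\n", (flags & MASK_STRICT_ALIGN) != 0);
  return 0;
}

In practice this means that selecting an -mtune model whose tuning marks unaligned accesses as slow now behaves as if -mstrict-align had been passed, while an explicit -mno-strict-align (or -mstrict-align) on the command line still takes precedence.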