path: root/gcc/fold-const.c
author: Wilco Dijkstra <wdijkstr@arm.com> 2017-06-28 14:13:02 +0000
committer: Wilco Dijkstra <wilco@gcc.gnu.org> 2017-06-28 14:13:02 +0000
commit: 55994b971b02a3808f3776ce66e890ecc1c7b759 (patch)
tree: dff29e7e29c7b161ca55e3d7bcf2469727c7c2f0 /gcc/fold-const.c
parent: 926c786507a69f31253d6c904cf582b9ba162ded (diff)
Improve Cortex-A53 shift bypass
The aarch_forward_to_shift_is_not_shifted_reg bypass always returns true on AArch64 shifted instructions. This causes the bypass to activate in too many cases, resulting in slower execution on Cortex-A53, as reported in PR79665. This patch uses the arm_no_early_alu_shift_dep condition instead, which improves the example in PR79665 by ~7%. Since aarch_forward_to_shift_is_not_shifted_reg is no longer used, remove it. Also remove an unnecessary REG_P check.

gcc/
	PR target/79665
	* config/arm/aarch-common.c (arm_no_early_alu_shift_dep):
	Remove redundant if.
	(aarch_forward_to_shift_is_not_shifted_reg): Remove.
	* config/arm/aarch-common-protos.h
	(aarch_forward_to_shift_is_not_shifted_reg): Remove.
	* config/arm/cortex-a53.md: Use arm_no_early_alu_shift_dep
	in bypass.

From-SVN: r249740
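For context, a scheduler bypass in a GCC machine description pairs producer and consumer reservations with an optional guard predicate that decides when the shortened latency applies. A minimal sketch of the shape of such a `define_bypass` in cortex-a53.md follows; the reservation names here are illustrative, not the exact ones in the file:

```lisp
;; Sketch of a bypass using arm_no_early_alu_shift_dep as the guard.
;; "cortex_a53_alu" and "cortex_a53_alu_shift" are illustrative unit
;; names; the guard predicate returns true only when the dependent
;; instruction does not need the forwarded value early in its
;; shift stage, so the 1-cycle bypass latency is applied only then.
(define_bypass 1 "cortex_a53_alu"
	       "cortex_a53_alu_shift"
	       "arm_no_early_alu_shift_dep")
```

The fix in this patch is precisely swapping the guard: the old predicate, aarch_forward_to_shift_is_not_shifted_reg, accepted all AArch64 shifted instructions, so the scheduler assumed the cheap bypass latency far too often.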
Diffstat (limited to 'gcc/fold-const.c')
0 files changed, 0 insertions, 0 deletions