author     Richard Sandiford <richard.sandiford@arm.com>  2021-03-26 16:08:35 +0000
committer  Richard Sandiford <richard.sandiford@arm.com>  2021-03-26 16:08:34 +0000
commit     3b924b0d7c0218956dbc2ce0ca2740e8923c2c4a (patch)
tree       9c4cb4243d0e29cc5640034a5e97904ceca5e6a8 /gcc/lra-constraints.c
parent     50a525b50c912999073a78220c6d62d87946b579 (diff)
download   gcc-3b924b0d7c0218956dbc2ce0ca2740e8923c2c4a.zip
           gcc-3b924b0d7c0218956dbc2ce0ca2740e8923c2c4a.tar.gz
           gcc-3b924b0d7c0218956dbc2ce0ca2740e8923c2c4a.tar.bz2
aarch64: Try to detect when Advanced SIMD code would be completely unrolled
GCC usually costs the SVE and Advanced SIMD versions of a loop and picks
the one with the lowest cost.  By default it will choose SVE over
Advanced SIMD in the event of a tie.  This is normally the correct
behaviour, not least because SVE can handle every scalar iteration count
whereas Advanced SIMD can only handle full vectors.  However, there is
one important exception that GCC failed to consider: we can completely
unroll Advanced SIMD code at compile time, but we can't do the same for
SVE.

This patch therefore adds an opt-in heuristic to guess whether the
Advanced SIMD version of a loop is likely to be unrolled.  This will
only be suitable for some CPUs, so it is not enabled by default and is
controlled separately from use_new_vector_costs.

As with previous patches, this one only becomes active if a CPU selects
both of the new tuning parameters.  It should therefore have a very low
impact on other CPUs.

gcc/
	* config/aarch64/aarch64-tuning-flags.def (matched_vector_throughput):
	New tuning parameter.
	* config/aarch64/aarch64.c (neoversev1_tunings): Use it.
	(aarch64_estimated_sve_vq): New function.
	(aarch64_vector_costs::analyzed_vinfo): New member variable.
	(aarch64_vector_costs::is_loop): Likewise.
	(aarch64_vector_costs::unrolled_advsimd_niters): Likewise.
	(aarch64_vector_costs::unrolled_advsimd_stmts): Likewise.
	(aarch64_record_potential_advsimd_unrolling): New function.
	(aarch64_analyze_loop_vinfo, aarch64_analyze_bb_vinfo): Likewise.
	(aarch64_add_stmt_cost): Call aarch64_analyze_loop_vinfo or
	aarch64_analyze_bb_vinfo on the first use of a costs structure.
	Detect whether we're vectorizing a loop for SVE that might be
	completely unrolled if it used Advanced SIMD instead.
	(aarch64_adjust_body_cost_for_latency): New function.
	(aarch64_finish_cost): Call it.
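As an aside (not part of the patch): a minimal example, under assumed
conditions, of the kind of loop the new heuristic is aimed at.  The
function name and the trip count of 16 are illustrative choices, not
taken from the commit.

    /* The trip count is a compile-time constant.  With 128-bit Advanced
       SIMD vectors (four 'int' lanes) the vector body runs exactly
       16/4 = 4 times, so the vectorised loop can be completely unrolled
       at compile time.  With SVE the vector length is unknown at compile
       time, so the loop has to stay a loop.  Whether GCC actually unrolls
       the Advanced SIMD version still depends on the usual
       complete-unrolling parameters.  */
    void
    add16 (int *__restrict a, int *__restrict b)
    {
      for (int i = 0; i < 16; i++)
        a[i] += b[i];
    }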
Diffstat (limited to 'gcc/lra-constraints.c')
0 files changed, 0 insertions, 0 deletions
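For readers skimming the ChangeLog above, here is a rough, self-contained
sketch of the kind of check the new code performs.  Every name, parameter
and the statement limit below is a hypothetical stand-in; the real logic
lives in aarch64_record_potential_advsimd_unrolling and
aarch64_finish_cost in config/aarch64/aarch64.c.

    #include <cstdint>

    /* Stand-in for GCC's complete-unrolling limits; the real heuristic is
       driven by the compiler's own parameters, not this constant.  */
    constexpr uint64_t max_unrolled_stmts = 32;

    /* Hypothetical sketch: return true if the Advanced SIMD version of a
       loop would probably be unrolled completely.  The scalar iteration
       count must be known at compile time, it must fill whole 128-bit
       vectors (Advanced SIMD handles only full vectors), and the statement
       count after complete unrolling must stay small enough for the
       unroller to accept.  */
    bool
    advsimd_version_likely_unrolled (bool niters_known,
                                     uint64_t scalar_niters,
                                     uint64_t advsimd_vf,
                                     uint64_t stmts_per_iter)
    {
      if (!niters_known || advsimd_vf == 0 || scalar_niters % advsimd_vf != 0)
        return false;
      uint64_t vector_niters = scalar_niters / advsimd_vf;
      return vector_niters * stmts_per_iter <= max_unrolled_stmts;
    }

In the tie-break the commit describes, a result of true would stop SVE
from winning a cost comparison purely on equal cost.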