path: root/llvm/lib/ProfileData/Coverage/CoverageMappingReader.cpp
author Philip Reames <preames@rivosinc.com> 2023-04-05 07:47:24 -0700
committer Philip Reames <listmail@philipreames.com> 2023-04-05 07:58:56 -0700
commit 37646a2c28fd08f7d7715fd8efc132357ffe0c34 (patch)
tree a76f73efb71d2081ebcc62d6a2b1f409a3559e75 /llvm/lib/ProfileData/Coverage/CoverageMappingReader.cpp
parent 05a2f4290e27c67b0f547b893f1dc9aaf6d40ca2 (diff)
[RISCV] Account for LMUL in memory op costs
Generally, the cost of a memory op will scale with the number of vector registers accessed. Machines might exist which have a narrower memory access width than their vector register width, but machines with a memory access width wider than their vector register width seem unlikely.

I noticed this because we were preferring wide loads + deinterleaves on examples where the cost of a short gather (actually a strided load) would be better. Touching 8 vector registers instead of doing a 4-element gather is not a good tradeoff.

Differential Revision: https://reviews.llvm.org/D147470
Diffstat (limited to 'llvm/lib/ProfileData/Coverage/CoverageMappingReader.cpp')
0 files changed, 0 insertions, 0 deletions