path: root/llvm/lib/Object/COFFObjectFile.cpp
author Philip Reames <preames@rivosinc.com> 2022-06-16 14:10:21 -0700
committer Philip Reames <listmail@philipreames.com> 2022-06-16 14:22:31 -0700
commit d764aa7fc6b9cc3fbe960019018f5f9e941eb0a6 (patch)
tree ed0201b47a5cc47ea828d14256a19e8497f77248 /llvm/lib/Object/COFFObjectFile.cpp
parent bbb73ade43a29c90adabca69afb9a4df1a5cfb5e (diff)
[RISCV] Add cost model for scalable scatter and gather
The costing we use for fixed-length vector gather and scatter is to simply count up the memory ops and multiply by a fixed memory-op cost. For scalable vectors, we don't actually know how many lanes are active; instead, we have to make a worst-case assumption about how many lanes could be active. In the generic +V case this results in very high costs, but we can do better when we know an upper bound on VLEN.

There are some obvious ways to improve this, e.g. using information about VL and mask bits from the instruction to reduce the upper bound, but this seems like a reasonable starting point.

The resulting costs bias us pretty strongly away from generating scatter/gather for generic +V. Without this, we'd be returning an invalid cost and thus definitely not vectorizing, so no major change in practical behavior is expected.

Differential Revision: https://reviews.llvm.org/D127541
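As a rough illustration of the worst-case lane counting described above (this is a standalone sketch, not the actual LLVM TTI implementation; the function and parameter names such as VLenUpperBound, ElementWidth, and MemOpCost are hypothetical):

```cpp
#include <cstdint>

// Sketch of the idea: with no known VL, assume every lane that could fit in
// the register group is active, and charge one memory op per lane.
uint64_t scalableGatherScatterCost(uint64_t VLenUpperBound, // upper bound on VLEN in bits
                                   uint64_t ElementWidth,   // SEW in bits, e.g. 32
                                   uint64_t LMUL,           // register group multiplier
                                   uint64_t MemOpCost) {    // cost of one scalar memory op
  // Worst case: all lanes in VLEN * LMUL bits are active.
  uint64_t MaxLanes = (VLenUpperBound * LMUL) / ElementWidth;
  return MaxLanes * MemOpCost;
}

int main() {
  // With only generic +V, the upper bound on VLEN is the architectural
  // maximum, so the estimate is very pessimistic; a target that advertises a
  // tighter VLEN upper bound gets a much smaller (and more useful) cost.
  uint64_t PessimisticCost = scalableGatherScatterCost(65536, 32, 1, 1);
  uint64_t BoundedCost = scalableGatherScatterCost(128, 32, 1, 1);
  (void)PessimisticCost;
  (void)BoundedCost;
  return 0;
}
```

The large gap between the two estimates is what biases the vectorizer away from scatter/gather under generic +V while still allowing it when a VLEN bound is known.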
Diffstat (limited to 'llvm/lib/Object/COFFObjectFile.cpp')
0 files changed, 0 insertions, 0 deletions