author | David Sherwood <david.sherwood@arm.com> | 2021-01-22 16:53:21 +0000
committer | David Sherwood <david.sherwood@arm.com> | 2021-02-02 09:52:39 +0000
commit | d4d4ceeb8f3be67be94781ed718ceb103213df74 (patch)
tree | 08c65b87b14a0ec24cda86f71db57a03ca1b486c /flang/lib/Frontend/CompilerInvocation.cpp
parent | 679ef22f2e553e73eda43c45d4361256c4524c00 (diff)
[SVE][LoopVectorize] Add masked load/store and gather/scatter support for SVE
This patch updates IRBuilder::CreateMaskedGather/Scatter to work
with ScalableVectorType and adds isLegalMaskedGather/Scatter functions
to AArch64TargetTransformInfo. In addition, I've fixed up
isLegalMaskedLoad/Store to return true for supported scalar types,
since the vectorizer queries these hooks with scalar types.
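For illustration only, the legality check can be pictured roughly as in the sketch below. This is a minimal sketch, assuming a shared helper named isLegalMaskedGatherScatter and an SVE-legal element-type list; it is not the verbatim contents of the patch.

```cpp
// Minimal sketch (assumed names, not the verbatim patch): an SVE
// gather/scatter legality check of the kind a TTI hook such as
// AArch64TTIImpl::isLegalMaskedGather/Scatter could defer to.
#include "llvm/IR/DerivedTypes.h"
using namespace llvm;

static bool isLegalMaskedGatherScatter(Type *DataType, bool HasSVE) {
  // Only scalable vectors are handled here; gathers/scatters for
  // fixed-width vectors keep the default (unsupported) answer.
  auto *VecTy = dyn_cast<ScalableVectorType>(DataType);
  if (!VecTy || !HasSVE)
    return false;

  // SVE can gather/scatter these element types directly.
  Type *EltTy = VecTy->getElementType();
  return EltTy->isIntegerTy(8) || EltTy->isIntegerTy(16) ||
         EltTy->isIntegerTy(32) || EltTy->isIntegerTy(64) ||
         EltTy->isHalfTy() || EltTy->isFloatTy() || EltTy->isDoubleTy();
}
```

Deferring both isLegalMaskedGather and isLegalMaskedScatter to one shared check keeps the two directions consistent, since the legality criteria are the same either way.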
In LoopVectorize.cpp I've changed
LoopVectorizationCostModel::getInterleaveGroupCost to return an invalid
cost for scalable vectors, since interleaving currently relies on
shufflevector to reverse vectors. In addition, in
LoopVectorizationCostModel::setCostBasedWideningDecision I have assumed
that the cost of scalarising memory ops is infinite.
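To make the bail-out concrete, it can be sketched as an early return of an invalid cost for any scalable VF. This assumes the costing routine already returns an InstructionCost; it is a sketch, not the literal diff.

```cpp
// Minimal sketch (assumptions, not the literal diff): bail out of
// interleave-group costing for scalable VFs.
#include "llvm/Support/InstructionCost.h"
#include "llvm/Support/TypeSize.h"
using namespace llvm;

InstructionCost getInterleaveGroupCostSketch(ElementCount VF) {
  // Interleave groups are currently reversed with shufflevector, which has
  // no scalable-vector lowering yet, so the decision is reported as invalid
  // rather than being given a made-up number.
  if (VF.isScalable())
    return InstructionCost::getInvalid();

  // ... normal fixed-width interleave-group costing would follow here.
  return InstructionCost(0);
}
```

Treating the scalarisation cost as infinite in setCostBasedWideningDecision has a similar effect: for a scalable VF the cost model will always prefer a widened (masked or gather/scatter) memory access over scalarisation.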
I have added some simple masked load/store and gather/scatter tests,
including cases where we use gathers and scatters for conditional invariant
loads and stores.
Differential Revision: https://reviews.llvm.org/D95350
Diffstat (limited to 'flang/lib/Frontend/CompilerInvocation.cpp')
0 files changed, 0 insertions, 0 deletions