path: root/llvm/lib/Support/Program.cpp
author    Aart Bik <ajcbik@google.com>  2021-01-13 10:33:28 -0800
committer Aart Bik <ajcbik@google.com>  2021-01-13 11:55:23 -0800
commit f4f158b2f89e16ee7068d6292d2d46457d6932bb (patch)
tree   78697a8f7d27227b3117180946d9e9d7c4652928 /llvm/lib/Support/Program.cpp
parent bb72adcaee7db0877e1cecb29d414003bf19ce02 (diff)
[mlir][sparse] add vectorization strategies to sparse compiler
Similar to the parallelization strategies, the vectorization strategies provide control over which loops should be vectorized. Unlike the parallelization strategies, only innermost loops are considered (but including reductions), with control over vectorizing dense loops only, or dense and sparse loops. Vectorized loops are always guarded by a vector mask to avoid overrunning the iteration space, but subsequent vector-operation folding removes redundant masks and replaces the operations with more efficient counterparts. Likewise, we rely on subsequent loop optimizations to further optimize the masking, e.g. by using an unconditional full vector loop with a scalar cleanup loop. The current strategy already demonstrates a nice interaction between the sparse compiler and all the prior optimizations that went into the vector dialect.

Ongoing discussion at:
https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020/10

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D94551
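The mask-guarded loop described above can be sketched in plain Python/NumPy. This is an illustrative model only, not the compiler's actual output: the vector length `VL` and the helper name are hypothetical, and the mask here plays the role of the vector mask that keeps the final partial iteration from overrunning the array.

```python
import numpy as np

VL = 8  # hypothetical vector length, for illustration only

def masked_vector_sum(a):
    """Sum a 1-D array in chunks of VL lanes, guarding each chunk with
    a mask so the last (partial) chunk never reads past the end."""
    n = len(a)
    total = 0.0
    for i in range(0, n, VL):
        lanes = i + np.arange(VL)
        mask = lanes < n              # lane j is active only if i + j < n
        padded = np.zeros(VL)         # inactive lanes hold the neutral value 0
        padded[mask] = a[lanes[mask]] # masked load of the active lanes
        total += padded.sum()         # reduce the full vector each step
    return total
```

Folding the redundant mask away, as the commit message describes, would correspond to running full `VL`-wide chunks unmasked and handling only the tail specially (e.g. with a scalar cleanup loop).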
Diffstat (limited to 'llvm/lib/Support/Program.cpp')
0 files changed, 0 insertions, 0 deletions