author | Matthias Springer <springerm@google.com> | 2022-11-11 10:32:05 +0100
---|---|---
committer | Matthias Springer <springerm@google.com> | 2022-11-11 11:39:18 +0100
commit | e62681e70a4e0cef34d7310d42de7d425784264b (patch) |
tree | 8677f98915c2b4a74833e36a38b5a8b2851d6946 /clang/lib/Frontend/CompilerInvocation.cpp |
parent | 882ddab4c0674763473ca82451bb8fe96a998f5a (diff) |
[mlir][bufferize] Eliminate tensor.empty ops instead of bufferization.alloc_tensor ops
tensor.empty op elimination is an optimization that brings IR into a more bufferization-friendly form. For example:
```mlir
%0 = tensor.empty()
%1 = linalg.fill(%cst, %0) {inplace = [true]}
%2 = tensor.insert_slice %1 into %t[10][20][1]
```
is rewritten to:
```mlir
%0 = tensor.extract_slice %t[10][20][1]
%1 = linalg.fill(%cst, %0) {inplace = [true]}
%2 = tensor.insert_slice %1 into %t[10][20][1]
```
This optimization used to operate on bufferization.alloc_tensor ops. That was incorrect: the documentation of bufferization.alloc_tensor states that the op always bufferizes to a new allocation. Instead, the optimization should operate on tensor.empty ops, which can then be lowered to bufferization.alloc_tensor ops (if they don't get eliminated).
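As a minimal sketch of that lowering path (the function name, the tensor<20xf32> type, and the -empty-tensor-to-alloc-tensor pass name are illustrative assumptions, not part of this change):

```mlir
// A tensor.empty op that survives elimination is the one that ultimately
// becomes an allocation.
func.func @not_eliminated() -> tensor<20xf32> {
  %0 = tensor.empty() : tensor<20xf32>
  return %0 : tensor<20xf32>
}

// After lowering (e.g. with a pass such as -empty-tensor-to-alloc-tensor),
// the op is replaced by one that is documented to always bufferize to a new
// allocation:
//   %0 = bufferization.alloc_tensor() : tensor<20xf32>
```

In the rewritten example above, the eliminated tensor.empty never reaches this lowering, so no allocation is created for it.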
Differential Revision: https://reviews.llvm.org/D137162