author    | Sean Silva <silvasean@google.com> | 2020-10-14 11:26:22 -0700
committer | Sean Silva <silvasean@google.com> | 2020-10-15 12:19:20 -0700
commit    | ee491ac91e123b90eeec3cce7e494936ea8cb85d (patch)
tree      | f93acf89a341f4edc2648bbd5b75d8f1dd3e6666 /llvm/lib/Support/TargetParser.cpp
parent    | 9c728a7cbf5df69627966b823d30daa6cfe2426d (diff)
[mlir] Add std.tensor_to_memref op and teach the infra about it
The opposite of tensor_to_memref is tensor_load.
- Add some basic tensor_load/tensor_to_memref folding.
- Add source/target materializations to BufferizeTypeConverter.
- Add an example std bufferization pattern/pass that shows how the
materializations work together (more std bufferization patterns to come
in subsequent commits).
- In coming commits, I'll document how to write composable
bufferization passes/patterns and update the other in-tree
bufferization passes to match this convention. The populate* functions
will of course continue to be exposed for power users.
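The basic folding mentioned above can be sketched in MLIR's textual IR. This is an illustrative fragment, not taken from the patch itself; the function and value names, and the tensor shape, are made up:

```mlir
func @fold_roundtrip(%t: tensor<4xf32>) -> tensor<4xf32> {
  // Round-tripping a tensor through tensor_to_memref and back
  // through tensor_load folds to the original tensor value %t.
  %m = tensor_to_memref %t : memref<4xf32>
  %r = tensor_load %m : memref<4xf32>
  return %r : tensor<4xf32>
}
```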
The naming on tensor_load/tensor_to_memref and their pretty forms are
not very intuitive. I'm open to any suggestions here. One key
observation is that the memref type must always be the one specified in
the pretty form, since the tensor type can be inferred from the memref
type but not vice-versa.
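Concretely, the pretty form carries only the memref type, and the tensor operand/result types are inferred from it. A hypothetical fragment (shapes illustrative):

```mlir
// Only the memref type appears in the pretty form; the tensor
// types of %t and %t2 are inferred from memref<?xf32>.
%m  = tensor_to_memref %t : memref<?xf32>
%t2 = tensor_load %m : memref<?xf32>
```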
With this, I've been able to replace all my custom bufferization type
converters in npcomp with BufferizeTypeConverter!
Part of the plan discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17
Differential Revision: https://reviews.llvm.org/D89437
Diffstat (limited to 'llvm/lib/Support/TargetParser.cpp')
0 files changed, 0 insertions, 0 deletions