path: root/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
author    Alex Zinenko <zinenko@google.com>  2021-09-24 18:26:19 +0200
committer Alex Zinenko <zinenko@google.com>  2021-09-24 18:40:13 +0200
commit    5988a3b7a09126aff982944ecb36f533c450388e (patch)
tree      c7f6073ccdc90ed09ad31ab17dcc02ff1118b98b /llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
parent    3b0240e6c89d9201430ee83b09fe7c94256e8838 (diff)
[mlir] Linalg: ensure tile-and-pad always creates padding as requested
Initially, the padding transformation and the related operation were only used to guarantee static shapes of subtensors in tiled operations. The transformation would not insert the padding operation if the shapes were already static, and the overall code generation would actively remove such "noop" pads.

However, this transformation can also be used to pack data into smaller tensors and marshal them into faster memory, regardless of size mismatches. In the context of expert-driven transformation, we should assume that, if padding is requested, a potentially padded tensor must always be created. Update the transformation accordingly.

To do this, introduce an optional `packing` attribute to the `pad_tensor` op that serves as an indication that the padding is an intentional choice (as opposed to a side effect of type normalization) and should be left alone by cleanups.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110425
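The intent can be pictured with a hedged IR sketch (not taken from the patch; the exact textual placement of the `packing` attribute and the operand names are assumptions for illustration):

```mlir
// Illustrative only: a pad_tensor whose result type already equals its
// source type, i.e. a "noop" pad. Without `packing`, cleanups are free to
// fold such an op away; marking it as packing declares the copy intentional
// (e.g. to marshal data into faster memory) so it must be preserved.
%cst = arith.constant 0.0 : f32
%padded = linalg.pad_tensor %src packing low[0, 0] high[0, 0] {
^bb0(%i: index, %j: index):
  linalg.yield %cst : f32
} : tensor<4x8xf32> to tensor<4x8xf32>
```

With the attribute absent, the pre-existing behavior is unchanged: a pad that adds no elements remains a candidate for removal during canonicalization.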
Diffstat (limited to 'llvm/lib/Bitcode/Writer/BitcodeWriter.cpp')
0 files changed, 0 insertions, 0 deletions