path: root/mlir/lib/Conversion/ControlFlowToLLVM/ControlFlowToLLVM.cpp
author: Han-Chung Wang <hanhan0912@gmail.com> 2025-07-28 09:29:15 -0700
committer: GitHub <noreply@github.com> 2025-07-28 09:29:15 -0700
commit: 496d31c8a9d69ded50e4aa7fbd5c5ba1ffd3ef2c (patch)
tree: 3b52bf66d65794e548b9a3b112dd08a72ae35b47 /mlir/lib/Conversion/ControlFlowToLLVM/ControlFlowToLLVM.cpp
parent: 5f20518f5b4734d01848dfe44d24aed195dc2043 (diff)
Reapply "[mlir][linalg] Restrict linalg.pack to not have artificial padding." (#150675) (#150680)
This reverts commit https://github.com/llvm/llvm-project/commit/0844812b2e9d7f5ab005223443791c9287bcf5a2 with a shape fix in https://github.com/llvm/llvm-project/commit/1db4c6b27500e686fad9e55bbbe7c7c68b246b7e

The revision restricts the `linalg.pack` op so that it may not have artificial padding semantics. E.g., the example below is valid without this change and becomes invalid with it:

```mlir
func.func @foo(%src: tensor<9xf32>) -> tensor<100x8xf32> {
  %cst = arith.constant 0.000000e+00 : f32
  %dest = tensor.empty() : tensor<100x8xf32>
  %pack = linalg.pack %src padding_value(%cst : f32) inner_dims_pos = [0] inner_tiles = [8] into %dest : tensor<9xf32> -> tensor<100x8xf32>
  return %pack : tensor<100x8xf32>
}
```

IMO, using pack ops with artificial padding sizes is a misuse, because the intention of the pack op is to relayout the source based on target intrinsics, etc. The expected output shape here is `tensor<2x8xf32>`. If people need extra padding sizes, they can create a new pad op followed by the pack op. This also makes consumer tiling much easier, because consumer fusion does not support artificial padding sizes. It is very hard to make that work without ad-hoc patterns, because the tile sizes refer to the source, which implies that you do not have a core_id/thread_id with which to write padding values to the whole tile.

People may ask why the pad tiling implementation works. The answer is that it creates an `if-else` branch to handle the case. In my experience, this is a real struggle during transformation, because most of the time people only need one side of the branch, given that the tile sizes are usually greater than the padding sizes. However, that implementation is conservatively correct in terms of semantics. Given that the `pack` op was introduced to serve relayout needs better, having the restriction makes sense to me.
Removed tests:
- `no_bubble_up_pack_extending_dimension_through_expand_cannot_reassociate` from `data-layout-propagation.mlir`: it is a duplicate of `bubble_up_pack_non_expanded_dims_through_expand` after we fix the shape.
- `fuse_pack_consumer_with_untiled_extra_padding` from `tile-and-fuse-consumer.mlir`: it was created for artificial padding in the consumer fusion implementation.

The other changes in lit tests just fix the shapes.

---------

Signed-off-by: hanhanW <hanhan0912@gmail.com>
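As a sketch of the suggested pad-then-pack decomposition (shapes and attributes are taken from the invalid example above; the exact IR is an illustration of the idea, not output produced by this patch):

```mlir
// Hypothetical rewrite: pack to the tight shape first, then express the
// artificial padding with an explicit tensor.pad.
func.func @foo(%src: tensor<9xf32>) -> tensor<100x8xf32> {
  %cst = arith.constant 0.000000e+00 : f32
  // Tight pack result: ceil(9 / 8) = 2 tiles of 8 elements each.
  %dest = tensor.empty() : tensor<2x8xf32>
  %pack = linalg.pack %src padding_value(%cst : f32)
      inner_dims_pos = [0] inner_tiles = [8]
      into %dest : tensor<9xf32> -> tensor<2x8xf32>
  // Any rows beyond the tight shape come from an explicit pad op,
  // which the pad tiling implementation already handles.
  %padded = tensor.pad %pack low[0, 0] high[98, 0] {
  ^bb0(%i: index, %j: index):
    tensor.yield %cst : f32
  } : tensor<2x8xf32> to tensor<100x8xf32>
  return %padded : tensor<100x8xf32>
}
```

With this split, the pack op carries only the intrinsic-driven relayout, and consumer fusion never has to materialize tiles that contain nothing but padding.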
Diffstat (limited to 'mlir/lib/Conversion/ControlFlowToLLVM/ControlFlowToLLVM.cpp')
0 files changed, 0 insertions, 0 deletions