author | Christopher Bate <cbate@nvidia.com> | 2023-07-19 15:29:47 -0600
committer | Christopher Bate <cbate@nvidia.com> | 2023-09-14 13:51:42 -0600
commit | cafb6284d18bbdb952ae6d5e4aa97912d57dbfb8 (patch)
tree | 8f7373917551574eb1f5b962d49a035cd9a2dd71 /llvm/docs/tutorial
parent | e8ad9b0e336c309f9126bb98cc53dd6d8217cb42 (diff)
[mlir][VectorToGPU] Update memref stride preconditions on `nvgpu.mma.sync` path
This change removes the requirement that the row stride be statically known when
converting `vector.transfer_read` and `vector.transfer_write` to distributed
SIMT operations in the `nvgpu` lowering path. It also adds a check verifying
that the last dimension of the source memref has a statically known stride of 1,
since the conversion logic assumes this. No other changes are required, because
the generated `vector.load` operations only ever read along the last dimension.
The routines that check preconditions on `vector.transfer_read/write` are moved
under the `nvgpu` utilities.

The change is NFC with respect to the GPU dialect lowering path.
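To illustrate the relaxed precondition, here is a minimal Python sketch (not the actual MLIR C++ implementation; the helper names `old_precondition`/`new_precondition` are hypothetical) modeling a memref's stride list, where `None` stands for a dynamic stride:

```python
# Hypothetical sketch of the precondition change described above.
# `strides` models the memref's layout strides; None means the stride
# is dynamic (not statically known).

def old_precondition(strides):
    # Before this change: the row stride (second-to-last dimension)
    # had to be statically known.
    return len(strides) >= 2 and strides[-2] is not None

def new_precondition(strides):
    # After this change: the row stride may be dynamic, but the last
    # dimension must have a statically known stride of 1, since the
    # generated vector.load ops only read along that dimension.
    return len(strides) >= 1 and strides[-1] == 1

# A memref such as memref<?x?xf16, strided<[?, 1]>> (dynamic row stride,
# unit innermost stride) is rejected by the old check but accepted now:
assert not old_precondition([None, 1])
assert new_precondition([None, 1])

# A dynamic or non-unit innermost stride is rejected by the new check:
assert not new_precondition([16, None])
```

The key design point modeled here is that the new check constrains only the innermost stride, which is the one the conversion logic actually relies on.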
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D155753
Diffstat (limited to 'llvm/docs/tutorial')
0 files changed, 0 insertions, 0 deletions