author     Krzysztof Drewniak <Krzysztof.Drewniak@amd.com>    2023-01-19 21:56:04 +0000
committer  Krzysztof Drewniak <Krzysztof.Drewniak@amd.com>    2023-02-09 18:00:46 +0000
commit     499abb243cb75262f121659b87f5a2a6a7c8a82f (patch)
tree       b7898a0f02674ddf0e63f7df728c2c0c10fb22d6 /mlir/test/Conversion/GPUCommon
parent     3c565c246635cace81f01340cd3d1d7386042478 (diff)
Add generic type attribute mapping infrastructure, use it in GpuToX
Remapping memory spaces is a function often needed in type conversions, most often when going to LLVM or to/from SPIR-V (a future commit), and it is possible that such remappings may become more common in the future as dialects take advantage of the more generic memory space infrastructure.

Currently, memory space remappings are handled by running a special-purpose conversion pass before the main conversion that changes the address space attributes. In this commit, this approach is replaced by adding a notion of type attribute conversions to TypeConverter, which is then used to convert memory space attributes.

Then, we use this infrastructure throughout the *ToLLVM conversions. This has the advantage of loosening the requirements on the inputs to those passes from "all address spaces must be integers" to "all memory spaces must be convertible to integer spaces", a looser requirement that reduces the coupling between portions of MLIR. On top of that, this change leads to the removal of most of the calls to getMemorySpaceAsInt(), bringing us closer to removing it.

(A rework of the SPIR-V conversions to use this new system will be in a followup commit.)

As a note, one long-term motivation for this change is that I would eventually like to add an allocaMemorySpace key to MLIR data layouts and then call getMemRefAddressSpace(allocaMemorySpace) in the relevant *ToLLVM conversions, in order to ensure that all alloca()s, whether incoming or produced during the LLVM lowering, have the correct address space for a given target. I expect that the type attribute conversion system may be useful in other contexts.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D142159
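For illustration, below is a minimal C++ sketch of how a lowering could use the new hook to remap #gpu.address_space attributes on memref types to numeric address spaces. It registers a callback through TypeConverter::addTypeAttributeConversion (the entry point this patch introduces); the helper name remapGpuMemorySpaces and the specific numeric values are illustrative assumptions chosen to mirror the ROCDL checks in the test below, not part of the patch itself.

// Sketch only: rewrite #gpu.address_space<...> memory spaces on memref types
// into integer address spaces using the type attribute conversion hook.
// The numeric values here follow the ROCDL expectations in the test below;
// a real lowering would choose them per target.
#include "mlir/Dialect/GPU/IR/GPUDialect.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// Hypothetical helper; names and mapping are assumptions for illustration.
static void remapGpuMemorySpaces(TypeConverter &converter) {
  converter.addTypeAttributeConversion(
      [](BaseMemRefType /*type*/, gpu::AddressSpaceAttr memorySpace) -> Attribute {
        unsigned addressSpace = 0;
        switch (memorySpace.getValue()) {
        case gpu::AddressSpace::Global:
          addressSpace = 1; // global, as checked by the test below
          break;
        case gpu::AddressSpace::Workgroup:
          addressSpace = 3; // shared / LDS
          break;
        case gpu::AddressSpace::Private:
          addressSpace = 5; // ROCDL private; NVVM maps this differently
          break;
        }
        // Returning an attribute replaces the memory space on the memref type.
        return IntegerAttr::get(
            IntegerType::get(memorySpace.getContext(), 64), addressSpace);
      });
}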
Diffstat (limited to 'mlir/test/Conversion/GPUCommon')
-rw-r--r--  mlir/test/Conversion/GPUCommon/lower-memory-space-attrs.mlir | 48
1 file changed, 48 insertions(+), 0 deletions(-)
diff --git a/mlir/test/Conversion/GPUCommon/lower-memory-space-attrs.mlir b/mlir/test/Conversion/GPUCommon/lower-memory-space-attrs.mlir
new file mode 100644
index 0000000..3c1924e
--- /dev/null
+++ b/mlir/test/Conversion/GPUCommon/lower-memory-space-attrs.mlir
@@ -0,0 +1,48 @@
+// RUN: mlir-opt %s -split-input-file -convert-gpu-to-rocdl | FileCheck %s --check-prefixes=CHECK,ROCDL
+// RUN: mlir-opt %s -split-input-file -convert-gpu-to-nvvm | FileCheck %s --check-prefixes=CHECK,NVVM
+
+gpu.module @kernel {
+ gpu.func @private(%arg0: f32) private(%arg1: memref<4xf32, #gpu.address_space<private>>) {
+ %c0 = arith.constant 0 : index
+ memref.store %arg0, %arg1[%c0] : memref<4xf32, #gpu.address_space<private>>
+ gpu.return
+ }
+}
+
+// CHECK-LABEL: llvm.func @private
+// CHECK: llvm.store
+// ROCDL-SAME: : !llvm.ptr<f32, 5>
+// NVVM-SAME: : !llvm.ptr<f32>
+
+
+// -----
+
+gpu.module @kernel {
+ gpu.func @workgroup(%arg0: f32) workgroup(%arg1: memref<4xf32, #gpu.address_space<workgroup>>) {
+ %c0 = arith.constant 0 : index
+ memref.store %arg0, %arg1[%c0] : memref<4xf32, #gpu.address_space<workgroup>>
+ gpu.return
+ }
+}
+
+// CHECK-LABEL: llvm.func @workgroup
+// CHECK: llvm.store
+// CHECK-SAME: : !llvm.ptr<f32, 3>
+
+// -----
+
+gpu.module @kernel {
+ gpu.func @nested_memref(%arg0: memref<4xmemref<4xf32, #gpu.address_space<global>>, #gpu.address_space<global>>) -> f32 {
+ %c0 = arith.constant 0 : index
+ %inner = memref.load %arg0[%c0] : memref<4xmemref<4xf32, #gpu.address_space<global>>, #gpu.address_space<global>>
+ %value = memref.load %inner[%c0] : memref<4xf32, #gpu.address_space<global>>
+ gpu.return %value : f32
+ }
+}
+
+// CHECK-LABEL: llvm.func @nested_memref
+// CHECK: llvm.load
+// CHECK-SAME: : !llvm.ptr<{{.*}}, 1>
+// CHECK: [[value:%.+]] = llvm.load
+// CHECK-SAME: : !llvm.ptr<f32, 1>
+// CHECK: llvm.return [[value]]