path: root/clang/lib/Parse/ParseTentative.cpp
author	Sean Silva <silvasean@google.com>	2020-03-26 15:11:23 -0700
committer	Sean Silva <silvasean@google.com>	2020-03-27 16:38:42 -0700
commit	569e4f9bc99a755cc30f0102b29b1eefd4fa33b4 (patch)
tree	c53cd80049d42ccdf1165fa8e2dc213deaf54ca0 /clang/lib/Parse/ParseTentative.cpp
parent	cbce88dd3a9ea7161da3c57749cf03873dc7ea79 (diff)
`shape` dialect: add some ops
- add `to_extent_tensor`
- rename `create_shape` to `from_extent_tensor` for symmetry
- add `split_at` and `concat` ops for basic shape manipulations

This set of ops is inspired by the requirements of lowering a dynamic-shape-aware batch matmul op. For such an op, the "matrix" dimensions aren't subject to broadcasting but the others are, and so we need to slice, broadcast, and reconstruct the final output shape. Furthermore, the actual broadcasting op used downstream uses a tensor of extents as its preferred shape interface for the actual op that does the broadcasting.

However, this functionality is quite general. It's obvious that `to_extent_tensor` is needed long-term to support many common patterns that involve computations on shapes. We can evolve the shape manipulation ops introduced here. The specific choices made here took into consideration the potentially unranked nature of the !shape.shape type, which means that a simple listing of dimensions to extract isn't possible in general.

Differential Revision: https://reviews.llvm.org/D76817
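The slice/broadcast/concat shape computation described for the batch matmul lowering can be sketched in plain Python (a hypothetical illustration of the shape arithmetic only, not the MLIR ops themselves; the function names `broadcast_shapes` and `batch_matmul_result_shape` are invented for this sketch and assume ranked shapes, since the unranked `!shape.shape` case can't be modeled with simple lists):

```python
from itertools import zip_longest

def broadcast_shapes(lhs, rhs):
    """NumPy-style broadcast of two ranked dimension lists."""
    out = []
    # Walk the dims right-to-left, padding the shorter shape with 1s.
    for a, b in zip_longest(reversed(lhs), reversed(rhs), fillvalue=1):
        if a == 1:
            out.append(b)
        elif b == 1 or a == b:
            out.append(a)
        else:
            raise ValueError(f"incompatible dims {a} and {b}")
    return list(reversed(out))

def batch_matmul_result_shape(lhs, rhs):
    """Result shape of a broadcasting batch matmul.

    Mirrors the steps in the commit message: split off the trailing
    two "matrix" dims (the split_at-style step), broadcast the
    remaining batch dims, then concatenate the result dims back on.
    """
    lhs_batch, (m, k1) = lhs[:-2], lhs[-2:]
    rhs_batch, (k2, n) = rhs[:-2], rhs[-2:]
    assert k1 == k2, "contraction dims must match"
    return broadcast_shapes(lhs_batch, rhs_batch) + [m, n]

print(batch_matmul_result_shape([1, 4, 3, 5], [7, 1, 5, 6]))  # [7, 4, 3, 6]
```

Note how the matrix dims `[3, 5]` and `[5, 6]` are excluded from broadcasting, while the batch dims `[1, 4]` and `[7, 1]` broadcast to `[7, 4]` — exactly the asymmetry that motivates having separate slice and concat primitives on shapes.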
Diffstat (limited to 'clang/lib/Parse/ParseTentative.cpp')
0 files changed, 0 insertions, 0 deletions