'tensor' Dialect
The `tensor` dialect is intended to hold core tensor creation and manipulation ops that are not strongly associated with any particular other dialect or domain abstraction. The primary test for inclusion is that an op makes sense for any tensor element type. We leave it to other dialects to hold the vast swath of possible computations one might want to do on a tensor.
The `tensor` type is (for better or for worse) used to represent all kinds of things, and supports an open-ended set of element types. Examples:
- representing large, dense aggregations of primitive types, suitable for high-performance numerical computing.
- representing shapes in the `shape` dialect, which consist of small 1D tensors of `index` data type.
- representing aggregations of strings or “variant” types.
- representing large, sparse aggregations of primitive types, suitable for high-performance numerical computing.
Thus, for the `tensor` dialect, we prefer for now to constrain the scope as much as possible. The expectation is that at some point in the future, the `tensor` dialect’s scope may be broadened through a careful discussion of the tradeoffs.
The `tensor` type is actually a builtin type (it lives in the builtin dialect), and does not live in this dialect.
Operation definition ¶
tensor.cast (::mlir::tensor::CastOp) ¶
tensor cast operation
Syntax:
operation ::= `tensor.cast` $source attr-dict `:` type($source) `to` type($dest)
Convert a tensor from one type to an equivalent type without changing any data elements. The source and destination types must both be tensor types with the same element type. If both are ranked, then the rank should be the same and static dimensions should match. The operation is invalid if converting to a mismatching constant dimension.
Example:
// Convert from unknown rank to rank 2 with unknown dimension sizes.
%2 = tensor.cast %1 : tensor<*xf32> to tensor<?x?xf32>
// Convert to a type with more known dimensions.
%3 = tensor.cast %2 : tensor<?x?xf32> to tensor<4x?xf32>
// Discard static dimension and rank information.
%4 = tensor.cast %3 : tensor<4x?xf32> to tensor<?x?xf32>
%5 = tensor.cast %4 : tensor<?x?xf32> to tensor<*xf32>
Interfaces: CastOpInterface, NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Operands: ¶
Operand | Description |
---|---|
source | tensor of any type values |
Results: ¶
Result | Description |
---|---|
dest | tensor of any type values |
tensor.collapse_shape (::mlir::tensor::CollapseShapeOp) ¶
operation to produce a tensor with a smaller rank
Syntax:
operation ::= `tensor.collapse_shape` $src $reassociation attr-dict `:` type($src) `into` type($result)
The `tensor.collapse_shape` op produces a new tensor with a smaller rank whose sizes are a reassociation of the original `src`.
A reassociation is defined as a continuous grouping of dimensions and is represented with an array of I64ArrayAttr attributes.
The verification rule is that the reassociation maps are applied to the operand tensor with the higher rank to obtain the result tensor with the smaller rank.
The result tensor type of a reshape can be zero-ranked if the operand tensor type is statically shaped with all dimensions being unit extent. In such cases the reassociation map is empty.
Examples:
// Dimension collapse (i, j) -> i' and k -> k'
%b = tensor.collapse_shape %a [[0, 1], [2]]
: tensor<?x?x?xf32> into tensor<?x?xf32>
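A hedged sketch with static shapes (assuming a value %s of type tensor<2x3x4xf32>); the sizes within each reassociation group multiply to give the collapsed size:
// Collapse (2, 3) into 6.
%c = tensor.collapse_shape %s [[0, 1], [2]]
    : tensor<2x3x4xf32> into tensor<6x4xf32>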
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Attributes: ¶
Attribute | MLIR Type | Description |
---|---|---|
reassociation | ::mlir::ArrayAttr | Array of 64-bit integer array attributes |
Operands: ¶
Operand | Description |
---|---|
src | tensor of any type values |
Results: ¶
Result | Description |
---|---|
result | tensor of any type values |
tensor.dim (::mlir::tensor::DimOp) ¶
dimension index operation
Syntax:
operation ::= `tensor.dim` attr-dict $source `,` $index `:` type($source)
The `tensor.dim` operation takes a tensor and a dimension operand of type `index`. It returns the size of the requested dimension of the given tensor. If the dimension index is out of bounds, the behavior is undefined.
The specified tensor type is that of the first operand.
Example:
// Always returns 4, can be constant folded:
%c0 = arith.constant 0 : index
%x = tensor.dim %A, %c0 : tensor<4x?xf32>
// Returns the dynamic dimension of %A.
%c1 = arith.constant 1 : index
%y = tensor.dim %A, %c1 : tensor<4x?xf32>
// Equivalent generic form:
%x = "tensor.dim"(%A, %c0) : (tensor<4x?xf32>, index) -> index
%y = "tensor.dim"(%A, %c1) : (tensor<4x?xf32>, index) -> index
Interfaces: InferTypeOpInterface, NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Operands: ¶
Operand | Description |
---|---|
source | tensor of any type values |
index | index |
Results: ¶
Result | Description |
---|---|
result | index |
tensor.expand_shape (::mlir::tensor::ExpandShapeOp) ¶
operation to produce a tensor with a higher rank
Syntax:
operation ::= `tensor.expand_shape` $src $reassociation attr-dict `:` type($src) `into` type($result)
The `tensor.expand_shape` op produces a new tensor with a higher rank whose sizes are a reassociation of the original `src`.
A reassociation is defined as a continuous grouping of dimensions and is represented with an array of I64ArrayAttr attributes.
The verification rule is that the reassociation maps are applied to the result tensor with the higher rank to obtain the operand tensor with the smaller rank.
The operand tensor type of a reshape can be zero-ranked if the result tensor type is statically shaped with all dimensions being unit extent. In such cases the reassociation map is empty.
Examples:
// Dimension expansion i -> (i', j') and (k) -> (k')
%b = tensor.expand_shape %a [[0, 1], [2]]
: tensor<?x?xf32> into tensor<?x?x?xf32>
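A hedged sketch with static shapes (assuming a value %s of type tensor<6x4xf32>); the expanded sizes of each reassociation group must multiply back to the original size:
// Expand 6 into (2, 3).
%e = tensor.expand_shape %s [[0, 1], [2]]
    : tensor<6x4xf32> into tensor<2x3x4xf32>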
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Attributes: ¶
Attribute | MLIR Type | Description |
---|---|---|
reassociation | ::mlir::ArrayAttr | Array of 64-bit integer array attributes |
Operands: ¶
Operand | Description |
---|---|
src | tensor of any type values |
Results: ¶
Result | Description |
---|---|
result | tensor of any type values |
tensor.extract (::mlir::tensor::ExtractOp) ¶
element extraction operation
Syntax:
operation ::= `tensor.extract` $tensor `[` $indices `]` attr-dict `:` type($tensor)
The `tensor.extract` op reads a tensor and returns one element from it, specified by an index list. The output of the op is a new value with the same type as the elements of the tensor. The arity of indices must match the rank of the accessed value (i.e., if a tensor is of rank 3, then 3 indices are required for the extract). The indices should all be of `index` type.
Example:
%4 = tensor.extract %t[%1, %2] : tensor<4x4xi32>
%5 = tensor.extract %rt[%1, %2] : tensor<?x?xi32>
%6 = tensor.extract %ut[%1, %2] : tensor<*xi32>
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Operands: ¶
Operand | Description |
---|---|
tensor | tensor of any type values |
indices | index |
Results: ¶
Result | Description |
---|---|
result | any type |
tensor.extract_slice (::mlir::tensor::ExtractSliceOp) ¶
extract slice operation
Syntax:
operation ::= `tensor.extract_slice` $source ``
custom<OperandsOrIntegersOffsetsOrStridesList>($offsets, $static_offsets)
custom<OperandsOrIntegersSizesList>($sizes, $static_sizes)
custom<OperandsOrIntegersOffsetsOrStridesList>($strides, $static_strides)
attr-dict `:` type($source) `to` type($result)
The “extract_slice” operation extracts a tensor from another tensor as specified by the operation’s offsets, sizes and strides arguments.
The extract_slice operation supports the following arguments:
- source: the “base” tensor from which to extract a slice.
- offsets: tensor-rank number of offsets into the “base” tensor from which to extract the slice.
- sizes: tensor-rank number of sizes which specify the sizes of the result tensor type.
- strides: tensor-rank number of strides specifying subsampling in each dimension.
The representation based on offsets, sizes and strides supports a partially-static specification via attributes specified through the `static_offsets`, `static_sizes` and `static_strides` arguments. The special sentinel values ShapedType::kDynamicSize and ShapedType::kDynamicStrideOrOffset encode that the corresponding entry has a dynamic value.
After buffer allocation, the “extract_slice” op is expected to lower into a memref.subview op.
An extract_slice operation may additionally reduce the rank of the resulting tensor by removing dimensions that are statically known to be of size 1. This rank-reduction behavior is not required by the op semantics: this flexibility allows progressively dropping unit dimensions while lowering between different flavors of ops that operate on tensors.
Example:
// Rank-reducing extract_slice.
%1 = tensor.extract_slice %0[0, 0, 0][1, 16, 4][1, 1, 1] :
tensor<8x16x4xf32> to tensor<16x4xf32>
%3 = tensor.extract_slice %2[%o0, 4, %o2][1, %sz1, 1][1, %st1, 1] :
tensor<8x16x4xf32> to tensor<1x?xf32>
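A hedged sketch of a fully dynamic slice, assuming SSA values %o0-%o2, %sz0-%sz2 and %st0-%st2 for the offsets, sizes and strides; the corresponding static_* attribute entries then hold the dynamic sentinel values:
// Fully dynamic extract_slice.
%4 = tensor.extract_slice %0[%o0, %o1, %o2][%sz0, %sz1, %sz2][%st0, %st1, %st2] :
  tensor<8x16x4xf32> to tensor<?x?x?xf32>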
Traits: AttrSizedOperandSegments
Interfaces: NoSideEffect (MemoryEffectOpInterface), OffsetSizeAndStrideOpInterface, ReifyRankedShapedTypeOpInterface
Effects: MemoryEffects::Effect{}
Attributes: ¶
Attribute | MLIR Type | Description |
---|---|---|
static_offsets | ::mlir::ArrayAttr | 64-bit integer array attribute |
static_sizes | ::mlir::ArrayAttr | 64-bit integer array attribute |
static_strides | ::mlir::ArrayAttr | 64-bit integer array attribute |
Operands: ¶
Operand | Description |
---|---|
source | ranked tensor of any type values |
offsets | index |
sizes | index |
strides | index |
Results: ¶
Result | Description |
---|---|
result | ranked tensor of any type values |
tensor.from_elements (::mlir::tensor::FromElementsOp) ¶
tensor from elements operation.
Syntax:
operation ::= `tensor.from_elements` $elements attr-dict `:` type($result)
Create an N-D tensor from a range of same-type arguments. The number of provided `elements` must equal the number of elements in the result type. The `elements` correspond to a flattened tensor.
Example:
tensor.from_elements %a, %b, %c, %d, %e, %f : tensor<2x3xindex>
will result in a tensor
[[%a, %b, %c] [%d, %e, %f]]
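A self-contained sketch, assuming index-typed elements materialized with arith.constant:
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%c2 = arith.constant 2 : index
%t = tensor.from_elements %c0, %c1, %c2, %c2, %c1, %c0 : tensor<2x3xindex>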
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Operands: ¶
Operand | Description |
---|---|
elements | any type |
Results: ¶
Result | Description |
---|---|
result | statically shaped tensor of any type values |
tensor.generate (::mlir::tensor::GenerateOp) ¶
Creates a dynamically sized tensor from elements
Syntax:
operation ::= `tensor.generate` $dynamicExtents $body attr-dict `:` type($result)
This operation creates a dynamically sized tensor with elements of any type. It expects one index operand per dynamic extent of the result tensor.
The body region defines the tensor’s elements. It takes `index` operands as its region arguments that span the index space. The element at the given position is yielded with the `yield` operation (see `YieldOp`). There is no defined ordering to the invocations of the body. It is conceptually a “parallel map” operation.
Example:
%tnsr = tensor.generate %m, %n {
^bb0(%i : index, %j : index, %k : index):
  ...
  tensor.yield %elem : f32
} : tensor<?x3x?xf32>
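A minimal runnable sketch, assuming %m and %n hold the two dynamic extents; every element is set to a constant:
%cst = arith.constant 1.0 : f32
%t = tensor.generate %m, %n {
^bb0(%i : index, %j : index, %k : index):
  tensor.yield %cst : f32
} : tensor<?x3x?xf32>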
Traits: RecursiveSideEffects, SingleBlockImplicitTerminator<mlir::tensor::YieldOp>
Interfaces: ReifyRankedShapedTypeOpInterface
Operands: ¶
Operand | Description |
---|---|
dynamicExtents | index |
Results: ¶
Result | Description |
---|---|
result | ranked tensor of any type values |
tensor.insert (::mlir::tensor::InsertOp) ¶
element insertion operation
Syntax:
operation ::= `tensor.insert` $scalar `into` $dest `[` $indices `]` attr-dict `:` type($dest)
The `tensor.insert` op writes a scalar into a tensor `dest` as specified by the operation’s indices.
It returns a copy of `dest` with the element at the given indices updated to the value of `scalar`.
The arity of indices must match the rank of the tensor `dest` (i.e., if a tensor is of rank 3, then 3 indices are required for the insert). The indices should all be of `index` type.
Example:
%4 = tensor.insert %t into %dest[%1, %2] : tensor<4x4xi32>
%5 = tensor.insert %rt into %dest[%1, %2] : tensor<?x?xi32>
%6 = tensor.insert %ut into %dest[%1, %2] : tensor<*xi32>
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Operands: ¶
Operand | Description |
---|---|
scalar | any type |
dest | tensor of any type values |
indices | index |
Results: ¶
Result | Description |
---|---|
result | tensor of any type values |
tensor.insert_slice (::mlir::tensor::InsertSliceOp) ¶
insert_slice operation
Syntax:
operation ::= `tensor.insert_slice` $source `into` $dest ``
custom<OperandsOrIntegersOffsetsOrStridesList>($offsets, $static_offsets)
custom<OperandsOrIntegersSizesList>($sizes, $static_sizes)
custom<OperandsOrIntegersOffsetsOrStridesList>($strides, $static_strides)
attr-dict `:` type($source) `into` type($dest)
The “insert_slice” operation inserts a tensor `source` into another tensor `dest` as specified by the operation’s offsets, sizes and strides arguments.
It returns a copy of `dest` with the proper slice updated with the value of `source`.
The insert_slice operation supports the following arguments:
- source: the tensor that is inserted.
- dest: the tensor into which the source tensor is inserted.
- offsets: tensor-rank number of offsets into the `dest` tensor into which the slice is inserted.
- sizes: tensor-rank number of sizes which specify the sizes of the source tensor type.
- strides: tensor-rank number of strides that specify subsampling in each dimension.
The representation based on offsets, sizes and strides supports a partially-static specification via attributes specified through the `static_offsets`, `static_sizes` and `static_strides` arguments. The special sentinel values ShapedType::kDynamicSize and ShapedType::kDynamicStrideOrOffset encode that the corresponding entry has a dynamic value.
After buffer allocation, the “insert_slice” op is expected to lower into a memref.subview op.
An insert_slice operation may additionally specify insertion into a tensor of higher rank than the source tensor, along dimensions that are statically known to be of size 1. This rank-altering behavior is not required by the op semantics: this flexibility allows progressively dropping unit dimensions while lowering between different flavors of ops that operate on tensors. The rank-altering behavior of tensor.insert_slice matches the rank-reducing behavior of tensor.extract_slice.
Example:
// Rank-altering insert_slice.
%1 = tensor.insert_slice %t into %0[0, 0, 0][1, 16, 4][1, 1, 1] :
tensor<16x4xf32> into tensor<8x16x4xf32>
%3 = tensor.insert_slice %tt into %2[%o0, 4, %o2][1, %sz1, 1][1, %st1, 1] :
tensor<1x?xf32> into tensor<8x16x4xf32>
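A hedged sketch of a fully dynamic insertion, assuming SSA values %o0-%o2, %sz0-%sz2 and %st0-%st2 for the offsets, sizes and strides, and a dynamically shaped %slice:
// Fully dynamic insert_slice.
%5 = tensor.insert_slice %slice into %2[%o0, %o1, %o2][%sz0, %sz1, %sz2][%st0, %st1, %st2] :
  tensor<?x?x?xf32> into tensor<8x16x4xf32>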
Traits: AttrSizedOperandSegments
Interfaces: NoSideEffect (MemoryEffectOpInterface), OffsetSizeAndStrideOpInterface, ReifyRankedShapedTypeOpInterface
Effects: MemoryEffects::Effect{}
Attributes: ¶
Attribute | MLIR Type | Description |
---|---|---|
static_offsets | ::mlir::ArrayAttr | 64-bit integer array attribute |
static_sizes | ::mlir::ArrayAttr | 64-bit integer array attribute |
static_strides | ::mlir::ArrayAttr | 64-bit integer array attribute |
Operands: ¶
Operand | Description |
---|---|
source | ranked tensor of any type values |
dest | ranked tensor of any type values |
offsets | index |
sizes | index |
strides | index |
Results: ¶
Result | Description |
---|---|
result | ranked tensor of any type values |
tensor.pad (::mlir::tensor::PadOp) ¶
tensor pad operation
Syntax:
operation ::= `tensor.pad` $source
(`nofold` $nofold^)?
`low` `` custom<OperandsOrIntegersSizesList>($low, $static_low)
`high` `` custom<OperandsOrIntegersSizesList>($high, $static_high)
$region attr-dict `:` type($source) `to` type($result)
`tensor.pad` is an operation that pads the `source` tensor with the given `low` and `high` padding configuration.
The PadOp operation supports the following arguments:
- source: the “base” tensor on which to pad.
- low: a list containing the padding along the start of each dimension, i.e., `low`.
- high: a list containing the padding along the end of each dimension, i.e., `high`.
- nofold: indicates that the operation should not be folded when source and result types are equal.
The result tensor dimensions are `low` + `dim` + `high` along that dimension. The number of elements of `low` and `high` must match the rank of the input tensor. They can be either a constant or a dynamic value.
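For instance (a minimal sketch, assuming an f32 padding value %cst defined elsewhere; the region that yields it is described below), padding a tensor<2x3xf32> with low[1, 2] and high[2, 3] yields a tensor<5x8xf32>, since 1 + 2 + 2 = 5 and 2 + 3 + 3 = 8:
%p = tensor.pad %t low[1, 2] high[2, 3] {
^bb0(%i : index, %j : index):
  tensor.yield %cst : f32
} : tensor<2x3xf32> to tensor<5x8xf32>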
The region of the `tensor.pad` operation returns the value to use for the padding. The arguments of the region represent the index of the source being accessed. There should be as many arguments as the rank of the `source` tensor. The value `yield`-ed by the region is used as the value of the view at the given position.
If `nofold` is set, the padding operation will not be folded away even if the source type and the padded type have the same static shape. This can be used, e.g., for packing or promotion to faster memory.
Example 1:
%pad_value = ... : f32
%0 = tensor.pad %arg0 low[1, 2] high[2, 3] {
^bb0(%arg0 : index, %arg1 : index):
tensor.yield %pad_value : f32
} : tensor<?x?xf32> to tensor<?x?xf32>
Example 2:
%pad_value = ... : f32
%0 = tensor.pad %arg0 low[2, %arg1, 3, 3] high[3, 3, %arg1, 2] {
^bb0(%arg2: index, %arg3: index, %arg4: index, %arg5: index):
tensor.yield %pad_value : f32
} : tensor<1x2x2x?xf32> to tensor<6x?x?x?xf32>
Example 3:
%pad_value = ... : f32
%0 = tensor.pad %arg0 low[0, 0] high[%ub0, %ub1] {
^bb0(%arg1: index, %arg2: index):
tensor.yield %pad_value : f32
} : tensor<2x3xf32> to tensor<?x?xf32>
Example 4:
// Force the padded value to always exist with `nofold`.
%pad_value = ... : f32
%0 = tensor.pad %arg0 nofold low[0, 0] high[0, 0] {
^bb0(%arg1: index, %arg2: index):
tensor.yield %pad_value : f32
} : tensor<2x3xf32> to tensor<2x3xf32>
Traits: AttrSizedOperandSegments, SingleBlockImplicitTerminator<mlir::tensor::YieldOp>
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Attributes: ¶
Attribute | MLIR Type | Description |
---|---|---|
static_low | ::mlir::ArrayAttr | 64-bit integer array attribute |
static_high | ::mlir::ArrayAttr | 64-bit integer array attribute |
nofold | ::mlir::UnitAttr | unit attribute |
Operands: ¶
Operand | Description |
---|---|
source | tensor of any type values |
low | index |
high | index |
Results: ¶
Result | Description |
---|---|
result | tensor of any type values |
tensor.rank (::mlir::tensor::RankOp) ¶
rank operation
Syntax:
operation ::= `tensor.rank` $tensor attr-dict `:` type($tensor)
The `tensor.rank` operation takes a tensor operand and returns its rank.
Example:
%0 = tensor.rank %arg0 : tensor<*xf32>
%1 = tensor.rank %arg1 : tensor<?x?xf32>
Interfaces: InferTypeOpInterface, NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Operands: ¶
Operand | Description |
---|---|
tensor | tensor of any type values |
Results: ¶
Result | Description |
---|---|
«unnamed» | index |
tensor.reshape (::mlir::tensor::ReshapeOp) ¶
tensor reshape operation
Syntax:
operation ::= `tensor.reshape` $source `(` $shape `)` attr-dict `:` functional-type(operands, results)
The `reshape` operation converts a tensor from one type to an equivalent type with a provided shape. The source and destination types are compatible if both have the same element type and the same number of elements. The following combinations are possible:
a. Source type is ranked or unranked. Shape argument has static size. Result type is ranked.
// Reshape statically-shaped tensor.
%dst = tensor.reshape %src(%shape)
: (tensor<4x1xf32>, tensor<1xi32>) -> tensor<4xf32>
%dst0 = tensor.reshape %src(%shape0)
: (tensor<4x1xf32>, tensor<2xi32>) -> tensor<2x2xf32>
// Flatten unranked tensor.
%dst = tensor.reshape %src(%shape)
: (tensor<*xf32>, tensor<1xi32>) -> tensor<?xf32>
b. Source type is ranked or unranked. Shape argument has dynamic size. Result type is unranked.
// Reshape dynamically-shaped 1D tensor.
%dst = tensor.reshape %src(%shape)
: (tensor<?xf32>, tensor<?xi32>) -> tensor<*xf32>
// Reshape unranked tensor.
%dst = tensor.reshape %src(%shape)
: (tensor<*xf32>, tensor<?xi32>) -> tensor<*xf32>
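A self-contained sketch, assuming the target shape is materialized with tensor.from_elements (index values are accepted for the shape elements alongside signless integers):
%c2 = arith.constant 2 : index
%shape = tensor.from_elements %c2, %c2 : tensor<2xindex>
%dst2 = tensor.reshape %src(%shape)
    : (tensor<4x1xf32>, tensor<2xindex>) -> tensor<2x2xf32>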
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Operands: ¶
Operand | Description |
---|---|
source | tensor of any type values |
shape | 1D tensor of signless integer or index values |
Results: ¶
Result | Description |
---|---|
result | tensor of any type values |
tensor.splat (::mlir::tensor::SplatOp) ¶
tensor splat or broadcast operation
Syntax:
operation ::= `tensor.splat` $input attr-dict `:` type($aggregate)
Broadcast the operand to all elements of the result tensor. The operand is required to be of integer/index/float type, and the result tensor must be statically shaped.
Example:
%s = arith.constant 10.1 : f32
%t = tensor.splat %s : tensor<8x16xf32>
TODO: This operation is easy to extend to broadcast to dynamically shaped tensors:
// Broadcasts %s to a 2-d dynamically shaped tensor, with %m, %n binding
// to the sizes of the two dynamic dimensions.
%m = "foo"() : () -> (index)
%n = "bar"() : () -> (index)
%t = tensor.splat %s [%m, %n] : tensor<?x?xf32>
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Operands: ¶
Operand | Description |
---|---|
input | integer/index/float type |
Results: ¶
Result | Description |
---|---|
aggregate | statically shaped tensor of any type values |
tensor.yield (::mlir::tensor::YieldOp) ¶
Yield a value from a region
Syntax:
operation ::= `tensor.yield` $value attr-dict `:` type($value)
This operation is used to yield a single value from within a region. It is used to create dynamically sized tensors (see `tensor.generate` and `tensor.pad` ops).
Traits: HasParent<::mlir::tensor::GenerateOp, ::mlir::tensor::PadOp>, ReturnLike, Terminator
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Operands: ¶
Operand | Description |
---|---|
value | any type |