# MLIR

Multi-Level IR Compiler Framework

# 'tensor' Dialect

The tensor dialect is intended to hold core tensor creation and manipulation ops, which are not strongly associated with any particular other dialect or domain abstraction. The primary smoke test of this is whether an op makes sense for any tensor element type.

We leave it to other dialects to hold the vast swath of possible computations one might want to do on a tensor.

The tensor type is (for better or for worse) used to represent all kinds of things, and supports an open-ended set of element types. Examples:

• representing large, dense aggregations of primitive types, suitable for high-performance numerical computing.
• representing shapes in the shape dialect, which consist of small 1D tensors of index data type.
• representing aggregations of strings or “variant” types.
• representing large, sparse aggregations of primitive types, suitable for high-performance numerical computing.

Thus, for the tensor dialect, we prefer for now to constrain the scope as much as possible. The expectation is that at some point in the future, the tensor dialect’s scope may be broadened through a careful discussion of the tradeoffs.

The tensor type is actually a builtin type (it lives in the builtin dialect), and does not live in this dialect.

## Operation definition

### tensor.cast (::mlir::tensor::CastOp)

tensor cast operation

Syntax:

```
operation ::= `tensor.cast` $source attr-dict `:` type($source) `to` type($dest)
```

Convert a tensor from one type to an equivalent type without changing any data elements. The source and destination types must both be tensor types with the same element type. If both are ranked, then the rank should be the same and static dimensions should match. The operation is invalid if converting to a mismatching constant dimension.

Example:

```mlir
// Convert from unknown rank to rank 2 with unknown dimension sizes.
%2 = tensor.cast %1 : tensor<*xf32> to tensor<?x?xf32>

// Convert to a type with more known dimensions.
%3 = tensor.cast %2 : tensor<?x?xf32> to tensor<4x?xf32>

// Discard static dimension and rank information.
%4 = tensor.cast %3 : tensor<4x?xf32> to tensor<?x?xf32>
%5 = tensor.cast %4 : tensor<?x?xf32> to tensor<*xf32>
```

#### Operands:

| Operand | Description |
| ------- | ----------- |
| `source` | tensor of any type values |

#### Results:

| Result | Description |
| ------ | ----------- |
| `dest` | tensor of any type values |

### tensor.dim (::mlir::tensor::DimOp)

dimension index operation

Syntax:

```
operation ::= `tensor.dim` attr-dict $source `,` $index `:` type($source)
```


The dim operation takes a tensor and a dimension operand of type index. It returns the size of the requested dimension of the given tensor. If the dimension index is out of bounds, the behavior is undefined.

The specified tensor type is that of the first operand.

Example:

```mlir
// Always returns 4, can be constant folded:
%c0 = constant 0 : index
%x = tensor.dim %A, %c0 : tensor<4x?xf32>

// Returns the dynamic dimension of %A.
%c1 = constant 1 : index
%y = tensor.dim %A, %c1 : tensor<4x?xf32>

// Equivalent generic form:
%x = "tensor.dim"(%A, %c0) : (tensor<4x?xf32>, index) -> index
%y = "tensor.dim"(%A, %c1) : (tensor<4x?xf32>, index) -> index
```


#### Operands:

| Operand | Description |
| ------- | ----------- |
| `source` | tensor of any type values |
| `index` | index |

#### Results:

| Result | Description |
| ------ | ----------- |
| `result` | index |

### tensor.extract (::mlir::tensor::ExtractOp)

element extraction operation

Syntax:

```
operation ::= `tensor.extract` $tensor `[` $indices `]` attr-dict `:` type($tensor)
```

The tensor.extract op reads a tensor and returns one element from it, specified by an index list. The output of the op is a new value with the same type as the elements of the tensor. The arity of indices must match the rank of the accessed value (i.e., if a tensor is of rank 3, then 3 indices are required for the extract). The indices should all be of index type.

Example:

```mlir
%4 = tensor.extract %t[%1, %2] : tensor<4x4xi32>
%5 = tensor.extract %rt[%1, %2] : tensor<?x?xi32>
%6 = tensor.extract %ut[%1, %2] : tensor<*xi32>
```

#### Operands:

| Operand | Description |
| ------- | ----------- |
| `tensor` | tensor of any type values |
| `indices` | index |

#### Results:

| Result | Description |
| ------ | ----------- |
| `result` | any type |

### tensor.extract_slice (::mlir::tensor::ExtractSliceOp)

extract slice operation

Syntax:

```
operation ::= `tensor.extract_slice` $source
              custom<OperandsOrIntegersOffsetsOrStridesList>($offsets, $static_offsets)
              custom<OperandsOrIntegersSizesList>($sizes, $static_sizes)
              custom<OperandsOrIntegersOffsetsOrStridesList>($strides, $static_strides)
              attr-dict `:` type($source) `to` type($result)
```


The “extract_slice” operation extracts a tensor from another tensor as specified by the operation’s offsets, sizes and strides arguments.

The extract_slice operation supports the following arguments:

• source: the “base” tensor from which to extract a slice.
• offsets: tensor-rank number of offsets into the “base” tensor from which to extract the slice.
• sizes: tensor-rank number of sizes which specify the sizes of the result tensor type.
• strides: tensor-rank number of strides specifying subsampling in each dimension.

The representation based on offsets, sizes and strides supports a partially-static specification via attributes, specified through the static_offsets, static_sizes and static_strides arguments. The special sentinel values ShapedType::kDynamicSize and ShapedType::kDynamicStrideOrOffset encode that the corresponding entry has a dynamic value.

After buffer-allocation, the “extract_slice” op is expected to lower into a “subview” op.

An extract_slice operation may additionally reduce the rank of the resulting tensor by removing dimensions that are statically known to be of size 1.

Example:

```mlir
// Rank-reducing extract_slice.
%1 = tensor.extract_slice %0[0, 0, 0][1, 16, 4][1, 1, 1] :
  tensor<8x16x4xf32> to tensor<16x4xf32>
%3 = tensor.extract_slice %2[3, 4, 2][1, 6, 3][1, 1, 1] :
  tensor<8x16x4xf32> to tensor<6x3xf32>
```
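The examples above use fully static offsets and sizes; the op also accepts SSA values for entries that are not known statically, which then appear as dynamic (`?`) dimensions in the result type. A minimal sketch (the values %t, %off and %sz are illustrative, not from the original):

```mlir
// Dynamic offset and size in dimension 0; dimension 1 stays static.
%slice = tensor.extract_slice %t[%off, 0] [%sz, 16] [1, 1] :
  tensor<?x16xf32> to tensor<?x16xf32>
```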


#### Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `static_offsets` | ::mlir::ArrayAttr | 64-bit integer array attribute |
| `static_sizes` | ::mlir::ArrayAttr | 64-bit integer array attribute |
| `static_strides` | ::mlir::ArrayAttr | 64-bit integer array attribute |

#### Operands:

| Operand | Description |
| ------- | ----------- |
| `source` | ranked tensor of any type values |
| `offsets` | index |
| `sizes` | index |
| `strides` | index |

#### Results:

| Result | Description |
| ------ | ----------- |
| `result` | ranked tensor of any type values |

### tensor.from_elements (::mlir::tensor::FromElementsOp)

tensor from elements operation.

Syntax:

```
operation ::= `tensor.from_elements` $elements attr-dict `:` type($result)
```


Create a 1D tensor from a range of same-type arguments.

Example:

```mlir
tensor.from_elements i_1, ..., i_N : tensor<Nxindex>
```
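The schematic form above can be made concrete; a small sketch with index elements (the names are illustrative, not from the original):

```mlir
%c0 = constant 0 : index
%c1 = constant 1 : index
%t = tensor.from_elements %c0, %c1 : tensor<2xindex>
```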


#### Operands:

| Operand | Description |
| ------- | ----------- |
| `elements` | any type |

#### Results:

| Result | Description |
| ------ | ----------- |
| `result` | 1D tensor of any type values |

### tensor.generate (::mlir::tensor::GenerateOp)

Creates a dynamically sized tensor from elements

Syntax:

```
operation ::= `tensor.generate` $dynamicExtents $body attr-dict `:` type($result)
```

This operation creates a dynamically sized tensor with elements of any type. It expects one index operand per dynamic extent of the result tensor. The body region defines the tensor’s elements. It takes index operands as its region arguments that span the index space. The element at the given position is yielded with the yield operation (see YieldOp). There is no defined ordering to the invocations of the body. It is conceptually a “parallel map” operation.

Example:

```mlir
%tnsr = tensor.generate %m, %n {
^bb0(%i : index, %j : index, %k : index):
  ...
  yield %elem : f32
} : tensor<?x3x?xf32>
```

#### Operands:

| Operand | Description |
| ------- | ----------- |
| `dynamicExtents` | index |

#### Results:

| Result | Description |
| ------ | ----------- |
| `result` | ranked tensor of any type values |

### tensor.insert (::mlir::tensor::InsertOp)

element insertion operation

Syntax:

```
operation ::= `tensor.insert` $scalar `into` $dest `[` $indices `]` attr-dict `:` type($dest)
```

The tensor.insert op writes a scalar into the tensor dest as specified by the operation’s indices. It returns a copy of dest with the element at the given indices updated to the value of scalar. The arity of indices must match the rank of the tensor dest (i.e., if a tensor is of rank 3, then 3 indices are required for the insert). The indices should all be of index type.
Example:

```mlir
%4 = tensor.insert %t into %dest[%1, %2] : tensor<4x4xi32>
%5 = tensor.insert %rt into %dest[%1, %2] : tensor<?x?xi32>
%6 = tensor.insert %ut into %dest[%1, %2] : tensor<*xi32>
```

#### Operands:

| Operand | Description |
| ------- | ----------- |
| `scalar` | any type |
| `dest` | tensor of any type values |
| `indices` | index |

#### Results:

| Result | Description |
| ------ | ----------- |
| `result` | tensor of any type values |

### tensor.insert_slice (::mlir::tensor::InsertSliceOp)

insert_slice operation

Syntax:

```
operation ::= `tensor.insert_slice` $source `into` $dest
              custom<OperandsOrIntegersOffsetsOrStridesList>($offsets, $static_offsets)
              custom<OperandsOrIntegersSizesList>($sizes, $static_sizes)
              custom<OperandsOrIntegersOffsetsOrStridesList>($strides, $static_strides)
              attr-dict `:` type($source) `into` type($dest)
```

The “insert_slice” operation inserts a tensor source into another tensor dest as specified by the operation’s offsets, sizes and strides arguments. It returns a copy of dest with the proper slice updated with the value of source.

The insert_slice operation supports the following arguments:

• source: the tensor that is inserted.
• dest: the tensor into which the source tensor is inserted.
• offsets: tensor-rank number of offsets into the dest tensor into which the slice is inserted.
• sizes: tensor-rank number of sizes which specify the sizes of the result tensor type.
• strides: tensor-rank number of strides that specify subsampling in each dimension.

The representation based on offsets, sizes and strides supports a partially-static specification via attributes, specified through the static_offsets, static_sizes and static_strides arguments. The special sentinel values ShapedType::kDynamicSize and ShapedType::kDynamicStrideOrOffset encode that the corresponding entry has a dynamic value.

After buffer-allocation, the “insert_slice” op is expected to become an in-place buffer update.
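The description of insert_slice is not accompanied by an example; a minimal sketch of a static-size insert (all names %tile, %dest, %i, %j are illustrative, not from the original):

```mlir
// Insert a statically shaped 4x4 tile into a 16x16 tensor at a
// dynamic offset (%i, %j), with unit strides.
%r = tensor.insert_slice %tile into %dest[%i, %j] [4, 4] [1, 1] :
  tensor<4x4xf32> into tensor<16x16xf32>
```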
#### Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `static_offsets` | ::mlir::ArrayAttr | 64-bit integer array attribute |
| `static_sizes` | ::mlir::ArrayAttr | 64-bit integer array attribute |
| `static_strides` | ::mlir::ArrayAttr | 64-bit integer array attribute |

#### Operands:

| Operand | Description |
| ------- | ----------- |
| `source` | ranked tensor of any type values |
| `dest` | ranked tensor of any type values |
| `offsets` | index |
| `sizes` | index |
| `strides` | index |

#### Results:

| Result | Description |
| ------ | ----------- |
| `result` | ranked tensor of any type values |

### tensor.reshape (::mlir::tensor::ReshapeOp)

tensor reshape operation

Syntax:

```
operation ::= `tensor.reshape` $source `(` $shape `)` attr-dict `:` functional-type(operands, results)
```

The reshape operation converts a tensor from one type to an equivalent type with a provided shape. The source and destination types are compatible if both have the same element type and the same number of elements. The following combinations are possible:

a. Source type is ranked or unranked. Shape argument has static size. Result type is ranked.

```mlir
// Reshape statically-shaped tensor.
%dst = tensor.reshape %src(%shape)
    : (tensor<4x1xf32>, tensor<1xi32>) -> tensor<4xf32>
%dst0 = tensor.reshape %src(%shape0)
    : (tensor<4x1xf32>, tensor<2xi32>) -> tensor<2x2xf32>

// Flatten unranked tensor.
%dst = tensor.reshape %src(%shape)
    : (tensor<*xf32>, tensor<1xi32>) -> tensor<?xf32>
```

b. Source type is ranked or unranked. Shape argument has dynamic size. Result type is unranked.

```mlir
// Reshape dynamically-shaped 1D tensor.
%dst = tensor.reshape %src(%shape)
    : (tensor<?xf32>, tensor<?xi32>) -> tensor<*xf32>

// Reshape unranked tensor.
%dst = tensor.reshape %src(%shape)
    : (tensor<*xf32>, tensor<?xi32>) -> tensor<*xf32>
```

#### Operands:

| Operand | Description |
| ------- | ----------- |
| `source` | tensor of any type values |
| `shape` | 1D tensor of signless integer or index values |

#### Results:

| Result | Description |
| ------ | ----------- |
| `result` | tensor of any type values |

### tensor.yield (::mlir::tensor::YieldOp)

Yield a value from a region

Syntax:

```
operation ::= `tensor.yield` $value attr-dict `:` type($value)
```


This operation is used to yield a single value from within a region. It is used to create dynamically sized tensors (see the tensor.generate op).
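A minimal sketch of this op terminating a tensor.generate body, written with the fully qualified op name (the names %t, %n, %cst are illustrative, not from the original):

```mlir
%t = tensor.generate %n {
^bb0(%i : index):
  %cst = constant 0.0 : f32
  tensor.yield %cst : f32
} : tensor<?xf32>
```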

#### Operands:

| Operand | Description |
| ------- | ----------- |
| `value` | any type |