
# 'sparse_tensor' Dialect

The SparseTensor dialect supports all the attributes, types, operations, and passes that are required to make sparse tensor types first class citizens within the MLIR compiler infrastructure. The dialect forms a bridge between high-level operations on sparse tensors types and lower-level operations on the actual sparse storage schemes consisting of pointers, indices, and values. Lower-level support may consist of fully generated code or may be provided by means of a small sparse runtime support library.

The concept of treating sparsity as a property, not a tedious implementation detail, by letting a sparse compiler generate sparse code automatically was pioneered for linear algebra by [Bik96] in MT1 (see https://www.aartbik.com/sparse.php) and formalized to tensor algebra by [Kjolstad17,Kjolstad20] in the Sparse Tensor Algebra Compiler (TACO) project (see http://tensor-compiler.org).

The MLIR implementation [Biketal22] closely follows the “sparse iteration theory” that forms the foundation of TACO. A rewriting rule is applied to each tensor expression in the Linalg dialect (MLIR’s tensor index notation) where the sparsity of tensors is indicated using the per-dimension level types dense/compressed together with a specification of the order on the dimensions (see [Chou18] for an in-depth discussion and possible extensions to these level types).

Subsequently, a topologically sorted iteration graph, reflecting the required order on indices with respect to the dimensions of each tensor, is constructed to ensure that all tensors are visited in natural index order. Next, iteration lattices are constructed for the tensor expression for every index in topological order. Each iteration lattice point consists of a conjunction of tensor indices together with a tensor (sub)expression that needs to be evaluated for that conjunction. Within the lattice, iteration points are ordered according to the way indices are exhausted. As such these iteration lattices drive actual sparse code generation, which consists of a relatively straightforward one-to-one mapping from iteration lattices to combinations of for-loops, while-loops, and if-statements.

Sparse tensor outputs that materialize uninitialized are handled with insertions in pure lexicographical index order if all parallel loops are outermost, or using a 1-dimensional access pattern expansion (a.k.a. workspace) where feasible [Gustavson72,Bik96,Kjolstad19].
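To make the lattice-to-loop mapping concrete, consider sparse vector addition x(i) = a(i) + b(i) with both operands compressed. The following Python sketch is only an illustration of the loop shape the three lattice points produce, under the assumption of a simple coordinate/value list layout; it is not the actual generated code:

```python
# Sketch of the code shape a sparse compiler emits for x(i) = a(i) + b(i)
# with both a and b compressed: a while-loop co-iterates the two coordinate
# streams, and the three iteration lattice points (both present, only a
# present, only b present) become the branches inside the loop.
def sparse_add(a_coords, a_vals, b_coords, b_vals):
    out_coords, out_vals = [], []
    ia = ib = 0
    while ia < len(a_coords) and ib < len(b_coords):
        ca, cb = a_coords[ia], b_coords[ib]
        if ca == cb:              # lattice point: a(i) and b(i)
            out_coords.append(ca)
            out_vals.append(a_vals[ia] + b_vals[ib])
            ia += 1; ib += 1
        elif ca < cb:             # lattice point: a(i) only
            out_coords.append(ca)
            out_vals.append(a_vals[ia])
            ia += 1
        else:                     # lattice point: b(i) only
            out_coords.append(cb)
            out_vals.append(b_vals[ib])
            ib += 1
    # after one operand is exhausted, the singleton lattice points remain
    out_coords += a_coords[ia:] + b_coords[ib:]
    out_vals += a_vals[ia:] + b_vals[ib:]
    return out_coords, out_vals
```

Note how the conjunction `a(i) and b(i)` dominates the other two points, matching the ordering of points within the lattice.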

• [Bik96] Aart J.C. Bik. Compiler Support for Sparse Matrix Computations. PhD thesis, Leiden University, May 1996.
• [Biketal22] Aart J.C. Bik, Penporn Koanantakool, Tatiana Shpeisman, Nicolas Vasilache, Bixia Zheng, and Fredrik Kjolstad. Compiler Support for Sparse Tensor Computations in MLIR. ACM Transactions on Architecture and Code Optimization, June, 2022. See: https://dl.acm.org/doi/10.1145/3544559
• [Chou18] Stephen Chou, Fredrik Berg Kjolstad, and Saman Amarasinghe. Format Abstraction for Sparse Tensor Algebra Compilers. Proceedings of the ACM on Programming Languages, October 2018.
• [Gustavson72] Fred G. Gustavson. Some basic techniques for solving sparse systems of linear equations. In Sparse Matrices and Their Applications, pages 41–52. Plenum Press, New York, 1972.
• [Kjolstad17] Fredrik Berg Kjolstad, Shoaib Ashraf Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. The Tensor Algebra Compiler. Proceedings of the ACM on Programming Languages, October 2017.
• [Kjolstad19] Fredrik Berg Kjolstad, Peter Ahrens, Shoaib Ashraf Kamil, and Saman Amarasinghe. Tensor Algebra Compilation with Workspaces. Proceedings of the IEEE/ACM International Symposium on Code Generation and Optimization, 2019.
• [Kjolstad20] Fredrik Berg Kjolstad. Sparse Tensor Algebra Compilation. PhD thesis, MIT, February, 2020.

## Operation definition

### sparse_tensor.binary (::mlir::sparse_tensor::BinaryOp)

Binary set operation utilized within linalg.generic

Syntax:

```
operation ::= sparse_tensor.binary $x , $y : attr-dict type($x) , type($y) to type($output)
              overlap = $overlapRegion
              left = (identity $left_identity^):($leftRegion)?
              right = (identity $right_identity^):($rightRegion)?
```


Defines a computation within a linalg.generic operation that takes two operands and executes one of the regions depending on whether both operands or either operand is nonzero (i.e. stored explicitly in the sparse storage format).

Three regions are defined for the operation and must appear in this order:

• overlap (elements present in both sparse tensors)
• left (elements only present in the left sparse tensor)
• right (elements only present in the right sparse tensor)

Each region contains a single block describing the computation and result. Every non-empty block must end with a sparse_tensor.yield and the return type must match the type of output. The overlap region’s block has two arguments, while the left and right regions’ blocks each have a single argument.

A region may also be declared empty (i.e. left={}), indicating that the region does not contribute to the output. For example, setting both left={} and right={} is equivalent to the intersection of the two inputs as only the overlap region will contribute values to the output.

As a convenience, there is also a special token identity which can be used in place of the left or right region. This token indicates that the return value is the input value (i.e. func(%x) => return %x). As a practical example, setting left=identity and right=identity would be equivalent to a union operation where non-overlapping values in the inputs are copied to the output unchanged.

Example of isEqual applied to intersecting elements only:

```mlir
%C = bufferization.alloc_tensor...
%0 = linalg.generic #trait
   ins(%A: tensor<?xf64, #SparseVector>,
       %B: tensor<?xf64, #SparseVector>)
  outs(%C: tensor<?xi8, #SparseVector>) {
  ^bb0(%a: f64, %b: f64, %c: i8) :
    %result = sparse_tensor.binary %a, %b : f64, f64 to i8
      overlap={
        ^bb0(%arg0: f64, %arg1: f64):
          %cmp = arith.cmpf "oeq", %arg0, %arg1 : f64
          %ret_i8 = arith.extui %cmp : i1 to i8
          sparse_tensor.yield %ret_i8 : i8
      }
      left={}
      right={}
    linalg.yield %result : i8
} -> tensor<?xi8, #SparseVector>
```
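The merge semantics of the three regions can be modeled on plain dictionaries mapping index to stored value. This is a simplified Python sketch, not the dialect's implementation; the dict-based storage and function names are illustrative assumptions:

```python
def sparse_binary(a, b, overlap=None, left=None, right=None):
    """Model of sparse_tensor.binary on dicts mapping index -> stored value.

    Each region is a Python callable, or None to model an empty region {}
    (the region contributes nothing to the output).  Passing the identity
    function models the `identity` token.
    """
    out = {}
    for i in set(a) | set(b):
        if i in a and i in b:
            if overlap:
                out[i] = overlap(a[i], b[i])  # element stored in both inputs
        elif i in a:
            if left:
                out[i] = left(a[i])           # only in the left input
        else:
            if right:
                out[i] = right(b[i])          # only in the right input
    return out

a = {0: 1.0, 3: 2.0}
b = {3: 2.0, 7: 5.0}
# isEqual on intersecting elements only (left={} and right={}):
eq = sparse_binary(a, b, overlap=lambda x, y: int(x == y))
# union-like behavior via identity on both sides:
union = sparse_binary(a, b, overlap=lambda x, y: x + y,
                      left=lambda x: x, right=lambda y: y)
```

Here `eq` keeps only the overlapping index, exactly as in the isEqual example above, while `union` copies the non-overlapping values through unchanged.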


Example of A+B in upper triangle, A-B in lower triangle:

```mlir
%C = bufferization.alloc_tensor...
%1 = linalg.generic #trait
   ins(%A: tensor<?x?xf64, #CSR>, %B: tensor<?x?xf64, #CSR>)
  outs(%C: tensor<?x?xf64, #CSR>) {
  ^bb0(%a: f64, %b: f64, %c: f64) :
    %row = linalg.index 0 : index
    %col = linalg.index 1 : index
    %result = sparse_tensor.binary %a, %b : f64, f64 to f64
      overlap={
        ^bb0(%x: f64, %y: f64):
          %cmp = arith.cmpi "uge", %col, %row : index
          %upperTriangleResult = arith.addf %x, %y : f64
          %lowerTriangleResult = arith.subf %x, %y : f64
          %ret = arith.select %cmp, %upperTriangleResult, %lowerTriangleResult : f64
          sparse_tensor.yield %ret : f64
      }
      left=identity
      right={
        ^bb0(%y: f64):
          %cmp = arith.cmpi "uge", %col, %row : index
          %lowerTriangleResult = arith.negf %y : f64
          %ret = arith.select %cmp, %y, %lowerTriangleResult : f64
          sparse_tensor.yield %ret : f64
      }
    linalg.yield %result : f64
} -> tensor<?x?xf64, #CSR>
```


Example of set difference. Returns a copy of A where its sparse structure is not overlapped by B. The element type of B can be different than A because we never use its values, only its sparse structure:

```mlir
%C = bufferization.alloc_tensor...
%2 = linalg.generic #trait
   ins(%A: tensor<?x?xf64, #CSR>, %B: tensor<?x?xi32, #CSR>)
  outs(%C: tensor<?x?xf64, #CSR>) {
  ^bb0(%a: f64, %b: i32, %c: f64) :
    %result = sparse_tensor.binary %a, %b : f64, i32 to f64
      overlap={}
      left=identity
      right={}
    linalg.yield %result : f64
} -> tensor<?x?xf64, #CSR>
```


Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| `left_identity` | ::mlir::UnitAttr | unit attribute |
| `right_identity` | ::mlir::UnitAttr | unit attribute |

#### Operands:

| Operand | Description |
| --- | --- |
| `x` | any type |
| `y` | any type |

#### Results:

| Result | Description |
| --- | --- |
| `output` | any type |

### sparse_tensor.compress (::mlir::sparse_tensor::CompressOp)

Compresses an access pattern for insertion

Syntax:

```
operation ::= sparse_tensor.compress $tensor , $indices , $values , $filled , $added , $count
              attr-dict : type($tensor) , type($indices) , type($values) ,
              type($filled) , type($added) , type($count)
```


Finishes a single access pattern expansion by moving inserted elements into the sparse storage scheme. The values and filled array are reset in a sparse fashion by only iterating over set elements through an indirection using the added array, so that the operations are kept proportional to the number of nonzeros. See the ‘expand’ operation for more details.

Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.

Example:

```mlir
sparse_tensor.compress %0, %1, %values, %filled, %added, %2
  : tensor<4x4xf64, #CSR>, memref<?xindex>, memref<?xf64>,
    memref<?xi1>, memref<?xindex>, index
```


#### Operands:

| Operand | Description |
| --- | --- |
| `tensor` | sparse tensor of any type values |
| `indices` | 1D memref of index values |
| `values` | strided memref of any type values of rank 1 |
| `filled` | 1D memref of 1-bit signless integer values |
| `added` | 1D memref of index values |
| `count` | index |

### sparse_tensor.concatenate (::mlir::sparse_tensor::ConcatenateOp)

Concatenates a list of tensors into a single tensor.

Syntax:

```
operation ::= sparse_tensor.concatenate $inputs attr-dict : type($inputs) to type($result)
```

Concatenates a list of input tensors into the output tensor, all of the same rank. The concatenation happens along the specified dimension (0 <= dimension < rank). The size of the resulting dimension is the sum of the corresponding input dimension sizes, while all the other dimensions must have the same size in the input and output tensors. Only statically-sized input tensors are accepted, while the output tensor can be dynamically-sized.

Example:

```mlir
%0 = sparse_tensor.concatenate %1, %2 { dimension = 0 : index }
  : tensor<64x64xf64, #CSR>, tensor<64x64xf64, #CSR> to tensor<128x64xf64, #CSR>
```

#### Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| `dimension` | ::mlir::IntegerAttr | index attribute |

#### Operands:

| Operand | Description |
| --- | --- |
| `inputs` | ranked tensor of any type values |

#### Results:

| Result | Description |
| --- | --- |
| `result` | ranked tensor of any type values |

### sparse_tensor.convert (::mlir::sparse_tensor::ConvertOp)

Converts between different tensor types

Syntax:

```
operation ::= sparse_tensor.convert $source attr-dict : type($source) to type($dest)
```


Converts one sparse or dense tensor type to another tensor type. The rank of the source and destination types must match exactly, and the dimension sizes must either match exactly or relax from a static to a dynamic size. The sparse encoding of the two types can obviously be completely different. The name convert was preferred over cast, since the operation may incur a non-trivial cost.

When converting between two different sparse tensor types, only explicitly stored values are moved from one underlying sparse storage format to the other. When converting from an unannotated dense tensor type to a sparse tensor type, an explicit test for nonzero values is used. When converting to an unannotated dense tensor type, implicit zeroes in the sparse storage format are made explicit. Note that the conversions can have non-trivial costs associated with them, since they may involve elaborate data structure transformations. Also, conversions from sparse tensor types into dense tensor types may be infeasible in terms of storage requirements.

Examples:

```mlir
%0 = sparse_tensor.convert %a : tensor<32x32xf32> to tensor<32x32xf32, #CSR>
%1 = sparse_tensor.convert %a : tensor<32x32xf32> to tensor<?x?xf32, #CSR>
%2 = sparse_tensor.convert %b : tensor<8x8xi32, #CSC> to tensor<8x8xi32, #CSR>
%3 = sparse_tensor.convert %c : tensor<4x8xf64, #CSR> to tensor<4x?xf64, #CSC>

// The following conversion is not allowed (since it would require a
// runtime assertion that the source's dimension size is actually 100).
%4 = sparse_tensor.convert %d : tensor<?xf64> to tensor<100xf64, #SV>
```
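The two conversion directions described above (explicit nonzero test when densifying into sparse, making implicit zeros explicit when going dense) can be sketched in a few lines of Python. This is a conceptual model using CSR-style pointer/index arrays; the function names are illustrative, not part of the dialect:

```python
def dense_to_csr(dense):
    """Convert a dense matrix (list of rows) into CSR-style arrays using an
    explicit test for nonzero values, as in a dense-to-sparse conversion."""
    pointers, indices, values = [0], [], []
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:            # explicit nonzero test
                indices.append(j)
                values.append(v)
        pointers.append(len(indices))
    return pointers, indices, values

def csr_to_dense(pointers, indices, values, ncols):
    """Inverse direction: implicit zeros in the sparse storage format
    are made explicit again."""
    dense = []
    for r in range(len(pointers) - 1):
        row = [0] * ncols
        for k in range(pointers[r], pointers[r + 1]):
            row[indices[k]] = values[k]
        dense.append(row)
    return dense
```

The dense output direction also illustrates the storage caveat mentioned above: the result materializes every implicit zero, which may be infeasible for large sparse tensors.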


Traits: SameOperandsAndResultElementType

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| --- | --- |
| `source` | tensor of any type values |

#### Results:

| Result | Description |
| --- | --- |
| `dest` | tensor of any type values |

### sparse_tensor.expand (::mlir::sparse_tensor::ExpandOp)

Expands an access pattern for insertion

Syntax:

```
operation ::= sparse_tensor.expand $tensor attr-dict
              : type($tensor) to type($values) , type($filled) , type($added) , type($count)
```


Performs an access pattern expansion for the innermost dimensions of the given tensor. This operation is useful to implement kernels in which a sparse tensor appears as output. This technique is known under several different names and has several alternative implementations, for example, phase counter [Gustavson72], expanded or switch array [Pissanetzky84], in-phase scan [Duff90], access pattern expansion [Bik96], and workspaces [Kjolstad19].

The values and filled array have sizes that suffice for a dense innermost dimension (e.g. a full row for matrices). The added array and count are used to store new indices when a false value is encountered in the filled array. All arrays should be allocated before the loop (possibly even shared between loops in a future optimization) so that their dense initialization can be amortized over many iterations. Setting and resetting the dense arrays in the loop nest itself is kept sparse by only iterating over set elements through an indirection using the added array, so that the operations are kept proportional to the number of nonzeros.

Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.

Example:

```mlir
%values, %filled, %added, %count = sparse_tensor.expand %0
  : tensor<4x4xf64, #CSR> to memref<?xf64>, memref<?xi1>, memref<?xindex>, index
```
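The expand/insert/compress life cycle of a workspace can be illustrated with a small Python model of one output row. This is a hedged sketch under the assumption of a per-row workspace; the function names are illustrative, and the real dialect operates on memrefs:

```python
# Sketch of one access-pattern expansion round for a row of a sparse output.
def expand(ncols):
    # values/filled are dense per-row scratch arrays (allocated once before
    # the loop nest); added/count record which positions were touched.
    return [0.0] * ncols, [False] * ncols, [], 0

def insert(values, filled, added, count, col, contribution):
    if not filled[col]:           # first insertion at this column
        filled[col] = True
        added.append(col)
        count += 1
    values[col] += contribution
    return count

def compress(values, filled, added, count):
    # Move results into the output and reset the scratch arrays sparsely:
    # only the `count` touched positions are visited through the `added`
    # indirection, keeping work proportional to the number of nonzeros.
    row = sorted((c, values[c]) for c in added[:count])
    for c in added[:count]:
        values[c] = 0.0
        filled[c] = False
    added.clear()
    return row
```

Because `compress` resets only the touched entries, the dense initialization cost of `expand` is amortized over all rows.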


#### Operands:

| Operand | Description |
| --- | --- |
| `tensor` | sparse tensor of any type values |

#### Results:

| Result | Description |
| --- | --- |
| `values` | strided memref of any type values of rank 1 |
| `filled` | 1D memref of 1-bit signless integer values |
| `added` | 1D memref of index values |
| `count` | index |

### sparse_tensor.lex_insert (::mlir::sparse_tensor::LexInsertOp)

Inserts a value into given sparse tensor in lexicographical index order

Syntax:

```
operation ::= sparse_tensor.lex_insert $tensor , $indices , $value
              attr-dict : type($tensor) , type($indices) , type($value)
```


Inserts the given value at the given indices into the underlying sparse storage format of the given tensor. This operation can only be applied when the tensor materializes uninitialized from a bufferization.alloc_tensor operation, the insertions occur in strict lexicographical index order, and the final tensor is constructed with a load operation that has the hasInserts attribute set.

Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.

Example:

```mlir
sparse_tensor.lex_insert %tensor, %indices, %val
  : tensor<1024x1024xf64, #CSR>, memref<?xindex>, memref<f64>
```
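Why strict lexicographical order matters can be seen in a tiny Python model: appending to the end of the coordinate/value streams is all the storage scheme ever has to do. This is an illustrative sketch, not the dialect's implementation:

```python
# Model of strict lexicographic insertion into an uninitialized sparse
# tensor; coordinates are tuples of indices.
class LexBuilder:
    def __init__(self):
        self.coords, self.values = [], []

    def lex_insert(self, indices, value):
        # Insertions must arrive in strictly increasing lexicographic order,
        # so a plain append suffices: no searching, shifting, or sorting.
        assert not self.coords or indices > self.coords[-1], \
            "insertions must be in strict lexicographic index order"
        self.coords.append(indices)
        self.values.append(value)

b = LexBuilder()
b.lex_insert((0, 1), 1.0)
b.lex_insert((0, 4), 2.0)
b.lex_insert((2, 0), 3.0)
```

A final “load with hasInserts” step would then seal these streams into the actual storage scheme.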


#### Operands:

| Operand | Description |
| --- | --- |
| `tensor` | sparse tensor of any type values |
| `indices` | 1D memref of index values |
| `value` | any type |

### sparse_tensor.load (::mlir::sparse_tensor::LoadOp)

Rematerializes tensor from underlying sparse storage format

Syntax:

```
operation ::= sparse_tensor.load $tensor (hasInserts $hasInserts^)? attr-dict : type($tensor)
```

Rematerializes a tensor from the underlying sparse storage format of the given tensor. This is similar to the bufferization.to_tensor operation in the sense that it provides a bridge between a bufferized world view and a tensor world view. Unlike the bufferization.to_tensor operation, however, this sparse operation is used only temporarily to maintain a correctly typed intermediate representation during progressive bufferization.

The hasInserts attribute denotes whether insertions to the underlying sparse storage format may have occurred, in which case the underlying sparse storage format needs to be finalized. Otherwise, the operation simply folds away.

Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.

Example:

```mlir
%1 = sparse_tensor.load %0 : tensor<8xf64, #SV>
```

Traits: SameOperandsAndResultType

Interfaces: InferTypeOpInterface

#### Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| `hasInserts` | ::mlir::UnitAttr | unit attribute |

#### Operands:

| Operand | Description |
| --- | --- |
| `tensor` | sparse tensor of any type values |

#### Results:

| Result | Description |
| --- | --- |
| `result` | tensor of any type values |

### sparse_tensor.new (::mlir::sparse_tensor::NewOp)

Materializes a new sparse tensor from given source

Syntax:

```
operation ::= sparse_tensor.new $source attr-dict : type($source) to type($result)
```


Materializes a sparse tensor with contents taken from an opaque pointer provided by source. For targets that have access to a file system, for example, this pointer may be a filename (or file) of a sparse tensor in a particular external storage format. The form of the operation is kept deliberately very general to allow for alternative implementations in the future, such as pointers to buffers or runnable initialization code. The operation is provided as an anchor that materializes a properly typed sparse tensor with initial contents into a computation.

Example:

```mlir
sparse_tensor.new %source : !Source to tensor<1024x1024xf64, #CSR>
```


Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| --- | --- |
| `source` | any type |

#### Results:

| Result | Description |
| --- | --- |
| `result` | sparse tensor of any type values |

### sparse_tensor.out (::mlir::sparse_tensor::OutOp)

Outputs a sparse tensor to the given destination

Syntax:

```
operation ::= sparse_tensor.out $tensor , $dest attr-dict : type($tensor) , type($dest)
```


Outputs the contents of a sparse tensor to the destination defined by an opaque pointer provided by dest. For targets that have access to a file system, for example, this pointer may specify a filename (or file) for output. The form of the operation is kept deliberately very general to allow for alternative implementations in the future, such as sending the contents to a buffer defined by a pointer.

Example:

```mlir
sparse_tensor.out %t, %dest : tensor<1024x1024xf64, #CSR>, !Dest
```


#### Operands:

| Operand | Description |
| --- | --- |
| `tensor` | sparse tensor of any type values |
| `dest` | any type |

### sparse_tensor.reduce (::mlir::sparse_tensor::ReduceOp)

Custom reduction operation utilized within linalg.generic

Syntax:

```
operation ::= sparse_tensor.reduce $x , $y , $identity attr-dict : type($output) $region
```

Defines a computation within a linalg.generic operation that takes two operands and an identity value and reduces all values down to a single result based on the computation in the region. The region must contain exactly one block taking two arguments. The block must end with a sparse_tensor.yield and the output must match the input argument types.

Note that this operation is only required for custom reductions beyond the standard operations (add, mul, and, or, etc). The linalg.generic iterator_types defines which indices are being reduced. When the associated operands are used in an operation, a reduction will occur. The use of this explicit reduce operation is not required in most cases.

Example of Matrix->Vector reduction using min(product(x_i), 100):

```mlir
%cf1 = arith.constant 1.0 : f64
%cf100 = arith.constant 100.0 : f64
%C = bufferization.alloc_tensor...
%0 = linalg.generic #trait
   ins(%A: tensor<?x?xf64, #SparseMatrix>)
  outs(%C: tensor<?xf64, #SparseVector>) {
  ^bb0(%a: f64, %c: f64) :
    %result = sparse_tensor.reduce %c, %a, %cf1 : f64 {
      ^bb0(%arg0: f64, %arg1: f64):
        %0 = arith.mulf %arg0, %arg1 : f64
        %cmp = arith.cmpf "ogt", %0, %cf100 : f64
        %ret = arith.select %cmp, %cf100, %0 : f64
        sparse_tensor.yield %ret : f64
    }
    linalg.yield %result : f64
} -> tensor<?xf64, #SparseVector>
```

Traits: SameOperandsAndResultType

Interfaces: InferTypeOpInterface, NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| --- | --- |
| `x` | any type |
| `y` | any type |
| `identity` | any type |

#### Results:

| Result | Description |
| --- | --- |
| `output` | any type |

### sparse_tensor.indices (::mlir::sparse_tensor::ToIndicesOp)

Extracts indices array at given dimension from a tensor

Syntax:

```
operation ::= sparse_tensor.indices $tensor , $dim attr-dict : type($tensor) to type($result)
```

Returns the indices array of the sparse storage format at the given dimension for the given sparse tensor. This is similar to the bufferization.to_memref operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the bufferization.to_memref operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the indices array.

Example:

```mlir
%1 = sparse_tensor.indices %0, %c1 : tensor<64x64xf64, #CSR> to memref<?xindex>
```

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| --- | --- |
| `tensor` | sparse tensor of any type values |
| `dim` | index |

#### Results:

| Result | Description |
| --- | --- |
| `result` | strided memref of any type values of rank 1 |

### sparse_tensor.pointers (::mlir::sparse_tensor::ToPointersOp)

Extracts pointers array at given dimension from a tensor

Syntax:

```
operation ::= sparse_tensor.pointers $tensor , $dim attr-dict : type($tensor) to type($result)
```

Returns the pointers array of the sparse storage format at the given dimension for the given sparse tensor. This is similar to the bufferization.to_memref operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the bufferization.to_memref operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the pointers array.

Example:

```mlir
%1 = sparse_tensor.pointers %0, %c1 : tensor<64x64xf64, #CSR> to memref<?xindex>
```

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| --- | --- |
| `tensor` | sparse tensor of any type values |
| `dim` | index |

#### Results:

| Result | Description |
| --- | --- |
| `result` | strided memref of any type values of rank 1 |

### sparse_tensor.values (::mlir::sparse_tensor::ToValuesOp)

Extracts numerical values array from a tensor

Syntax:

```
operation ::= sparse_tensor.values $tensor attr-dict : type($tensor) to type($result)
```


Returns the values array of the sparse storage format for the given sparse tensor, independent of the actual dimension. This is similar to the bufferization.to_memref operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the bufferization.to_memref operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the values array.

Example:

```mlir
%1 = sparse_tensor.values %0 : tensor<64x64xf64, #CSR> to memref<?xf64>
```


Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| --- | --- |
| `tensor` | sparse tensor of any type values |

#### Results:

| Result | Description |
| --- | --- |
| `result` | strided memref of any type values of rank 1 |
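Together, pointers, indices, and values give direct access to the underlying buffers of the sparse storage scheme. The following Python sketch shows, for a small hand-built CSR matrix, what the three views contain and how they combine for (slow, shown only for clarity) random access; the array contents are computed by hand for this one example:

```python
# For the 4x4 CSR matrix
#   [[0, 1, 0, 0],
#    [2, 0, 3, 0],
#    [0, 0, 0, 0],
#    [0, 0, 0, 4]]
# the storage scheme holds a pointers array and an indices array for the
# compressed dimension (dim 1), and a single values array:
pointers_dim1 = [0, 1, 3, 3, 4]  # row r spans values[pointers[r]:pointers[r+1]]
indices_dim1 = [1, 0, 2, 3]      # column coordinate of each stored entry
values = [1, 2, 3, 4]            # stored nonzeros, independent of dimension

def lookup(r, c):
    """Random access through the three views (linear scan per row)."""
    for k in range(pointers_dim1[r], pointers_dim1[r + 1]):
        if indices_dim1[k] == c:
            return values[k]
    return 0                     # implicit zero
```

Note how the empty row 2 costs nothing in indices/values: its pointer span [3, 3) is empty.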

### sparse_tensor.unary (::mlir::sparse_tensor::UnaryOp)

Unary set operation utilized within linalg.generic

Syntax:

```
operation ::= sparse_tensor.unary $x attr-dict : type($x) to type($output)
              present = $presentRegion
              absent = $absentRegion
```

Defines a computation within a linalg.generic operation that takes a single operand and executes one of two regions depending on whether the operand is nonzero (i.e. stored explicitly in the sparse storage format).

Two regions are defined for the operation and must appear in this order:

• present (elements present in the sparse tensor)
• absent (elements not present in the sparse tensor)

Each region contains a single block describing the computation and result. A non-empty block must end with a sparse_tensor.yield and the return type must match the type of output. The present region’s block has one argument, while the absent region’s block has zero arguments.

A region may also be declared empty (i.e. absent={}), indicating that the region does not contribute to the output.

Example of A+1, restricted to existing elements:

```mlir
%C = bufferization.alloc_tensor...
%0 = linalg.generic #trait
   ins(%A: tensor<?xf64, #SparseVector>)
  outs(%C: tensor<?xf64, #SparseVector>) {
  ^bb0(%a: f64, %c: f64) :
    %result = sparse_tensor.unary %a : f64 to f64
      present={
        ^bb0(%arg0: f64):
          %cf1 = arith.constant 1.0 : f64
          %ret = arith.addf %arg0, %cf1 : f64
          sparse_tensor.yield %ret : f64
      }
      absent={}
    linalg.yield %result : f64
} -> tensor<?xf64, #SparseVector>
```

Example returning +1 for existing values and -1 for missing values:

```mlir
%result = sparse_tensor.unary %a : f64 to i32
  present={
    ^bb0(%x: f64):
      %ret = arith.constant 1 : i32
      sparse_tensor.yield %ret : i32
  }
  absent={
    %ret = arith.constant -1 : i32
    sparse_tensor.yield %ret : i32
  }
```

Example showing a structural inversion (existing values become missing in the output, while missing values are filled with 1):

```mlir
%result = sparse_tensor.unary %a : f64 to i64
  present={}
  absent={
    %ret = arith.constant 1 : i64
    sparse_tensor.yield %ret : i64
  }
```

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| --- | --- |
| `x` | any type |

#### Results:

| Result | Description |
| --- | --- |
| `output` | any type |

### sparse_tensor.yield (::mlir::sparse_tensor::YieldOp)

Yield from sparse_tensor set-like operations

Syntax:

```
operation ::= sparse_tensor.yield $result attr-dict : type($result)
```


Yields a value from within a binary or unary block.

Example:

```mlir
%0 = sparse_tensor.unary %a : i64 to i64 {
  ^bb0(%arg0: i64):
    %cst = arith.constant 1 : i64
    %ret = arith.addi %arg0, %cst : i64
    sparse_tensor.yield %ret : i64
}
```


Traits: Terminator

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| --- | --- |
| `result` | any type |

## Attribute definition

### SparseTensorEncodingAttr

An attribute to encode TACO-style information on sparsity properties of tensors. The encoding is eventually used by a sparse compiler pass to generate sparse code fully automatically for all tensor expressions that involve tensors with a sparse encoding. Compiler passes that run before this sparse compiler pass need to be aware of the semantics of tensor types with such an encoding.

The attribute consists of the following fields.

• Dimension level type for each dimension of a tensor type:
• dense : dimension is dense, all entries along this dimension are stored.
• compressed : dimension is sparse, only nonzeros along this dimension are stored, without duplicates, i.e., compressed (unique).
• Dimension ordering on the indices of this tensor type. Unlike dense storage, most sparse storage schemes do not provide fast random access. This affine map specifies the order of dimensions that should be supported by the sparse storage scheme. For example, for a 2-d tensor, “(i,j) -> (i,j)” requests row-wise storage and “(i,j) -> (j,i)” requests column-wise storage.
• The required bit width for “pointer” storage (integral offsets into the sparse storage scheme). A narrow width reduces the memory footprint of overhead storage, as long as the width suffices to define the total required range (viz. the maximum number of stored entries over all indirection dimensions). The choices are 8, 16, 32, 64, or 0 for a native width.
• The required bit width for “index” storage (elements of the coordinates of stored entries). A narrow width reduces the memory footprint of overhead storage, as long as the width suffices to define the total required range (viz. the maximum value of each tensor index over all dimensions). The choices are 8, 16, 32, 64, or 0 for a native width.

Example:

```mlir
#DCSC = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed", "compressed" ],
  dimOrdering = affine_map<(i,j) -> (j,i)>,
  pointerBitWidth = 32,
  indexBitWidth = 8
}>

... tensor<8x8xf64, #DCSC> ...
```


#### Parameters:

| Parameter | C++ type | Description |
| --- | --- | --- |
| dimLevelType | ::llvm::ArrayRef<SparseTensorEncodingAttr::DimLevelType> | Per-dimension level type (dense or compressed) |
| dimOrdering | AffineMap | |
| pointerBitWidth | unsigned | |
| indexBitWidth | unsigned | |
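A sketch of what an encoding like the #DCSC example above implies for storage, in illustrative Python (the real buffers are managed by generated code or the runtime support library, and the layout details here are an assumption for exposition): with dimOrdering (i,j) -> (j,i) the outer stored dimension is the column j, and because both levels are compressed, empty columns consume no overhead storage at all.

```python
def build_dcsc(entries):
    """Build doubly compressed, column-major storage from {(i, j): value}.

    Both dimensions are compressed: the outer level stores only the
    nonempty columns j, the inner level the row coordinates i per column.
    """
    cols = sorted({j for (_, j) in entries})
    ptr_outer = [0, len(cols)]       # single root region of stored columns
    idx_outer = cols                 # compressed column coordinates
    ptr_inner, idx_inner, values = [0], [], []
    for j in cols:
        rows = sorted(i for (i, jj) in entries if jj == j)
        for i in rows:
            idx_inner.append(i)
            values.append(entries[(i, j)])
        ptr_inner.append(len(idx_inner))
    return ptr_outer, idx_outer, ptr_inner, idx_inner, values

# 8x8 matrix with nonzeros only in columns 2 and 5:
dcsc = build_dcsc({(0, 2): 1.0, (7, 2): 2.0, (3, 5): 3.0})
```

The pointerBitWidth and indexBitWidth fields would then bound the integer types of the ptr_* and idx_* arrays, respectively, trading overhead-storage footprint against representable range.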