# MLIR

Multi-Level IR Compiler Framework

# 'sparse_tensor' Dialect

The SparseTensor dialect supports all the attributes, types, operations, and passes that are required to make sparse tensor types first class citizens within the MLIR compiler infrastructure. The dialect forms a bridge between high-level operations on sparse tensor types and lower-level operations on the actual sparse storage schemes consisting of pointers, indices, and values. Lower-level support may consist of fully generated code or may be provided by means of a small sparse runtime support library.
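The three arrays named above can be made concrete with a small sketch (plain Python, not part of the dialect or its runtime library): a row-wise compressed (CSR-style) matrix stores one pointers entry per row boundary, one indices entry per stored value, and the nonzero values themselves.

```python
# Sketch of a CSR-style sparse storage scheme built from a dense matrix,
# using the three arrays from the text: pointers, indices, and values.
dense = [
    [1.0, 0.0, 0.0, 2.0],
    [0.0, 0.0, 3.0, 0.0],
    [0.0, 4.0, 0.0, 5.0],
]

pointers = [0]   # pointers[i] .. pointers[i+1] delimit row i's stored entries
indices = []     # column index of each stored (nonzero) value
values = []      # the nonzero values themselves, in row-major order

for row in dense:
    for j, v in enumerate(row):
        if v != 0.0:
            indices.append(j)
            values.append(v)
    pointers.append(len(values))

print(pointers)  # [0, 2, 3, 5]
print(indices)   # [0, 3, 2, 1, 3]
print(values)    # [1.0, 2.0, 3.0, 4.0, 5.0]
```

Only the five nonzeros are materialized; the pointers array has one entry per row plus one, independent of how sparse the rows are.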

The concept of treating sparsity as a property, not a tedious implementation detail, by letting a sparse compiler generate sparse code automatically was pioneered for dense linear algebra by [Bik96] in MT1 (see https://www.aartbik.com/sparse.php) and formalized to tensor algebra by [Kjolstad17,Kjolstad20] in the Sparse Tensor Algebra Compiler (TACO) project (see http://tensor-compiler.org).

The MLIR implementation closely follows the “sparse iteration theory” that forms the foundation of TACO. A rewriting rule is applied to each tensor expression in the Linalg dialect (MLIR’s tensor index notation) where the sparsity of tensors is indicated using the per-dimension level types dense/compressed together with a specification of the order on the dimensions (see [Chou18] for an in-depth discussion and possible extensions to these level types). Subsequently, a topologically sorted iteration graph, reflecting the required order on indices with respect to the dimensions of each tensor, is constructed to ensure that all tensors are visited in natural index order. Next, iteration lattices are constructed for the tensor expression for every index in topological order. Each iteration lattice point consists of a conjunction of tensor indices together with a tensor (sub)expression that needs to be evaluated for that conjunction. Within the lattice, iteration points are ordered according to the way indices are exhausted. As such these iteration lattices drive actual sparse code generation, which consists of a relatively straightforward one-to-one mapping from iteration lattices to combinations of for-loops, while-loops, and if-statements.
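As an illustration of that last step, here is a minimal Python sketch (hand-written for this explanation, not actual compiler output) of the while-loop that an iteration lattice for x(i) = a(i) * b(i) yields when both operands are compressed: the conjunction of the two tensor indices means a value is produced only where both operands store an entry, and the loop advances whichever operand lags.

```python
# Co-iteration over two compressed (sparse) vectors, as driven by the
# lattice point for the conjunction a_i AND b_i in x(i) = a(i) * b(i).
a_ind, a_val = [1, 3, 4], [2.0, 3.0, 4.0]   # sparse vector a
b_ind, b_val = [0, 3, 4], [5.0, 6.0, 7.0]   # sparse vector b

x_ind, x_val = [], []
pa = pb = 0
while pa < len(a_ind) and pb < len(b_ind):
    if a_ind[pa] == b_ind[pb]:     # both operands present: evaluate a(i)*b(i)
        x_ind.append(a_ind[pa])
        x_val.append(a_val[pa] * b_val[pb])
        pa += 1
        pb += 1
    elif a_ind[pa] < b_ind[pb]:    # a lags: skip its unmatched entry
        pa += 1
    else:                          # b lags: skip its unmatched entry
        pb += 1

print(x_ind, x_val)  # [3, 4] [18.0, 28.0]
```

A disjunction (e.g. addition) would instead produce entries for the unmatched cases too, which is exactly what the extra lattice points below the top point encode.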

- [Bik96] Aart J.C. Bik. Compiler Support for Sparse Matrix Computations. PhD thesis, Leiden University, May 1996.
- [Kjolstad17] Fredrik Berg Kjolstad, Shoaib Ashraf Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. The Tensor Algebra Compiler. Proceedings of the ACM on Programming Languages, October 2017.
- [Kjolstad20] Fredrik Berg Kjolstad. Sparse Tensor Algebra Compilation. PhD thesis, MIT, February 2020.
- [Chou18] Stephen Chou, Fredrik Berg Kjolstad, and Saman Amarasinghe. Format Abstraction for Sparse Tensor Algebra Compilers. Proceedings of the ACM on Programming Languages, October 2018.

## Attribute definition

### SparseTensorEncodingAttr

An attribute to encode TACO-style information on sparsity properties of tensors. The encoding is eventually used by a sparse compiler pass to generate sparse code fully automatically for all tensor expressions that involve tensors with a sparse encoding. Compiler passes that run before this sparse compiler pass need to be aware of the semantics of tensor types with such an encoding.

Example:

```mlir
#DCSC = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed", "compressed" ],
  dimOrdering = affine_map<(i,j) -> (j,i)>,
  pointerBitWidth = 32,
  indexBitWidth = 8
}>

... tensor<8x8xf64, #DCSC> ...
```
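A simplified sketch of the storage this #DCSC encoding implies (illustrative Python, ignoring the narrow pointerBitWidth/indexBitWidth integer types and the outer level's own pointer pair): dimOrdering maps (i,j) to (j,i), so columns form the outer dimension, and because both levels are "compressed", only nonempty columns are materialized, each storing only its nonzero rows.

```python
# Doubly compressed, column-major (DCSC-style) storage of a small matrix.
dense = [
    [1.0, 0.0, 0.0],
    [0.0, 0.0, 2.0],
    [3.0, 0.0, 4.0],
]
nrows, ncols = 3, 3

col_ids = []      # indices of the nonempty columns (outer compressed level)
row_ptrs = [0]    # delimits each stored column's entries
row_ids = []      # row index of every stored value (inner compressed level)
vals = []

for j in range(ncols):
    entries = [(i, dense[i][j]) for i in range(nrows) if dense[i][j] != 0.0]
    if entries:   # a column is materialized only if it has nonzeros
        col_ids.append(j)
        for i, v in entries:
            row_ids.append(i)
            vals.append(v)
        row_ptrs.append(len(vals))

print(col_ids)    # [0, 2]  -- the empty column 1 is not stored
print(row_ptrs)   # [0, 2, 4]
print(row_ids)    # [0, 2, 1, 2]
print(vals)       # [1.0, 3.0, 2.0, 4.0]
```

Contrast this with CSC ("dense", "compressed" in the same order), which would keep an entry for every column, empty or not.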


#### Parameters:

| Parameter | C++ type | Description |
| --- | --- | --- |
| dimLevelType | `::llvm::ArrayRef<SparseTensorEncodingAttr::DimLevelType>` | per-dimension level type |
| dimOrdering | `AffineMap` | |
| pointerBitWidth | `unsigned` | |
| indexBitWidth | `unsigned` | |

## Operation definition

### sparse_tensor.convert (::mlir::sparse_tensor::ConvertOp)

Converts between different tensor types

Syntax:

```
operation ::= sparse_tensor.convert $source attr-dict : type($source) to type($dest)
```

Converts one sparse or dense tensor type to another tensor type. The rank and dimensions of the source and destination types must match exactly; only the sparse encoding of these types may be different. The name convert was preferred over cast, since the operation may incur a non-trivial cost. When converting between two different sparse tensor types, only explicitly stored values are moved from one underlying sparse storage format to the other. When converting from an unannotated dense tensor type to a sparse tensor type, an explicit test for nonzero values is used. When converting to an unannotated dense tensor type, implicit zeroes in the sparse storage format are made explicit. Note that the conversions can have non-trivial costs associated with them, since they may involve elaborate data structure transformations. Also, conversions from sparse tensor types into dense tensor types may be infeasible in terms of storage requirements.

Examples:

```mlir
%0 = sparse_tensor.convert %1 : tensor<32x32xf32> to tensor<32x32xf32, #CSR>
%2 = sparse_tensor.convert %3 : tensor<8x8xi32, #CSC> to tensor<8x8xi32, #CSR>
```

#### Operands:

| Operand | Description |
| --- | --- |
| source | tensor of any type values |

#### Results:

| Result | Description |
| --- | --- |
| dest | tensor of any type values |

### sparse_tensor.new (::mlir::sparse_tensor::NewOp)

Constructs a new sparse tensor

Syntax:

```
operation ::= sparse_tensor.new $source attr-dict : type($source) to type($result)
```


Constructs a sparse tensor value with contents taken from an opaque pointer provided by source. For targets that have access to a file system, for example, this pointer may be a filename (or file) of a sparse tensor in a particular external storage format. The form of the operation is kept deliberately very general to allow for alternative implementations in the future, such as pointers to buffers or runnable initialization code. The operation is provided as an anchor that materializes a fully typed sparse tensor value into a computation.

Example:

```mlir
sparse_tensor.new %source : !Source to tensor<1024x1024xf64, #CSR>
```


#### Operands:

| Operand | Description |
| --- | --- |
| source | any type |

#### Results:

| Result | Description |
| --- | --- |
| result | tensor of any type values |

### sparse_tensor.indices (::mlir::sparse_tensor::ToIndicesOp)

Extract indices array at given dimension from a tensor

Syntax:

```
operation ::= sparse_tensor.indices $tensor , $dim attr-dict : type($tensor) to type($result)
```


Returns the indices array of the sparse storage scheme at the given dimension for the given sparse tensor. This is similar to the memref.buffer_cast operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the memref.buffer_cast operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the indices array.

Example:

```mlir
%1 = sparse_tensor.indices %0, %c1
   : tensor<64x64xf64, #CSR> to memref<?xindex>
```


#### Operands:

| Operand | Description |
| --- | --- |
| tensor | tensor of any type values |
| dim | index |

#### Results:

| Result | Description |
| --- | --- |
| result | strided memref of any type values of rank 1 |

### sparse_tensor.pointers (::mlir::sparse_tensor::ToPointersOp)

Extract pointers array at given dimension from a tensor

Syntax:

```
operation ::= sparse_tensor.pointers $tensor , $dim attr-dict : type($tensor) to type($result)
```


Returns the pointers array of the sparse storage scheme at the given dimension for the given sparse tensor. This is similar to the memref.buffer_cast operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the memref.buffer_cast operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the pointers array.

Example:

```mlir
%1 = sparse_tensor.pointers %0, %c1
   : tensor<64x64xf64, #CSR> to memref<?xindex>
```
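For a CSR-annotated matrix, the pointers array extracted at dimension 1 delimits each row's stored entries. A small sketch with illustrative values (hand-written here, not produced by the actual support library) shows how consecutive entries recover per-row nonzero counts:

```python
# Pointers array for a 3-row CSR matrix with 5 stored values: entry i and
# entry i+1 bound row i's slice of the indices and values arrays.
pointers = [0, 2, 3, 5]

# The difference of consecutive pointers is the nonzero count of each row.
nnz_per_row = [pointers[i + 1] - pointers[i] for i in range(len(pointers) - 1)]
print(nnz_per_row)  # [2, 1, 2]
```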


#### Operands:

| Operand | Description |
| --- | --- |
| tensor | tensor of any type values |
| dim | index |

#### Results:

| Result | Description |
| --- | --- |
| result | strided memref of any type values of rank 1 |

### sparse_tensor.tensor (::mlir::sparse_tensor::ToTensorOp)

Reconstructs tensor from array(s)

Syntax:

```
operation ::= sparse_tensor.tensor $memrefs attr-dict : type($memrefs) to type($result)
```

Reconstructs the sparse tensor from the sparse storage scheme array(s). This is similar to the memref.load operation in the sense that it provides a bridge between a bufferized world view and a tensor world view. Unlike the memref.load operation, however, this sparse operation is used only temporarily to maintain a correctly typed intermediate representation during progressive bufferization. Eventually the operation is folded away. The input arrays are defined unambiguously by the sparsity annotations (pointers and indices for overhead storage in every compressed dimension, followed by one final values array).

Examples:

```mlir
%1 = sparse_tensor.tensor %0 : memref<?xf64> to tensor<64x64xf64, #Dense>
%3 = sparse_tensor.tensor %0, %1, %2
   : memref<?xindex>, memref<?xindex>, memref<?xf32> to tensor<10x10xf32, #CSR>
```

#### Operands:

| Operand | Description |
| --- | --- |
| memrefs | strided memref of any type values of rank 1 |

#### Results:

| Result | Description |
| --- | --- |
| result | tensor of any type values |

### sparse_tensor.values (::mlir::sparse_tensor::ToValuesOp)

Extract numerical values array from a tensor

Syntax:

```
operation ::= sparse_tensor.values $tensor attr-dict : type($tensor) to type($result)
```


Returns the values array of the sparse storage scheme for the given sparse tensor, independent of the actual dimension. This is similar to the memref.buffer_cast operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the memref.buffer_cast operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the values array.

Example:

```mlir
%1 = sparse_tensor.values %0 : tensor<64x64xf64, #CSR> to memref<?xf64>
```


#### Operands:

| Operand | Description |
| --- | --- |
| tensor | tensor of any type values |

#### Results:

| Result | Description |
| --- | --- |
| result | strided memref of any type values of rank 1 |