# MLIR

Multi-Level IR Compiler Framework

# 'sparse_tensor' Dialect

The SparseTensor dialect supports all the attributes, types, operations, and passes that are required to make sparse tensor types first class citizens within the MLIR compiler infrastructure. The dialect forms a bridge between high-level operations on sparse tensors types and lower-level operations on the actual sparse storage schemes consisting of pointers, indices, and values. Lower-level support may consist of fully generated code or may be provided by means of a small sparse runtime support library.
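As a concrete illustration of such a storage scheme, the following plain-Python sketch (illustrative only, not part of the dialect or its runtime library) builds the classic pointers/indices/values triplet of CSR storage for a small matrix:

```python
# Illustrative sketch of a pointers/indices/values sparse storage scheme
# (CSR for a 2-d tensor); the helper name to_csr is invented for this example.
def to_csr(dense):
    """Compress the rows of a dense matrix into pointers/indices/values."""
    pointers, indices, values = [0], [], []
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:           # only nonzeros are stored explicitly
                indices.append(j)
                values.append(v)
        pointers.append(len(values))  # one pointer per row boundary
    return pointers, indices, values

# A 3x4 matrix with 4 nonzeros:
A = [[1, 0, 0, 2],
     [0, 0, 0, 0],
     [0, 3, 4, 0]]
print(to_csr(A))  # ([0, 2, 2, 4], [0, 3, 1, 2], [1, 2, 3, 4])
```

Note how the empty middle row costs nothing beyond a repeated pointer entry; storage stays proportional to the number of nonzeros.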

The concept of treating sparsity as a property, not a tedious implementation detail, by letting a sparse compiler generate sparse code automatically was pioneered for linear algebra by [Bik96] in MT1 (see https://www.aartbik.com/sparse.php) and formalized to tensor algebra by [Kjolstad17,Kjolstad20] in the Sparse Tensor Algebra Compiler (TACO) project (see http://tensor-compiler.org).

The MLIR implementation closely follows the “sparse iteration theory” that forms the foundation of TACO. A rewriting rule is applied to each tensor expression in the Linalg dialect (MLIR’s tensor index notation) where the sparsity of tensors is indicated using the per-dimension level types dense/compressed together with a specification of the order on the dimensions (see [Chou18] for an in-depth discussion and possible extensions to these level types).

Subsequently, a topologically sorted iteration graph, reflecting the required order on indices with respect to the dimensions of each tensor, is constructed to ensure that all tensors are visited in natural index order. Next, iteration lattices are constructed for the tensor expression for every index in topological order. Each iteration lattice point consists of a conjunction of tensor indices together with a tensor (sub)expression that needs to be evaluated for that conjunction. Within the lattice, iteration points are ordered according to the way indices are exhausted.

As such, these iteration lattices drive actual sparse code generation, which consists of a relatively straightforward one-to-one mapping from iteration lattices to combinations of for-loops, while-loops, and if-statements. Sparse tensor outputs that materialize uninitialized are handled with insertions in pure lexicographical index order if all parallel loops are outermost, or using a 1-dimensional access pattern expansion (a.k.a. workspace) where feasible [Gustavson72,Bik96,Kjolstad19].
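The while-loop structure that this code generation produces can be sketched in plain Python for the simple expression x(i) + y(i) over two compressed sparse vectors; the three branches correspond to the three lattice points (both operands present, only x, only y). This is an illustrative model of the generated control flow, not actual generated code:

```python
# Illustrative co-iteration sketch: the while/if structure an iteration
# lattice for x(i) + y(i) lowers to when both vectors are compressed.
def sparse_add(xi, xv, yi, yv):
    """Add two sparse vectors given as (indices, values) pairs."""
    p, q, out = 0, 0, []
    while p < len(xi) and q < len(yi):   # neither operand exhausted yet
        if xi[p] == yi[q]:               # lattice point: x and y overlap
            out.append((xi[p], xv[p] + yv[q])); p += 1; q += 1
        elif xi[p] < yi[q]:              # lattice point: only x present
            out.append((xi[p], xv[p])); p += 1
        else:                            # lattice point: only y present
            out.append((yi[q], yv[q])); q += 1
    out += list(zip(xi[p:], xv[p:]))     # drain whichever operand remains
    out += list(zip(yi[q:], yv[q:]))
    return out

print(sparse_add([0, 2, 5], [1.0, 2.0, 3.0], [2, 3], [10.0, 20.0]))
# [(0, 1.0), (2, 12.0), (3, 20.0), (5, 3.0)]
```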

• [Bik96] Aart J.C. Bik. Compiler Support for Sparse Matrix Computations. PhD thesis, Leiden University, May 1996.
• [Chou18] Stephen Chou, Fredrik Berg Kjolstad, and Saman Amarasinghe. Format Abstraction for Sparse Tensor Algebra Compilers. Proceedings of the ACM on Programming Languages, October 2018.
• [Gustavson72] Fred G. Gustavson. Some basic techniques for solving sparse systems of linear equations. In Sparse Matrices and Their Applications, pages 41–52. Plenum Press, New York, 1972.
• [Kjolstad17] Fredrik Berg Kjolstad, Shoaib Ashraf Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. The Tensor Algebra Compiler. Proceedings of the ACM on Programming Languages, October 2017.
• [Kjolstad19] Fredrik Berg Kjolstad, Peter Ahrens, Shoaib Ashraf Kamil, and Saman Amarasinghe. Tensor Algebra Compilation with Workspaces. Proceedings of the IEEE/ACM International Symposium on Code Generation and Optimization, 2019.
• [Kjolstad20] Fredrik Berg Kjolstad. Sparse Tensor Algebra Compilation. PhD thesis, MIT, February 2020.


## Attribute definition

### SparseTensorEncodingAttr

An attribute to encode TACO-style information on sparsity properties of tensors. The encoding is eventually used by a sparse compiler pass to generate sparse code fully automatically for all tensor expressions that involve tensors with a sparse encoding. Compiler passes that run before this sparse compiler pass need to be aware of the semantics of tensor types with such an encoding.

Example:

```mlir
#DCSC = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed", "compressed" ],
  dimOrdering = affine_map<(i,j) -> (j,i)>,
  pointerBitWidth = 32,
  indexBitWidth = 8
}>

... tensor<8x8xf64, #DCSC> ...
```


#### Parameters:

| Parameter | C++ type | Description |
| :-- | :-- | :-- |
| dimLevelType | `::llvm::ArrayRef<SparseTensorEncodingAttr::DimLevelType>` | Per-dimension level type |
| dimOrdering | `AffineMap` | |
| pointerBitWidth | `unsigned` | |
| indexBitWidth | `unsigned` | |
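To make the role of dimOrdering concrete, the following hypothetical Python sketch shows how a permutation such as affine_map<(i,j) -> (j,i)> reorders coordinates before they reach the storage scheme, so DCSC traverses columns first; the helper name apply_dim_ordering is invented for this illustration:

```python
# Hypothetical sketch: a dimOrdering like (i,j) -> (j,i) permutes logical
# coordinates into storage coordinates, yielding column-major traversal.
def apply_dim_ordering(coords, perm):
    """Map logical coordinates to storage coordinates via a permutation."""
    return tuple(coords[p] for p in perm)

logical = (3, 7)                            # element at row i=3, column j=7
print(apply_dim_ordering(logical, (1, 0)))  # (7, 3): stored column-first

# Sorting elements by the permuted order gives the DCSC visit order:
elems = {(0, 1): 1.0, (1, 0): 2.0}
order = sorted(elems, key=lambda c: apply_dim_ordering(c, (1, 0)))
print(order)  # [(1, 0), (0, 1)]: column 0's entry before column 1's
```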

## Operation definition

### sparse_tensor.binary (::mlir::sparse_tensor::BinaryOp)

Binary set operation utilized within linalg.generic

Syntax:

```
operation ::= sparse_tensor.binary $x , $y : attr-dict type($x) , type($y) to type($output) \n
              overlap = $overlapRegion \n
              left = (identity $left_identity^):($leftRegion)? \n
              right = (identity $right_identity^):($rightRegion)?
```


Defines a computation within a linalg.generic operation that takes two operands and executes one of the regions depending on whether both operands or either operand is nonzero (i.e. stored explicitly in the sparse storage format).

Three regions are defined for the operation and must appear in this order:

• overlap (elements present in both sparse tensors)
• left (elements only present in the left sparse tensor)
• right (elements only present in the right sparse tensor)

Each region contains a single block describing the computation and result. Every non-empty block must end with a sparse_tensor.yield and the return type must match the type of output. The primary (overlap) region’s block has two arguments, while the left and right regions’ blocks each have one argument.

A region may also be declared empty (i.e. left={}), indicating that the region does not contribute to the output. For example, setting both left={} and right={} is equivalent to the intersection of the two inputs as only the overlap region will contribute values to the output.

As a convenience, there is also a special token identity which can be used in place of the left or right region. This token indicates that the return value is the input value (i.e. func(%x) => return %x). As a practical example, setting left=identity and right=identity would be equivalent to a union operation where non-overlapping values in the inputs are copied to the output unchanged.
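The overlap/left/right semantics can be modeled in a few lines of Python over dictionaries mapping indices to stored values. This is a hedged sketch of the semantics only, not the MLIR lowering; passing None models an empty region ({}), and lambda v: v models the identity token:

```python
# Illustrative model of sparse_tensor.binary semantics over index->value
# dicts; the function name sparse_binary is invented for this sketch.
def sparse_binary(a, b, overlap=None, left=None, right=None):
    out = {}
    for i in a.keys() | b.keys():
        if i in a and i in b:                  # overlap region
            if overlap: out[i] = overlap(a[i], b[i])
        elif i in a:                           # left region
            if left: out[i] = left(a[i])
        else:                                  # right region
            if right: out[i] = right(b[i])
    return out

A, B = {0: 1.0, 2: 2.0}, {2: 10.0, 5: 20.0}
# Union-like combine: left=identity, right=identity.
u = sparse_binary(A, B, overlap=lambda x, y: x + y,
                  left=lambda v: v, right=lambda v: v)
print(sorted(u.items()))  # [(0, 1.0), (2, 12.0), (5, 20.0)]
# Set difference: keep A only where B stores nothing.
print(sorted(sparse_binary(A, B, left=lambda v: v).items()))  # [(0, 1.0)]
```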

Example of isEqual applied to intersecting elements only.

```mlir
%C = sparse_tensor.init...
%0 = linalg.generic #trait
  ins(%A: tensor<?xf64, #SparseVec>, %B: tensor<?xf64, #SparseVec>)
  outs(%C: tensor<?xi8, #SparseVec>) {
  ^bb0(%a: f64, %b: f64, %c: i8) :
    %result = sparse_tensor.binary %a, %b : f64, f64 to i8
      overlap={
        ^bb0(%arg0: f64, %arg1: f64):
          %cmp = arith.cmpf "oeq", %arg0, %arg1 : f64
          %ret_i8 = arith.extui %cmp : i1 to i8
          sparse_tensor.yield %ret_i8 : i8
      }
      left={}
      right={}
    linalg.yield %result : i8
} -> tensor<?xi8, #SparseVec>
```


Example of A+B in upper triangle, A-B in lower triangle (not working yet, but construct will be available soon).

```mlir
%C = sparse_tensor.init...
%1 = linalg.generic #trait
  ins(%A: tensor<?x?xf64, #CSR>, %B: tensor<?x?xf64, #CSR>)
  outs(%C: tensor<?x?xf64, #CSR>) {
  ^bb0(%a: f64, %b: f64, %c: f64) :
    %row = linalg.index 0 : index
    %col = linalg.index 1 : index
    %result = sparse_tensor.binary %a, %b : f64, f64 to f64
      overlap={
        ^bb0(%x: f64, %y: f64):
          %cmp = arith.cmpi "uge", %col, %row : index
          %upperTriangleResult = arith.addf %x, %y : f64
          %lowerTriangleResult = arith.subf %x, %y : f64
          %ret = arith.select %cmp, %upperTriangleResult, %lowerTriangleResult : f64
          sparse_tensor.yield %ret : f64
      }
      left=identity
      right={
        ^bb0(%y: f64):
          %cmp = arith.cmpi "uge", %col, %row : index
          %lowerTriangleResult = arith.negf %y : f64
          %ret = arith.select %cmp, %y, %lowerTriangleResult : f64
          sparse_tensor.yield %ret : f64
      }
    linalg.yield %result : f64
} -> tensor<?x?xf64, #CSR>
```


Example of set difference. Returns a copy of A where its sparse structure is not overlapped by B. The element type of B can be different than A because we never use its values, only its sparse structure.

```mlir
%C = sparse_tensor.init...
%2 = linalg.generic #trait
  ins(%A: tensor<?x?xf64, #CSR>, %B: tensor<?x?xi32, #CSR>)
  outs(%C: tensor<?x?xf64, #CSR>) {
  ^bb0(%a: f64, %b: i32, %c: f64) :
    %result = sparse_tensor.binary %a, %b : f64, i32 to f64
      overlap={}
      left=identity
      right={}
    linalg.yield %result : f64
} -> tensor<?x?xf64, #CSR>
```


Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Attributes:

| Attribute | MLIR Type | Description |
| :-- | :-- | :-- |
| left_identity | `::mlir::UnitAttr` | unit attribute |
| right_identity | `::mlir::UnitAttr` | unit attribute |

#### Operands:

| Operand | Description |
| :-- | :-- |
| x | any type |
| y | any type |

#### Results:

| Result | Description |
| :-- | :-- |
| output | any type |

### sparse_tensor.compress (::mlir::sparse_tensor::CompressOp)

Compresses an access pattern for insertion

Syntax:

```
operation ::= sparse_tensor.compress $tensor , $indices , $values , $filled , $added , $count attr-dict
              : type($tensor) , type($indices) , type($values) , type($filled) , type($added) , type($count)
```


Finishes a single access pattern expansion by moving inserted elements into the sparse storage scheme. The values and filled arrays are reset in a sparse fashion by only iterating over set elements through an indirection using the added array, so that the operations are kept proportional to the number of nonzeros. See the ‘expand’ operation for more details.

Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.

Example:

```mlir
sparse_tensor.compress %0, %1, %values, %filled, %added, %2
  : tensor<4x4xf64, #CSR>, memref<?xindex>, memref<?xf64>,
    memref<?xi1>, memref<?xindex>, index
```
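The division of labor between expand and compress can be modeled in plain Python for a single row of size 8. The array names mirror the operands above, but this is an illustrative sketch of the workspace protocol, not the actual lowering:

```python
# Hedged sketch of the access-pattern-expansion (workspace) protocol:
# dense scratch arrays absorb random-order insertions, then compress
# visits only the touched entries so resets stay proportional to nnz.
n = 8
values = [0.0] * n      # dense scratch for one innermost dimension
filled = [False] * n    # which scratch entries were written
added, count = [], 0    # newly touched indices, in insertion order

def insert(j, v):
    """Accumulate v at position j of the expanded row."""
    global count
    if not filled[j]:
        filled[j] = True
        added.append(j)  # remember j so compress can find it later
        count += 1
    values[j] += v

insert(5, 1.0); insert(1, 2.0); insert(5, 3.0)

# "compress": emit the count touched entries in lexicographic order and
# reset the scratch arrays sparsely (only where filled is set).
row = []
for j in sorted(added[:count]):
    row.append((j, values[j]))
    values[j], filled[j] = 0.0, False
added.clear(); count = 0
print(row)  # [(1, 2.0), (5, 4.0)]
```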


#### Operands:

| Operand | Description |
| :-- | :-- |
| tensor | tensor of any type values |
| indices | 1D memref of index values |
| values | strided memref of any type values of rank 1 |
| filled | 1D memref of 1-bit signless integer values |
| added | 1D memref of index values |
| count | index |

### sparse_tensor.convert (::mlir::sparse_tensor::ConvertOp)

Converts between different tensor types

Syntax:

```
operation ::= sparse_tensor.convert $source attr-dict : type($source) to type($dest)
```

Converts one sparse or dense tensor type to another tensor type. The rank of the source and destination types must match exactly, and the dimension sizes must either match exactly or relax from a static to a dynamic size. The sparse encoding of the two types can obviously be completely different. The name convert was preferred over cast, since the operation may incur a non-trivial cost. When converting between two different sparse tensor types, only explicitly stored values are moved from one underlying sparse storage format to the other. When converting from an unannotated dense tensor type to a sparse tensor type, an explicit test for nonzero values is used. When converting to an unannotated dense tensor type, implicit zeroes in the sparse storage format are made explicit. Note that the conversions can have non-trivial costs associated with them, since they may involve elaborate data structure transformations. Also, conversions from sparse tensor types into dense tensor types may be infeasible in terms of storage requirements.

Examples:

```mlir
%0 = sparse_tensor.convert %a : tensor<32x32xf32> to tensor<32x32xf32, #CSR>
%1 = sparse_tensor.convert %a : tensor<32x32xf32> to tensor<?x?xf32, #CSR>
%2 = sparse_tensor.convert %b : tensor<8x8xi32, #CSC> to tensor<8x8xi32, #CSR>
%3 = sparse_tensor.convert %c : tensor<4x8xf64, #CSR> to tensor<4x?xf64, #CSC>

// The following conversion is not allowed (since it would require a
// runtime assertion that the source's dimension size is actually 100).
%4 = sparse_tensor.convert %d : tensor<?xf64> to tensor<100xf64, #SV>
```

Traits: SameOperandsAndResultElementType

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| :-- | :-- |
| source | tensor of any type values |

#### Results:

| Result | Description |
| :-- | :-- |
| dest | tensor of any type values |

### sparse_tensor.expand (::mlir::sparse_tensor::ExpandOp)

Expands an access pattern for insertion

Syntax:

```
operation ::= sparse_tensor.expand $tensor attr-dict : type($tensor) to type($values) , type($filled) , type($added) , type($count)
```

Performs an access pattern expansion for the innermost dimensions of the given tensor. This operation is useful to implement kernels in which a sparse tensor appears as output. This technique is known under several different names and using several alternative implementations, for example, phase counter [Gustavson72], expanded or switch array [Pissanetzky84], in phase scan [Duff90], access pattern expansion [Bik96], and workspaces [Kjolstad19].

The values and filled arrays have sizes that suffice for a dense innermost dimension (e.g. a full row for matrices). The added array and count are used to store new indices when a false value is encountered in the filled array. All arrays should be allocated before the loop (possibly even shared between loops in a future optimization) so that their dense initialization can be amortized over many iterations. Setting and resetting the dense arrays in the loop nest itself is kept sparse by only iterating over set elements through an indirection using the added array, so that the operations are kept proportional to the number of nonzeros.

Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.

Example:

```mlir
%values, %filled, %added, %count = sparse_tensor.expand %0
  : tensor<4x4xf64, #CSR> to memref<?xf64>, memref<?xi1>, memref<?xindex>, index
```

#### Operands:

| Operand | Description |
| :-- | :-- |
| tensor | tensor of any type values |

#### Results:

| Result | Description |
| :-- | :-- |
| values | strided memref of any type values of rank 1 |
| filled | 1D memref of 1-bit signless integer values |
| added | 1D memref of index values |
| count | index |

### sparse_tensor.init (::mlir::sparse_tensor::InitOp)

Materializes an uninitialized sparse tensor

Syntax:

```
operation ::= sparse_tensor.init [ $sizes ] attr-dict : type($result)
```

Materializes an uninitialized sparse tensor with given shape (either static or dynamic). The operation is provided as an anchor that materializes a properly typed but uninitialized sparse tensor into the output clause of a subsequent operation that yields a sparse tensor as the result.

Example:

```mlir
%c = sparse_tensor.init [%d1, %d2] : tensor<?x?xf32, #SparseMatrix>
%0 = linalg.matmul
  ins(%a, %b: tensor<?x?xf32>, tensor<?x?xf32>)
  outs(%c: tensor<?x?xf32, #SparseMatrix>) -> tensor<?x?xf32, #SparseMatrix>
```

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| :-- | :-- |
| sizes | index |

#### Results:

| Result | Description |
| :-- | :-- |
| result | tensor of any type values |

### sparse_tensor.lex_insert (::mlir::sparse_tensor::LexInsertOp)

Inserts a value into given sparse tensor in lexicographical index order

Syntax:

```
operation ::= sparse_tensor.lex_insert $tensor , $indices , $value attr-dict : type($tensor) , type($indices) , type($value)
```

Inserts the given value at the given indices into the underlying sparse storage format of the given tensor. This operation can only be applied when the tensor materializes uninitialized with an init operation, the insertions occur in strict lexicographical index order, and the final tensor is constructed with a load operation that has the hasInserts attribute set.

Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.

Example:

```mlir
sparse_tensor.lex_insert %tensor, %indices, %val
  : tensor<1024x1024xf64, #CSR>, memref<?xindex>, f64
```

#### Operands:

| Operand | Description |
| :-- | :-- |
| tensor | tensor of any type values |
| indices | 1D memref of index values |
| value | any type |

### sparse_tensor.load (::mlir::sparse_tensor::LoadOp)

Rematerializes tensor from underlying sparse storage format

Syntax:

```
operation ::= sparse_tensor.load $tensor (hasInserts $hasInserts^)? attr-dict : type($tensor)
```


Rematerializes a tensor from the underlying sparse storage format of the given tensor. This is similar to the bufferization.to_tensor operation in the sense that it provides a bridge between a bufferized world view and a tensor world view. Unlike the bufferization.to_tensor operation, however, this sparse operation is used only temporarily to maintain a correctly typed intermediate representation during progressive bufferization.

The hasInserts attribute denotes whether insertions into the underlying sparse storage format may have occurred, in which case the underlying sparse storage format needs to be finalized. Otherwise, the operation simply folds away.

Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.

Example:

```mlir
%1 = sparse_tensor.load %0 : tensor<8xf64, #SV>
```


Traits: SameOperandsAndResultType

Interfaces: InferTypeOpInterface

#### Attributes:

| Attribute | MLIR Type | Description |
| :-- | :-- | :-- |
| hasInserts | `::mlir::UnitAttr` | unit attribute |

#### Operands:

| Operand | Description |
| :-- | :-- |
| tensor | tensor of any type values |

#### Results:

| Result | Description |
| :-- | :-- |
| result | tensor of any type values |

### sparse_tensor.new (::mlir::sparse_tensor::NewOp)

Materializes a new sparse tensor from given source

Syntax:

```
operation ::= sparse_tensor.new $source attr-dict : type($source) to type($result)
```

Materializes a sparse tensor with contents taken from an opaque pointer provided by source. For targets that have access to a file system, for example, this pointer may be a filename (or file) of a sparse tensor in a particular external storage format. The form of the operation is kept deliberately very general to allow for alternative implementations in the future, such as pointers to buffers or runnable initialization code. The operation is provided as an anchor that materializes a properly typed sparse tensor with initial contents into a computation.

Example:

```mlir
sparse_tensor.new %source : !Source to tensor<1024x1024xf64, #CSR>
```

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| :-- | :-- |
| source | any type |

#### Results:

| Result | Description |
| :-- | :-- |
| result | tensor of any type values |

### sparse_tensor.out (::mlir::sparse_tensor::OutOp)

Outputs a sparse tensor to the given destination

Syntax:

```
operation ::= sparse_tensor.out $tensor , $dest attr-dict : type($tensor) , type($dest)
```

Outputs the contents of a sparse tensor to the destination defined by an opaque pointer provided by dest. For targets that have access to a file system, for example, this pointer may specify a filename (or file) for output. The form of the operation is kept deliberately very general to allow for alternative implementations in the future, such as sending the contents to a buffer defined by a pointer.

Example:

```mlir
sparse_tensor.out %t, %dest : tensor<1024x1024xf64, #CSR>, !Dest
```

#### Operands:

| Operand | Description |
| :-- | :-- |
| tensor | any type |
| dest | any type |

### sparse_tensor.release (::mlir::sparse_tensor::ReleaseOp)

Releases underlying sparse storage format of given tensor

Syntax:

```
operation ::= sparse_tensor.release $tensor attr-dict : type($tensor)
```

Releases the underlying sparse storage format for a tensor that materialized earlier through a new operation, an init operation, or a convert operation with an annotated tensor type as destination (unless that convert is folded away, since the source and destination types were identical). This operation should only be called once for any materialized tensor. Also, after this operation, any subsequent memref querying operation on the tensor returns undefined results.

Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.

Example:

```mlir
sparse_tensor.release %tensor : tensor<1024x1024xf64, #CSR>
```

#### Operands:

| Operand | Description |
| :-- | :-- |
| tensor | tensor of any type values |

### sparse_tensor.indices (::mlir::sparse_tensor::ToIndicesOp)

Extracts indices array at given dimension from a tensor

Syntax:

```
operation ::= sparse_tensor.indices $tensor , $dim attr-dict : type($tensor) to type($result)
```

Returns the indices array of the sparse storage format at the given dimension for the given sparse tensor. This is similar to the bufferization.to_memref operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the bufferization.to_memref operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the indices array.

Example:

```mlir
%1 = sparse_tensor.indices %0, %c1 : tensor<64x64xf64, #CSR> to memref<?xindex>
```

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| :-- | :-- |
| tensor | tensor of any type values |
| dim | index |

#### Results:

| Result | Description |
| :-- | :-- |
| result | strided memref of any type values of rank 1 |

### sparse_tensor.pointers (::mlir::sparse_tensor::ToPointersOp)

Extracts pointers array at given dimension from a tensor

Syntax:

```
operation ::= sparse_tensor.pointers $tensor , $dim attr-dict : type($tensor) to type($result)
```

Returns the pointers array of the sparse storage format at the given dimension for the given sparse tensor. This is similar to the bufferization.to_memref operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the bufferization.to_memref operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the pointers array.

Example:

```mlir
%1 = sparse_tensor.pointers %0, %c1 : tensor<64x64xf64, #CSR> to memref<?xindex>
```

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| :-- | :-- |
| tensor | tensor of any type values |
| dim | index |

#### Results:

| Result | Description |
| :-- | :-- |
| result | strided memref of any type values of rank 1 |

### sparse_tensor.values (::mlir::sparse_tensor::ToValuesOp)

Extracts numerical values array from a tensor

Syntax:

```
operation ::= sparse_tensor.values $tensor attr-dict : type($tensor) to type($result)
```


Returns the values array of the sparse storage format for the given sparse tensor, independent of the actual dimension. This is similar to the bufferization.to_memref operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the bufferization.to_memref operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the values array.

Example:

```mlir
%1 = sparse_tensor.values %0 : tensor<64x64xf64, #CSR> to memref<?xf64>
```


Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| :-- | :-- |
| tensor | tensor of any type values |

#### Results:

| Result | Description |
| :-- | :-- |
| result | strided memref of any type values of rank 1 |
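The pointers, indices, and values arrays extracted by these three query operations compose naturally: for a CSR matrix, pointers at dimension 1 bracket the indices and values of each row. A hedged Python sketch (the helper name enumerate_csr is invented for illustration, not part of the runtime library):

```python
# Illustrative sketch: pointers[i]..pointers[i+1] delimits the stored
# column indices and values of row i in CSR storage.
def enumerate_csr(pointers, indices, values):
    """Yield (row, col, value) for every explicitly stored element."""
    for i in range(len(pointers) - 1):
        for k in range(pointers[i], pointers[i + 1]):
            yield (i, indices[k], values[k])

# A 3x4 matrix with nonzeros at (0,0), (0,3), (2,1), (2,2):
print(list(enumerate_csr([0, 2, 2, 4], [0, 3, 1, 2], [1.0, 2.0, 3.0, 4.0])))
# [(0, 0, 1.0), (0, 3, 2.0), (2, 1, 3.0), (2, 2, 4.0)]
```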

### sparse_tensor.unary (::mlir::sparse_tensor::UnaryOp)

Unary set operation utilized within linalg.generic

Syntax:

```
operation ::= sparse_tensor.unary $x attr-dict : type($x) to type($output) \n
              present = $presentRegion \n
              absent = $absentRegion
```

Defines a computation within a linalg.generic operation that takes a single operand and executes one of two regions depending on whether the operand is nonzero (i.e. stored explicitly in the sparse storage format).

Two regions are defined for the operation and must appear in this order:

• present (elements present in the sparse tensor)
• absent (elements not present in the sparse tensor)

Each region contains a single block describing the computation and result. A non-empty block must end with a sparse_tensor.yield and the return type must match the type of output. The present region’s block has one argument, while the absent region’s block has zero arguments.

A region may also be declared empty (i.e. absent={}), indicating that the region does not contribute to the output.

Example of A+1, restricted to existing elements:

```mlir
%C = sparse_tensor.init...
%0 = linalg.generic #trait
  ins(%A: tensor<?xf64, #SparseVec>)
  outs(%C: tensor<?xf64, #SparseVec>) {
  ^bb0(%a: f64, %c: f64) :
    %result = sparse_tensor.unary %a : f64 to f64
      present={
        ^bb0(%arg0: f64):
          %cf1 = arith.constant 1.0 : f64
          %ret = arith.addf %arg0, %cf1 : f64
          sparse_tensor.yield %ret : f64
      }
      absent={}
    linalg.yield %result : f64
} -> tensor<?xf64, #SparseVec>
```

Example returning +1 for existing values and -1 for missing values:

```mlir
%result = sparse_tensor.unary %a : f64 to i32
  present={
    ^bb0(%x: f64):
      %ret = arith.constant 1 : i32
      sparse_tensor.yield %ret : i32
  }
  absent={
    %ret = arith.constant -1 : i32
    sparse_tensor.yield %ret : i32
  }
```

Example showing a structural inversion (existing values become missing in the output, while missing values are filled with 1):

```mlir
%result = sparse_tensor.unary %a : f64 to i64
  present={}
  absent={
    %ret = arith.constant 1 : i64
    sparse_tensor.yield %ret : i64
  }
```

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| :-- | :-- |
| x | any type |

#### Results:

| Result | Description |
| :-- | :-- |
| output | any type |

### sparse_tensor.yield (::mlir::sparse_tensor::YieldOp)

Yield from sparse_tensor set-like operations

Syntax:

```
operation ::= sparse_tensor.yield $result attr-dict : type($result)
```


Yields a value from within a binary or unary block.

Example:

```mlir
%0 = sparse_tensor.unary %a : i64 to i64 {
  ^bb0(%arg0: i64):
    %cst = arith.constant 1 : i64
    %ret = arith.addi %arg0, %cst : i64
    sparse_tensor.yield %ret : i64
}
```


Traits: Terminator

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

#### Operands:

| Operand | Description |
| :-- | :-- |
| result | any type |