MLIR

Multi-Level IR Compiler Framework

Transform Dialect

Fine-grain transformation control dialect.

Disclaimer 

This dialect is actively developed and may change frequently.

To decrease the maintenance burden and churn, please post a description of the intended use case on the MLIR forum. A few in-tree use cases are currently supported:

  • high-level transformations on “structured ops” (i.e. ops that operate on chunks of data in a way that can be decomposed into operations on smaller chunks of data and control flow) in Linalg, Tensor and Vector dialects;
  • loop transformations in the SCF dialect.

Overview 

This dialect provides operations that can be used to control transformation of the IR using a different portion of the IR. It refers to the IR being transformed as payload IR, and to the IR guiding the transformation as transform IR.

The main use case for this dialect is orchestrating fine-grain transformations on individual IR objects (operations or values) or sets thereof. For example, it may involve finding loop-like operations with specific properties (e.g., large size) in the payload IR, applying loop tiling to those and only those operations, and then applying loop unrolling to the inner loops produced by the previous transformations. As such, it is not intended as a replacement for the pass infrastructure, nor for the pattern rewriting infrastructure. In the most common case, the transform IR will be processed and applied to the payload IR by a pass. Transformations expressed by the transform dialect may be implemented using the pattern infrastructure or any other relevant MLIR component.

The following IR gives a rough idea of what the operations in this dialect may look like, using notional operations that do not actually exist:

%0 = transform.loop.find { size > 42 } : !transform.interface<tileable>
%1 = transform.compute_trailing_tile_size %0 : !transform.param<index>
%2:2 = transform.loop.tile %0 tile_sizes(1, 4, %1)
      : (!transform.interface<tileable>)
     -> (!transform.op<loop>, !transform.op<loop>)
%3 = transform.get_op_result [0] %2#0 : !transform.any_value
transform.assign_to_fast_memory %3
transform.loop.unroll %2#1 : !transform.op<loop>

The values used in the Transform dialect may correspond to:

  • sets of operations in the payload IR;

  • sets of values in the payload IR;

  • sets of parameters (attributes) known at the execution time of the transform dialect.

The former two kinds of values are also referred to as operation and value handles, respectively. In the example above, %0 corresponds to the set of loops found in the payload IR that satisfy the condition, and the two results of %2 correspond to the groups of outer and inner loops, respectively, produced by the tiling transformation. %3 corresponds to a set of values that are produced by the outer loops after tiling. %1 corresponds to a list of tile sizes selected for each of the operations that %0 corresponds to.

An operation handle such as %0 may be associated with multiple payload operations. This is conceptually a set of operations and no assumptions should be made about the order of ops unless specified otherwise by the operation. Similarly, a value handle such as %3 may be associated with a set of payload IR values. Transform dialect operations may take as operands and produce an arbitrary combination of values representing handles and parameters. Most Transform IR ops support operand values that are mapped to multiple payload objects. They usually apply the respective transformation for every mapped object (“batched execution”). Deviations from this convention are described in the documentation of Transform IR ops.

The transform IR values have transform IR types, which should implement exactly one of:

  • TransformHandleTypeInterface,
  • TransformValueHandleTypeInterface,
  • TransformParamTypeInterface.

The goal of these type interfaces, beyond providing a common base for accepted types, is to verify the properties of the associated objects. For example, a handle type interface implementation may check whether all associated payload IR operations implement the “TileableOp” interface or have a specific “loop” kind. Similarly, a value handle type interface implementation may check if the associated payload IR values are block arguments or have a specific type, and a parameter type interface implementation may check whether the associated attributes contain non-negative integer values. These properties are used to statically indicate pre- and post-conditions of a transformation connected to a Transform dialect operation. The conditions are verified when payload objects are first associated with a transform handle.

By convention, Transform dialect operations are expected to indicate narrow preconditions for their operands by enforcing operand type constraints in their definitions and verifiers. On the contrary, operations are expected to have few constraints on their results. Specific instances of a transform operation can then be created with a more restricted result type than the constraint in the operation (e.g., the “find” operation only constrains the result type to be a transform IR type while its concrete instance can have a type with stricter constraints such as implementing the “tileable” interface). The verification will then happen at transform execution time. This approach allows one to capture payload IR operation properties in the transform IR without resorting to excessive use of type casts or coupling dialect extensions to each other. It is a trade-off between verbosity/complexity and static hardening, which can be revised in the future.

Overall, Transform IR ops are expected to be contained in a single top-level op. Such top-level ops specify how to apply the transformations described by the operations they contain, e.g., transform.sequence executes transformations one by one and fails if any of them fails. Such ops are expected to have the PossibleTopLevelTransformOpTrait and may be used without arguments.

A program transformation expressed using the Transform dialect can be programmatically triggered by calling:

LogicalResult transform::applyTransforms(
    Operation *payloadRoot,
    const RaggedArray<transform::MappedValue> &extraMappings,
    TransformOpInterface transform,
    const TransformOptions &options);

that applies the transformations specified by the top-level transform to payload IR contained in payloadRoot. The payload root operation will be associated with the first argument of the entry block of the top-level transform op. This block may have additional arguments, handles or parameters. They will be associated with values provided as extraMappings. The call will report an error and return if the wrong number of mappings is provided.
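
As a minimal sketch, the top-level transform could look as follows; the op choice, handle names, and the extra block argument are assumptions of this example. The first block argument is associated with the payload root passed to applyTransforms; any additional arguments are associated, in order, with the values supplied in extraMappings.

transform.sequence failures(propagate) {
^bb0(%payload_root: !transform.any_op, %extra_handle: !transform.any_op):
  // %payload_root maps to the op passed as payloadRoot; %extra_handle maps to
  // the first entry of extraMappings.
  transform.print %payload_root {name = "payload root"} : !transform.any_op
}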

Dialect Extension Mechanism 

This dialect is designed to be extensible, that is, clients of this dialect are allowed to inject additional operations into this dialect using the TransformDialectExtension mechanism. This allows the dialect to avoid a dependency on the implementation of the transformation as well as to avoid introducing dialect-specific transform dialects. In the example above, the operations may have been injected by a notional loop dialect rather than defined in this dialect, hence the common prefix.

It is recommended to prefix injected operations with one or several dot-separated words that indicate which extension adds them. For dialect-specific transformations, the prefix is naturally the name of the dialect, e.g., transform.affine.reschedule. For dialect-agnostic transformations (typically implemented using interfaces), the prefix may be derived from the interface name or from a common concept, e.g., transform.loop.tile may apply to any loop-like operation that implements TileableOpInterface. The C++ classes for the dialect extension should include the prefix in their name, e.g., AffineTransformDialectExtension or LoopTransformDialectExtension in the cases above. Unprefixed operation names are reserved for ops defined directly in the Transform dialect.

Operations injected into the dialect must:

  • Implement the TransformOpInterface to execute the corresponding transformation on the payload IR.

  • Implement the MemoryEffectsOpInterface to annotate the effects of the transform IR operation on the payload IR as well as on the mapping between transform IR values and payload IR operations. See below for the description of available effects.

The presence of interface implementations is checked at runtime when the dialect is loaded to allow for those implementations to be supplied by separate dialect extensions if desired.

Similarly to operations, additional types can be injected into the dialect using the same extension mechanism. The types must:

  • Implement exactly one of TransformHandleTypeInterface, TransformValueHandleTypeInterface, TransformParamTypeInterface.

Side Effects 

The Transform dialect relies on MLIR side effect modelling to enable optimization of the transform IR. More specifically, it provides several side effect resource objects and expects operations to describe their effects on these resources.

  • TransformMappingResource - side effect resource corresponding to the mapping between transform IR values and payload IR operations.

    • An Allocate effect from this resource means creating a new mapping entry; it is always accompanied by a Write effect.

    • A Read effect from this resource means accessing the mapping.

    • A Free effect on this resource indicates the removal of the mapping entry, typically after a transformation that modifies the payload IR operations associated with one of the transform IR operation’s operands. It is always accompanied by a Read effect.

  • PayloadIRResource - side effect resource corresponding to the payload IR itself.

    • A Read effect from this resource means accessing the payload IR.

    • A Write effect on this resource means mutating the payload IR. It is almost always accompanied by a Read.

The typical flow of values in the transform IR is as follows. Most operations produce new transform IR values and immediately associate them with a list of payload IR operations. This corresponds to Allocate and Write effects on the TransformMappingResource, and often requires at least a Read effect on the PayloadIRResource. Transform operations that only inspect the payload IR to produce new handles are usually limited to these effects on their operands. Transform operations that mutate the payload IR are thought to consume the handles provided as operands, that is, to have Read and Free effects on them. As with the usual memory effects, using a value after it was freed is incorrect. In the case of transform IR, such a value is likely associated with payload IR operations that were modified or even removed by the transformation, so it is meaningless to refer to them. When further transformations are desired, the transform operations can return new handles that can be read or consumed by subsequent operations.
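
As a hedged illustration using ops documented below (the handle %h is assumed to be an existing !transform.any_op already associated with payload ops), transform.print only reads the mapping and the payload IR, while transform.merge_handles consumes its operand and allocates a new mapping entry for its result:

transform.print %h {name = "before"} : !transform.any_op  // reads %h and the payload IR
%m = transform.merge_handles %h : !transform.any_op       // consumes %h, allocates/writes the mapping of %m
// %h must not be used past this point; %m is associated with the same payload ops.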

Execution Model 

The transformation starts at the user-specified top-level transform IR operation and applies to some user-specified payload IR scope, identified by the payload IR op that contains the IR to transform. It is the responsibility of the user to select the scope properly and/or to ensure that the transformations do not modify the IR outside of the given scope. The top-level transform IR operation may contain further transform operations and execute them in the desired order.

Transformation application functions produce a tri-state status:

  • success;
  • recoverable (silenceable) failure;
  • irrecoverable failure.

Transformation container operations may intercept recoverable failures and perform the required recovery steps thus succeeding themselves. On the other hand, they must propagate irrecoverable failures. For such failures, the diagnostics are emitted immediately whereas their emission is postponed for recoverable failures. Transformation container operations may also fail to recover from a theoretically recoverable failure, in which case they can either propagate it to their parent or emit the diagnostic and turn the failure into an irrecoverable one. A recoverable failure produced by applying the top-level transform IR operation is considered irrecoverable.

Transformation container operations are allowed to “step over” some nested operations if the application of some previous operation produced a failure. This can be conceptually thought of as having a global “recoverable error register” that is read/write accessed by each transform operation as a side effect. The transformation is skipped if the register already contains an error description, and the control flow proceeds to the following operation.

Note that a silenceable failure, if emitted, is a compiler error rather than a warning. Transformations are expected to produce silenceable failures if they haven’t yet modified the payload IR, i.e. when reporting a precondition failure, and an irrecoverable failure when they modified the IR in a way that is contrary to the semantics of the transform operation or would fail a postcondition. Some “navigation” operations that identify payload IR targets for the following transformation may have a conceptual “failure to match” that is considered a successful execution in the execution model but results in handles associated with empty payload IR operation lists.

Handle Invalidation 

The execution model of the transform dialect allows a payload IR operation to be associated with multiple handles as well as nested payload IR operations to be associated with different handles. Similarly, a payload IR value may be associated with multiple transform IR value handles. When a transform IR operation consumes a handle, it usually indicates that the corresponding payload IR object was destroyed and should no longer be referenced. Transform IR handles that may be pointing to an erased payload IR object are invalidated. The mere presence of an invalidated handle in the transform IR is not a problem, but using it results in undefined behavior. Invalidated handles can be thought of as dangling pointers. Note that the entire handle is invalidated, even if some of the payload IR objects associated with it remain live.

The following handle invalidation rules apply.

  • When an operation handle is consumed, the following handles are invalidated:

    • operation handles associated with one of the payload operations that the consumed handle is associated with;

    • operation handles associated with one of the operations nested in the payload operations described above;

    • value handles associated with any result of any operation described above;

    • value handles associated with any argument of a block contained in a region attached to any operation described above.

  • When a value handle is consumed, the following handles are invalidated:

    • operation handles associated with payload operations that produce as result any value associated with the consumed handle (when the associated value is an operation result);

    • operation handles associated with payload operations nested in the payload operations described above;

    • operation handles associated with payload operations (recursively) contained in the block that defines as argument any value associated with the consumed handle (when the associated value is a block argument); note that the adjacent blocks are not affected;

    • value handles associated with any result of any operation described above, including all results of the operation defining as result the value associated with the consumed handle;

    • value handles associated with any argument of a block contained in a region attached to any operation described above.

More intuitively, consuming a handle invalidates any handle that may be pointing to an object defined or contained in the payload IR subtree rooted at the closest operation or block.
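
For example (a sketch with assumed handles), if %func is associated with a payload function and %loops is associated with loops nested inside it, passing %func to a consuming op such as transform.merge_handles invalidates %loops as well:

%merged = transform.merge_handles %func : !transform.any_op
// %func is consumed here. Any handle associated with payload ops nested inside
// the functions %func was mapped to, such as %loops, is now invalidated;
// using %loops after this point is undefined behavior.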

The Transform dialect infrastructure has the capability of checking whether the transform IR op operand is invalidated before applying the transformation. However, such a check is computationally expensive and must be enabled explicitly through TransformOptions. Additionally, the transform-dialect-check-uses pass emits warnings when a handle may be used after it has been consumed, but does so abstractly, without processing the payload IR.

Values associated with parameters (non-handles) cannot be invalidated.

Intended Use and Integrations 

The transformation control infrastructure provided by this dialect is positioned roughly between rewrite patterns and passes. A transformation that is executed by a transform operation is likely to be sufficiently complex to require at least a set of patterns to be implemented. It is also expected to be more focused than a pass: a pass typically applies identical transformations everywhere in the IR, whereas a transform dialect-controlled transformation would apply to a small subset of operations selected, e.g., by a pattern-matching operation or generated by a previous transformation. It is discouraged, although technically possible, to run a pass pipeline as part of the transform op implementation.

One of the main scenarios for using this dialect is fine-grain chaining of transformations. For example, a loop-like operation may see its iteration domain split into two parts, implemented as separate loops (transformation known as index-set splitting), each of which is then transformed differently (e.g., the first loop is tiled and the second unrolled) with the necessary enabling and cleanup patterns around the main transformation:

// <generate %loop, e.g., by pattern-matching>
// ...
%parts:2 = transform.loop.split %loop { upper_bound_divisible_by = 8 }
transform.loop.tile %parts#0 { tile_sizes = [8] }
transform.loop.unroll %parts#1 { full }

This composition would have been difficult to implement as separate passes since the hypothetical “tiling” and “unrolling” pass would need to somehow differentiate between the parts of the loop produced by the previous pass (both are the same operation, and it is likely undesirable to pollute the operation with pass-specific information). Implementing passes that run the combined transformation would have run into the combinatorial explosion issue due to multiple possible transform compositions or into the need for deep pass parameterization, the ultimate form of which is an ad-hoc dialect to specify which transformations the pass should run. The transform dialect provides a uniform, extensible mechanism for controlling transformations in such cases.

The transform dialect is supposed to be consumed by an “interpreter” pass that drives the application of transformations. To ensure extensibility and composability, this pass is not expected to actually perform the transformations specified by the ops. Instead, the transformations are implemented by the transform ops themselves via TransformOpInterface. The pass serves as the entry point, handles the flow of transform operations and takes care of bookkeeping. As such, the transform dialect does not provide the interpreter pass. Instead, it provides a set of utilities that can be used by clients to define their own interpreter passes or as part of a more complex pass, for example, the mapping between values in the transform IR and operations in the payload IR, or the function that sequentially applies the transformations specified by the ops in a given block. Note that a transform op may have regions with further transform ops in them, with the op itself guiding how to dispatch the transformation control flow to those regions. This approach allows clients to decide on the relative location of the transform IR in their input (e.g., nested modules, separate modules, optional regions to certain operations, etc.), register additional transform operations and perform client-specific bookkeeping.

Effects on the Infrastructure 

Although scoped to a single dialect, this functionality conceptually belongs to the MLIR infrastructure. It aims to be minimally intrusive and opt-in.

Some infrastructural components may grow extra functionality to support the transform dialect. In particular, the pattern infrastructure may add extra hooks to identify the “main results” of a transformation or to notify external observers about changes made to certain operations. These are not expected to affect the existing uses of the infrastructure.

For the sake of reusability, transformations should be implemented as utility functions that are called from the interface methods of transform ops rather than having the methods directly act on the payload IR.

Type Definitions 

AnyOpType 

Syntax: !transform.any_op

Transform IR handle that can be associated with a list of arbitrary Payload IR operations.

AnyValueType 

Syntax: !transform.any_value

Transform IR value that can be associated with a list of Payload IR values.

OperationType 

Syntax:

!transform.op<
  ::llvm::StringRef   # operation_name
>

Transform IR handle that can be associated with a list of Payload IR operations with the specified operation name.

Parameters: 

Parameter        C++ type            Description
operation_name   ::llvm::StringRef   Name of the allowed payload operation
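
For example, a handle restricted to payload operations named "scf.for" would use the following type (the operation name is only an illustration):

!transform.op<"scf.for">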

ParamType 

Syntax:

!transform.param<
  ::mlir::Type   # type
>

Transform IR value that can be associated with the list of parameters of the given type. Types are currently limited to integers, but may be extended in the future to other types values of which can be contained in attributes.

Parameters: 

Parameter   C++ type       Description
type        ::mlir::Type   Underlying type of the parameter
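
For example, a parameter value holding 64-bit integer attributes would use the following type:

!transform.param<i64>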

Core Operations 

transform.alternatives (::mlir::transform::AlternativesOp) 

Attempts sequences of transforms until one succeeds

Syntax:

operation ::= `transform.alternatives` ($scope^ `:` type($scope))? (`->` type($results)^)? attr-dict-with-keyword regions

This op may have an arbitrary number of regions, each of which represents a sequence of transform operations to be applied to the same payload IR. The regions are visited in order of appearance, and transforms in them are applied in their respective order of appearance. If one of these transforms fails to apply, the remaining ops in the same region are skipped and the next region is attempted. If all transformations in a region succeed, the remaining regions are skipped and the entire “alternatives” transformation succeeds. If all regions contained a failing transformation, the entire “alternatives” transformation fails.

It is up to the nested operations to define which errors are “recoverable” (or “silenceable”) and allow other alternatives to be attempted, and which errors should be propagated without attempting the other alternatives.

The single operand of this operation is the scope in which the alternative transformation sequences are attempted, that is, an operation in the payload IR that contains all the other operations that may be modified by the transformations. The scope operation must be isolated from above. There is no check that the transforms are indeed scoped as their “apply” methods can be arbitrarily complex. Therefore it is the responsibility of the user to ensure that the transforms are scoped correctly, or to produce an irrecoverable error and thus abort the execution without attempting the remaining alternatives. Note that the payload IR outside of the given scope is not necessarily in a valid state, or even accessible to the transformation.

The changes to the IR within the scope performed by transforms in the failed alternative region are reverted before attempting the next region. Practically, this is achieved by cloning the scope. Therefore it is advised to limit the scope as much as possible and place the most likely alternatives early in the region list. The operation is also isolated from above and requires rediscovering the operations within the given scope to avoid additional handle invalidation. The latter restriction may be lifted in the future.

Each of the regions may yield transform IR handles. The handles of the first successful alternative region are returned as the results of the “alternatives” op. Therefore, each alternative region must yield the same number of results, which should also match the number and the types of the “alternatives” op results.

Remark: this op allows one to implement a simple “try” construct as follows:

%result = transform.alternatives %scope {
^bb0(%arg0: !pdl.operation):
  // Try a fallible transformation.
  %0 = transform.fallible %arg0 // ...
  // If succeeded, yield the result of the transformation.
  transform.yield %0 : !pdl.operation
}, {
^bb0(%arg0: !pdl.operation):
  // Otherwise, the second alternative is tried and it always succeeds by
  // returning the original handle.
  transform.yield %arg0 : !pdl.operation
}

Traits: IsolatedFromAbove, PossibleTopLevelTransformOpTrait, SingleBlockImplicitTerminator<::mlir::transform::YieldOp>

Interfaces: MemoryEffectOpInterface, RegionBranchOpInterface, TransformOpInterface

Operands: 

Operand   Description
scope     TransformHandleTypeInterface instance

Results: 

Result    Description
results   TransformHandleTypeInterface instance

transform.cast (::mlir::transform::CastOp) 

Syntax:

operation ::= `transform.cast` $input attr-dict `:` type($input) `to` type($output)
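
A minimal usage sketch (handle names assumed): the cast converts between handle types, e.g., generalizing a specifically typed handle to !transform.any_op; a narrowing cast is expected to be verified against the associated payload ops at execution time.

%any = transform.cast %for : !transform.op<"scf.for"> to !transform.any_op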

Traits: TransformEachOpTrait

Interfaces: CastOpInterface, MemoryEffectOpInterface, TransformOpInterface

Operands: 

Operand   Description
input     TransformHandleTypeInterface instance

Results: 

Result   Description
output   TransformHandleTypeInterface instance

transform.foreach (::mlir::transform::ForeachOp) 

Executes the body for each payload op

Syntax:

operation ::= `transform.foreach` $target `:` type($target) (`->` type($results)^)? $body attr-dict

This op has exactly one region with exactly one block (“body”). The body is executed for each payload op that is associated with the target operand in an unbatched fashion. I.e., the block argument (“iteration variable”) is always mapped to exactly one payload op.

This op always reads the target handle. Furthermore, it consumes the handle if there is a transform op in the body that consumes the iteration variable.

The transformations inside the body are applied in order of their appearance. During application, if any transformation in the sequence fails, the entire sequence fails immediately, leaving the payload IR in a potentially invalid state, i.e., this operation offers no transformation rollback capabilities.

This op generates as many handles as the terminating YieldOp has operands. For each result, the payload ops of the corresponding YieldOp operand are merged and mapped to the same resulting handle.
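
A hedged sketch (the handle %matched is assumed): print each payload op associated with %matched, one at a time.

transform.foreach %matched : !transform.any_op {
^bb0(%op: !transform.any_op):
  // The body sees exactly one payload op per iteration through %op.
  transform.print %op {name = "visiting"} : !transform.any_op
}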

Traits: SingleBlockImplicitTerminator<::mlir::transform::YieldOp>

Interfaces: MemoryEffectOpInterface, RegionBranchOpInterface, TransformOpInterface

Operands: 

Operand   Description
target    TransformHandleTypeInterface instance

Results: 

Result    Description
results   TransformHandleTypeInterface instance

transform.get_closest_isolated_parent (::mlir::transform::GetClosestIsolatedParentOp) 

Gets handles to the closest isolated-from-above parents

Syntax:

operation ::= `transform.get_closest_isolated_parent` $target attr-dict `:` functional-type(operands, results)

The handles defined by this Transform op correspond to the closest isolated from above ancestor of the Payload IR operations associated with its operand. If any of the given Payload IR ops has no such parent (unlikely as there usually is a top-level ModuleOp), the transformation is considered to have failed.

Ancestor ops follow the same order as the ops associated with the operand, except for potential duplicates (multiple Payload IR ops associated with the operand have the same parent) for which the ancestor will only be listed once for the first time it occurs. For example, given the list “(childof(A), childof(B), childof(B), childof(A), childof(B))”, the resulting list will be just “(A, B)”. Note that no other semantic ordering is applied, e.g., “B” may itself be a parent of “A”. This may have an impact on the further transformation applied to the handle produced here.

Traits: NavigationTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand   Description
target    TransformHandleTypeInterface instance

Results: 

Result   Description
parent   TransformHandleTypeInterface instance

transform.get_consumers_of_result (::mlir::transform::GetConsumersOfResult) 

Get handle to the consumers of this operation’s result number

Syntax:

operation ::= `transform.get_consumers_of_result` $target `[` $result_number `]` attr-dict `:` functional-type(operands, results)

The handle defined by this Transform op corresponds to all operations that consume the SSA value defined by the target and result_number arguments. This operation applies to a single payload operation, otherwise it definitely fails. The return handle points to the consuming operations, which can be an empty list.

Traits: NavigationTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute       MLIR Type             Description
result_number   ::mlir::IntegerAttr   64-bit signless integer attribute

Operands: 

Operand   Description
target    TransformHandleTypeInterface instance

Results: 

Result      Description
consumers   TransformHandleTypeInterface instance

transform.get_defining_op (::mlir::transform::GetDefiningOp) 

Get handle to the defining op of a value

Syntax:

operation ::= `transform.get_defining_op` $target attr-dict `:` functional-type(operands, results)

The handle defined by this Transform op corresponds to the defining op of the targeted value.

This transform fails silently if the targeted value is a block argument.

Traits: NavigationTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand   Description
target    TransformValueHandleTypeInterface instance

Results: 

Result   Description
result   TransformHandleTypeInterface instance

transform.get_producer_of_operand (::mlir::transform::GetProducerOfOperand) 

Get handle to the producer of this operation’s operand number

Syntax:

operation ::= `transform.get_producer_of_operand` $target `[` $operand_number `]` attr-dict `:` functional-type(operands, results)

The handle defined by this Transform op corresponds to the operation that produces the SSA value defined by the target and operand_number arguments. If the origin of the SSA value is not an operation (i.e. it is a block argument), the transform silently fails. The return handle points only to the subset of successfully produced computational operations, which can be empty.

Traits: NavigationTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute        MLIR Type             Description
operand_number   ::mlir::IntegerAttr   64-bit signless integer attribute

Operands: 

Operand   Description
target    TransformHandleTypeInterface instance

Results: 

Result     Description
producer   TransformHandleTypeInterface instance

transform.get_result (::mlir::transform::GetResultOp) 

Get handle to a result of the targeted op

Syntax:

operation ::= `transform.get_result` $target `[` $result_number `]` attr-dict `:` functional-type(operands, results)

The handle defined by this Transform op corresponds to the OpResult with result_number that is defined by the given target operation.

This transform fails silently if the targeted operation does not have enough results. It reads the target handle and produces the result handle.
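
A minimal sketch (handle names assumed): obtain a value handle to the first result of each payload op mapped to %matmul.

%value = transform.get_result %matmul[0]
    : (!transform.any_op) -> !transform.any_value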

Traits: NavigationTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute       MLIR Type             Description
result_number   ::mlir::IntegerAttr   64-bit signless integer attribute

Operands: 

Operand   Description
target    TransformHandleTypeInterface instance

Results: 

Result   Description
result   TransformValueHandleTypeInterface instance

transform.include (::mlir::transform::IncludeOp) 

Includes a named transform sequence

Syntax:

operation ::= `transform.include` $target `failures` `(` $failure_propagation_mode `)``(` $operands `)` attr-dict `:` functional-type($operands, $results)

The application of this transform operation is equivalent to applying the operations contained in the named transform sequence with operands being remapped to block arguments. The behavior of the operation when a transformation in the included named sequence produces a silenceable error is controlled by the failure_propagation_mode attribute. When set to propagate, the failure of any nested transformation in the sequence implies immediate failure of the entire sequence with a silenceable error, and no further transformation is attempted. When set to suppress, silenceable errors in nested operations are ignored and further transformations are applied. Beware that even silenceable errors may leave the payload IR in a state unsuitable for further transformations. It is the responsibility of the user to ensure the following transformations are robust enough when errors are suppressed. Definite errors are propagated immediately regardless of the mode. The objects associated with the results of this operation are the same as those associated with the operands of the transform.yield in the referenced named sequence.

Interfaces: CallOpInterface, MemoryEffectOpInterface, SymbolUserOpInterface, TransformOpInterface

Attributes: 

Attribute                  MLIR Type                                        Description
target                     ::mlir::SymbolRefAttr                            symbol reference attribute
failure_propagation_mode   ::mlir::transform::FailurePropagationModeAttr    Silenceable error propagation policy

Operands: 

Operand    Description
operands   any transform handle or parameter

Results: 

Result    Description
results   any transform handle or parameter

transform.merge_handles (::mlir::transform::MergeHandlesOp) 

Merges handles into one pointing to the union of payload ops

Syntax:

operation ::= `transform.merge_handles` ($deduplicate^)? $handles attr-dict `:` type($result)

Creates a new Transform IR handle value that points to the same Payload IR operations as the operand handles. The Payload IR operations are listed in the same order as they are in the operand handles, grouped by operand handle, e.g., all Payload IR operations associated with the first handle come first, then all Payload IR operations associated with the second handle and so on. If deduplicate is set, do not add the given Payload IR operation more than once to the final list regardless of it coming from the same or different handles. Consumes the operands and produces a new handle.
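
For example (handle names assumed), the following merges two handles and drops duplicate payload ops from the result:

%merged = transform.merge_handles deduplicate %h0, %h1 : !transform.any_op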

Traits: SameOperandsAndResultType

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute     MLIR Type          Description
deduplicate   ::mlir::UnitAttr   unit attribute

Operands: 

Operand   Description
handles   TransformHandleTypeInterface instance

Results: 

Result   Description
result   TransformHandleTypeInterface instance

transform.named_sequence (::mlir::transform::NamedSequenceOp) 

Named transform sequence that can be included elsewhere

Defines a named (callable, function-like) sequence of other Transform dialect operations that can be included using transform.include as part of another Transform dialect construct. This sequence is not processed immediately but rather dispatched to when the inclusion is processed. The arguments and results can be used to communicate a subset of the mapping into the named sequence. The sequence must consist of a single block and end with a transform.yield terminator. The operands of the terminator become the results of the transform.include.

When dispatched to, the operations in the named sequence are executed one by one, similarly to the regular unnamed sequence. The failure propagation mode is specified on the transform.include. Different inclusions may use different failure propagation modes. This transform operation always succeeds by itself, but the inclusion may fail if any of the operations fail.

Named sequences can only appear at the top-level of the Transform dialect nesting structure. That is, they cannot be nested in other Transform dialect operations. Furthermore, one of the ancestors must have the SymbolTable trait and have the transform.with_named_sequence attribute attached.

Named sequences may include other named sequences via transform.include, but recursion is not allowed.
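
The following is a hedged sketch combining transform.named_sequence and transform.include; the symbol name, handle names, and use of transform.print are assumptions of this example.

module attributes {transform.with_named_sequence} {
  transform.named_sequence @print_target(%arg0: !transform.any_op) {
    transform.print %arg0 {name = "in named sequence"} : !transform.any_op
    transform.yield
  }

  transform.sequence failures(propagate) {
  ^bb0(%root: !transform.any_op):
    transform.include @print_target failures(propagate) (%root)
        : (!transform.any_op) -> ()
  }
}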

Traits: IsolatedFromAbove

Interfaces: CallableOpInterface, FunctionOpInterface, MemoryEffectOpInterface, Symbol, TransformOpInterface

Attributes: 

Attribute       MLIR Type            Description
sym_name        ::mlir::StringAttr   string attribute
function_type   ::mlir::TypeAttr     function type attribute
arg_attrs       ::mlir::ArrayAttr    Array of dictionary attributes
res_attrs       ::mlir::ArrayAttr    Array of dictionary attributes

transform.pdl_match (::mlir::transform::PDLMatchOp) 

Finds ops that match the named PDL pattern

Syntax:

operation ::= `transform.pdl_match` $pattern_name `in` $root attr-dict `:` functional-type(operands, results)

Find Payload IR ops nested within the Payload IR op associated with the operand that match the PDL pattern identified by its name. The pattern is expected to be defined in the closest surrounding WithPDLPatternsOp.

Produces a Transform IR value associated with the list of Payload IR ops that matched the pattern. The order of results in the list is that of Operation::walk; however, clients are advised not to rely on a specific order. If the operand is associated with multiple Payload IR ops, finds matching ops nested within each of those and produces a single list containing all of the matched ops.

The transformation is considered successful regardless of whether some Payload IR ops actually matched the pattern and only fails if the pattern could not be looked up or compiled.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute      MLIR Type               Description
pattern_name   ::mlir::SymbolRefAttr   symbol reference attribute

Operands: 

Operand   Description
root      TransformHandleTypeInterface instance

Results: 

Result    Description
matched   TransformHandleTypeInterface instance

transform.print (::mlir::transform::PrintOp) 

Dump each payload op

Syntax:

operation ::= `transform.print` $target attr-dict (`:` type($target)^)?

This op dumps each payload op that is associated with the target operand to stderr. It also prints the name string attribute. If no target is specified, the top-level op is dumped.

This op is useful for printf-style debugging.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute   MLIR Type            Description
name        ::mlir::StringAttr   string attribute

Operands: 

Operand   Description
target    TransformHandleTypeInterface instance

transform.replicate (::mlir::transform::ReplicateOp) 

Lists payload ops multiple times in the new handle

Syntax:

operation ::= `transform.replicate` `num` `(` $pattern `)` $handles attr-dict `:` type($pattern) `,` type($handles)

Produces a new handle associated with a list of payload IR ops that is computed by repeating the list of payload IR ops associated with the operand handle as many times as the “pattern” handle has associated operations. For example, if pattern is associated with [op1, op2] and the operand handle is associated with [op3, op4, op5], the resulting handle will be associated with [op3, op4, op5, op3, op4, op5].

This transformation is useful to “align” the sizes of payload IR lists before a transformation that expects, e.g., identically-sized lists. For example, a transformation may be parameterized by the same notional per-target size computed at runtime and supplied as another handle; the replication allows this size to be computed only once and used for every target instead of replicating the computation itself.

Note that it is undesirable to pass a handle with duplicate operations to an operation that consumes the handle. Handle consumption often indicates that the associated payload IR ops are destroyed, so having the same op listed more than once will lead to double-free. Single-operand MergeHandlesOp may be used to deduplicate the associated list of payload IR ops when necessary. Furthermore, a combination of ReplicateOp and MergeHandlesOp can be used to construct arbitrary lists with repetitions.
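
A minimal sketch matching the description above (handle names assumed): the ops mapped to %value are repeated once per op mapped to %pattern.

%replicated = transform.replicate num(%pattern) %value
    : !transform.any_op, !transform.any_op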

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Operands: 

Operand   Description
pattern   TransformHandleTypeInterface instance
handles   any transform handle or parameter

Results: 

Result       Description
replicated   any transform handle or parameter

transform.sequence (::mlir::transform::SequenceOp) 

Contains a sequence of other transform ops to apply

Syntax:

operation ::= `transform.sequence` custom<SequenceOpOperands>($root, type($root), $extra_bindings, type($extra_bindings)) (`->` type($results)^)? `failures` `(` $failure_propagation_mode `)` attr-dict-with-keyword regions

The transformations indicated by the sequence are applied in order of their appearance. Each value produced by a transformation within the sequence corresponds to a group of operations or values in the payload IR, or to a group of parameters, depending on the type of the value. The behavior of the operation when a nested transformation produces a silenceable error is controlled by the failure_propagation_mode attribute. When set to propagate, the failure of any nested transformation in the sequence implies immediate failure of the entire sequence with a silenceable error, and no further transformation is attempted. When set to suppress, silenceable errors in nested operations are ignored and further transformations are applied. Beware that even silenceable errors may leave the payload IR in a state unsuitable for further transformations. It is the responsibility of the caller to ensure the following transformations are robust enough when errors are suppressed. Definite errors reported by nested transformations abort the sequence regardless of the propagation mode. The set of modes may be extended in the future, e.g., to collect silenceable errors and report them after attempting all transformations in the sequence.

The entry block of this operation has a single argument that maps to either the operand if provided or the top-level container operation of the payload IR, typically the root operation of the pass interpreting the transform dialect. Operand omission is only allowed for sequences not contained in another sequence.

The body of the sequence terminates with an implicit or explicit transform.yield op. The operands of the terminator are returned as the results of the sequence op.

Traits: AttrSizedOperandSegments, PossibleTopLevelTransformOpTrait, SingleBlockImplicitTerminator<::mlir::transform::YieldOp>

Interfaces: MemoryEffectOpInterface, OpAsmOpInterface, RegionBranchOpInterface, TransformOpInterface

Attributes: 

Attribute                  MLIR Type                                        Description
failure_propagation_mode   ::mlir::transform::FailurePropagationModeAttr    Silenceable error propagation policy

Operands: 

Operand          Description
root             TransformHandleTypeInterface instance
extra_bindings   any transform handle or parameter

Results: 

Result    Description
results   TransformHandleTypeInterface instance

transform.split_handles (::mlir::transform::SplitHandlesOp) 

Splits handles from a union of payload ops to a list

Syntax:

operation ::= `transform.split_handles` $handle `in` `[` $num_result_handles `]`
              attr-dict `:` functional-type(operands, results)

Creates num_result_handles transform IR handles extracted from the handle operand. The resulting Payload IR operation handles are listed in the same order as the operations appear in the source handle. This is useful for ensuring a statically known number of operations are tracked by the source handle and to extract them into individual handles that can be further manipulated in isolation.

This operation succeeds and returns num_result_handles handles if the statically specified num_result_handles corresponds to the dynamic number of operations contained in the source handle. Otherwise it silently fails.
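
A minimal sketch (the handle %two is assumed to be associated with exactly two payload ops, otherwise the op fails silently):

%h:2 = transform.split_handles %two in [2]
    : (!transform.any_op) -> (!transform.any_op, !transform.any_op)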

Traits: FunctionalStyleTransformOpTrait

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute            MLIR Type             Description
num_result_handles   ::mlir::IntegerAttr   64-bit signless integer attribute

Operands: 

Operand   Description
handle    TransformHandleTypeInterface instance

Results: 

Result    Description
results   TransformHandleTypeInterface instance

transform.with_pdl_patterns (::mlir::transform::WithPDLPatternsOp) 

Contains PDL patterns available for use in transforms

Syntax:

operation ::= `transform.with_pdl_patterns` ($root^ `:` type($root))? attr-dict-with-keyword regions

This op contains a set of named PDL patterns that are available for the Transform dialect operations to be used for pattern matching. For example, PDLMatchOp can be used to produce a Transform IR value associated with all Payload IR operations that match the pattern as follows:

transform.with_pdl_patterns {
^bb0(%arg0: !pdl.operation):
  pdl.pattern @my_pattern : benefit(1) {
    %0 = pdl.operation //...
    // Regular PDL goes here.
    pdl.rewrite %0 with "transform.dialect"
  }

  sequence %arg0 failures(propagate) {
  ^bb0(%arg1: !pdl.operation):
    %1 = pdl_match @my_pattern in %arg1
    // Use %1 as handle
  }
}

Note that the pattern is expected to finish with a pdl.rewrite terminator that points to the custom rewriter named “transform.dialect”. The rewriter actually does nothing, but the transform application will keep track of the operations that matched the pattern.

This op is expected to contain pdl.pattern operations and exactly one other Transform dialect operation that gets executed with all patterns available. This op is a possible top-level Transform IR op; the argument of its entry block corresponds to either the root op of the payload IR or the ops associated with its operand when provided.

Traits: NoTerminator, PossibleTopLevelTransformOpTrait, SymbolTable

Interfaces: MemoryEffectOpInterface, OpAsmOpInterface, TransformOpInterface

Operands: 

Operand   Description
root      TransformHandleTypeInterface instance

transform.yield (::mlir::transform::YieldOp) 

Yields operation handles from a transform IR region

Syntax:

operation ::= `transform.yield` operands attr-dict (`:` type($operands)^)?

This terminator operation yields operation handles from regions of the transform IR ops back to the containing op. It is not itself associated with any transformation on the payload IR and is used for flow purposes only.

Traits: Terminator

Interfaces: MemoryEffectOpInterface

Operands: 

Operand    Description
operands   any transform handle or parameter

Affine Transform Operations 

transform.affine.simplify_bounded_affine_ops (::mlir::transform::SimplifyBoundedAffineOpsOp) 

Syntax:

operation ::= `transform.affine.simplify_bounded_affine_ops` $target `with` `[` $bounded_values `]`
              `within` $lower_bounds `and` $upper_bounds attr-dict

Simplify the targeted affine.min / affine.max ops given the supplied lower and upper bounds for values that may be used as target op operands.

Example:

%0 = transform.structured.match ops{["affine.min", "affine.max"]} in %arg1
%1 = transform.structured.match ops{["gpu.lane_id"]} in %arg1
transform.affine.simplify_bounded_affine_ops %0 with [%1] within [0] and [32]

// Multiple bounds can be specified.
transform.affine.simplify_bounded_affine_ops %0 with [%1, %2] within [0, 5] and [32, 50]

Bounded op handles (%1 and %2) must be mapped to ops that have a single result of index type. The sets of target ops and bounded ops must not overlap.

Return modes 

Target ops must be affine.min or affine.max ops. This transform consumes the target handle and does not produce any handle. It reads the bounded op handles.

TODO: Support affine.apply targets.
TODO: Allow mixed PDL_Operation/int64_t for lower_bounds and upper_bounds.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute      MLIR Type                   Description
lower_bounds   ::mlir::DenseI64ArrayAttr   i64 dense array attribute
upper_bounds   ::mlir::DenseI64ArrayAttr   i64 dense array attribute

Operands: 

Operand          Description
target           PDL handle to an mlir::Operation *
bounded_values   PDL handle to an mlir::Operation *

Bufferization Transform Operations 

transform.bufferization.empty_tensor_to_alloc_tensor (::mlir::transform::EmptyTensorToAllocTensorOp) 

Syntax:

operation ::= `transform.bufferization.empty_tensor_to_alloc_tensor` $target attr-dict `:` functional-type(operands, results)

Replace a tensor.empty with a bufferization.alloc_tensor.

Return modes 

This operation consumes the target handle and produces the transformed handle. target is expected to be a tensor.empty operation. The transform always succeeds.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand   Description
target    Transform IR handle to tensor.empty operations

Results: 

Result        Description
transformed   Transform IR handle to bufferization.alloc_tensor operations

transform.bufferization.one_shot_bufferize (::mlir::transform::OneShotBufferizeOp) 

Syntax:

operation ::= `transform.bufferization.one_shot_bufferize` (`layout` `{` $function_boundary_type_conversion^ `}`)?
              $target attr-dict

Indicates that the given target op should be bufferized with One-Shot Bufferize. The bufferization can be configured with various attributes that correspond to options in BufferizationOptions and the one-shot-bufferize pass. More information can be found in the pass documentation.

If target_is_module is set, target must be a module. In that case the target handle can be reused by other transform ops. When bufferizing other ops, the target handle is freed after bufferization and can no longer be used.

Note: Only ops that implement BufferizableOpInterface are bufferized. All other ops are ignored if allow_unknown_ops is set. If allow_unknown_ops is unset, this transform fails when an unknown/non-bufferizable op is found. Many ops implement BufferizableOpInterface via an external model. These external models must be registered when applying this transform op; otherwise, said ops would be considered non-bufferizable.
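
A minimal usage sketch (the %module handle is an assumption of this example; the attributes shown are taken from the table below):

transform.bufferization.one_shot_bufferize %module
    {target_is_module = true, bufferize_function_boundaries = true}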

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute                           MLIR Type                                     Description
function_boundary_type_conversion   ::mlir::bufferization::LayoutMapOptionAttr    option for map layout
allow_return_allocs                 ::mlir::BoolAttr                              bool attribute
allow_unknown_ops                   ::mlir::BoolAttr                              bool attribute
bufferize_function_boundaries       ::mlir::BoolAttr                              bool attribute
create_deallocs                     ::mlir::BoolAttr                              bool attribute
target_is_module                    ::mlir::BoolAttr                              bool attribute
test_analysis_only                  ::mlir::BoolAttr                              bool attribute
print_conflicts                     ::mlir::BoolAttr                              bool attribute

Operands: 

Operand   Description
target    PDL handle to an mlir::Operation *

GPU Transform Operations 

transform.gpu.map_forall_to_blocks (::mlir::transform::MapForallToBlocks) 

Syntax:

operation ::= `transform.gpu.map_forall_to_blocks` $target
              (`generate_gpu_launch` $generate_gpu_launch^)?
              (`grid_dims` `=` $grid_dims^)?
              attr-dict

Target the gpu_launch op and rewrite the top level scf.forall to distributed gpu.block_id attribute. If generate_gpu_launch attribute is set, then first generates gpu_launch and moves the top level scf.forall inside.

The operation searches top level scf.forall ops under gpu_launch and maps each such op to GPU blocks. Mapping is one-to-one and the induction variables of scf.forall are rewritten to gpu.block_id according to the thread_dim_mapping attribute.

Dynamic scf.forall trip counts are currently not supported. Dynamic block dim sizes are currently not supported.

Only bufferized scf.forall are currently supported. Only scf.forall distributed to at most 3 dimensions are currently supported.

The operation alters the block size of the given gpu_launch using the grid_dims argument.

Return modes: 

This operation ignores non-gpu_launch ops and drops them in the return.

If any scf.forall with tensors is found, the transform definitely fails.

If all the scf.forall operations contained within the LaunchOp referred to by the target PDLOperation lower to GPU properly, the transform succeeds. Otherwise the transform definitely fails.

The returned handle points to the same LaunchOp operand, consuming it and producing a new SSA value to satisfy chaining and linearity of the IR properties.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute             MLIR Type                   Description
grid_dims             ::mlir::DenseI64ArrayAttr   i64 dense array attribute
generate_gpu_launch   ::mlir::UnitAttr            unit attribute

Operands: 

Operand   Description
target    PDL handle to an mlir::Operation *

Results: 

Result   Description
result   PDL handle to an mlir::Operation *

transform.gpu.map_nested_forall_to_threads (::mlir::transform::MapNestedForallToThreads) 

Syntax:

operation ::= `transform.gpu.map_nested_forall_to_threads` $target
              `block_dims` `=` $block_dims
              (`warp_dims` `=` $warp_dims^)?
              (`sync_after_distribute` `=` $sync_after_distribute^)?
              attr-dict

Target the gpu.launch op and rewrite all scf.forall nested in it to distributed gpu.thread_id attribute.

The operation searches for scf.forall ops nested under target and maps each such op to GPU threads.

scf.forall induction variables are rewritten to gpu.thread_id according to the mapping attribute.

Different types of mappings attributes are supported:

  • the block_dims is a list of integers that specifies the number of threads in each dimension. This is a mandatory attribute that is used to constrain the number of threads in each dimension. If an scf.forall op is mapped to fewer threads, predication occurs.
  • the warp_dims is a list of integers that specifies the number of warps in each dimension. This is an optional attribute that is used to constrain the number of warps in each dimension. When present, this attribute must be specified in a way that is compatible with the block_dims attribute. If an scf.forall op is mapped to fewer warps, predication occurs.

Dynamic scf.forall trip counts are currently not supported. Dynamic block dim sizes are currently not supported.

Only bufferized scf.forall are currently supported. Only scf.forall distributed to at most 3 dimensions are currently supported.

The sync_after_distribute attribute controls whether a gpu.barrier is inserted after each scf.forall op. At this time, this is an all or nothing choice. This will need to be tightened in the future.

The operation alters the block size of the given gpu_launch using the mandatory block_dims argument.

Return modes: 

This operation ignores non-gpu_launch ops and drops them in the return.

If any scf.forall with tensors is found, the transform definitely fails.

If all the scf.forall operations with gpu.thread mapping contained within the LaunchOp referred to by the target PDLOperation lower to GPU properly, the transform succeeds. Otherwise the transform definitely fails.

scf.forall operations with mappings other than gpu.thread are ignored.

The returned handle points to the same LaunchOp operand, consuming it and producing a new SSA value to satisfy chaining and linearity of the IR properties.

Example: 

gpu.launch blocks(%bx, %by, %bz) in (%x = %0, %y = %1, %z = %2)
           threads(%tx, %ty, %tz) in (%tx = %3, %ty = %4, %tz = %5) {
  scf.forall (%i, %j) in (7, 9) {
    ... // body 1
  } {mapping = [#gpu.thread<x>, #gpu.thread<y>, #gpu.thread<z>]}
  scf.forall (%i) in (12) {
    ... // body 2
  } {mapping = [#gpu.thread<x>]}
  gpu.terminator
}

is translated to:

%bdimX = arith.constant 12 : index
%bdimY = arith.constant 9 : index
gpu.launch blocks(%bx, %by, %bz) in (%x = %0, %y = %1, %z = %2)
       threads(%tx, %ty, %tz) in (%tx = %bdimX, %ty = %bdimY, %tz = %5) {
  if (threadIdx.x < 9 && threadIdx.y < 7) {
    ... // body 1
  }
  gpu.barrier
  if (threadIdx.y < 1) {
    ... // body 2
  }
  gpu.barrier
  gpu.terminator
}

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute               MLIR Type                   Description
block_dims              ::mlir::DenseI64ArrayAttr   i64 dense array attribute
warp_dims               ::mlir::DenseI64ArrayAttr   i64 dense array attribute
sync_after_distribute   ::mlir::BoolAttr            bool attribute

Operands: 

Operand   Description
target    PDL handle to an mlir::Operation *

Results: 

Result   Description
result   PDL handle to an mlir::Operation *

Loop (SCF) Transform Operations 

transform.loop.get_parent_for (::mlir::transform::GetParentForOp) 

Gets a handle to the parent ‘for’ loop of the given operation

Syntax:

operation ::= `transform.loop.get_parent_for` $target attr-dict `:` functional-type(operands, results)

Produces a handle to the n-th (default 1) parent scf.for or affine.for (when the affine flag is true) loop for each Payload IR operation associated with the operand. Fails if such a loop cannot be found. The list of operations associated with the handle contains parent operations in the same order as the list associated with the operand, except for operations that are parents to more than one input which are only present once.

Traits: NavigationTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
num_loops | ::mlir::IntegerAttr | 64-bit signless integer attribute whose value is positive
affine | ::mlir::BoolAttr | bool attribute

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
parent | TransformHandleTypeInterface instance

transform.loop.coalesce (::mlir::transform::LoopCoalesceOp) 

Coalesces the perfect loop nest enclosed by a given loop

Syntax:

operation ::= `transform.loop.coalesce` $target attr-dict `:` functional-type($target, $transformed)

Given a perfect loop nest identified by the outermost loop, perform loop coalescing in a bottom-up one-by-one manner.
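
A minimal sketch, assuming %outer is a handle to the outermost loop of a perfect nest and that !transform.any_op handle types are acceptable:

%coalesced = transform.loop.coalesce %outer
    : (!transform.any_op) -> !transform.any_op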

Return modes 

The return handle points to the coalesced loop if coalescing happens, or the given input loop if coalescing does not happen.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
transformed | TransformHandleTypeInterface instance

transform.loop.outline (::mlir::transform::LoopOutlineOp) 

Outlines a loop into a named function

Syntax:

operation ::= `transform.loop.outline` $target attr-dict `:` functional-type(operands, results)

Moves the loop into a separate function with the specified name and replaces the loop in the Payload IR with a call to that function. Takes care of forwarding values that are used in the loop as function arguments. If the operand is associated with more than one loop, each loop will be outlined into a separate function. The provided name is used as a base for forming actual function names following SymbolTable auto-renaming scheme to avoid duplicate symbols. Expects that all ops in the Payload IR have a SymbolTable ancestor (typically true because of the top-level module). Returns the handle to the list of outlined functions in the same order as the operand handle.
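
A minimal sketch, assuming %loop is a handle to one or more loops and that !transform.any_op handle types are acceptable:

%func = transform.loop.outline %loop { func_name = "outlined" }
    : (!transform.any_op) -> !transform.any_op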

Traits: FunctionalStyleTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
func_name | ::mlir::StringAttr | string attribute

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
transformed | TransformHandleTypeInterface instance

transform.loop.peel (::mlir::transform::LoopPeelOp) 

Peels the last iteration of the loop

Syntax:

operation ::= `transform.loop.peel` $target attr-dict `:` functional-type(operands, results)

Updates the given loop so that its step evenly divides its range and puts the remaining iteration into a separate loop or a conditional.

In the absence of sufficient static information, this op may peel a loop, even if the step always divides the range evenly at runtime.
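
A minimal sketch, assuming %for is a handle of type !transform.op<"scf.for"> obtained earlier:

%peeled = transform.loop.peel %for
    : (!transform.op<"scf.for">) -> !transform.any_op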

Return modes 

This operation ignores non-scf::ForOp ops and drops them in the return.

This operation always succeeds and returns the scf::ForOp with the postcondition: “the loop trip count is divisible by the step”. This operation may return the same unmodified loop handle when peeling did not modify the IR (i.e. the loop trip count was already divisible).

Note that even though the Payload IR modification may be performed in-place, this operation consumes the operand handle and produces a new one.

TODO: Return both the peeled loop and the remainder loop.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
fail_if_already_divisible | ::mlir::BoolAttr | bool attribute

Operands: 

Operand | Description
target | Transform IR handle to scf.for operations

Results: 

Result | Description
transformed | TransformHandleTypeInterface instance

transform.loop.pipeline (::mlir::transform::LoopPipelineOp) 

Applies software pipelining to the loop

Syntax:

operation ::= `transform.loop.pipeline` $target attr-dict `:` functional-type(operands, results)

Transforms the given loops one by one to achieve software pipelining for each of them. That is, performs some amount of reads from memory before the loop rather than inside the loop, the same amount of writes into memory after the loop, and updates each iteration to read the data for a following iteration rather than the current one.

The amount is specified by the attributes.

The values read and about to be stored are transferred as loop iteration arguments. Currently supports memref and vector transfer operations as memory reads/writes.
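
A minimal sketch, assuming %for is a handle of type !transform.op<"scf.for">; the attribute values are illustrative and both have defaults:

%pipelined = transform.loop.pipeline %for { iteration_interval = 1, read_latency = 10 }
    : (!transform.op<"scf.for">) -> !transform.any_op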

Return modes 

This operation ignores non-scf::For ops and drops them in the return. If all the operations referred to by the target PDLOperation pipeline properly, the transform succeeds. Otherwise the transform silently fails. The return handle points to only the subset of successfully produced pipelined loops, which can be empty.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
iteration_interval | ::mlir::IntegerAttr | 64-bit signless integer attribute
read_latency | ::mlir::IntegerAttr | 64-bit signless integer attribute

Operands: 

Operand | Description
target | Transform IR handle to scf.for operations

Results: 

Result | Description
transformed | TransformHandleTypeInterface instance

transform.loop.unroll (::mlir::transform::LoopUnrollOp) 

Unrolls the given loop with the given unroll factor

Syntax:

operation ::= `transform.loop.unroll` $target attr-dict `:` type($target)

Unrolls each loop associated with the given handle to have up to the given number of loop body copies per iteration. If the unroll factor is larger than the loop trip count, the latter is used as the unroll factor instead.
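
A minimal sketch, assuming %for is a handle to one or more loops; no handle is returned:

transform.loop.unroll %for { factor = 4 } : !transform.any_op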

Return modes 

This operation ignores non-scf::For, non-affine::For ops and drops them in the return. If all the operations referred to by the target PDLOperation unroll properly, the transform succeeds. Otherwise the transform silently fails.

Does not return handles as the operation may result in the loop being removed after a full unrolling.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
factor | ::mlir::IntegerAttr | 64-bit signless integer attribute whose value is positive

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

MemRef Transform Operations 

transform.memref.multibuffer (::mlir::transform::MemRefMultiBufferOp) 

Multibuffers an allocation

Syntax:

operation ::= `transform.memref.multibuffer` $target attr-dict `:` functional-type(operands, results)

Transformation to do multi-buffering/array expansion to remove dependencies on the temporary allocation between consecutive loop iterations. This transform expands the size of an allocation by a given multiplicative factor and fixes up any users of the multibuffered allocation. If skip_analysis is not set, the transformation will only apply if it can prove that there is no data being carried across loop iterations.

Return modes 

This operation returns the new allocation if multi-buffering succeeds, and failure otherwise.

Traits: FunctionalStyleTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
factor | ::mlir::IntegerAttr | 64-bit signless integer attribute whose value is positive
skip_analysis | ::mlir::UnitAttr | unit attribute

Operands: 

Operand | Description
target | Transform IR handle to memref.alloc operations

Results: 

Result | Description
transformed | PDL handle to an mlir::Operation *

Structured (Linalg) Transform Operations 

transform.structured.bufferize_to_allocation (::mlir::transform::BufferizeToAllocationOp) 

Syntax:

operation ::= `transform.structured.bufferize_to_allocation` $target attr-dict

This transform materializes an allocation for the targeted tensor value. It replaces all original uses of the target with the newly allocated buffer, wrapped in a bufferization.to_tensor op. It returns a handle to the result of the to_tensor op.

Example:

%0 = "some_op"() : () -> (tensor<10xf32>)
"some_use"(%0) : (tensor<10xf32>) -> ()

Is rewritten to:

%0 = "some_op"() : () -> (tensor<10xf32>)
%1 = memref.alloc() : memref<10xf32>
memref.tensor_store %0, %1 : memref<10xf32>
%2 = bufferization.to_tensor %1 restrict writable : memref<10xf32>
"some_use"(%2) : (tensor<10xf32>) -> ()

This transform has optimized lowerings for certain targets that are results of non-DPS ops. For such targets, not only is a buffer allocation emitted but the defining op is also bufferized. This avoids a second allocation for the missing destination of the non-DPS op (when subsequently running a bufferization pass/transform). Currently supported ops with optimized lowerings:

  • tensor.pad

An optional memory space attribute can be specified for the materialized buffer allocation.

Return modes 

This operation consumes the target handle and produces the transformed handle. It always succeeds.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
memory_space | ::mlir::Attribute | any attribute

Operands: 

Operand | Description
target

Results: 

Result | Description
transformed

transform.structured.convert_conv2d_to_img2col (::mlir::transform::ConvertConv2DToImg2ColOp) 

Syntax:

operation ::= `transform.structured.convert_conv2d_to_img2col` $target attr-dict `:` functional-type($target, results)

Convert linalg.conv_2d_xxx into linalg.generic (for img2col packing) and linalg.matmul.

A convolution operation can be written as a matrix-matrix multiplication by unfolding the cross-correlation between input and filter and explicitly copy overlapped sliding window inputs.

Consider 2D input X with single channel input and output and 2x2 filter W:

[x(0, 0)  , x(0, 1)  , ...,   x(0, n)  ]
[x(1, 0)  , x(1, 1)  , ...,   x(1, n)  ]
[.        ,  .       ,.   ,      .     ]            [w(0, 0), w(0, 1)]
[.        ,  .       , .  ,      .     ]    (conv)  [w(1, 0), w(1, 1)]
[.        ,  .       ,   .,      .     ]
[x(n-1, 0), x(n-1, 1), ..., x(n-1, n-1)]

The packed input data (img2col) is a matrix with |rows| = output spatial size, |columns| = filter spatial size. To compute the output Y(i, j) we need to calculate the dot product between the filter window at input X(x, y) and the filter, which will look like the following where the r.h.s is the img2col matrix and the l.h.s is the flattened filter:

[x(0,0), x(0,1), x(1,0), x(1,1)]
[x(0,1), x(1,1), x(0,2), x(1,2)] (matmul) [w(0,0), w(0,1), w(1,0), w(1,1)]
[x(0,1), x(1,1), x(0,2), x(1,2)]
[   .  ,    .  ,    .  ,    .  ]

In general, for the 2D case with (N, H, W, C) input, (Kh, Kw, C, D) filter and (N, Ho, Wo, D) output, the convolution is the following matrix-matrix multiplication (Ho x Wo, Kh x Kw x C) * (Kh x Kw x C, D) for each of the N inputs. For the case where N > 1, it is a batched matrix-matrix multiplication.

Returns two handles:

  • One on the operation that produces the img2col tensor.
  • One on the final operation of the sequence that replaces the original convolution.

Return modes: 

Returns a definite failure if target is not isolated from above. Returns a silenceable failure if the pattern application failed.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
img2col_tensor | TransformHandleTypeInterface instance
transformed | TransformHandleTypeInterface instance

transform.structured.decompose (::mlir::transform::DecomposeOp) 

Syntax:

operation ::= `transform.structured.decompose` $target attr-dict

Decomposes named complex operations, such as higher-dimensional (depthwise) convolutions, into combinations of lower-dimensional equivalents when possible.
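
A minimal sketch, assuming %conv is a handle to a decomposable op (e.g. a 2-D depthwise convolution); operands and results are PDL handles, so no types are printed:

%decomposed = transform.structured.decompose %conv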

Return modes 

This operation ignores non-Linalg ops and drops them in the return. If all the operations referred to by the target PDLOperation decompose properly, the transform succeeds. Otherwise the transform silently fails. The return handle points to only the subset of successfully produced computational operations, which can be empty.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *

Results: 

Result | Description
transformed | PDL handle to an mlir::Operation *

transform.structured.fuse_into_containing_op (::mlir::transform::FuseIntoContainingOp) 

Fuse a producer into a containing operation.

Syntax:

operation ::= `transform.structured.fuse_into_containing_op` $producer_op `into` $containing_op attr-dict

Fuses the producer_op into the containing_op. Returns a handle to the fused ops.

The producer is typically a slice of a tileable op (i.e., implements TilingInterface). In that case, this transform computes the accessed producer slice inside of the containing op (“tile and fuse”). Otherwise, the entire producer is cloned inside the containing op (“clone and fuse”).

The containing op handle must be associated with exactly one payload op. The producer op handle may be associated with multiple payload ops. This transform fuses producers one-by-one, always picking an unspecified producer that has at least one use inside the containing op among the producers.
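
A minimal sketch, assuming %producer and %forall are handles obtained from earlier transform steps (e.g. a tiling that produced the containing scf.forall):

%fused = transform.structured.fuse_into_containing_op %producer into %forall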

Note: If a producer has multiple uses inside the containing op, it is currently tiled and/or cloned multiple times into the containing op. TODO: Reuse already fused OpResults instead of tiling/cloning a second time when possible. Fuse producers according to a topological sorting to achieve the largest amount of reuse.

Return modes 

If at least one producer could not be fused, this operation fails silently. This is the case when tiling fails or when no producer op could be found among the remaining producers that has at least one use within the containing op. I.e., “producers” that are not consumed within the containing op are rejected by this operation.

This operation reads and frees the producer handle. This operation reads the containing op handle.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Operands: 

Operand | Description
producer_op | PDL handle to an mlir::Operation *
containing_op | PDL handle to an mlir::Operation *

Results: 

Result | Description
fused_op | PDL handle to an mlir::Operation *

transform.structured.fuse (::mlir::transform::FuseOp) 

Tiles the operations pointed to by the target handle and fuses their producers greedily using the options provided as attributes.

Traits: FunctionalStyleTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
tile_sizes | ::mlir::ArrayAttr | 64-bit integer array attribute
tile_interchange | ::mlir::ArrayAttr | 64-bit integer array attribute

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *

Results: 

Result | Description
transformed | PDL handle to an mlir::Operation *
loops | PDL handle to an mlir::Operation *

transform.structured.generalize (::mlir::transform::GeneralizeOp) 

Syntax:

operation ::= `transform.structured.generalize` $target attr-dict

Transforms a named structured operation into the generic form with the explicit attached region.
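
A minimal sketch, assuming %matmul is a handle to named Linalg ops:

%generic = transform.structured.generalize %matmul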

Return modes 

This operation ignores non-Linalg ops and drops them in the return. If all the operations referred to by the target PDLOperation generalize properly, the transform succeeds. Otherwise the transform silently fails. The return handle points to only the subset of successfully produced equivalent generic operations, which can be empty or contain the original ops if they were already in generic form.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *

Results: 

Result | Description
transformed | PDL handle to an mlir::Operation *

transform.structured.hoist_pad (::mlir::transform::HoistPadOp) 

Syntax:

operation ::= `transform.structured.hoist_pad` $target
              `by` $num_loops `loops`
              (`,` `transpose` `by` $transpose^)?
              attr-dict
              `:` functional-type(operands, results)

Hoist the tensor.pad target operation by at most the given number of loops. Optionally apply the transpose attribute to the inner dimensions.

TODO: In the future, we should consider rewriting as a tensor.pack after hoisting since this abstraction is now available. TODO: Maybe also return the linalg.generic transpose created at some point.

Return modes 

This operation ignores non-tensor.pad ops and drops them in the result. If any non-tensor.pad is passed, the transform emits a silenceable failure.

If all the operations referred to by the target handle pad properly, the transform succeeds. Otherwise the transform silently fails.

The return handle points to only the subset of successfully hoisted tensor.pad operations, which can be empty.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
num_loops | ::mlir::IntegerAttr | 64-bit signless integer attribute
transpose | ::mlir::DenseI64ArrayAttr | i64 dense array attribute

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
transformed | TransformHandleTypeInterface instance

transform.structured.hoist_redundant_tensor_subsets (::mlir::transform::HoistRedundantTensorSubsetsOp) 

Syntax:

operation ::= `transform.structured.hoist_redundant_tensor_subsets` $target attr-dict `:` functional-type(operands, results)

Hoists supported tensor subset extract/insert operation pairs out of the immediately enclosing loop, iteratively, if the following conditions are true:

  1. The 2 ops access the same tensor subset.
  2. All operands are invariant under the enclosing loop.

The supported subset extract/insert operation pairs currently comprise:

  • tensor.extract_slice / tensor.insert_slice
  • vector.transfer_read / vector.transfer_write on tensors

Only scf.for loops are currently supported.

When applied to:

  1. an scf.for loop, hoist out of this loop only.
  2. a non-loop op, apply hoisting to all the contained loop ops.

Return modes: 

The operation always succeeds and returns a handle to the transformed function op.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
transformed | TransformHandleTypeInterface instance

transform.structured.hoist_redundant_vector_transfers (::mlir::transform::HoistRedundantVectorTransfersOp) 

Syntax:

operation ::= `transform.structured.hoist_redundant_vector_transfers` $target attr-dict `:` functional-type(operands, results)

Hoist vector.transfer_read / vector.transfer_write pairs out of the immediately enclosing scf::ForOp iteratively, if the following conditions are true:

  1. The 2 ops access the same memref with the same indices.
  2. All operands are invariant under the enclosing scf::ForOp.
  3. No uses of the memref either dominate the transfer_read or are dominated by the transfer_write (i.e. no aliasing between the write and the read across the loop)

WARNING: This hoisting does not model parallelism and is generally incorrect when used on distributed loops with memref semantics! TODO: obsolete and should be retired.

Return modes: 

The operation always succeeds and returns a handle to the transformed function op.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
transformed | TransformHandleTypeInterface instance

transform.structured.interchange (::mlir::transform::InterchangeOp) 

Syntax:

operation ::= `transform.structured.interchange` $target
              (`iterator_interchange` `=` $iterator_interchange^)? attr-dict

Interchanges the iterators of the operations pointed to by the target handle using the iterator interchange attribute.

Return modes 

This operation ignores non-linalg::Generic ops and drops them in the return. This operation fails if the interchange attribute is invalid. If all the operations referred to by the target PDLOperation interchange properly, the transform succeeds. If any interchange fails, the transform definitely fails. The return handle points to only the subset of successfully produced interchanged operations, which can be empty.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
iterator_interchange | ::mlir::DenseI64ArrayAttr | i64 dense array attribute whose value is non-negative

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *

Results: 

Result | Description
transformed | PDL handle to an mlir::Operation *

transform.structured.lower_pack (::mlir::transform::LowerPackOp) 

Syntax:

operation ::= `transform.structured.lower_pack` $target attr-dict `:` functional-type(operands, results)

Rewrite a tensor.pack into tensor.pad + tensor.expand_shape + linalg.transpose.
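
A minimal sketch, assuming %pack is a handle of type !transform.op<"tensor.pack">:

%pad, %expand, %transpose = transform.structured.lower_pack %pack
    : (!transform.op<"tensor.pack">)
   -> (!transform.op<"tensor.pad">,
       !transform.op<"tensor.expand_shape">,
       !transform.op<"linalg.transpose">)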

Return modes 

This operation ignores non-pack ops and drops them in the return. This operation produces a silenceableFailure if the rewrite fails for any reason. If all the operations referred to by the target are rewritten, the transform succeeds. Return handles to the newly produced pad, expand_shape and transpose ops.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | Transform IR handle to tensor.pack operations

Results: 

Result | Description
pad_op | Transform IR handle to tensor.pad operations
expand_shape_op | Transform IR handle to tensor.expand_shape operations
transpose_op | Transform IR handle to linalg.transpose operations

transform.structured.lower_unpack (::mlir::transform::LowerUnPackOp) 

Syntax:

operation ::= `transform.structured.lower_unpack` $target attr-dict `:` functional-type(operands, results)

Lower a tensor.unpack into empty + linalg.transpose + tensor.collapse_shape + tensor.extract_slice.

Return modes 

This operation ignores non-unpack ops and drops them in the return. This operation produces a silenceableFailure if the rewrite fails for any reason. If all the operations referred to by the target are rewritten, the transform succeeds. Return handles to the newly produced empty, transpose, collapse_shape and extract_slice ops.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | Transform IR handle to tensor.unpack operations

Results: 

Result | Description
empty_op | Transform IR handle to tensor.empty operations
transpose_op | Transform IR handle to linalg.transpose operations
collapse_shape_op | Transform IR handle to tensor.collapse_shape operations
extract_slice_op | Transform IR handle to tensor.extract_slice operations

transform.structured.masked_vectorize (::mlir::transform::MaskedVectorizeOp) 

Syntax:

operation ::= `transform.structured.masked_vectorize` $target
              `vector_sizes` custom<DynamicIndexList>($vector_sizes,
              $static_vector_sizes)
              attr-dict

Vectorize the target ops, which must be Linalg ops, with masked vectors of the specified size.

The vector sizes can be either static or dynamic (SSA values). In case of SSA values, the handle must be mapped to exactly one payload op with exactly one index-typed result.

Note: The input vector sizes must be greater than or equal to their counterpart iteration space sizes.

Typically this operation should be applied to linalg operations that have already been tiled to the appropriate sizes.
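
A minimal sketch with static vector sizes, assuming %tiled_linalg is a handle to already-tiled Linalg ops; operands are PDL handles, so no types are printed:

transform.structured.masked_vectorize %tiled_linalg vector_sizes [64, 16]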

Return modes: 

This operation produces a definite failure if the dynamic vector sizes (SSA values) do not satisfy the constraints mentioned above. It produces a silenceable failure if at least one target op is not a Linalg op or fails to vectorize.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
vectorize_nd_extract | ::mlir::UnitAttr | unit attribute
static_vector_sizes | ::mlir::DenseI64ArrayAttr | i64 dense array attribute

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *
vector_sizes | PDL handle to an mlir::Operation *

transform.structured.match (::mlir::transform::MatchOp) 

Syntax:

operation ::= `transform.structured.match` (`ops` `{` $ops^ `}`)?
              (`interface` `{` $interface^ `}`)?
              (`attributes` $op_attrs^)?
              (`filter_result_type` `=` $filter_result_type^)?
              `in` $target attr-dict
              `:` functional-type($target, results)

Match op with the specified constraints, within the target op.

The following constraints are supported:

  • interface: an optional MatchInterfaceEnum specifying an enum representation for an interface to target.
  • ops: an optional StrArrayAttr specifying the concrete name of an op. Multiple names can be specified. Matched ops must have one of the specified names.
  • attribute: the matched op must have all specified attributes (with their specified values).
  • filter_result_type: the matched op must return exactly this one type.

Note: Only ops that satisfy all specified constraints are matched.
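
A typical use, sketched under the assumption that %module is the top-level handle passed to the transform script:

%matmuls = transform.structured.match ops{["linalg.matmul"]} in %module
    : (!transform.any_op) -> !transform.any_op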

TODO: Extend with regions to allow a limited form of constraints.

Return modes 

This op traverses the ops nested under target and returns the handles to all the operations that match the requirements.

This op fails if the target is not a handle to exactly one operation. Otherwise it succeeds.

This operation does not consume the target handle and produces new handles: it is a navigation op.

Traits: NavigationTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
ops | ::mlir::ArrayAttr | string array attribute
interface | mlir::transform::MatchInterfaceEnumAttr | An interface to match
op_attrs | ::mlir::DictionaryAttr | dictionary of named attribute values
filter_result_type | ::mlir::TypeAttr | any type attribute

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
results | TransformHandleTypeInterface instance

transform.structured.multitile_sizes (::mlir::transform::MultiTileSizesOp) 

Syntax:

operation ::= `transform.structured.multitile_sizes` $target attr-dict `:` custom<MultitileSizesTypes>(type($target), type($low_size), type($high_size), type($split_point))

Emits the IR computing the tile sizes s1 and s2 such that:

  • there exists a combination of n tiles of size s1 and m tiles of size s2 that covers the entirety of the iteration space dimension of the target structured op;
  • s1 and s2 are less than or equal to target_size;
  • s1 and s2 are divisible by divisor.

For example, for a dimension of size 54 with target size 12 and divisor 2, this can emit the IR computing the tile size 10, used for 3 tiles, and 12, used for 2 tiles, for a total of 10*3 + 12*2 = 54. Note that when the divisor does not divide the original dimension size, it is impossible to compute such tile sizes. An assertion is emitted to guard against this in the dynamic case.

Expects the target size and the divisor to be strictly positive. Folds the IR as much as possible, normally obtaining constant sizes and numbers of tiles for a statically known dimension.

This does not consume the target handle and produces three handles each pointing to single-result index-typed operations (which may be arithmetic constant operations) defining the two respective tile sizes and the product of the first tile size with the number of tiles of that size (useful for splitting the iteration space).

This operation composes with the regular tiling when applied per-dimension:

%sz1, %sz2, %split = structured.multitile_sizes %target
                     { target_size = 10, dimension = 1 }
                   : !transform.any_op, !transform.param<i64>,
                     !transform.param<i64>, !transform.param<i64>
%low, %high = structured.split %target after %split { dimension = 1 }
            : !transform.any_op, !transform.param<i64>
%tiled_low, %loop1 = structured.tile %low [0, %sz1]
                   : (!transform.any_op, !transform.param<i64>)
                  -> (!transform.any_op, !transform.any_op)
%tiled_high, %loop2 = structured.tile %high [0, %sz2]
                    : (!transform.any_op, !transform.param<i64>)
                   -> (!transform.any_op, !transform.any_op)
%common = merge_handles %tiled_low, %tiled_high : !transform.any_op

%sz3, %sz4, %split = structured.multitile_sizes %target
                     { target_size = 42, dimension = 0 }
                   : !transform.any_op, !transform.any_op,
                     !transform.any_op, !transform.any_op
%sz3r, %sz4r, %splitr = replicate num(%common) %sz3, %sz4, %split
         : !transform.any_op, !transform.any_op, !transform.any_op
structured.split %common after %splitr { dimension = 0 }
         : !transform.any_op, !transform.any_op
// ...

Traits: TransformEachOpTrait

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
dimension | ::mlir::IntegerAttr | 64-bit signless integer attribute
target_size | ::mlir::IntegerAttr | 64-bit signless integer attribute
divisor | ::mlir::IntegerAttr | 64-bit signless integer attribute

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
low_size | transform ‘param’ type or any handle type
high_size | transform ‘param’ type or any handle type
split_point | transform ‘param’ type or any handle type

transform.structured.pack_greedily (::mlir::transform::PackGreedilyOp) 

Syntax:

operation ::= `transform.structured.pack_greedily` $target
              oilist(
              `matmul_packed_sizes` `=` custom<DynamicIndexList>($matmul_packed_sizes,
              $static_matmul_packed_sizes)
              `matmul_inner_dims_order` `=` $matmul_inner_dims_order
              )
              attr-dict
              `:` functional-type($target, results)

Target a Linalg op and rewrite it into packed LinalgOp form by trying to infer whether a known suboperation is embedded in it.

Different packing strategies are applied in order, when one applies successfully, the transform returns:

  1. Matmul packing: Try to infer a matmul operation embedded in the target op. Specifically, this looks for 2 parallel dimensions that participate in an outer-product and 1 reduction dimension. These dimensions are referred to as (m, n, k) to match canonical matmul terminology. The packed sizes for (m, n, k) are specified by matmul_packed_sizes. The ordering of the packed dimensions (mm, nn, kk) is specified by the matmul_inner_dims_order attribute.

Packing occurs as follows:

  1. Find the dimensions to pack according to the strategy.
  2. The target is converted to linalg.generic form.
  3. An interchange transform is applied to isolate the dimensions to pack as the most minor indexing dimensions of the linalg.generic. The most minor dimensions are themselves ordered according to inner_dims_order.
  4. Packing is performed by packed_sizes and following inner_dims_order.

By normalizing the most minor dimensions to inner_dims_order, the transform guarantees that packing immediately generates inner dimensions in a desirable layout.

Outer dimension layout permutations are not controlled by this transform op at the moment and can be obtained by composing with the pack_transpose transformation.

Return modes 

This operation ignores non-Linalg ops and drops them in the return. It returns the list of packed Linalg ops or the original op when all available packing strategies failed to apply.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
static_matmul_packed_sizes | ::mlir::DenseI64ArrayAttr | i64 dense array attribute
matmul_inner_dims_order | ::mlir::DenseI64ArrayAttr | i64 dense array attribute

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance
matmul_packed_sizes | PDL handle to an mlir::Operation *

Results: 

Result | Description
packed_op | Transform IR handle to linalg.generic operations

transform.structured.pack (::mlir::transform::PackOp) 

Syntax:

operation ::= `transform.structured.pack` $target
              `packed_sizes` `=` custom<DynamicIndexList>($packed_sizes,
              $static_packed_sizes)
              attr-dict
              `:` functional-type($target, results)

Pack a LinalgOp by applying a data tiling transformation on the op and packing the operands according to the packed_sizes specification.

Iterator dimensions are tiled in their canonical order in the op spec. Operands are packed according to the same canonical order of the op iterator dimensions.

Specifying a packed size of 0 for an iterator removes it from consideration for packing.

tensor.pack (resp. tensor.unpack) operations are inserted for the operands (resp. results) that need to be packed (resp. unpacked) according to the packed_sizes specification.

Example 

Consider a linalg.matmul with indexing maps:

  //              M   N   K       M   K
  // affine_map<(d0, d1, d2) -> (d0, d2)>
  //                              K   N
  // affine_map<(d0, d1, d2) -> (d2, d1)>
  //                              M   N
  // affine_map<(d0, d1, d2) -> (d0, d1)>
  %0 = linalg.matmul  ins(%A, %B: tensor<?x?xf32>, tensor<?x?xf32>)
                     outs(    %C: tensor<?x?xf32>)

Specifying packed_sizes [2, 3, 4] results in tiling the iterator dimensions M, N and K, in this order, in both the op and its operands.

  //              M   N   K   m   n   k       M   K   m   k
  // affine_map<(d0, d1, d2, d3, d4, d5) -> (d0, d2, d3, d5)>
  //                                          K   N   n   k
  // affine_map<(d0, d1, d2, d3, d4, d5) -> (d2, d1, d4, d5)>
  //                                          M   N   m   n
  // affine_map<(d0, d1, d2, d3, d4, d5) -> (d0, d1, d3, d4)>
  %0 = linalg.generic_representing_some_higher_d_matmul  
        ins(%A, %B: tensor<?x?x2x4xf32>, tensor<?x?x4x3xf32>)
       outs(    %C: tensor<?x?x2x3xf32>)

In particular, note that the second operand B has shape KxNxnxk (and not KxNxkxn as one could expect by looking only at the operand).

Other layouts can be obtained unsurprisingly from this canonical transformation by composing the resulting operation with a (future) transform.structured.pack_transpose op. This composition allows separating concerns and composes better compared to adding additional permutation attributes to this transform op.

Return modes 

This operation applies to a single Linalg op, otherwise it fails. This operation may produce a definiteFailure if the packing fails for any reason.

The returned handle points to the packed LinalgOp.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
static_packed_sizes | ::mlir::DenseI64ArrayAttr | i64 dense array attribute

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance
packed_sizes | PDL handle to an mlir::Operation *

Results: 

Result | Description
packed_op | TransformHandleTypeInterface instance

transform.structured.pack_transpose (::mlir::transform::PackTransposeOp) 

Syntax:

operation ::= `transform.structured.pack_transpose` $target_pack_or_un_pack_op
              `with_compute_op` `(` $target_linalg_op `)`
              (`outer_perm` `=` $outer_perm^ )?
              (`inner_perm` `=` $inner_perm^ )?
              attr-dict
              `:` functional-type(operands, results)

Apply a transposition to a single tensor.pack (resp. tensor.unpack) and update the linalg.generic op that consumes (resp. produces) the operation.

This transform allows composing a simple structured.pack with additional transpositions to e.g. match the data format required by a specific library call or ISA instruction.

The transpose spec must specify at least one of outer_perm or inner_perm attributes, which will act upon the outer_dims_perm or inner_dims_pos of the specified tensor.pack or tensor.unpack op.

If the target of this op is a tensor.pack then a new tensor.empty will be created along with transposed versions of the tensor.pack and the consuming linalg.generic, which is expected to be the sole consumer.

If the target of this op is a tensor.unpack then the whole pack / compute / unpack chain will be transposed and transposed clones of tensor.pack, the consuming linalg.generic and the tail tensor.pack will be created.

Return modes 

This operation targets a single tensor.pack / tensor.unpack op and a single matching linalg.generic that consumes / produces the op. Otherwise, it produces a silenceableFailure.

This operation may produce a silenceableFailure if the transpose spec is ill-formed (i.e. outer_perm or inner_perm are not permutations of the proper rank) or if the transposition of all involved operations fails for any reason.

This operation returns 3 handles, one to the transformed LinalgOp, one to the transformed tensor.pack and one to the transformed tensor.unpack. The last handle for tensor.unpack is empty if target_pack_or_un_pack_op was not itself a tensor.unpack.

Traits: FunctionalStyleTransformOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
outer_perm | ::mlir::DenseI64ArrayAttr | i64 dense array attribute
inner_perm | ::mlir::DenseI64ArrayAttr | i64 dense array attribute

Operands: 

Operand | Description
target_pack_or_un_pack_op | TransformHandleTypeInterface instance
target_linalg_op | TransformHandleTypeInterface instance

Results: 

Result | Description
packed_op | TransformHandleTypeInterface instance
pack_op | TransformHandleTypeInterface instance
un_pack_op | TransformHandleTypeInterface instance

transform.structured.pad (::mlir::transform::PadOp) 

Syntax:

operation ::= `transform.structured.pad` $target attr-dict

Pads the operations pointed to by the target handle using the options provided as operation attributes.
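
A minimal sketch, assuming %matmul is a handle to linalg.matmul ops; the attribute values are illustrative only:

%padded = transform.structured.pad %matmul {
  padding_values = [0.0 : f32, 0.0 : f32, 0.0 : f32],
  padding_dimensions = [0, 1, 2],
  pack_paddings = [1, 1, 0]
}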

Return modes 

This operation ignores non-Linalg ops and drops them in the return. This operation may produce a definiteFailure if the padding fails for any reason. If all the operations referred to by the target PDLOperation pad properly, the transform succeeds. Otherwise the transform silently fails. The return handle points to only the subset of successfully produced padded operations, which can be empty.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
padding_values | ::mlir::ArrayAttr | array attribute
padding_dimensions | ::mlir::ArrayAttr | 64-bit integer array attribute
pack_paddings | ::mlir::ArrayAttr | 64-bit integer array attribute
transpose_paddings | ::mlir::ArrayAttr | array of arrays of i64

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *

Results: 

Result | Description
transformed | PDL handle to an mlir::Operation *

transform.structured.promote (::mlir::transform::PromoteOp) 

Syntax:

operation ::= `transform.structured.promote` $target attr-dict

Promotes the specified operands of the target into a separate memory buffer.

At this point, this transform does not allow customizing alloc/dealloc functions nor the behavior on copy in/out operations.

Return modes 

This operation applies to a single Linalg op that satisfies the promoteSubviewsPrecondition, otherwise it fails.

If the operations referred to by the target PDLOperation promote properly, the transform succeeds.

When successful, the return handle points to the $target operation that was modified inplace.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
operands_to_promote | ::mlir::ArrayAttr | 64-bit integer array attribute
use_full_tile_buffers | ::mlir::ArrayAttr | 1-bit boolean array attribute
use_full_tiles_by_default | ::mlir::UnitAttr | unit attribute
use_alloca | ::mlir::UnitAttr | unit attribute
mapping | ::mlir::ArrayAttr | Device Mapping array attribute
alignment | ::mlir::IntegerAttr | 64-bit signless integer attribute

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *

Results: 

Result | Description
transformed | PDL handle to an mlir::Operation *

transform.structured.replace (::mlir::transform::ReplaceOp) 

Syntax:

operation ::= `transform.structured.replace` $target attr-dict-with-keyword regions

Replace all target payload ops with the single op that is contained in this op’s region. All targets must have zero arguments and must be isolated from above.

This op is for debugging/experiments only.

Return modes 

This operation consumes the target handle.

Traits: HasOnlyGraphRegion, IsolatedFromAbove, NoTerminator, SingleBlock

Interfaces: MemoryEffectOpInterface, RegionKindInterface, TransformOpInterface

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *

Results: 

Result | Description
replacement | PDL handle to an mlir::Operation *

transform.structured.rewrite_in_destination_passing_style (::mlir::transform::RewriteInDestinationPassingStyleOp) 

Syntax:

operation ::= `transform.structured.rewrite_in_destination_passing_style` $target attr-dict
              `:` functional-type($target, results)

Rewrite a supported tensor operation that is not in destination-passing style into a form that is in destination-passing style. Currently supported operations are:

  • tensor.pad
  • tensor.generate
  • tensor.from_elements

This dichotomy hints at a future interface; for now, the implementation just switches between different implementations.

Return modes 

This operation ignores unsupported ops and drops them from the return. If all the operations referred to by the target PDLOperation are rewritten properly, the transform succeeds. Otherwise the transform silently fails. The return handle points to a subset of successfully produced operations:

  • tensor.pad case, the returned handle points to the tensor.insert_slice.
  • tensor.generate case, the returned handle points to the linalg.generic.
  • tensor.from_elements case, the returned handle points to the last tensor.insert.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance

Results: 

Result | Description
transformed | TransformHandleTypeInterface instance

transform.structured.scalarize (::mlir::transform::ScalarizeOp) 

Syntax:

operation ::= `transform.structured.scalarize` $target attr-dict

Indicates that ops of a specific kind in the given function should be scalarized (i.e. their dynamic dimensions tiled by 1).
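
A minimal sketch, assuming %tail is a handle to Linalg ops with dynamic dimensions (e.g. remainder ops produced by a previous split):

%scalarized = transform.structured.scalarize %tail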

Return modes: 

This operation ignores non-Linalg ops and drops them in the return. This operation produces definiteFailure if the scalarization fails for any reason. If all the operations referred to by the target PDLOperation scalarize properly, the transform succeeds. Otherwise the transform silently fails.

The return handle points to only the subset of successfully produced tiled-by-1 operations, which can be empty.

This operation does not return handles to the tiled loop. We make this design choice because it is hard to know ahead of time the number of loops that will be produced (it depends on the number of dynamic dimensions after multiple transformations have been applied). Loops can always be recovered by navigating from the tiled operations if needed.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *

Results: 

Result | Description
result | PDL handle to an mlir::Operation *

transform.structured.split (::mlir::transform::SplitOp) 

Indicates that the given target op should be split into two complementary parts, which combined cover the entire iteration domain of the original op. The split is performed along the iteration space dimension provided as attribute. In case of dimension overflow, the transformation fails. The split is performed at the dimension iterator value specified as either the static split point attribute when it is known at transform IR construction time or as the handle to an operation producing a single index-typed value when it is computed by payload IR. In the latter case, the static split point must be set to ShapedType::kDynamic and the dynamic size handle must point to as many value-producing operations as there are structured operations pointed to by the target handle.

The operation consumes the target handle, but preserves the split point handle if provided. It produces two new handles pointing to the two parts of the structured op after splitting, in the same order as the target operand, with the first handle corresponding to the part with lower iteration space indices.
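
A minimal sketch with a dynamic split point, mirroring the composition example shown under transform.structured.multitile_sizes; %target and %split_point are assumed to be handles produced earlier:

%low, %high = transform.structured.split %target after %split_point { dimension = 0 }
    : !transform.any_op, !transform.param<i64>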

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
dimension | ::mlir::IntegerAttr | 64-bit signless integer attribute
static_split_point | ::mlir::IntegerAttr | 64-bit signless integer attribute

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance
dynamic_split_point | transform ‘param’ type or any handle type

Results: 

Result | Description
first | TransformHandleTypeInterface instance
second | TransformHandleTypeInterface instance

transform.structured.split_reduction (::mlir::transform::SplitReductionOp) 

Syntax:

operation ::= `transform.structured.split_reduction` $target attr-dict

Indicates that the given target op should be transformed with the splitReduction transformation and split factor provided as attribute.

The splitReduction transformation splits the first single linalg op reduction into a parallel and reduction dimension. A new linalg.generic op is created to perform the rest of the reduction.

The transformation supports different configurations attributes:

  • split_factor: the factor by which to split (i.e. the size of the remaining reduction after splitting).
  • insert_split_dimension: the dimension in the temporary tensor into which the new parallel dimension is inserted.
  • inner_parallel: specifies whether the parallel dimension is before or after the reduction dimension in the splitting op.
  • use_scaling_algorithm: whether to use a scaling based formulation that does not create an ExpandShapeOp (default: do not use scaling)
  • use_alloc: whether to use an alloc op to allocate the temporary tensor (default: do not use alloc op)

Return modes 

This operation ignores non-Linalg ops and drops them in the return. This operation produces definiteFailure if the splitting fails for any reason.

If all the operations referred to by the target PDLOperation split properly, the transform succeeds. Otherwise the transform silently fails. The 4 returned handles point only to the subset of successfully produced computational operations, which can all be empty. These 4 returned handles point to:

  • the init op (or tensor_alloc op if use_alloc = true),
  • the fill op used to initialize the neutral element,
  • the split op and
  • the result-combining op.

Example (default: use_scaling_algorithm = false, use_alloc = false): 

  %r = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>,
                                        affine_map<(d0) -> ()>],
        iterator_types = ["reduction"]}
  ins(%in : tensor<32xf32>)
  outs(%out : tensor<f32>) {
  ^bb0(%arg1: f32, %arg2: f32):
    %y = arith.addf %arg1, %arg2 : f32
    linalg.yield %y : f32
  } -> tensor<f32>

is split into:

  %cst = arith.constant 0.000000e+00 : f32
  %0 = tensor.expand_shape %in [[0, 1]] : tensor<32xf32> into tensor<4x8xf32>
  %1 = tensor.empty() : tensor<4xf32>
  %2 = linalg.fill ins(%cst : f32) outs(%1 : tensor<4xf32>) -> tensor<4xf32>
  %3 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                                        affine_map<(d0, d1) -> (d0)>],
    iterator_types = ["parallel", "reduction"]}
    ins(%0 : tensor<4x8xf32>) outs(%2 : tensor<4xf32>) {
    ^bb0(%arg3: f32, %arg4: f32):
    %5 = arith.addf %arg3, %arg4 : f32
    linalg.yield %5 : f32
  } -> tensor<4xf32>
  %r = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>,
                                        affine_map<(d0) -> ()>],
    iterator_types = ["reduction"]}
    ins(%3 : tensor<4xf32>) outs(%out : tensor<f32>) {
    ^bb0(%arg3: f32, %arg4: f32):
    %5 = arith.addf %arg3, %arg4 : f32
    linalg.yield %5 : f32
  } -> tensor<f32>

Example (use_scaling_algorithm = true, use_alloc = true): 

Instead of introducing an ExpandShapeOp, this scaling-based implementation rewrites a reduction dimension k into k * split_factor + kk. The dimension kk is added as an extra parallel dimension to the intermediate output tensor at position insert_split_dimension.

Consider a minimal example where k is reduced: O(i, j) += I(i, j, k). Assume i=3, j=5, k=128, split_factor=16 and insert_split_dimension=0. The compute is rewritten as:

  a. O_i(kk, i, j) += I(i, j, 16 * k + kk)
  b. O(i, j) += O_i(kk, i, j)

The intermediate tensor O_i is of shape (128/16)x3x5 == 8x3x5.

Example: 

 %0 = linalg.matmul ins(%A, %B: tensor<16x256xf32>, tensor<256x32xf32>)
   outs(%C: tensor<16x32xf32>) -> tensor<16x32xf32>

Is transformed to:

 #map0 = affine_map<(d0, d1, d2, d3) -> (d0, d2 * 4 + d3)>
 #map1 = affine_map<(d0, d1, d2, d3) -> (d2 * 4 + d3, d1)>
 #map2 = affine_map<(d0, d1, d2, d3) -> (d2, d3)>
 #map3 = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2)>
 #map4 = affine_map<(d0, d1, d2) -> (d0, d1, d2)>
 #map5 = affine_map<(d0, d1, d2) -> (d0, d1)>
 %0 = tensor.empty() : tensor<16x32x64xf32>
 %cst = arith.constant 0.000000e+00 : f32
 %1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<16x32x64xf32>) ->
    tensor<16x32x64xf32>
 %2 = tensor.empty() : tensor<64x4xi1>

 %3 = linalg.generic {indexing_maps = [#map0, #map1, #map2, #map3],
   iterator_types = ["parallel", "parallel", "parallel", "reduction"]}
   ins(%A, %B, %2 : tensor<16x256xf32>, tensor<256x32xf32>, tensor<64x4xi1>)
   outs(%1 : tensor<16x32x64xf32>) {
     ^bb0(%arg3: f32, %arg4: f32, %arg5: i1, %arg6: f32):
       %5 = arith.mulf %arg3, %arg4 : f32
       %6 = arith.addf %arg6, %5 : f32
       linalg.yield %6 : f32
 } -> tensor<16x32x64xf32>

 %4 = linalg.generic {indexing_maps = [#map4, #map5],
   iterator_types = ["parallel", "parallel", "reduction"]}
   ins(%3 : tensor<16x32x64xf32>)
   outs(%C : tensor<16x32xf32>) {
     ^bb0(%arg3: f32, %arg4: f32):
       %5 = arith.addf %arg3, %arg4 : f32
       linalg.yield %5 : f32
 } -> tensor<16x32xf32>

 return %4 : tensor<16x32xf32>

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
split_factor | ::mlir::IntegerAttr | 64-bit signless integer attribute
insert_split_dimension | ::mlir::IntegerAttr | 64-bit signless integer attribute
inner_parallel | ::mlir::UnitAttr | unit attribute
use_scaling_algorithm | ::mlir::UnitAttr | unit attribute
use_alloc | ::mlir::UnitAttr | unit attribute

Operands: 

Operand | Description
target | PDL handle to an mlir::Operation *

Results: 

Result | Description
init_or_alloc_op | PDL handle to an mlir::Operation *
fill_op | PDL handle to an mlir::Operation *
split_linalg_op | PDL handle to an mlir::Operation *
combining_linalg_op | PDL handle to an mlir::Operation *

transform.structured.tile (::mlir::transform::TileOp) 

Indicates that the given target op should be tiled with the given sizes. This transform generates a loop nest with a smaller (“tiled”) target operation in its body. Currently limited to LinalgOps.

Tile sizes may be known at transformation time, in which case they are expected to be provided in the static_size attribute, or not, in which case the tile value must be computed by the payload IR and the handle to the operation computing it must be provided through dynamic_sizes. When the sizes are not known statically, the corresponding entry in the static_sizes attribute must be set to ShapedType::kDynamic. Only the dynamic sizes must be provided in dynamic_sizes, i.e., there should be as many handles as ShapedType::kDynamic values in the static_sizes attribute. A static size of 0 indicates that the dimension should not be tiled. No loop will be generated for such dimensions. If all tile sizes are 0, this transform is effectively a no-op.

This op returns handles to the tiled op (in the generated loop nest) and the generated loops. The number of loops is the number of tile sizes that are statically known to be non-zero.
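
A minimal sketch with static tile sizes, assuming %matmul is a handle to linalg.matmul ops; the form mirrors the tiling steps in the multitile_sizes composition example above:

%tiled, %loop0, %loop1 = transform.structured.tile %matmul [8, 16]
    : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)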

Return modes 

On success, the resulting handles are associated with co-indexed lists of tiled operations and loops around them.

This operation only supports Linalg ops and produces a silenceable failure if the input contains any non-Linalg ops. The ops preceding it in the list associated with the target handle will have been tiled.

This operation produces a silenceable failure if the dynamic_sizes handles are associated with lists of payload operations of a size different than that of the list associated with the target handle.

If the internal implementation of tiling for any of the operations fails, produces a definite failure.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes: 

Attribute | MLIR Type | Description
static_sizes | ::mlir::DenseI64ArrayAttr | i64 dense array attribute
interchange | ::mlir::DenseI64ArrayAttr | i64 dense array attribute

Operands: 

Operand | Description
target | TransformHandleTypeInterface instance
dynamic_sizes | transform ‘param’ type or any handle type

Results: 

Result | Description
tiled_linalg_op | TransformHandleTypeInterface instance
loops | TransformHandleTypeInterface instance

transform.structured.tile_reduction_using_forall (::mlir::transform::TileReductionUsingForallOp) 

Syntax:

operation ::= `transform.structured.tile_reduction_using_forall` $target
              `by`
              (`num_threads` `=` $num_threads^)?
              (`,` `tile_sizes` `=` $tile_sizes^)?
              (`,` `mapping` `=` $mapping^)?
              attr-dict

Tile a PartialReductionOpInterface op to a tiled scf.forall doing partial reduction.

This transformation tiles the target along the reduction dimensions. It creates a tensor initialized with the identity value. Then it creates an scf.forall loop with the number of threads given by num_threads. The op is tiled with a size equal to floordiv(size, num_threads). All the partial reduction values are inserted in parallel to create a new tensor. After the loop, a merge operation is created to do a final reduction with the partial-reductions tensor. If an extra tile_sizes parameter is passed, the tiles are cyclically distributed on the threads of the scf.forall loop.

Return modes 

The 4 returned handles point to:

  • the parent forall op,
  • the fill op used to initialize the neutral element,
  • the parallel tiled op and
  • the result-combining op.

Example: 

  %red = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                                          affine_map<(d0, d1) -> (d0)>],
  iterator_types = ["parallel", "reduction"]}
  ins(%arg0 : tensor<?x?xf32>)
  outs(%out : tensor<?xf32>) {
    ^bb0(%arg7: f32, %arg9: f32):
    %1 = arith.addf %arg7, %arg9 : f32
    linalg.yield %1 : f32
  } -> tensor<?xf32>
  return %red : tensor<?xf32>

is transformed into:

  %0 = tensor.empty(%dim_1) : tensor<?x5xf32>
  %1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x5xf32>) -> tensor<?x5xf32>
  %2 = scf.forall (%arg2) in (%c5) shared_outs(%arg3 = %1) -> (tensor<?x5xf32>) {
    %4 = affine.min #map(%arg2)[%dim_0]
    %5 = affine.max #map1(%4)
    %extracted_slice = tensor.extract_slice %arg3[0, %arg2] [%dim, 1] [1, 1] : tensor<?x5xf32> to tensor<?xf32>
    %6 = affine.apply #map2(%arg2)[%dim_0]
    %extracted_slice_2 = tensor.extract_slice %arg0[0, %6] [%dim, %5] [1, 1] : tensor<?x?xf32> to tensor<?x?xf32>
    %extracted_slice_3 = tensor.extract_slice %extracted_slice[0] [%dim] [1] : tensor<?xf32> to tensor<?xf32>
    %7 = linalg.generic {indexing_maps = [#map3, #map4], iterator_types = ["parallel", "reduction"]} ins(%extracted_slice_2 : tensor<?x?xf32>) outs(%extracted_slice_3 : tensor<?xf32>) {
    ^bb0(%in: f32, %out: f32):
      %9 = arith.addf %in, %out : f32
      linalg.yield %9 : f32
    } -> tensor<?xf32>
    scf.forall.in_parallel {
      tensor.parallel_insert_slice %7 into %arg3[0, %arg2] [%dim, 1] [1, 1] : tensor<?xf32> into tensor<?x5xf32>
    }
  } {mapping = []}
  %3 = linalg.generic {indexing_maps = [#map3, #map4], iterator_types = ["parallel", "reduction"]} ins(%2 : tensor<?x5xf32>) outs(%arg1 : tensor<?xf32>) {
  ^bb0(%in: f32, %out: f32):
    %4 = arith.addf %in, %out : f32
    linalg.yield %4 : f32
  } -> tensor<?xf32>
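
For illustration, the transform IR driving this example could look like the following sketch, where %0 is assumed to be a handle to the linalg.generic reduction above (e.g. produced by a matcher) and the thread count is illustrative:

  // Tile the reduction dimension across 5 threads of an scf.forall; the first
  // (parallel) dimension is left untouched. %0 and the numbers are assumptions.
  %forall, %fill, %partial, %combine =
    transform.structured.tile_reduction_using_forall %0
    by num_threads = [0, 5]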

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes:

  Attribute   | MLIR Type                 | Description
  num_threads | ::mlir::DenseI64ArrayAttr | i64 dense array attribute
  tile_sizes  | ::mlir::DenseI64ArrayAttr | i64 dense array attribute
  mapping     | ::mlir::ArrayAttr         | Device Mapping array attribute

Operands:

  Operand | Description
  target  | PDL handle to an mlir::Operation *

Results:

  Result              | Description
  forall_op           | PDL handle to an mlir::Operation *
  fill_op             | PDL handle to an mlir::Operation *
  split_linalg_op     | PDL handle to an mlir::Operation *
  combining_linalg_op | PDL handle to an mlir::Operation *

transform.structured.tile_reduction_using_scf (::mlir::transform::TileReductionUsingScfOp) 

Syntax:

operation ::= `transform.structured.tile_reduction_using_scf` $target
              `by` `tile_sizes` `=` $tile_sizes
              attr-dict

Indicates that the given target op should be transformed with the tileReduction transformation, using the tile sizes provided as an attribute.

This transformation tiles the target along its reduction dimensions. It creates a tensor initialized with the identity (neutral) value, then creates nested loops with a parallel version of the target op inside. The parallel op dimensions are less than or equal to the tile sizes passed by the user. After the loop, a merge operation is created to perform a final reduction with the partial reductions. The initial tensor always uses the tile-size dimension; this may over-allocate if the tile size is greater than the reduction dimension.

Return modes 

The 4 returned handles point to:

  • the parent for op,
  • the fill op used to initialize the neutral element,
  • the parallel tiled op and
  • the result-combining op.

Example: 

  %red = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                                          affine_map<(d0, d1) -> (d0)>],
  iterator_types = ["parallel", "reduction"]}
  ins(%arg0 : tensor<?x?xf32>)
  outs(%out : tensor<?xf32>) {
    ^bb0(%arg7: f32, %arg9: f32):
    %1 = arith.addf %arg7, %arg9 : f32
    linalg.yield %1 : f32
  } -> tensor<?xf32>
  return %red : tensor<?xf32>

is transformed into:

  %0 = tensor.empty(%dim_1) : tensor<?x5xf32>
  %1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x5xf32>) -> tensor<?x5xf32>
  %2 = scf.for %arg2 = %c0 to %dim_0 step %c5 iter_args(%arg3 = %1) -> (tensor<?x5xf32>) {
    %extracted_slice = tensor.extract_slice %1[0, 0] [%dim, 5] [1, 1] : tensor<?x5xf32> to tensor<?x5xf32>
    %extracted_slice_2 = tensor.extract_slice %arg0[0, %arg2] [%dim, 5] [1, 1] : tensor<?x?xf32> to tensor<?x5xf32>
    %4 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                                          affine_map<(d0, d1) -> (d0, d1)>],
    iterator_types = ["parallel", "parallel"]}
    ins(%extracted_slice_2 : tensor<?x5xf32>)
    outs(%extracted_slice : tensor<?x5xf32>) {
    ^bb0(%in: f32, %out: f32):
      %5 = arith.addf %in, %out : f32
      linalg.yield %5 : f32
    } -> tensor<?x5xf32>
    %dim_3 = tensor.dim %1, %c0 : tensor<?x5xf32>
    %inserted_slice = tensor.insert_slice %4 into %arg3[0, 0] [%dim_3, 5] [1, 1] : tensor<?x5xf32> into tensor<?x5xf32>
    scf.yield %inserted_slice : tensor<?x5xf32>
  }
  %3 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                                        affine_map<(d0, d1) -> (d0)>],
  iterator_types = ["parallel", "reduction"]}
  ins(%2 : tensor<?x5xf32>)
  outs(%arg1 : tensor<?xf32>) {
  ^bb0(%in: f32, %out: f32):
    %4 = arith.addf %in, %out : f32
    linalg.yield %4 : f32
  } -> tensor<?xf32>
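
For illustration, the transform IR driving this example could look like the following sketch, where %0 is assumed to be a handle to the linalg.generic reduction above and the tile size of 5 on the reduction dimension is illustrative:

  // Tile the reduction dimension by 5; dimension 0 (parallel) is not tiled.
  // %0 and the sizes are assumptions for illustration.
  %for, %fill, %partial, %combine =
    transform.structured.tile_reduction_using_scf %0 by tile_sizes = [0, 5]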

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes:

  Attribute  | MLIR Type                 | Description
  tile_sizes | ::mlir::DenseI64ArrayAttr | i64 dense array attribute

Operands:

  Operand | Description
  target  | PDL handle to an mlir::Operation *

Results:

  Result              | Description
  for_op              | PDL handle to an mlir::Operation *
  fill_op             | PDL handle to an mlir::Operation *
  split_linalg_op     | PDL handle to an mlir::Operation *
  combining_linalg_op | PDL handle to an mlir::Operation *

transform.structured.tile_to_forall_op (::mlir::transform::TileToForallOp) 

Syntax:

operation ::= `transform.structured.tile_to_forall_op` $target oilist(
              `num_threads` custom<PackedOrDynamicIndexList>($packed_num_threads,
              $num_threads,
              $static_num_threads) |
              `tile_sizes` custom<PackedOrDynamicIndexList>($packed_tile_sizes,
              $tile_sizes,
              $static_tile_sizes))
              (`(` `mapping` `=` $mapping^ `)`)? attr-dict

Tile a TilingInterface op to a tiled scf.forall.

Tiling is applied by specifying either num_threads or tile_sizes. If num_threads is specified, the tile size for each dimension i is calculated dynamically via ceilDiv(dimSize[i], num_threads[i]). num_threads and tile_sizes can be either static index attributes or SSA values of PDL operation handle type (or a mix thereof). Operation handles must be mapped to exactly one op that has exactly one result of index type.

Static zero tile sizes indicate that the dimension is not tiled and can be thought of as tiling by the full size of data.

It is the user’s responsibility to ensure that num_threads/tile_sizes is a valid tiling specification (i.e. that only tiles parallel dimensions, e.g. in the Linalg case).

If non-empty, the mapping is added as an attribute to the resulting scf.forall.

Note: tile_sizes and num_threads are variadic. Each tile size/number of threads can be an index attribute or a transform handle that is mapped to exactly one payload op with exactly one index result.

Return modes 

This operation ignores ops that do not implement the TilingInterface and drops them in the return.

If all the operations referred to by the target PDLOperation tile successfully, the transform succeeds. Otherwise the transform silently fails.

The two returned handles point to only the subset of successfully produced tiled operations, which can all be empty.

These two returned handles point to:

  • the new scf.forall op,
  • the tiled op that implements TilingInterface.

Example using num_threads 

%0 = pdl_match @match_matmul in %arg1
%3:2 = transform.structured.tile_to_forall_op %0 num_threads [10, 20]

Example using tile_sizes 

%0 = pdl_match @match_matmul in %arg1
%sz = pdl_match @match_size_op in %arg1
%3:2 = transform.structured.tile_to_forall_op %0 tile_sizes [0, %sz, 20]

Traits: AttrSizedOperandSegments

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes:

  Attribute          | MLIR Type                 | Description
  static_num_threads | ::mlir::DenseI64ArrayAttr | i64 dense array attribute
  static_tile_sizes  | ::mlir::DenseI64ArrayAttr | i64 dense array attribute
  mapping            | ::mlir::ArrayAttr         | Device Mapping array attribute

Operands:

  Operand            | Description
  target             | PDL handle to an mlir::Operation *
  num_threads        | PDL handle to an mlir::Operation *
  tile_sizes         | PDL handle to an mlir::Operation *
  packed_num_threads | PDL handle to an mlir::Operation *
  packed_tile_sizes  | PDL handle to an mlir::Operation *

Results:

  Result    | Description
  forall_op | PDL handle to an mlir::Operation *
  tiled_op  | PDL handle to an mlir::Operation *

transform.structured.tile_to_scf_for (::mlir::transform::TileToScfForOp) 

Indicates that the given target op should be tiled with the given sizes. This transform generates a loop nest with a smaller (“tiled”) target operation in its body. The target must implement TilingInterface.

Tile sizes may be known at transformation time, in which case they are expected to be provided in the static_size attribute, or not, in which case the tile value must be computed by the payload IR and the handle to the operation computing it must be provided through dynamic_sizes. When the sizes are not known statically, the corresponding entry in the static_sizes attribute must be set to ShapedType::kDynamic. Only the dynamic sizes must be provided in dynamic_sizes, i.e., there should be as many handles as ShapedType::kDynamic values in the static_sizes attribute. A static size of 0 indicates that the dimension should not be tiled. No loop will be generated for such dimensions. If all tile sizes are 0, this transform is effectively a no-op.

This op returns handles to the tiled op (in the generated loop nest) and the generated loops. The number of loops is the number of tile sizes that are statically known to be non-zero.
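
For illustration, and assuming this op shares the bracketed size-list syntax of transform.structured.tile (its syntax is not reproduced here), a mixed static/dynamic invocation could look like the following sketch; %target and %sz are hypothetical handles:

  // Tile dimension 0 by a static size of 4, dimension 1 by a size computed in
  // the payload IR (handle %sz), and leave dimension 2 untiled. Two loops are
  // generated, matching the two non-zero sizes. All names are assumptions.
  %tiled, %loops:2 = transform.structured.tile_to_scf_for %target [4, %sz, 0]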

Return modes 

On success, the resulting handles are associated with co-indexed lists of tiled operations and loops around them.

This operation only supports TilingInterface ops and produces a silenceable failure if the input contains any non-TilingInterface ops. The ops preceding it in the list associated with the target handle will have been tiled.

This operation produces a silenceable failure if the dynamic_sizes handles are associated with lists of payload operations whose size differs from that of the list associated with the target handle.

If the internal implementation of tiling fails for any of the operations, a definite failure is produced.

Interfaces: MemoryEffectOpInterface, TransformOpInterface

Attributes:

  Attribute    | MLIR Type                 | Description
  static_sizes | ::mlir::DenseI64ArrayAttr | i64 dense array attribute
  interchange  | ::mlir::DenseI64ArrayAttr | i64 dense array attribute

Operands:

  Operand       | Description
  target        | PDL handle to an mlir::Operation *
  dynamic_sizes | PDL handle to an mlir::Operation *

Results:

  Result          | Description
  tiled_linalg_op | PDL handle to an mlir::Operation *
  loops           | PDL handle to an mlir::Operation *

transform.structured.vectorize (::mlir::transform::VectorizeOp) 

Syntax:

operation ::= `transform.structured.vectorize` $target attr-dict

Indicates that the given target op and all the ops it contains should be vectorized with the configuration specified by the attributes of this op. This vectorization only handles structured ops that operate on shaped types and does not vectorize loops or straight-line code. Internally, it applies a set of rewrite patterns, some of which enable vectorization and some of which clean up the results. Therefore, it can only be applied to an op with the “isolated from above” property. If finer granularity is required, it can be achieved by outlining the target part of the payload IR into, e.g., a function, performing the transformation, and inlining it back. This transformation only fails if the entire pattern rewriting failed, i.e., it does not fail when no ops were vectorized.

Note that this transformation invalidates the handles to any payload IR operation contained inside the vectorization target.

This transformation supports the following attributes:

  • vectorize_padding: a UnitAttr to activate the vectorization of tensor.pad ops. Different pipelines may prefer to lower such ops to loops.
  • disable_multi_reduction_to_contract_patterns: a UnitAttr to deactivate the rewrite of vector.multi_reduction to vector.contract. This is intended to be used in tests only.
  • disable_transfer_permutation_map_lowering_patterns: a UnitAttr to deactivate the rewrite of vector.transfer with permutation maps into explicit vector.transpose operations. This is intended to be used in tests only but may be promoted to a first-class attribute in the future.

Return modes: 

This operation produces a definite failure if vectorization fails for any reason. The operation always returns the handle to the target op that is expected to be isolated from above.
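
For illustration, a minimal invocation could look like the following sketch, where %0 is assumed to be a handle to an isolated-from-above payload op such as a func.func:

  // Vectorize all structured ops inside the payload op mapped to %0, also
  // vectorizing tensor.pad ops. %0 is an assumption for illustration.
  %1 = transform.structured.vectorize %0 {vectorize_padding}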

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes:

  Attribute                                           | MLIR Type        | Description
  vectorize_padding                                   | ::mlir::UnitAttr | unit attribute
  vectorize_nd_extract                                | ::mlir::UnitAttr | unit attribute
  disable_multi_reduction_to_contract_patterns        | ::mlir::UnitAttr | unit attribute
  disable_transfer_permutation_map_lowering_patterns  | ::mlir::UnitAttr | unit attribute

Operands:

  Operand | Description
  target  | PDL handle to an mlir::Operation *

Results:

  Result      | Description
  transformed | PDL handle to an mlir::Operation *

Vector Transform Operations 

transform.vector.apply_rank_reducing_subview_patterns (::mlir::transform::ApplyRankReducingSubviewPatternsOp) 

Syntax:

operation ::= `transform.vector.apply_rank_reducing_subview_patterns` $target
              attr-dict
              `:` functional-type($target, results)

Apply opt-in rank-reducing subview patterns that include:

  • TransferReadDropUnitDimsPattern
  • TransferWriteDropUnitDimsPattern

These patterns have the effect of rewriting a vector.transfer with unit dimensions into a rank-reduced version thanks to subview operations. This is complemented by shape_cast folding patterns.
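
For illustration, applying these patterns from the transform IR could look like the following sketch, where %func is assumed to be a handle (e.g. of type !transform.any_op) to an isolated-from-above payload function:

  // Rewrite unit-dimension vector.transfer ops under %func into rank-reduced
  // forms. %func is an assumption for illustration.
  %0 = transform.vector.apply_rank_reducing_subview_patterns %func
    : (!transform.any_op) -> !transform.any_op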

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.apply_transfer_permutation_patterns (::mlir::transform::ApplyTransferPermutationPatternsOp) 

Syntax:

operation ::= `transform.vector.apply_transfer_permutation_patterns` $target
              attr-dict
              `:` functional-type($target, results)

Apply opt-in vector transfer permutation patterns that include:

  • TransferReadPermutationLowering
  • TransferWritePermutationLowering
  • TransferOpReduceRank
  • TransferWriteNonPermutationLowering

These patterns have the effect of rewriting a vector.transfer with an arbitrary permutation_map to a vector.transfer with a permutation_map that is a minor identity followed by a vector.transpose.

In other words, this makes the vector.transfer contiguous on the most minor dimensions and materializes the permutation_map as a vector.transpose.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.lower_broadcast (::mlir::transform::LowerBroadcastOp) 

Syntax:

operation ::= `transform.vector.lower_broadcast` $target
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector broadcast operations nested under the isolated from above op target should be lowered to finer-grained vector primitives.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.lower_contraction (::mlir::transform::LowerContractionOp) 

Syntax:

operation ::= `transform.vector.lower_contraction` $target
              (`lowering_strategy` `=` $lowering_strategy^)?
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector contraction-like operations nested under the isolated from above op target should be lowered to finer-grained vector primitives.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.
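
For illustration, a typical lowering chain could look like the following sketch, where %func is assumed to be a handle to an isolated-from-above payload function and "outerproduct" is one illustrative value of the vector.contract lowering strategy:

  // Lower vector.contract to vector.outerproduct, then lower the resulting
  // outer products to finer-grained primitives. %func and the strategy value
  // are assumptions for illustration.
  %0 = transform.vector.lower_contraction %func lowering_strategy = "outerproduct"
    : (!transform.any_op) -> !transform.any_op
  %1 = transform.vector.lower_outerproduct %0
    : (!transform.any_op) -> !transform.any_op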

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes:

  Attribute         | MLIR Type                                  | Description
  lowering_strategy | ::mlir::vector::VectorContractLoweringAttr | control the lowering of vector.contract operations.

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.lower_mask (::mlir::transform::LowerMaskOp) 

Syntax:

operation ::= `transform.vector.lower_mask` $target
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector mask operations nested under the isolated from above op target should be lowered to finer-grained vector primitives.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.lower_multi_reduction (::mlir::transform::LowerMultiReductionOp) 

Syntax:

operation ::= `transform.vector.lower_multi_reduction` $target
              (`lowering_strategy` `=` $lowering_strategy^)?
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector multi_reduction-like operations nested under the isolated from above op target should be lowered to finer-grained vector primitives.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes:

  Attribute         | MLIR Type                                        | Description
  lowering_strategy | ::mlir::vector::VectorMultiReductionLoweringAttr | control the lowering of vector.multi_reduction.

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.lower_outerproduct (::mlir::transform::LowerOuterProductOp) 

Syntax:

operation ::= `transform.vector.lower_outerproduct` $target
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector outerproduct operations nested under the isolated from above op target should be lowered to finer-grained vector primitives.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.lower_shape_cast (::mlir::transform::LowerShapeCastOp) 

Syntax:

operation ::= `transform.vector.lower_shape_cast` $target
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector shape_cast operations nested under the isolated from above op target should be lowered to finer-grained vector primitives.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.lower_transfer (::mlir::transform::LowerTransferOp) 

Syntax:

operation ::= `transform.vector.lower_transfer` $target
              (`max_transfer_rank` `=` $max_transfer_rank^)?
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector transfer operations nested under the isolated from above op target should be lowered to finer-grained vector primitives.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes:

  Attribute         | MLIR Type           | Description
  max_transfer_rank | ::mlir::IntegerAttr | 64-bit signless integer attribute

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.lower_transpose (::mlir::transform::LowerTransposeOp) 

Syntax:

operation ::= `transform.vector.lower_transpose` $target
              oilist (
              `lowering_strategy` `=` $lowering_strategy
              | `avx2_lowering_strategy` `=` $avx2_lowering_strategy
              )
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector transpose-like operations nested under the isolated from above op target should be lowered to finer-grained vector primitives.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.
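
For illustration, enabling the AVX2-specific lowering could look like the following sketch, where %func is assumed to be a handle to an isolated-from-above payload function:

  // Lower vector.transpose ops under %func, opting into the AVX2-specialized
  // lowering patterns. %func is an assumption for illustration.
  %0 = transform.vector.lower_transpose %func avx2_lowering_strategy = true
    : (!transform.any_op) -> !transform.any_op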

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes:

  Attribute              | MLIR Type                                   | Description
  lowering_strategy      | ::mlir::vector::VectorTransposeLoweringAttr | control the lowering of vector.transpose operations.
  avx2_lowering_strategy | ::mlir::BoolAttr                            | bool attribute

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.split_transfer_full_partial (::mlir::transform::SplitTransferFullPartialOp) 

Syntax:

operation ::= `transform.vector.split_transfer_full_partial` $target
              (`split_transfer_strategy` `=` $split_transfer_strategy^)?
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector transfer operations nested under the isolated from above op target should be split to full and partial parts.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes:

  Attribute               | MLIR Type                               | Description
  split_transfer_strategy | ::mlir::vector::VectorTransferSplitAttr | control the splitting of vector.transfer operations into in-bounds and out-of-bounds variants.

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

transform.vector.transfer_to_scf (::mlir::transform::TransferToScfOp) 

Syntax:

operation ::= `transform.vector.transfer_to_scf` $target
              oilist (
              `max_transfer_rank` `=` $max_transfer_rank
              | `full_unroll` `=` $full_unroll
              )
              attr-dict
              `:` functional-type($target, results)

Indicates that the vector transfer operations nested under the isolated from above op target should be rewritten with scf.for loops over finer-grained vector primitives.

This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.
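
For illustration, an invocation could look like the following sketch, where %func is assumed to be a handle to an isolated-from-above payload function and the attribute values are illustrative:

  // Rewrite vector transfer ops under %func using scf loops, fully unrolling
  // the generated loops. %func and the attribute values are assumptions.
  %0 = transform.vector.transfer_to_scf %func max_transfer_rank = 1 full_unroll = true
    : (!transform.any_op) -> !transform.any_op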

Traits: FunctionalStyleTransformOpTrait, TransformEachOpTrait, TransformWithPatternsOpTrait

Interfaces: MemoryEffectsOpInterface, TransformOpInterface

Attributes:

  Attribute         | MLIR Type           | Description
  max_transfer_rank | ::mlir::IntegerAttr | 64-bit signless integer attribute
  full_unroll       | ::mlir::BoolAttr    | bool attribute

Operands:

  Operand | Description
  target  | TransformHandleTypeInterface instance

Results:

  Result  | Description
  results | TransformHandleTypeInterface instance

TransformHandleTypeInterface (TransformHandleTypeInterface)

Types that can be used for the Transform dialect operation handle values. Such types define the properties of Payload IR operations associated with the handle. A user of such a handle can assume that these properties have been verified for any Payload IR operation associated with it.

Methods: 

checkPayload 

::mlir::DiagnosedSilenceableFailure checkPayload(::mlir::Location loc, ::mlir::ArrayRef<::mlir::Operation *> payload);

Checks if the given associated objects (Payload IR operations or attributes) satisfy the conditions defined by this type. If not, produces a silenceable error at the specified location.

NOTE: This method must be implemented by the user.

TransformParamTypeInterface (TransformParamTypeInterface)

Types that can be used for the Transform dialect parameter values. Such types define the structure of the parameters associated with the value, e.g., their underlying type. A user of the value can assume that the parameter has been verified.

Methods: 

checkPayload 

::mlir::DiagnosedSilenceableFailure checkPayload(::mlir::Location loc, ::mlir::ArrayRef<::mlir::Attribute> payload);

Checks if the given associated objects (Payload IR operations or attributes) satisfy the conditions defined by this type. If not, produces a silenceable error at the specified location.

NOTE: This method must be implemented by the user.

TransformValueHandleTypeInterface (TransformValueHandleTypeInterface)

Types that can be used for the Transform dialect handle values pointing to Payload IR values. Such types define the properties of Payload IR values associated with the handle. Users of such a handle can assume that these properties have been verified for any Payload IR value associated with it.

Methods: 

checkPayload 

::mlir::DiagnosedSilenceableFailure checkPayload(::mlir::Location loc, ::mlir::ArrayRef<::mlir::Value> payload);

Checks if the given associated objects (Payload IR operations or attributes) satisfy the conditions defined by this type. If not, produces a silenceable error at the specified location.

NOTE: This method must be implemented by the user.

TransformOpInterface (TransformOpInterface)

This interface is to be implemented by operations that identify transformations to be performed on other operations. The former are referred to as transform IR operations. The latter are referred to as payload IR operations. Such transform IR operations provide a fine-grain control mechanism over how transformations are applied by using and defining transform IR values, referred to as handles, that correspond to sets of operations in the payload IR. Transformations are applied starting from the operations identified by handles, but may affect other operations as well. Further restrictions may be imposed by flows that rely on transform IR operations to control transformations.

Methods: 

apply 

::mlir::DiagnosedSilenceableFailure apply(::mlir::transform::TransformResults &transformResults, ::mlir::transform::TransformState &state);

Applies the transformation represented by the current operation. This accepts as arguments the object that must be populated with results of the current transformation and a transformation state object that can be used for queries, e.g., to obtain the list of operations on which the transformation represented by the current op is targeted. Returns a special status object indicating whether the transformation succeeded or failed, and, if it failed, whether the failure is recoverable or not.

NOTE: This method must be implemented by the user.

allowsRepeatedHandleOperands 

bool allowsRepeatedHandleOperands();

Indicates whether the op instance allows its handle operands to be associated with the same payload operations.

NOTE: This method must be implemented by the user.