mlir.dialects._memref_transform_ops_gen

Attributes

Classes

ApplyAllocToAllocaOp

Collects patterns to rewrite scoped dynamic allocation (alloc/dealloc pairs) into automatic allocation (alloca) in the same scope, for memrefs of static shape.

ApplyExpandOpsPatternsOp

Collects patterns to rewrite ops within the memref dialect.

ApplyExpandStridedMetadataPatternsOp

Collects patterns for expanding memref operations that modify the metadata (sizes, offset, strides) of a memref into easier to analyze constructs.

ApplyExtractAddressComputationsPatternsOp

Collects patterns for extracting address computations from operations with memory accesses such that these memory accesses use only a base pointer.

ApplyFoldMemrefAliasOpsPatternsOp

Collects patterns for folding memref aliasing ops (memref.subview) into consumer load/store ops.

ApplyResolveRankedShapedTypeResultDimsPatternsOp

Collects patterns that resolve memref.dim operations whose source values are defined by operations implementing the ReifyRankedShapedTypeOpInterface.

MemRefAllocaToGlobalOp

Inserts a new memref.global for each provided memref.alloca into the nearest symbol table and replaces it with a memref.get_global.

MemRefEraseDeadAllocAndStoresOp

Applies memory optimizations on memref ops: store-to-load forwarding, dead store elimination, and dead alloc/alloca elimination.

MemRefMakeLoopIndependentOp

Rewrites the targeted ops such that their index-typed operands no longer depend on any enclosing loop induction variable.

MemRefMultiBufferOp

Transformation to do multi-buffering/array expansion to remove dependencies on a temporary allocation between consecutive loop iterations.

MemrefToLLVMTypeConverterOp

Provides an "LLVMTypeConverter" that lowers memref types to LLVM types.

Functions

Module Contents

mlir.dialects._memref_transform_ops_gen._ods_ir
class mlir.dialects._memref_transform_ops_gen.ApplyAllocToAllocaOp(*, size_limit=None, loc=None, ip=None)

Bases: _ods_ir

Collects patterns to rewrite scoped dynamic allocation (alloc/dealloc pairs) into automatic allocation (alloca) in the same scope, for memrefs of static shape.

The size_limit attribute controls the maximum allocated memory (in bytes, subject to data layout) for which the pattern applies.
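
Example

Like the other transform.apply_patterns.memref.* ops in this module, this op is only legal inside the patterns region of a transform.apply_patterns op. The following is a minimal sketch of a transform script; the named-sequence scaffolding and the match on func.func are illustrative, and any handle of the right kind can serve as the apply_patterns target:

module attributes {transform.with_named_sequence} {
  transform.named_sequence @__transform_main(%root: !transform.any_op {transform.readonly}) {
    // Restrict pattern application to the functions in the payload.
    %funcs = transform.structured.match ops{["func.func"]} in %root
        : (!transform.any_op) -> !transform.any_op
    transform.apply_patterns to %funcs {
      // Rewrite small, statically shaped alloc/dealloc pairs into allocas.
      // An optional size_limit attribute bounds the allocation size in bytes.
      transform.apply_patterns.memref.alloc_to_alloca
    } : !transform.any_op
    transform.yield
  }
}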

OPERATION_NAME = 'transform.apply_patterns.memref.alloc_to_alloca'
_ODS_REGIONS = (0, True)
size_limit() _ods_ir | None
mlir.dialects._memref_transform_ops_gen.apply_patterns_memref_alloc_to_alloca(*, size_limit=None, loc=None, ip=None) ApplyAllocToAllocaOp
class mlir.dialects._memref_transform_ops_gen.ApplyExpandOpsPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Collects patterns to rewrite ops within the memref dialect.

  • Converts atomic_rmw that cannot be lowered to a simple atomic op with the AtomicRMWOpLowering pattern, e.g. with "minf" or "maxf" attributes, to memref.generic_atomic_rmw with the expanded code.
  • Converts memref.reshape that has a target shape of a statically-known size to memref.reinterpret_cast.

OPERATION_NAME = 'transform.apply_patterns.memref.expand_ops'
_ODS_REGIONS = (0, True)
mlir.dialects._memref_transform_ops_gen.apply_patterns_memref_expand_ops(*, loc=None, ip=None) ApplyExpandOpsPatternsOp
class mlir.dialects._memref_transform_ops_gen.ApplyExpandStridedMetadataPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Collects patterns for expanding memref operations that modify the metadata (sizes, offset, strides) of a memref into easier to analyze constructs.
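
Example

A sketch of the effect on a memref.subview; the shapes, values, and the exact affine arithmetic are illustrative:

// Before: the subview computes a new offset, sizes and strides implicitly.
%sv = memref.subview %A[%i, 0] [8, 16] [1, 1]
    : memref<64x64xf32> to memref<8x16xf32, strided<[64, 1], offset: ?>>

// After (sketch): the metadata is extracted and recomputed explicitly, then
// reassembled with a reinterpret_cast, which is easier to analyze and lower.
%base, %offset, %sizes:2, %strides:2 = memref.extract_strided_metadata %A
    : memref<64x64xf32> -> memref<f32>, index, index, index, index, index
%new_offset = affine.apply affine_map<(d0) -> (d0 * 64)>(%i)
%sv = memref.reinterpret_cast %base to offset: [%new_offset], sizes: [8, 16], strides: [64, 1]
    : memref<f32> to memref<8x16xf32, strided<[64, 1], offset: ?>>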

OPERATION_NAME = 'transform.apply_patterns.memref.expand_strided_metadata'
_ODS_REGIONS = (0, True)
mlir.dialects._memref_transform_ops_gen.apply_patterns_memref_expand_strided_metadata(*, loc=None, ip=None) ApplyExpandStridedMetadataPatternsOp
class mlir.dialects._memref_transform_ops_gen.ApplyExtractAddressComputationsPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Collects patterns for extracting address computations from operations with memory accesses such that these memory accesses use only a base pointer.

For instance,

memref.load %base[%off0, ...]

will be rewritten into:

%new_base = memref.subview %base[%off0,...][1,...][1,...]
memref.load %new_base[%c0,...]
OPERATION_NAME = 'transform.apply_patterns.memref.extract_address_computations'
_ODS_REGIONS = (0, True)
mlir.dialects._memref_transform_ops_gen.apply_patterns_memref_extract_address_computations(*, loc=None, ip=None) ApplyExtractAddressComputationsPatternsOp
class mlir.dialects._memref_transform_ops_gen.ApplyFoldMemrefAliasOpsPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Collects patterns for folding memref aliasing ops (memref.subview) into consumer load/store ops (affine.load, memref.load, nvgpu.ldmatrix, vector.load, vector.transfer_read, affine.store, memref.store, etc.) and other ops (e.g., memref.subview).
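
Example

An illustration of folding a memref.subview into a consuming memref.load; the values and shapes are hypothetical, and the same folding applies to the other listed consumers:

// Before: the load goes through a subview of %A.
%sv = memref.subview %A[%i, 0] [1, 16] [1, 1]
    : memref<64x64xf32> to memref<1x16xf32, strided<[64, 1], offset: ?>>
%v = memref.load %sv[%c0, %j] : memref<1x16xf32, strided<[64, 1], offset: ?>>

// After folding (sketch): the load addresses %A directly; the combined indices
// may first be materialized as affine.apply ops before folding away.
%v = memref.load %A[%i, %j] : memref<64x64xf32>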

OPERATION_NAME = 'transform.apply_patterns.memref.fold_memref_alias_ops'
_ODS_REGIONS = (0, True)
mlir.dialects._memref_transform_ops_gen.apply_patterns_memref_fold_memref_alias_ops(*, loc=None, ip=None) ApplyFoldMemrefAliasOpsPatternsOp
class mlir.dialects._memref_transform_ops_gen.ApplyResolveRankedShapedTypeResultDimsPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Collects patterns that resolve memref.dim operations whose source values are defined by operations implementing the ReifyRankedShapedTypeOpInterface, in terms of the shapes of those operations' input operands.

OPERATION_NAME = 'transform.apply_patterns.memref.resolve_ranked_shaped_type_result_dims'
_ODS_REGIONS = (0, True)
mlir.dialects._memref_transform_ops_gen.apply_patterns_memref_resolve_ranked_shaped_type_result_dims(*, loc=None, ip=None) ApplyResolveRankedShapedTypeResultDimsPatternsOp
class mlir.dialects._memref_transform_ops_gen.MemRefAllocaToGlobalOp(getGlobal, global_, alloca, *, loc=None, ip=None)

Bases: _ods_ir

Inserts a new memref.global for each provided memref.alloca into the nearest symbol table (e.g., a builtin.module) and replaces it with a memref.get_global. This is useful, for example, for allocations that should reside in the shared memory of a GPU, which have to be declared as globals.

Example

Consider the following transform op:

%get_global, %global =
    transform.memref.alloca_to_global %alloca
      : (!transform.op<"memref.alloca">)
        -> (!transform.any_op, !transform.any_op)

and the following input payload:

module {
  func.func @func() {
    %alloca = memref.alloca() : memref<2x32xf32>
    // usages of %alloca...
  }
}

then applying the transform op to the payload would result in the following output IR:

module {
  memref.global "private" @alloc : memref<2x32xf32>
  func.func @func() {
    %alloca = memref.get_global @alloc : memref<2x32xf32>
    // usages of %alloca...
  }
}

Return modes

Succeeds always. The returned handles refer to the memref.get_global and memref.global ops that were inserted by the transformation.

OPERATION_NAME = 'transform.memref.alloca_to_global'
_ODS_REGIONS = (0, True)
alloca() _ods_ir
getGlobal() _ods_ir
global_() _ods_ir
mlir.dialects._memref_transform_ops_gen.memref_alloca_to_global(get_global, global_, alloca, *, loc=None, ip=None) _ods_ir
class mlir.dialects._memref_transform_ops_gen.MemRefEraseDeadAllocAndStoresOp(target, *, loc=None, ip=None)

Bases: _ods_ir

This transform applies memory optimizations on memref ops. In particular, it performs store-to-load forwarding, dead store elimination, and dead alloc/alloca elimination.
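
Example

An illustration of the combined effect; the ops and values are hypothetical:

// Before: a temporary allocation whose stored value is immediately reloaded.
%tmp = memref.alloc() : memref<16xf32>
memref.store %v, %tmp[%i] : memref<16xf32>
%r = memref.load %tmp[%i] : memref<16xf32>
"some_use"(%r) : (f32) -> ()

// After (sketch): the load is forwarded from the store, the store becomes dead,
// and the now-unused allocation is erased.
"some_use"(%v) : (f32) -> ()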

Return modes

This operation applies a set of memory optimizations to the whole region of the operand.

The transformation does not consume the target handle. It modifies the payload. Dead allocations, loads and stores are silently dropped from all mappings.

OPERATION_NAME = 'transform.memref.erase_dead_alloc_and_stores'
_ODS_REGIONS = (0, True)
target() _ods_ir
mlir.dialects._memref_transform_ops_gen.memref_erase_dead_alloc_and_stores(target, *, loc=None, ip=None) MemRefEraseDeadAllocAndStoresOp
class mlir.dialects._memref_transform_ops_gen.MemRefMakeLoopIndependentOp(transformed, target, num_loops, *, loc=None, ip=None)

Bases: _ods_ir

Rewrite the targeted ops such that their index-typed operands no longer depend on any loop induction variable of the num_loops enclosing scf.for loops. I.e., compute an upper bound that is independent of any such loop IV for every memref dimension. The transformed op could then be hoisted from the num_loops enclosing loops. To preserve the original semantics, a memref.subview is inserted inside the loop, as shown in the example below.

Currently supported operations are:

  • memref.alloca: Replaced with a new memref.alloca with upper bound sizes, followed by a memref.subview.
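
Example

A sketch of the rewrite for a memref.alloca whose size depends on a loop induction variable; the bounds and types are illustrative:

// Before: the allocation size depends on the induction variable %i.
scf.for %i = %c0 to %c128 step %c1 {
  %a = memref.alloca(%i) : memref<?xf32>
  // ... uses of %a ...
}

// After (sketch, num_loops = 1): %ub is a loop-independent upper bound for %i
// computed by the transform; the alloca can subsequently be hoisted, and the
// subview inside the loop restores the original size. The returned handle
// points to the memref.subview op.
scf.for %i = %c0 to %c128 step %c1 {
  %a_ub = memref.alloca(%ub) : memref<?xf32>
  %a = memref.subview %a_ub[0] [%i] [1]
      : memref<?xf32> to memref<?xf32, strided<[1]>>
  // ... uses of %a ...
}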

Return modes

This operation fails if at least one induction variable could not be eliminated. In case the targeted op is already independent of induction variables, this transform succeeds and returns the unmodified target op.

Otherwise, the returned handle points to a subset of the produced ops:

  • memref.alloca: The returned handle points to the memref.subview op.

This transform op consumes the target handle and produces a result handle.

OPERATION_NAME = 'transform.memref.make_loop_independent'
_ODS_REGIONS = (0, True)
target() _ods_ir
num_loops() _ods_ir
transformed() _ods_ir
mlir.dialects._memref_transform_ops_gen.memref_make_loop_independent(transformed, target, num_loops, *, loc=None, ip=None) _ods_ir
class mlir.dialects._memref_transform_ops_gen.MemRefMultiBufferOp(transformed, target, factor, *, skip_analysis=None, loc=None, ip=None)

Bases: _ods_ir

Transformation to do multi-buffering/array expansion to remove dependencies on the temporary allocation between consecutive loop iterations. This transform expands the size of an allocation by the given multiplicative factor and fixes up any users of the multi-buffered allocation. If skip_analysis is not set, the transformation only applies if it can prove that no data is carried across loop iterations.
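
Example

A sketch of multi-buffering an allocation that is reused across iterations; the shapes, loop bounds, and the exact slot computation are illustrative:

// Before: %buf is written and read in every iteration, so consecutive
// iterations are serialized on the same allocation.
%buf = memref.alloc() : memref<16xf32>
scf.for %i = %c0 to %c128 step %c1 {
  // ... write %buf, then read it ...
}

// After multi-buffering with factor = 2 (sketch): the allocation is expanded
// and each iteration works on its own slice, selected by the iteration number
// modulo the factor.
%buf = memref.alloc() : memref<2x16xf32>
scf.for %i = %c0 to %c128 step %c1 {
  %slot = affine.apply affine_map<(d0) -> (d0 mod 2)>(%i)
  %slice = memref.subview %buf[%slot, 0] [1, 16] [1, 1]
      : memref<2x16xf32> to memref<16xf32, strided<[1], offset: ?>>
  // ... write %slice, then read it ...
}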

Return modes

This operation returns the new allocation if multi-buffering succeeds, and failure otherwise.

OPERATION_NAME = 'transform.memref.multibuffer'
_ODS_REGIONS = (0, True)
target() _ods_ir
factor() _ods_ir
skip_analysis() bool
transformed() _ods_ir
mlir.dialects._memref_transform_ops_gen.memref_multibuffer(transformed, target, factor, *, skip_analysis=None, loc=None, ip=None) _ods_ir
class mlir.dialects._memref_transform_ops_gen.MemrefToLLVMTypeConverterOp(*, use_aligned_alloc=None, index_bitwidth=None, use_generic_functions=None, use_bare_ptr_call_conv=None, data_layout=None, loc=None, ip=None)

Bases: _ods_ir

This operation provides an "LLVMTypeConverter" that lowers memref types to LLVM types.

The type converter can be customized as follows:

  • use_aligned_alloc: Use aligned_alloc in place of malloc for heap allocations.
  • index_bitwidth: Bitwidth of the index type, "0" indicates the size of a machine word.
  • use_generic_functions: Use generic allocation and deallocation functions instead of the classic "malloc", "aligned_alloc" and "free" functions.
  • use_bare_ptr_call_conv: Replace FuncOp's MemRef arguments with bare pointers to the MemRef element types.
  • data_layout: String description (LLVM format) of the data layout that is expected on the produced module.

Note: upstream marks the last two options with a TODO; they do not really make sense for memref_to_llvm_type_converter specifically, and a single to_llvm_type_converter is intended to replace them.
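
Example

A sketch of how this type converter is typically referenced from the type_converter region of a transform.apply_conversion_patterns op; the surrounding op, the "memref" dialect_to_llvm pattern set, and the legality attributes are illustrative and may vary across MLIR versions:

transform.apply_conversion_patterns to %module_handle {
  // Populate the memref-to-LLVM conversion patterns.
  transform.apply_conversion_patterns.dialect_to_llvm "memref"
} with type_converter {
  // Options such as index_bitwidth or use_generic_functions can be set as
  // attributes on this op.
  transform.apply_conversion_patterns.memref.memref_to_llvm_type_converter
} {legal_dialects = ["llvm"]} : !transform.any_op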

OPERATION_NAME = 'transform.apply_conversion_patterns.memref.memref_to_llvm_type_converter'
_ODS_REGIONS = (0, True)
use_aligned_alloc() _ods_ir | None
index_bitwidth() _ods_ir | None
use_generic_functions() _ods_ir | None
use_bare_ptr_call_conv() _ods_ir | None
data_layout() _ods_ir | None
mlir.dialects._memref_transform_ops_gen.apply_conversion_patterns_memref_memref_to_llvm_type_converter(*, use_aligned_alloc=None, index_bitwidth=None, use_generic_functions=None, use_bare_ptr_call_conv=None, data_layout=None, loc=None, ip=None) MemrefToLLVMTypeConverterOp