mlir.dialects._tensor_transform_ops_gen

Attributes

Classes

ApplyBubbleUpExtractSlicePatternsOp

Indicates that producers of tensor.extract_slice should swap and operate on the result of the slice.

ApplyDecomposeTensorConcatPatternsOp

Indicates that tensor.concat ops should be decomposed into a chain of tensor.insert_slice operations inserting into a materialized destination.

ApplyDropRedundantInsertSliceRankExpansionPatternsOp

Indicates that redundant tensor.insert_slice rank reductions should be dropped.

ApplyFoldTensorEmptyPatternsOp

Indicates that tensor.extract_slice and reassociative reshapes should be folded into tensor.empty.

ApplyFoldTensorSubsetOpsIntoVectorTransfersPatternsOp

Indicates that tensor.extract_slice -> vector.transfer_read and vector.transfer_write -> tensor.insert_slice op chains should be folded into vector transfer read and write ops.

ApplyFoldTensorSubsetOpsPatternsOp

Indicates that tensor.empty should be folded with tensor.extract_slice, tensor.expand_shape and tensor.collapse_shape.

ApplyMergeConsecutiveInsertExtractSlicePatternsOp

Indicates that consecutive tensor.extract_slice/tensor.insert_slice ops should be merged into a single op.

ApplyReassociativeReshapeFoldingPatternsOp

Indicates that reassociative reshapes (tensor.collapse_shape / tensor.expand_shape) should be folded with inverse rank expansions / rank reductions.

ApplyRewriteTensorOpsAsConstantPatternsOp

Indicates that tensor ops (such as tensor.generate) should be replaced with constants (arith.constant) when possible.

MakeLoopIndependentOp

Rewrite the targeted ops such that their index-typed operands no longer depend on any loop induction variable of the num_loops enclosing scf.for loops.

TypeConversionCastShapeDynamicDimsOp

Populates a type converter with conversion materialization functions that cast a tensor value between two cast-compatible tensors.

Functions

Module Contents

mlir.dialects._tensor_transform_ops_gen._ods_ir
class mlir.dialects._tensor_transform_ops_gen.ApplyBubbleUpExtractSlicePatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Indicates that producers of tensor.extract_slice should swap and operate on the result of the slice.

OPERATION_NAME = 'transform.apply_patterns.tensor.bubble_up_extract_slice'
_ODS_REGIONS = (0, True)
mlir.dialects._tensor_transform_ops_gen.apply_patterns_tensor_bubble_up_extract_slice(*, loc=None, ip=None) → ApplyBubbleUpExtractSlicePatternsOp
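
The pattern ops in this module that take no operands are typically created inside the body region of a transform.apply_patterns op, using the module-level helper documented above. The following is a minimal sketch, not a complete example: it assumes an active mlir.ir.Context and a block named patterns_block (a hypothetical name) that is the body of an enclosing transform.apply_patterns op, both set up elsewhere.

    from mlir.ir import InsertionPoint
    from mlir.dialects import _tensor_transform_ops_gen as tensor_transform

    # `patterns_block` is assumed to be the body block of a
    # transform.apply_patterns op created elsewhere (hypothetical setup).
    with InsertionPoint(patterns_block):
        # Emits transform.apply_patterns.tensor.bubble_up_extract_slice.
        tensor_transform.apply_patterns_tensor_bubble_up_extract_slice()

The other zero-argument pattern ops in this module (decompose_concat, drop_redundant_insert_slice_rank_expansion, merge_consecutive_insert_extract_slice, and so on) are built the same way through their respective helpers.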
class mlir.dialects._tensor_transform_ops_gen.ApplyDecomposeTensorConcatPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Indicates that tensor.concat ops should be decomposed into a chain of tensor.insert_slice operations inserting into a materialized destination.

OPERATION_NAME = 'transform.apply_patterns.tensor.decompose_concat'
_ODS_REGIONS = (0, True)
mlir.dialects._tensor_transform_ops_gen.apply_patterns_tensor_decompose_concat(*, loc=None, ip=None) → ApplyDecomposeTensorConcatPatternsOp
class mlir.dialects._tensor_transform_ops_gen.ApplyDropRedundantInsertSliceRankExpansionPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Indicates that redundant tensor.insert_slice rank reductions should be dropped. E.g., cases where a tensor.extract_slice rank reduction immediately follows an inverse tensor.insert_slice rank expansion.

OPERATION_NAME = 'transform.apply_patterns.tensor.drop_redundant_insert_slice_rank_expansion'
_ODS_REGIONS = (0, True)
mlir.dialects._tensor_transform_ops_gen.apply_patterns_tensor_drop_redundant_insert_slice_rank_expansion(*, loc=None, ip=None) → ApplyDropRedundantInsertSliceRankExpansionPatternsOp
class mlir.dialects._tensor_transform_ops_gen.ApplyFoldTensorEmptyPatternsOp(*, fold_single_use_only=None, loc=None, ip=None)

Bases: _ods_ir

Indicates that tensor.extract_slice and reassociative reshapes should be folded into tensor.empty.

If fold_single_use_only is set to “true”, only tensor.empty ops that have a single use are folded.

OPERATION_NAME = 'transform.apply_patterns.tensor.fold_tensor_empty'
_ODS_REGIONS = (0, True)
fold_single_use_only() → _ods_ir
mlir.dialects._tensor_transform_ops_gen.apply_patterns_tensor_fold_tensor_empty(*, fold_single_use_only=None, loc=None, ip=None) → ApplyFoldTensorEmptyPatternsOp
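
A sketch of constructing this op with its optional attribute, under the same assumptions as the earlier example (an active Context and an insertion point inside a transform.apply_patterns body). It also assumes the generated builder accepts a plain Python bool for fold_single_use_only.

    from mlir.dialects import _tensor_transform_ops_gen as tensor_transform

    # Only fold tensor.empty ops that have a single use.
    tensor_transform.apply_patterns_tensor_fold_tensor_empty(
        fold_single_use_only=True)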
class mlir.dialects._tensor_transform_ops_gen.ApplyFoldTensorSubsetOpsIntoVectorTransfersPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Indicates that tensor.extract_slice -> vector.transfer_read and vector.transfer_write -> tensor.insert_slice op chains should be folded into vector transfer read and write ops.

OPERATION_NAME = 'transform.apply_patterns.tensor.fold_tensor_subset_ops_into_vector_transfers'
_ODS_REGIONS = (0, True)
mlir.dialects._tensor_transform_ops_gen.apply_patterns_tensor_fold_tensor_subset_ops_into_vector_transfers(*, loc=None, ip=None) → ApplyFoldTensorSubsetOpsIntoVectorTransfersPatternsOp
class mlir.dialects._tensor_transform_ops_gen.ApplyFoldTensorSubsetOpsPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Indicates that tensor.empty should be folded with tensor.extract_slice, tensor.expand_shape and tensor.collapse_shape.

OPERATION_NAME = 'transform.apply_patterns.tensor.fold_tensor_subset_ops'
_ODS_REGIONS = (0, True)
mlir.dialects._tensor_transform_ops_gen.apply_patterns_tensor_fold_tensor_subset_ops(*, loc=None, ip=None) → ApplyFoldTensorSubsetOpsPatternsOp
class mlir.dialects._tensor_transform_ops_gen.ApplyMergeConsecutiveInsertExtractSlicePatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Indicates that consecutive tensor.extract_slice/tensor.insert_slice ops should be merged into a single op. These patterns are not canonicalizations because the bufferization is sensitive to IR structure.

OPERATION_NAME = 'transform.apply_patterns.tensor.merge_consecutive_insert_extract_slice'
_ODS_REGIONS = (0, True)
mlir.dialects._tensor_transform_ops_gen.apply_patterns_tensor_merge_consecutive_insert_extract_slice(*, loc=None, ip=None) → ApplyMergeConsecutiveInsertExtractSlicePatternsOp
class mlir.dialects._tensor_transform_ops_gen.ApplyReassociativeReshapeFoldingPatternsOp(*, loc=None, ip=None)

Bases: _ods_ir

Indicates that reassociative reshapes (tensor.collapse_shape / tensor.expand_shape) should be folded with inverse rank expansions / rank reductions (via tensor.insert_slice / tensor.extract_slice).

OPERATION_NAME = 'transform.apply_patterns.tensor.reassociative_reshape_folding'
_ODS_REGIONS = (0, True)
mlir.dialects._tensor_transform_ops_gen.apply_patterns_tensor_reassociative_reshape_folding(*, loc=None, ip=None) → ApplyReassociativeReshapeFoldingPatternsOp
class mlir.dialects._tensor_transform_ops_gen.ApplyRewriteTensorOpsAsConstantPatternsOp(*, aggressive=None, loc=None, ip=None)

Bases: _ods_ir

Indicates that tensor ops (such as tensor.generate) should be replaced with constants (arith.constant) when possible.

OPERATION_NAME = 'transform.apply_patterns.tensor.rewrite_as_constant'
_ODS_REGIONS = (0, True)
aggressive() → bool
mlir.dialects._tensor_transform_ops_gen.apply_patterns_tensor_rewrite_as_constant(*, aggressive=None, loc=None, ip=None) → ApplyRewriteTensorOpsAsConstantPatternsOp
class mlir.dialects._tensor_transform_ops_gen.MakeLoopIndependentOp(transformed, target, num_loops, *, loc=None, ip=None)

Bases: _ods_ir

Rewrite the targeted ops such that their index-typed operands no longer depend on any loop induction variable of the num_loops enclosing scf.for loops. I.e., compute an upper bound that is independent of any such loop IV for every tensor dimension. The transformed op could then be hoisted from the num_loops enclosing loops. To preserve the original semantics, place a tensor.extract_slice inside the loop.

Currently supported operations are:

  • tensor.empty: Replaced with a new tensor.empty with upper bound sizes, followed by a tensor.extract_slice.

  • tensor.pad: Replaced by an upper bound padding, followed by a tensor.extract_slice.

Return modes

This operation fails if at least one induction variable could not be eliminated. In case the targeted op is already independent of induction variables, this transform succeeds and returns the unmodified target op.

Otherwise, the returned handle points to a subset of the produced ops:

  • tensor.empty: The returned handle points to the tensor.extract_slice op.

  • tensor.pad: The returned handle points to the tensor.extract_slice op.

This transform op consumes the target handle and produces a result handle.

OPERATION_NAME = 'transform.tensor.make_loop_independent'
_ODS_REGIONS = (0, True)
target() → _ods_ir
num_loops() → _ods_ir
transformed() → _ods_ir
mlir.dialects._tensor_transform_ops_gen.tensor_make_loop_independent(transformed, target, num_loops, *, loc=None, ip=None) → _ods_ir
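
A hypothetical sketch of emitting this transform from Python. It assumes an active Context, an insertion point inside a transform sequence body, and an existing handle value named target (for example, the result of a transform.structured.match) pointing at the tensor.empty or tensor.pad ops to rewrite; it also assumes the builder accepts a plain Python int for num_loops.

    from mlir.dialects import transform
    from mlir.dialects import _tensor_transform_ops_gen as tensor_transform

    # `target` is an existing transform handle value (setup not shown).
    transformed = tensor_transform.tensor_make_loop_independent(
        transform.AnyOpType.get(),  # type of the returned handle
        target,                     # handle to the targeted tensor.empty/tensor.pad ops
        num_loops=2,                # become independent of the 2 enclosing scf.for IVs
    )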
class mlir.dialects._tensor_transform_ops_gen.TypeConversionCastShapeDynamicDimsOp(*, ignore_dynamic_info=None, loc=None, ip=None)

Bases: _ods_ir

Populates a type converter with conversion materialization functions that cast a tensor value between two cast-compatible tensors. See tensor.cast for more information on cast compatibility between tensors.

If ignore_dynamic_info is not set, this will set an additional constraint that source materializations do not cast dynamic dimensions to static ones.

OPERATION_NAME = 'transform.type_conversion.tensor.cast_shape_dynamic_dims'
_ODS_REGIONS = (0, True)
ignore_dynamic_info() → bool
mlir.dialects._tensor_transform_ops_gen.type_conversion_tensor_cast_shape_dynamic_dims(*, ignore_dynamic_info=None, loc=None, ip=None) → TypeConversionCastShapeDynamicDimsOp
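
A minimal sketch, assuming an active Context and an insertion point inside a region that expects type-converter builder ops (for example, the type_converter region of transform.apply_conversion_patterns), both set up elsewhere.

    from mlir.dialects import _tensor_transform_ops_gen as tensor_transform

    # Drop the constraint on source materializations, allowing dynamic
    # dimensions to be cast to static ones as well.
    tensor_transform.type_conversion_tensor_cast_shape_dynamic_dims(
        ignore_dynamic_info=True)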