MLIR  16.0.0git
mlir::tensor Namespace Reference

Typedefs

using ControlConstantExtractSliceFusionFn = std::function< bool(ExtractSliceOp)>
 Function to control the folding of constant and extract slice. More...
 

Functions

bool preservesStaticInformation (Type source, Type target)
 Returns true if target is a ranked tensor type that preserves static information available in the source ranked tensor type. More...
 
bool canFoldIntoConsumerOp (CastOp castOp)
 Determines whether tensor::CastOp casts to a more dynamic version of the source tensor. More...
 
bool canFoldIntoProducerOp (CastOp castOp)
 Determines whether the tensor::CastOp casts to a more static version of the source tensor. More...
 
LogicalResult foldTensorCast (Operation *op)
 Performs folding of any operand of op if it comes from a tensor::CastOp that can be folded. More...
 
Value createCanonicalRankReducingExtractSliceOp (OpBuilder &b, Location loc, Value tensor, RankedTensorType targetType)
 Create a rank-reducing ExtractSliceOp @[0 .. 0] with strides [1 .. 1] and appropriate sizes. More...
 
Value createCanonicalRankReducingInsertSliceOp (OpBuilder &b, Location loc, Value tensor, Value dest)
 Create a rank-reducing InsertSliceOp @[0 .. 0] with strides [1 .. 1] and appropriate sizes (i.e. dest.getSizes()). More...
 
void populateFoldConstantExtractSlicePatterns (RewritePatternSet &patterns, const ControlConstantExtractSliceFusionFn &controlFn=[](ExtractSliceOp op) { return false;})
 Patterns to fold the extract slice op with its constant operand. More...
 
void registerInferTypeOpInterfaceExternalModels (mlir::DialectRegistry &registry)
 Registers external models for Infer Type interfaces for tensor ops. More...
 
Operation * bubbleUpPadSlice (OpBuilder &b, tensor::PadOp padOp, ArrayRef< OpFoldResult > offsets, ArrayRef< OpFoldResult > sizes, bool generateZeroSliceGuard=true)
 Bubbles up a slice of this pad by taking the slice first and then performing the padding. More...
 
void registerTilingOpInterfaceExternalModels (mlir::DialectRegistry &registry)
 Registers external models for Tiling interface for tensor ops. More...
 
void registerBufferizableOpInterfaceExternalModels (DialectRegistry &registry)
 
void populateSplitPaddingPatterns (RewritePatternSet &patterns, PatternBenefit baseBenefit=1)
 Populates patterns with patterns that wrap a tensor.pad op in an scf.if op to separate the case where no padding is needed (all pad sizes are zero) from the case where padding is actually required. More...
 
FailureOr< Value > replaceExtractSliceWithTiledProducer (OpBuilder &builder, tensor::ExtractSliceOp sliceOp, OpResult producerOp)
 Pattern to swap a tensor.extract_slice with its producer when the producer implements the TilingInterface. More...
 
PadOp createPadHighOp (RankedTensorType type, Value source, Value pad, bool nofold, Location loc, OpBuilder &builder)
 
PadOp createPadScalarOp (Type type, Value source, Value pad, ArrayRef< OpFoldResult > low, ArrayRef< OpFoldResult > high, bool nofold, Location loc, OpBuilder &builder)
 
SmallVector< Value > createDynamicDimValues (OpBuilder &b, Location loc, Value rankedTensor)
 
SmallVector< Value > createDimValues (OpBuilder &b, Location loc, Value rankedTensor)
 

Typedef Documentation

◆ ControlConstantExtractSliceFusionFn

using mlir::tensor::ControlConstantExtractSliceFusionFn = std::function<bool(ExtractSliceOp)>

Function to control the folding of constant and extract slice.

Definition at line 128 of file Tensor.h.
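
For illustration, a minimal sketch of such a control callback; the policy shown is hypothetical, not part of the API:

mlir::tensor::ControlConstantExtractSliceFusionFn controlFn =
    [](mlir::tensor::ExtractSliceOp op) {
      // Hypothetical policy: only allow folding when the slice result has a
      // fully static shape.
      return op.getType().hasStaticShape();
    };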

Function Documentation

◆ bubbleUpPadSlice()

Operation * mlir::tensor::bubbleUpPadSlice ( OpBuilder &  b,
tensor::PadOp  padOp,
ArrayRef< OpFoldResult >  offsets,
ArrayRef< OpFoldResult >  sizes,
bool  generateZeroSliceGuard = true 
)

Bubbles up a slice of this pad by taking the slice first and then performing the padding.

offsets and sizes specify each dimension's start offset and size for the slice. The slice has unit strides along all dimensions.

Specifically, this function converts:

%0 = tensor.pad %source low[...] high[...] { linalg.yield %cst }
%1 = <extract-slice> %0 offsets=[...], sizes=[...]

into

%0 = tensor.extract_slice %source ...
%1 = tensor.pad %0 low[...] high[...] { linalg.yield %cst }

If generateZeroSliceGuard is true, the generated IR will contain logic to guard against the case where we might take a zero-sized slice from the original source. For such cases, we use tensor.generate to generate the full tensor.

Definition at line 76 of file TensorTilingInterfaceImpl.cpp.

References mlir::bindDims(), mlir::OpBuilder::create(), mlir::OpBuilder::createOrFold(), mlir::dispatchIndexOpFoldResults(), mlir::AffineMap::get(), mlir::getAsOpFoldResult(), mlir::getConstantIntValue(), mlir::Builder::getContext(), mlir::Builder::getIndexAttr(), mlir::AffineMap::getMultiDimIdentityMap(), mlir::getValueOrCreateConstantIndexOp(), max(), and min().

Referenced by mlir::linalg::ExtractSliceOfPadTensorSwapPattern::matchAndRewrite().
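
For illustration, a hedged sketch of such a rewrite; pattern scaffolding and header includes are assumed and not shown:

// Sketch only: swap an extract_slice of a tensor.pad, in the spirit of the
// pattern referenced above.
static mlir::LogicalResult
swapSliceOfPad(mlir::PatternRewriter &rewriter,
               mlir::tensor::ExtractSliceOp sliceOp) {
  auto padOp = sliceOp.getSource().getDefiningOp<mlir::tensor::PadOp>();
  if (!padOp)
    return mlir::failure();
  mlir::Operation *newOp = mlir::tensor::bubbleUpPadSlice(
      rewriter, padOp, sliceOp.getMixedOffsets(), sliceOp.getMixedSizes(),
      /*generateZeroSliceGuard=*/true);
  // The returned op yields the padded slice; use its results in place of the
  // original extract_slice.
  rewriter.replaceOp(sliceOp, newOp->getResults());
  return mlir::success();
}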

◆ canFoldIntoConsumerOp()

bool mlir::tensor::canFoldIntoConsumerOp ( CastOp  castOp)

Determines whether tensor::CastOp casts to a more dynamic version of the source tensor.

This is useful to fold a tensor.cast into a consuming op and implement canonicalization patterns for ops in different dialects that may consume the results of tensor.cast operations. Such foldable tensor.cast operations are typically inserted as extract_slice ops and are canonicalized, to preserve the type compatibility of their uses.

Returns true when all conditions are met:

  1. source and result are ranked tensors with same element type and rank.
  2. the source tensor type has more static information than the result type.

Example:

%1 = tensor.cast %0 : tensor<8x16xf32> to tensor<?x?xf32>
%2 = consumer %1 ... : tensor<?x?xf32> ...

folds into:

%2 = consumer %0 ... : tensor<8x16xf32> ...


Definition at line 93 of file TensorOps.cpp.

References preservesStaticInformation().

Referenced by foldInsertOp(), foldMemRefCast(), foldTensorCast(), foldTensorCast(), mlir::linalg::generateLibraryCallName(), isTrivialSubViewOp(), AllocaScopeHoister::matchAndRewrite(), CollapseShapeOpMemRefCastFolder::matchAndRewrite(), parseInferType(), produceSliceErrorMsg(), and verifyTensorReshapeOp().
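
For illustration, a hedged sketch of the operand rewiring this check enables (essentially what foldTensorCast below does; op is an assumed mlir::Operation *):

// Sketch only: rewire operands produced by foldable casts to the cast source.
for (mlir::OpOperand &operand : op->getOpOperands()) {
  auto castOp = operand.get().getDefiningOp<mlir::tensor::CastOp>();
  if (castOp && mlir::tensor::canFoldIntoConsumerOp(castOp))
    operand.set(castOp.getSource()); // The consumer now sees the static type.
}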

◆ canFoldIntoProducerOp()

bool mlir::tensor::canFoldIntoProducerOp ( CastOp  castOp)

Determines whether the tensor::CastOp casts to a more static version of the source tensor.

This is useful for folding into a producing op and implementing canonicalization patterns with the tensor.cast op as the root, but with the producer coming from a different dialect. Returns true when all conditions are met:

  1. source and result are ranked tensors with same element type and rank.
  2. the result type has more static information than the source.

Example:

%1 = producer ... : tensor<?x?xf32>
%2 = tensor.cast %1 : tensor<?x?xf32> to tensor<8x16xf32>

can be canonicalized to:

%2 = producer ... : tensor<8x16xf32>

Not all ops can be canonicalized this way, but for those that can, this method provides a check that the canonicalization is worthwhile.

Definition at line 123 of file TensorOps.cpp.

References preservesStaticInformation().

Referenced by mlir::linalg::generateLibraryCallName(), getGenericEffectsImpl(), and joinShapes().
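
For illustration, a hedged sketch of a producer-rooted use with the tensor.cast as the pattern root; castOp and the surrounding pattern are assumed:

// Sketch only: only rebuild the producer when the cast adds static
// information; otherwise the canonicalization is not worthwhile.
if (!mlir::tensor::canFoldIntoProducerOp(castOp))
  return mlir::failure();
// ... recreate the producer of castOp.getSource() so that it directly yields
// castOp.getType(), then replace castOp with the new producer's result.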

◆ createCanonicalRankReducingExtractSliceOp()

Value mlir::tensor::createCanonicalRankReducingExtractSliceOp ( OpBuilder &  b,
Location  loc,
Value  tensor,
RankedTensorType  targetType 
)

◆ createCanonicalRankReducingInsertSliceOp()

Value mlir::tensor::createCanonicalRankReducingInsertSliceOp ( OpBuilder &  b,
Location  loc,
Value  tensor,
Value  dest 
)

Create a rank-reducing InsertSliceOp @[0 .. 0] with strides [1 .. 1] and appropriate sizes (i.e. dest.getSizes()).

The result is a new tensor with rank increased to that of dest, obtained by inserting tensor into dest at the canonical [0 .. 0] position.

Definition at line 1815 of file TensorOps.cpp.

References mlir::Type::cast(), mlir::OpBuilder::create(), mlir::OpBuilder::createOrFold(), mlir::Builder::getIndexAttr(), and mlir::Value::getType().

Referenced by mlir::linalg::DownscaleSizeOneWindowed2DConvolution::returningMatchAndRewrite(), and mlir::linalg::DownscaleDepthwiseConv2DNhwcHwcOp::returningMatchAndRewrite().
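
For illustration, a minimal sketch; the builder, location, and values are assumed:

// Sketch only: insert a tensor<16xf32> value `vec` into a tensor<1x16x1xf32>
// value `dest` at the canonical [0, 0, 0] position with unit strides.
mlir::Value inserted = mlir::tensor::createCanonicalRankReducingInsertSliceOp(
    b, loc, /*tensor=*/vec, /*dest=*/dest);
// `inserted` has the destination's type, i.e. tensor<1x16x1xf32>.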

◆ createDimValues()

SmallVector< Value > mlir::tensor::createDimValues ( OpBuilder &  b,
Location  loc,
Value  rankedTensor 
)

◆ createDynamicDimValues()

SmallVector< Value > mlir::tensor::createDynamicDimValues ( OpBuilder &  b,
Location  loc,
Value  rankedTensor 
)

◆ createPadHighOp()

PadOp mlir::tensor::createPadHighOp ( RankedTensorType  type,
Value  source,
Value  pad,
bool  nofold,
Location  loc,
OpBuilder &  builder 
)

◆ createPadScalarOp()

PadOp mlir::tensor::createPadScalarOp ( Type  type,
Value  source,
Value  pad,
ArrayRef< OpFoldResult >  low,
ArrayRef< OpFoldResult >  high,
bool  nofold,
Location  loc,
OpBuilder &  builder 
)

◆ foldTensorCast()

LogicalResult mlir::tensor::foldTensorCast ( Operation *  op)

Performs folding of any operand of op if it comes from a tensor::CastOp that can be folded.

Definition at line 132 of file TensorOps.cpp.

References canFoldIntoConsumerOp(), mlir::Type::dyn_cast(), mlir::TensorType::getElementType(), mlir::Operation::getOpOperands(), mlir::succeeded(), mlir::success(), and mlir::verifyCompatibleShape().

Referenced by joinShapes().
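
For illustration, a hedged sketch of direct use; op is an assumed mlir::Operation *:

// Sketch only: try to rewire every operand of `op` that comes from a foldable
// tensor.cast; success means at least one operand was updated in place.
if (mlir::succeeded(mlir::tensor::foldTensorCast(op))) {
  // `op` now consumes the cast sources (with their more static types) directly.
}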

◆ populateFoldConstantExtractSlicePatterns()

void mlir::tensor::populateFoldConstantExtractSlicePatterns ( RewritePatternSet &  patterns,
const ControlConstantExtractSliceFusionFn &  controlFn = [](ExtractSliceOp op) {   return false; } 
)

Patterns to fold the extract slice op with its constant operand.

Definition at line 1376 of file TensorOps.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().
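
For example, a minimal sketch of populating and applying these patterns with a custom control function; ctx, funcOp, and the policy are hypothetical:

mlir::RewritePatternSet patterns(ctx);
mlir::tensor::populateFoldConstantExtractSlicePatterns(
    patterns, [](mlir::tensor::ExtractSliceOp op) {
      // Hypothetical policy: only fold small, statically shaped slices.
      auto resultType = op.getType();
      return resultType.hasStaticShape() && resultType.getNumElements() <= 16;
    });
(void)mlir::applyPatternsAndFoldGreedily(funcOp, std::move(patterns));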

◆ populateSplitPaddingPatterns()

void mlir::tensor::populateSplitPaddingPatterns ( RewritePatternSet &  patterns,
PatternBenefit  baseBenefit = 1 
)

Populates patterns with patterns that wrap a tensor.pad op in an scf.if op to separate the case where no padding is needed (all pad sizes are zero) from the case where padding is actually required.

Definition at line 92 of file SplitPadding.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().
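
A hedged sketch of wiring these patterns into a pass; the pass scaffolding (getContext, getOperation, signalPassFailure) is assumed:

mlir::RewritePatternSet patterns(&getContext());
mlir::tensor::populateSplitPaddingPatterns(patterns);
if (mlir::failed(mlir::applyPatternsAndFoldGreedily(getOperation(),
                                                    std::move(patterns))))
  signalPassFailure();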

◆ preservesStaticInformation()

bool mlir::tensor::preservesStaticInformation ( Type  source,
Type  target 
)

Returns true if target is a ranked tensor type that preserves static information available in the source ranked tensor type.

Definition at line 45 of file TensorOps.cpp.

References mlir::Type::dyn_cast().

Referenced by canFoldIntoConsumerOp(), canFoldIntoProducerOp(), foldInsertOp(), and parseInferType().
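
For illustration, a hedged sketch of the check as used from the cast helpers above; castOp is an assumed tensor::CastOp, and the argument order mirrors the producer-side check:

// Sketch only: true when the cast result keeps every dimension that is static
// in the cast source, e.g. tensor<8x?xf32> -> tensor<8x16xf32>; false in the
// reverse direction.
bool castsTowardMoreStaticType = mlir::tensor::preservesStaticInformation(
    /*source=*/castOp.getSource().getType(), /*target=*/castOp.getType());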

◆ registerBufferizableOpInterfaceExternalModels()

void mlir::tensor::registerBufferizableOpInterfaceExternalModels ( DialectRegistry &  registry)

◆ registerInferTypeOpInterfaceExternalModels()

void mlir::tensor::registerInferTypeOpInterfaceExternalModels ( mlir::DialectRegistry &  registry)

Registers external models for Infer Type interfaces for tensor ops.

Currently, it registers:

  • ReifyRankedShapedTypeOpInterface for tensor.collapse_shape.
  • ReifyRankedShapedTypeOpInterface for tensor.expand_shape.

Unfortunately, a "normal" internal registration is not possible at the moment, because of the dependency of the interface implementation for these ops on affine.apply and Affine dialect already depends on TensorOps. In order to break the cyclic dependency (TensorOps->AffineOps->TensorOps) the implementation is moved to a separate library.

Definition at line 206 of file TensorInferTypeOpInterfaceImpl.cpp.

References mlir::DialectRegistry::addExtension().

Referenced by mlir::registerAllDialects().
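
A minimal sketch of attaching these external models to a registry before creating a context (header includes omitted); the tiling and bufferization registrations documented on this page are shown alongside:

mlir::DialectRegistry registry;
registry.insert<mlir::tensor::TensorDialect>();
// External models live in separate libraries (see the dependency note above),
// so they must be attached to the registry explicitly.
mlir::tensor::registerInferTypeOpInterfaceExternalModels(registry);
mlir::tensor::registerTilingOpInterfaceExternalModels(registry);
mlir::tensor::registerBufferizableOpInterfaceExternalModels(registry);
mlir::MLIRContext context(registry);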

◆ registerTilingOpInterfaceExternalModels()

void mlir::tensor::registerTilingOpInterfaceExternalModels ( mlir::DialectRegistry &  registry)

Registers external models for Tiling interface for tensor ops.

Currently, it registers:

  • TilingInterface for tensor.pad.

Unfortunately, a "normal" internal registration is not possible at the moment, because of the dependency of the interface implementation for these ops on affine.apply and Affine dialect already depends on TensorOps. In order to break the cyclic dependency (TensorOps->AffineOps->TensorOps) the implementation is moved to a separate library.

Definition at line 284 of file TensorTilingInterfaceImpl.cpp.

References mlir::DialectRegistry::addExtension().

Referenced by mlir::registerAllDialects().

◆ replaceExtractSliceWithTiledProducer()

FailureOr< Value > mlir::tensor::replaceExtractSliceWithTiledProducer ( OpBuilder &  builder,
tensor::ExtractSliceOp  sliceOp,
OpResult  producerOp 
)

Pattern to swap a tensor.extract_slice with its producer when the producer implements the TilingInterface.

The pattern itself does not provide a mechanism to control where it is applied. When driven through the transform dialect, that control is exercised within the transform dialect. Other use cases can inherit from this pattern and add the necessary controls.

Definition at line 23 of file SwapExtractSliceWithProducer.cpp.

References mlir::failed(), mlir::failure(), mlir::OpResult::getOwner(), mlir::OpResult::getResultNumber(), and mlir::isConstantIntValue().

Referenced by mlir::scf::TileConsumerAndFuseProducersUsingSCFForOp::returningMatchAndRewrite().
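
For illustration, a hedged sketch of a call site; builder and sliceOp are assumed to come from an enclosing transformation:

// Sketch only: the slice source must be an OpResult of an op implementing
// TilingInterface for the call to succeed.
auto producerResult = sliceOp.getSource().dyn_cast<mlir::OpResult>();
if (!producerResult)
  return mlir::failure();
mlir::FailureOr<mlir::Value> tiled =
    mlir::tensor::replaceExtractSliceWithTiledProducer(builder, sliceOp,
                                                       producerResult);
if (mlir::failed(tiled))
  return mlir::failure();
// The caller then replaces the slice's uses with *tiled.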