MLIR  15.0.0git
mlir::linalg Namespace Reference

Namespaces

 detail
 

Classes

class  Aliases
 A very primitive alias analysis which just records for each view, either: More...
 
struct  CodegenStrategy
 Codegen strategy controls how a Linalg op is progressively lowered. More...
 
struct  CopyVectorizationPattern
 filter controls LinalgTransformMarker matching and update when specified. More...
 
struct  Decompose
 Represent one application of createLinalgStrategyDecomposePass. More...
 
struct  ExtractSliceOfPadTensorSwapPattern
 Rewrite extract_slice(pad_tensor(x)) into pad_tensor(extract_slice(x)). More...
 
struct  FusionInfo
 A struct containing the Linalg producer before and after fusion. More...
 
struct  Generalize
 Represent one application of createLinalgStrategyGeneralizePass. More...
 
struct  GeneralizePadOpPattern
 Rewrite a tensor::PadOp into a sequence of InitTensorOp, FillOp and InsertSliceOp. More...
 
struct  GenerateLoopNest
 Utility class used to generate nested loops with ranges described by loopRanges and loop type described by the iteratorTypes. More...
 
struct  GenericOpInterchangePattern
 Linalg generic interchange pattern. More...
 
struct  Interchange
 Represent one application of createLinalgStrategyInterchangePass. More...
 
struct  LinalgBasePromotionPattern
 Linalg promotion patterns. More...
 
struct  LinalgBaseTileAndFusePattern
 
struct  LinalgCopyVTRForwardingPattern
 Match and rewrite for the pattern: ``` alloc = ... More...
 
struct  LinalgCopyVTWForwardingPattern
 Match and rewrite for the pattern: ``` alloc = ... More...
 
class  LinalgDependenceGraph
 Data structure for holding a dependence graph that operates on LinalgOp and views as SSA values. More...
 
struct  LinalgEnablingOptions
 Options to control the application of enabling transformations. More...
 
struct  LinalgFusionOptions
 
struct  LinalgGeneralizationPattern
 Linalg generalization pattern. More...
 
struct  LinalgLoopDistributionOptions
 Options that allow distribution of loops generated in Linalg transforms to processors while generating the loops. More...
 
struct  LinalgLoweringPattern
 
class  LinalgOpToLibraryCallRewrite
 
struct  LinalgPaddingOptions
 
struct  LinalgPaddingPattern
 Linalg padding pattern. More...
 
struct  LinalgPromotionOptions
 
struct  LinalgPromotionPattern
 
struct  LinalgTileAndFusePattern
 
struct  LinalgTileAndFuseTensorOpsPattern
 Linalg tile and fuse tensor ops pattern. More...
 
struct  LinalgTilingAndFusionOptions
 
struct  LinalgTilingOptions
 
struct  LinalgTilingPattern
 Linalg tiling pattern. More...
 
struct  LinalgTransformationFilter
 Helper class to control application of linalg transformation patterns. More...
 
struct  LinalgTransforms
 
struct  LinalgVectorizationOptions
 Linalg vectorization patterns. More...
 
struct  LinalgVectorizationPattern
 filter controls LinalgTransformMarker matching and update when specified. More...
 
struct  LinalgVectorLoweringOptions
 Vector lowering options control how ops are lowered down to 1-D and scf.for form. More...
 
struct  OpOperandVector
 OpOperand vector that implicitly converts to a Value vector. More...
 
struct  Pad
 Represent one application of LinalgStrategyPadPass. More...
 
struct  PadOpTransformationPattern
 tensor::PadOp is not canonicalized away yet, so we provide a transformation to linalg.generic. More...
 
struct  ProcInfo
 Callback function type used to get processor ID, and number of processors used for distribution for all parallel loops generated. More...
 
struct  Promote
 Represent one application of createLinalgStrategyPromotePass. More...
 
struct  PromotionInfo
 Create a new buffer using the allocationFn provided. More...
 
struct  RegionMatcher
 A struct containing common matchers over linalg op's region. More...
 
struct  Tile
 Represent one application of LinalgStrategyTilePass. More...
 
struct  TileAndFuse
 Represent one application of LinalgStrategyTileAndFusePass. More...
 
struct  TiledAndFusedLinalgOps
 Fuse a sequence of linalg operations (ops) using tile-and-fuse. More...
 
struct  TiledLinalgOp
 Perform standalone tiling of a single LinalgOp by tileSizes. More...
 
class  TileLoopNest
 A struct to manage the tile loop nest specific information. More...
 
class  TilingPatterns
 
class  TilingPatterns< OpTy, OpTypes... >
 
class  TilingPatterns<>
 
struct  Transformation
 Abstract Transformation class applied in a sequence that also handles state through markers. More...
 
class  VectorizationPatterns
 
class  VectorizationPatterns< OpTy, OpTypes... >
 
class  VectorizationPatterns<>
 
struct  Vectorize
 Represent one application of createLinalgStrategyVectorizePass. More...
 
struct  VectorLowering
 Represent one application of createLinalgStrategyLowerVectorsPass. More...
 

Typedefs

using LoopRangeBuilder = std::function< SmallVector< Range, 4 >(ImplicitLocOpBuilder)>
 
using LinalgLoops = SmallVector< Operation *, 4 >
 
using ControlFusionFn = std::function< bool(const OpResult &producer, OpOperand &consumer)>
 Function type which is used to control when to stop fusion. More...
 
using AllocBufferCallbackFn = std::function< Optional< Value >(OpBuilder &b, memref::SubViewOp subView, ArrayRef< Value > boundingSubViewSize, DataLayout &layout)>
 Callback function type used to perform the allocation for the promoted subView. More...
 
using DeallocBufferCallbackFn = std::function< LogicalResult(OpBuilder &b, Value buffer)>
 Callback function type used to deallocate the buffers used to hold the promoted subview. More...
 
using CopyCallbackFn = std::function< LogicalResult(OpBuilder &b, Value src, Value dst)>
 Callback function type used to insert a copy from the original subview to the subview of the promoted region (for the read operands), or from the subview of the promoted region back to the original subview (for the results). More...
 
using TileSizeComputationFunction = std::function< SmallVector< Value, 4 >(OpBuilder &, Operation *)>
 
using LoopIndexToRangeIndexMap = DenseMap< int, int >
 Creates a number of ranges equal to the number of non-zero entries in tileSizes. More...
 
using OptimizeCopyFn = std::function< LogicalResult(PatternRewriter &, tensor::PadOp, Value)>
 
using ControlSplitReductionFn = std::function< std::pair< int64_t, unsigned >(LinalgOp op)>
 Function signature to control reduction splitting. More...
 
using FusableOpDependencesTy = llvm::MapVector< Operation *, SmallVector< LinalgDependenceGraph::LinalgDependenceGraphElem, 1 > >
 
using ProcInfoCallBackFn = std::function< SmallVector< ProcInfo, 2 >(OpBuilder &b, Location loc, ArrayRef< Range > parallelLoopRanges)>
 
using OneDimProcInfoCallBackFn = std::function< ProcInfo(OpBuilder &b, Location loc)>
 

Enumerations

enum  LinalgLoweringType { LinalgLoweringType::LibraryCall = 0, LinalgLoweringType::Loops = 1, LinalgLoweringType::AffineLoops = 2, LinalgLoweringType::ParallelLoops = 3 }
 Linalg lowering patterns. More...
 
enum  LinalgTilingLoopType { LinalgTilingLoopType::Loops = 0, LinalgTilingLoopType::AffineLoops = 1, LinalgTilingLoopType::ParallelLoops = 2, LinalgTilingLoopType::TiledLoops = 3 }
 The type of loops to be generated during tiling. More...
 
enum  DistributionMethod { DistributionMethod::Cyclic = 0, DistributionMethod::CyclicNumProcsGeNumIters = 1, DistributionMethod::CyclicNumProcsEqNumIters = 2 }
 Scheme used to distribute loops to processors. More...
 

Functions

void populateLinalgToStandardConversionPatterns (RewritePatternSet &patterns)
 Populate the given list with patterns that convert from Linalg to Standard. More...
 
LoopRangeBuilder defaultLoopRangesBuilder (LinalgOp op)
 Provide a very simple inference procedure to build the loop ranges from the op and its operands. More...
 
std::string generateLibraryCallName (Operation *op)
 Returns the name mangled library call name to disambiguate between different overloads at the C level. More...
 
SmallVector< AffineExpr, 4 > makeAffineDimExprs (unsigned num, unsigned &startIdx, MLIRContext *context)
 Returns num AffineDimExpr dimensions at positions [startIdx, startIdx + num) and increments startIdx to startIdx + num. More...
 
AffineMap extractOrIdentityMap (Optional< AffineMap > maybeMap, unsigned rank, MLIRContext *context)
 Returns maybeMap.get() if maybeMap is set, otherwise returns the symbol-less identity map of rank. More...
 
SmallVector< AffineExpr, 4 > concat (ArrayRef< AffineExpr > a, ArrayRef< AffineExpr > b)
 Return the vector that is the concatenation of a and b. More...
 
void getDimsOfType (Operation *op, StringRef iteratorTypeName, SmallVectorImpl< unsigned > &res)
 Return the dims that are iteratorTypeName loops in the LinalgOp op. More...
 
bool isaContractionOpInterface (LinalgOp linalgOp)
 Checks whether linalgOp conforms to ContractionOpInterface. More...
 
void registerTransformDialectExtension (DialectRegistry &registry)
 
void registerBufferizableOpInterfaceExternalModels (DialectRegistry &registry)
 
void hoistRedundantVectorTransfers (func::FuncOp func)
 Hoist vector.transfer_read/vector.transfer_write on buffers pairs out of immediately enclosing scf::ForOp iteratively, if the following conditions are true: More...
 
void hoistRedundantVectorTransfersOnTensor (func::FuncOp func)
 Same behavior as hoistRedundantVectorTransfers but works on tensors instead of buffers. More...
 
FailureOr< Value > hoistPaddingOnTensors (tensor::PadOp opToHoist, int numLoops, ArrayRef< int64_t > transposeVector, tensor::PadOp &hoistedOp, SmallVectorImpl< GenericOp > &transposeOps)
 Mechanically hoist padding operations on tensors by numLoops into a new, generally larger tensor. More...
 
void populatePadTensorTilingPatterns (RewritePatternSet &patterns, const LinalgTilingOptions &options)
 
void populateConvolutionVectorizationPatterns (RewritePatternSet &patterns, PatternBenefit benefit=1)
 Populate patterns for vectorizing low-D convolution ops. More...
 
void populateElementwiseToLinalgConversionPatterns (RewritePatternSet &patterns)
 Populate patterns that convert ElementwiseMappable ops to linalg parallel loops. More...
 
void populateSparseTensorRewriting (RewritePatternSet &patterns)
 Populate patterns that are only useful in the context of sparse tensors. More...
 
void populateElementwiseOpsFusionPatterns (RewritePatternSet &patterns, const ControlFusionFn &controlElementwiseOpFusion)
 Patterns for fusing linalg operation on tensors. More...
 
void populateFoldReshapeOpsByExpansionPatterns (RewritePatternSet &patterns, const ControlFusionFn &controlFoldingReshapes)
 Patterns to fold an expanding (collapsing) tensor_reshape operation with its producer (consumer) generic operation by expanding the dimensionality of the loop in the generic op. More...
 
void populateFoldReshapeOpsByCollapsingPatterns (RewritePatternSet &patterns, const ControlFusionFn &controlFoldingReshapes)
 Patterns to fold an expanding tensor.expand_shape operation with its producer generic operation by collapsing the dimensions of the generic op. More...
 
void populateConstantFoldLinalgOperations (RewritePatternSet &patterns, const ControlFusionFn &controlFn)
 Patterns to constant fold Linalg operations. More...
 
void populateFuseTensorPadWithProducerLinalgOpPatterns (RewritePatternSet &patterns)
 Pattern to fuse a tensor.pad operation with the producer of its source, if the producer is a linalg operation with all parallel iterator types. More...
 
void populateLinalgNamedOpConversionPatterns (RewritePatternSet &patterns)
 Patterns to convert from one named op to another. More...
 
void populateFoldUnitExtentDimsPatterns (RewritePatternSet &patterns)
 Patterns to fold unit-extent dimensions in operands/results of linalg ops on tensors. More...
 
void populateInlineConstantOperandsPatterns (RewritePatternSet &patterns)
 Patterns that are used to inline constant operands into linalg generic ops. More...
 
void populateBubbleUpExtractSliceOpPatterns (RewritePatternSet &patterns)
 Patterns that are used to bubble up extract slice op above linalg op. More...
 
FailureOr< TiledLinalgOp > tileLinalgOp (RewriterBase &b, LinalgOp op, const LinalgTilingOptions &options)
 
void peelTiledLinalgOp (RewriterBase &rewriter, TiledLinalgOp &res, ArrayRef< int64_t > peeledLoops, LinalgTilingLoopType loopType)
 Peel the loops of a TiledLinalgOp. More...
 
FailureOr< TiledAndFusedLinalgOps > tileAndFuseLinalgOps (OpBuilder &builder, ArrayRef< LinalgOp > ops, const LinalgDependenceGraph &dependenceGraph, const LinalgTilingOptions &tilingOptions)
 
FailureOr< GenericOp > interchangeGenericOp (RewriterBase &rewriter, GenericOp genericOp, ArrayRef< unsigned > interchangeVector)
 Interchange the iterator_types and iterator_maps dimensions and adapts the index accesses of op. More...
 
FailureOr< GenericOp > generalizeNamedOp (RewriterBase &rewriter, LinalgOp namedOp)
 Create a GenericOp from the given named operation namedOp and replace namedOp. More...
 
FailureOr< PromotionInfo > promoteSubviewAsNewBuffer (OpBuilder &b, Location loc, memref::SubViewOp subView, const AllocBufferCallbackFn &allocationFn, DataLayout &layout)
 
FailureOr< LinalgOp > promoteSubViews (OpBuilder &b, LinalgOp op, const LinalgPromotionOptions &options)
 Promote the subViews into a new buffer allocated at the insertion point b. More...
 
LogicalResult vectorize (RewriterBase &builder, LinalgOp linalgOp)
 Emit a suitable vector form for a Linalg op with fully static shape. More...
 
LogicalResult vectorizeCopy (RewriterBase &builder, memref::CopyOp copyOp)
 Emit a suitable vector form for a Copy op with fully static shape. More...
 
FailureOr< LinalgLoops > linalgOpToLoops (PatternRewriter &rewriter, LinalgOp linalgOp)
 Emit a loop nest of scf.for with the proper body for linalgOp. More...
 
FailureOr< LinalgLoops > linalgOpToParallelLoops (PatternRewriter &rewriter, LinalgOp linalgOp)
 Emit a loop nest of scf.parallel with the proper body for linalgOp. More...
 
FailureOr< LinalgLoops > linalgOpToAffineLoops (PatternRewriter &rewriter, LinalgOp linalgOp)
 Emit a loop nest of affine.for with the proper body for linalgOp. More...
 
LogicalResult promoteSubviewsPrecondition (Operation *op, LinalgPromotionOptions options)
 Promote memref.subviews feeding linalg-on-buffers operations. More...
 
std::tuple< SmallVector< Range, 4 >, LoopIndexToRangeIndexMap > makeTiledLoopRanges (RewriterBase &b, Location loc, AffineMap map, ValueRange allShapeSizes, ValueRange allTileSizes)
 
void transformIndexOps (RewriterBase &b, LinalgOp op, SmallVectorImpl< Value > &ivs, const LoopIndexToRangeIndexMap &loopIndexToRangeIndex)
 All indices returned by IndexOp should be invariant with respect to tiling. More...
 
RewritePatternSet getLinalgTilingCanonicalizationPatterns (MLIRContext *ctx)
 Canonicalization patterns relevant to apply after tiling patterns. More...
 
void populateLinalgTilingCanonicalizationPatterns (RewritePatternSet &patterns)
 
llvm::Optional< vector::CombiningKind > getCombinerOpKind (Operation *combinerOp)
 Return vector::CombiningKind for the given op. More...
 
void populateLinalgNamedOpsGeneralizationPatterns (RewritePatternSet &patterns, const LinalgTransformationFilter &filter=LinalgTransformationFilter())
 Linalg generalization patterns. More...
 
void populateDecomposeConvolutionPatterns (RewritePatternSet &patterns, const LinalgTransformationFilter &filter=LinalgTransformationFilter(), PatternBenefit benefit=1)
 Linalg decompose convolutions patterns. More...
 
FailureOr< SmallVector< Value > > rewriteAsPaddedOp (OpBuilder &b, LinalgOp opToPad, ArrayRef< int64_t > paddingDimensions, ArrayRef< Attribute > paddingValues, ArrayRef< bool > packPaddings, LinalgOp &paddedOp)
 Pad the iterator dimensions paddingDimensions of all opToPad operands to a static bounding box. More...
 
void populatePadOpVectorizationPatterns (RewritePatternSet &patterns, PatternBenefit baseBenefit=1)
 Populates patterns with patterns that vectorize linalg.pad_tensor. More...
 
LogicalResult applyStagedPatterns (Operation *op, ArrayRef< FrozenRewritePatternSet > stage1Patterns, const FrozenRewritePatternSet &stage2Patterns, function_ref< LogicalResult(Operation *)> stage3Lambda=nullptr)
 Helper function to allow applying rewrite patterns, interleaved with more global transformations, in a staged fashion: More...
 
void populateSplitReductionPattern (RewritePatternSet &patterns, const ControlSplitReductionFn &controlSplitReductionFn, const LinalgTransformationFilter &f=LinalgTransformationFilter())
 Patterns to apply splitReduction below. More...
 
FailureOr< LinalgOp > splitReduction (PatternRewriter &b, LinalgOp op, const ControlSplitReductionFn &controlSplitReductionFn, const LinalgTransformationFilter &f)
 Apply transformation to split the single linalg op reduction into a parallel and reduction dimension. More...
 
bool isPermutation (ArrayRef< int64_t > permutation)
 Check if permutation is a permutation of the range [0, permutation.size()). More...
 
Value createOrFoldDimOp (OpBuilder &b, Location loc, Value source, int64_t dim)
 Helper function that creates a memref::DimOp or tensor::DimOp depending on the type of source. More...
 
SmallVector< Value, 4 > getDynOperands (Location loc, Value val, OpBuilder &b)
 Given an operation, retrieves the value of each dynamic dimension through constructing the necessary DimOp operators. More...
 
void getUpperBoundForIndex (Value value, AffineMap &boundMap, SmallVectorImpl< Value > &boundOperands, bool constantRequired=false)
 Computes an upper bound for the result value of an index computation. More...
 
FailureOr< int64_t > getConstantUpperBoundForIndex (Value value)
 Returns a constant upper bound for the result value of an index computation. More...
 
tensor::ExtractSliceOp makeComposedExtractSliceOp (OpBuilder &b, Location loc, Value source, ArrayRef< OpFoldResult > offsets, ArrayRef< OpFoldResult > sizes, ArrayRef< OpFoldResult > strides)
 Create an ExtractSliceOp and, if source is defined by an ExtractSliceOp, fold it by adding the offsets. More...
 
Value makeComposedPadHighOp (OpBuilder &b, Location loc, RankedTensorType type, Value source, Value pad, bool nofold)
 Create a tensor::PadOp that pads source to the size of the statically sized type whose static sizes are assumed to be greater than the dynamic source size. More...
 
GenericOp makeTransposeOp (OpBuilder &b, Location loc, Value inputTensor, Value outputTensor, ArrayRef< int64_t > transposeVector)
 Returns a GenericOp that transposes inputTensor into outputTensor using transposeVector to permute the inputTensor dimensions. More...
 
GenericOp makeMemRefCopyOp (OpBuilder &b, Location loc, Value from, Value to)
 Returns GenericOp that copies an n-D memref. More...
 
bool isProducerLastWriteOfView (const LinalgDependenceGraph &graph, LinalgOp consumer, Value consumedView, LinalgOp producer)
 Checks whether the specific producer is the last write to exactly the whole consumedView. More...
 
bool isFusableInto (const LinalgDependenceGraph &graph, LinalgOp consumer, Value consumedView, LinalgOp producer)
 Checks whether fusing the specific producer of the consumedView is feasible. More...
 
SmallVector< Value > computeTileOffsets (OpBuilder &b, Location loc, ValueRange ivs, ValueRange tileSizes)
 Compute tile offsets, given a list of loop ivs and tileSizes. More...
 
SmallVector< Value > computeTileSizes (OpBuilder &b, Location loc, ValueRange ivs, ValueRange tileSizes, ArrayRef< Value > sizeBounds)
 Compute tile sizes, given a list of loop ivs, tileSizes and dimension sizes (sizeBounds). More...
 
Value makeTiledShape (OpBuilder &builder, Location loc, Value valueToTile, ValueRange tileSizes, AffineMap map, ValueRange lbs, ValueRange ubs, ValueRange subShapeSizes, bool omitPartialTileCheck)
 Creates an extract_slice/subview op for a single valueToTile with builder. More...
 
SmallVector< Value, 4 > makeTiledShapes (OpBuilder &builder, Location loc, LinalgOp linalgOp, ArrayRef< Value > valuesToTile, ValueRange ivs, ValueRange tileSizes, ArrayRef< Value > sizeBounds, bool omitPartialTileCheck)
 Creates extract_slice/subview ops for all valuesToTile of the given linalgOp with builder, assuming linalgOp is being fused into a loop nest for tiling with the given induction variables ivs and tile sizes tileSizes. More...
 
void addTileLoopIvsToIndexOpResults (OpBuilder &b, LinalgOp tiledOp, ArrayRef< Value > ivs)
 Add the tile loop induction variables ivs to the IndexOp results found in the body of the tiledOp to account for the tile offset. More...
 
FusableOpDependencesTy findAllFusableDependences (ArrayRef< LinalgOp > ops, const LinalgDependenceGraph &dependenceGraph)
 Find all dependences that are fusable. More...
 
FailureOr< FusionInfo > fuseProducerOfBuffer (OpBuilder &b, OpOperand &consumerOpOperand, const LinalgDependenceGraph &graph)
 Fuses producer into consumer if the producer is structurally feasible and the fusion would not violate dependencies. More...
 
FailureOr< FusionInfo > fuseProducerOfTensor (OpBuilder &b, OpOperand &consumerOpOperand)
 Tensor counterpart of fuseProducerOfBuffer. More...
 
FailureOr< FusionInfo > fuseProducerOfTensor (OpBuilder &b, OpResult producerOpResult, OpOperand &consumerOpOperand)
 Tensor counterpart of fuseProducerOfBuffer. More...
 
void updateBoundsForCyclicDistribution (OpBuilder &builder, Location loc, Value procId, Value nprocs, Value &lb, Value &ub, Value &step)
 Update the lb, ub and step to get per processor lb, ub and step. More...
 
FailureOr< TileLoopNest > tileConsumerAndFuseProducers (OpBuilder &b, LinalgOp consumerOp, ArrayRef< int64_t > tileSizes, ArrayRef< int64_t > tileInterchange, const Optional< LinalgLoopDistributionOptions > &tileDistribution)
 Tiles consumerOp and fuses its dependencies if possible. More...
 
static void generateParallelLoopNest (OpBuilder &b, Location loc, ValueRange lbs, ValueRange ubs, ValueRange steps, ArrayRef< Attribute > iteratorTypes, function_ref< void(OpBuilder &, Location, ValueRange)> bodyBuilderFn, SmallVectorImpl< Value > &ivStorage, ArrayRef< DistributionMethod > distributionMethod={})
 Generates a loop nest consisting of scf.parallel and scf.for, depending on the iteratorTypes. More...
 
static Value fullyComposeAndAffineApply (OpBuilder &b, Location loc, AffineExpr expr, ValueRange operands)
 

Typedef Documentation

◆ AllocBufferCallbackFn

using mlir::linalg::AllocBufferCallbackFn = typedef std::function<Optional<Value>( OpBuilder &b, memref::SubViewOp subView, ArrayRef<Value> boundingSubViewSize, DataLayout &layout)>

Callback function type used to perform the allocation for the promoted subView.

In boundingSubViewSize a best attempt is made to find the smallest constant value for the size of the buffer needed for each dimension. If that is not possible, it instead contains the dynamic size of the subview. The callback should return the buffer to use.

Definition at line 228 of file Transforms.h.

◆ ControlFusionFn

using mlir::linalg::ControlFusionFn = typedef std::function<bool(const OpResult &producer, OpOperand &consumer)>

Function type which is used to control when to stop fusion.

It is expected that OpOperand is not modified in the callback. The OpOperand is not marked as const to allow callers to use non-const methods.

Definition at line 65 of file Transforms.h.

◆ ControlSplitReductionFn

using mlir::linalg::ControlSplitReductionFn = typedef std::function<std::pair<int64_t, unsigned>(LinalgOp op)>

Function signature to control reduction splitting.

This returns a pair containing a ratio and a dimension index. The ratio is used to split the reduction dimension. The dimension index is used to control where the extra dimension is added to the intermediate tensor shape. If the ratio value is less than or equal to 1, nothing is done.

Definition at line 1374 of file Transforms.h.

◆ CopyCallbackFn

using mlir::linalg::CopyCallbackFn = typedef std::function<LogicalResult(OpBuilder &b, Value src, Value dst)>

Callback function type used to insert copy from original subview to subview of the promoted region for the read operands/subview of promoted region to original subview for the results.

The copy has to happen from src to dst.

Definition at line 240 of file Transforms.h.

◆ DeallocBufferCallbackFn

using mlir::linalg::DeallocBufferCallbackFn = typedef std::function<LogicalResult(OpBuilder &b, Value buffer)>

Callback function type used to deallocate the buffers used to hold the promoted subview.

Definition at line 233 of file Transforms.h.

◆ FusableOpDependencesTy

Definition at line 210 of file Utils.h.

◆ LinalgLoops

Definition at line 43 of file Transforms.h.

◆ LoopIndexToRangeIndexMap

Creates a number of ranges equal to the number of non-zero entries in tileSizes.

One for each loop of the LinalgOp that is tiled. The tileSizes argument has one entry per surrounding loop, and uses zero as the convention that a particular loop is not tiled. This convention simplifies implementations by avoiding affine map manipulations. The returned ranges correspond to the loop ranges, in the proper order, that are tiled and for which new loops will be created. The function also returns a map from loop indices of the LinalgOp to the corresponding non-empty range indices of the newly created loops.

Definition at line 444 of file Transforms.h.

◆ LoopRangeBuilder

Definition at line 36 of file Linalg.h.

◆ OneDimProcInfoCallBackFn

using mlir::linalg::OneDimProcInfoCallBackFn = typedef std::function<ProcInfo(OpBuilder &b, Location loc)>

Definition at line 302 of file Utils.h.

◆ OptimizeCopyFn

using mlir::linalg::OptimizeCopyFn = typedef std::function<LogicalResult(PatternRewriter &, tensor::PadOp, Value)>

Definition at line 1187 of file Transforms.h.

◆ ProcInfoCallBackFn

using mlir::linalg::ProcInfoCallBackFn = typedef std::function<SmallVector<ProcInfo, 2>( OpBuilder &b, Location loc, ArrayRef<Range> parallelLoopRanges)>

Definition at line 300 of file Utils.h.

◆ TileSizeComputationFunction

Definition at line 433 of file Transforms.h.

Enumeration Type Documentation

◆ DistributionMethod

Scheme used to distribute loops to processors.

Enumerator
Cyclic 

Cyclic distribution where no assumption is made about the dynamic relationship between number of processors and number of iterations of the distributed loop.

Distributes the following loop

scf.parallel (iv) = (lb) to (ub) step (step)

to

scf.parallel (iv) = (lb + procId * step) to (ub) step (step * nprocs)

CyclicNumProcsGeNumIters 

Cyclic distribution where the number of processors can be assumed to be more than or equal to the number of iterations of the distributed loop.

In such cases, a simple in-bounds check is enough (instead of materializing a loop). Distributes the following loop

scf.parallel (iv) = (lb) to (ub) step (step)

to

iv = lb + procId * step
cond = arith.cmpi "slt", iv, ub
scf.if cond { ... }

CyclicNumProcsEqNumIters 

Cyclic distribution where the number of processors can be assumed to be equal to the number of iterations of the distributed loop.

In such cases, no bounds check is needed. Distributes the following loop

scf.parallel (iv) = (lb) to (ub) step (step)

to

iv = lb + procId * step

Definition at line 253 of file Utils.h.

◆ LinalgLoweringType

Linalg lowering patterns.

Apply the linalgLowerOpToLoops transformation as a pattern. filter controls LinalgTransformMarker matching and update when specified. See linalgLowerOpToLoops for more details.

Enumerator
LibraryCall 
Loops 
AffineLoops 
ParallelLoops 

Definition at line 1088 of file Transforms.h.

◆ LinalgTilingLoopType

The type of loops to be generated during tiling.

Enumerator
Loops 
AffineLoops 
ParallelLoops 
TiledLoops 

Definition at line 142 of file Utils.h.

Function Documentation

◆ addTileLoopIvsToIndexOpResults()

void mlir::linalg::addTileLoopIvsToIndexOpResults (OpBuilder &b, LinalgOp tiledOp, ArrayRef< Value > ivs)

Add the tile loop induction variables ivs to the IndexOp results found in the body of the tiledOp to account for the tile offset.

Definition at line 958 of file Utils.cpp.

References mlir::bindDims(), mlir::Builder::getContext(), mlir::makeComposedAffineApply(), and mlir::OpBuilder::setInsertionPointAfter().

Referenced by fuse(), getTiledProducer(), and transformIndexOps().

◆ applyStagedPatterns()

LogicalResult mlir::linalg::applyStagedPatterns (Operation *op, ArrayRef< FrozenRewritePatternSet > stage1Patterns, const FrozenRewritePatternSet &stage2Patterns, function_ref< LogicalResult(Operation *)> stage3Lambda = nullptr)

Helper function to allow applying rewrite patterns, interleaved with more global transformations, in a staged fashion:

  1. the first stage consists of a list of FrozenRewritePatternSet. Each FrozenRewritePatternSet in this list is applied once, in order.
  2. the second stage consists of a single FrozenRewritePatternSet that is applied greedily until convergence.
  3. the third stage consists of applying a lambda, generally used for non-local transformation effects. This allows creating custom fused transformations where patterns can be ordered and applied at a finer granularity than a sequence of traditional compiler passes.

Definition at line 743 of file Transforms.cpp.

References mlir::applyPatternsAndFoldGreedily(), DBGS, mlir::failed(), mlir::failure(), and mlir::success().

◆ computeTileOffsets()

SmallVector< Value > mlir::linalg::computeTileOffsets (OpBuilder &b, Location loc, ValueRange ivs, ValueRange tileSizes)

Compute tile offsets, given a list of loop ivs and tileSizes.

In case a tile size is zero (i.e., no tiling), the corresponding offset is also zero.

Definition at line 881 of file Utils.cpp.

References mlir::OpBuilder::create(), isTiled(), and isZero().

Referenced by makeTiledShapes(), and tilePadOp().

◆ computeTileSizes()

SmallVector< Value > mlir::linalg::computeTileSizes (OpBuilder &b, Location loc, ValueRange ivs, ValueRange tileSizes, ArrayRef< Value > sizeBounds)

Compute tile sizes, given a list of loop ivs, tileSizes and dimension sizes (sizeBounds).

In case a tile size is zero (i.e., no tiling), the corresponding result size is the corresponding value from sizeBounds. Note: The returned tile sizes are closed intervals.

Definition at line 896 of file Utils.cpp.

References fullyComposeAndAffineApply(), mlir::getAffineDimExpr(), mlir::Builder::getContext(), isTiled(), and isZero().

Referenced by makeTiledShapes(), and tilePadOp().

◆ concat()

SmallVector< AffineExpr, 4 > mlir::linalg::concat (ArrayRef< AffineExpr > a, ArrayRef< AffineExpr > b)

Return the vector that is the concatenation of a and b.

Definition at line 1491 of file LinalgOps.cpp.

Referenced by convertOmpThreadprivate(), and mlir::presburger::Simplex::makeProduct().

◆ createOrFoldDimOp()

Value mlir::linalg::createOrFoldDimOp (OpBuilder &b, Location loc, Value source, int64_t dim)

Helper function that creates a memref::DimOp or tensor::DimOp depending on the type of source.

Definition at line 159 of file Utils.cpp.

References mlir::OpBuilder::createOrFold(), mlir::Value::getType(), and mlir::Type::isa().

Referenced by createInBoundsCond(), createSubViewIntersection(), fuse(), genBuffers(), MaterializeTransferMask< ConcreteOp >::matchAndRewrite(), and HasAffineDimExprVisitor::visitSymbolExpr().

◆ defaultLoopRangesBuilder()

LoopRangeBuilder mlir::linalg::defaultLoopRangesBuilder ( LinalgOp  op)

Provide a very simple inference procedure to build the loop ranges from the op and its operands.

This only works with permutation affine maps and patterns of the form (m, n)[s] -> (m + n - s floordiv 2). A more advanced Tensor-Comprehension-like inference is possible but has proven to be ambiguous in unfavorable cases. As a consequence, we relax the default behavior very conservatively and provide an op-specified hook so that Linalg ops may override the behavior.

◆ extractOrIdentityMap()

AffineMap mlir::linalg::extractOrIdentityMap (Optional< AffineMap > maybeMap, unsigned rank, MLIRContext *context)

Returns maybeMap.get() if maybeMap is set, otherwise returns the symbol-less identity map of rank.

Definition at line 1471 of file LinalgOps.cpp.

References mlir::AffineMap::get(), and mlir::AffineMap::getMultiDimIdentityMap().

◆ findAllFusableDependences()

FusableOpDependencesTy mlir::linalg::findAllFusableDependences ( ArrayRef< LinalgOp >  ops,
const LinalgDependenceGraph dependenceGraph 
)

Find all dependences that are fusable.

Definition at line 664 of file Fusion.cpp.

References findFusableProducer().

Referenced by tileAndFuseLinalgOpsImpl().

◆ fullyComposeAndAffineApply()

static Value mlir::linalg::fullyComposeAndAffineApply ( OpBuilder b,
Location  loc,
AffineExpr  expr,
ValueRange  operands 
)
static

◆ fuseProducerOfBuffer()

FailureOr< FusionInfo > mlir::linalg::fuseProducerOfBuffer ( OpBuilder b,
OpOperand consumerOpOperand,
const LinalgDependenceGraph graph 
)

Fuses producer into consumer if the producer is structurally feasible and the fusion would not violate dependencies.

Implements the fusion part of the "tileAndFuse on buffers" transformation and thus requires the consumerOpOperand to be a subview op (generally obtained by applying the tiling transformation).

Definition at line 335 of file Fusion.cpp.

References mlir::failure(), findFusableProducer(), fuse(), mlir::IROperand< DerivedT, IRValueT >::get(), mlir::Value::getDefiningOp(), mlir::detail::IROperandBase::getOwner(), mlir::Value::getParentBlock(), and mlir::OpBuilder::setInsertionPoint().

◆ fuseProducerOfTensor() [1/2]

FailureOr< FusionInfo > mlir::linalg::fuseProducerOfTensor ( OpBuilder b,
OpOperand consumerOpOperand 
)

Tensor counterpart of fuseProducerOfBuffer.

This implements the fusion part of the "tileAndFuse on tensors" transformation and thus requires the consumerOpOperand to be an extract_slice op (generally obtained by applying the tiling transformation).

Definition at line 405 of file Fusion.cpp.

References mlir::failure(), mlir::IROperand< DerivedT, IRValueT >::get(), and getProducerOfTensor().

◆ fuseProducerOfTensor() [2/2]

FailureOr< FusionInfo > mlir::linalg::fuseProducerOfTensor ( OpBuilder b,
OpResult  producerOpResult,
OpOperand consumerOpOperand 
)

Tensor counterpart of fuseProducerOfBuffer.

This implements the fusion part of the "tileAndFuse on tensors" transformation and thus requires the consumerOpOperand to be an extract_slice op (generally obtained by applying the tiling transformation). Assumes producerOfTensor is a Linalg op that produces consumerOpOperand.

Definition at line 417 of file Fusion.cpp.

References mlir::OpBuilder::create(), mlir::failure(), fuse(), mlir::IROperand< DerivedT, IRValueT >::get(), mlir::Value::getDefiningOp(), mlir::detail::IROperandBase::getOwner(), mlir::OpResult::getOwner(), mlir::Value::getParentBlock(), mlir::OpResult::getResultNumber(), mlir::Value::getType(), mlir::IROperand< DerivedT, IRValueT >::set(), and mlir::OpBuilder::setInsertionPoint().

◆ generalizeNamedOp()

FailureOr< GenericOp > mlir::linalg::generalizeNamedOp ( RewriterBase rewriter,
LinalgOp  namedOp 
)

◆ generateLibraryCallName()

std::string mlir::linalg::generateLibraryCallName ( Operation op)

Returns the mangled library call name used to disambiguate between different overloads at the C level.

The name mangling scheme is basic and uses MLIR type names:

  1. form a string which is the concatenation of the linalg op name with all the operand type names, separated by underscores;
  2. drop the linalg. prefix, and the <, >, ? symbols from the type names.

Assumes op is a LinalgOp.

Examples:

  1. linalg.fill(f, A) : f32, memref<f32> name mangles into linalg_fill_f32_viewf32
  2. linalg.dot A, B, C : (memref<?xf32, stride_specification>, memref<?xf32, stride_specification>, memref<f32>) name mangles into linalg_dot_viewxf32_viewxf32_viewf32
  3. linalg.matmul(...) : memref<?x?xf32, stride_specification>, memref<?x?xf32, stride_specification>, memref<?x?xf32, stride_specification> name mangles into linalg_matmul_viewxxf32_viewxxf32_viewxxf32
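
The scheme can be modeled in a few lines of Python (an illustrative sketch derived from the examples above, not the C++ implementation; the memref-to-view rewrite is an assumption based on those examples, and stride specifications are ignored):

```python
def mangle_type(type_name):
    # memref types appear as "view..." in the mangled name (see examples)
    type_name = type_name.replace("memref", "view")
    # drop the <, >, ? symbols; dimension separators 'x' are kept
    return "".join(c for c in type_name if c not in "<>?")

def generate_library_call_name(op_name, operand_type_names):
    # concatenate the op name (minus the "linalg." prefix) with all
    # mangled operand type names, separated by underscores
    parts = [op_name.removeprefix("linalg.")]
    parts += [mangle_type(t) for t in operand_type_names]
    return "linalg_" + "_".join(parts)
```

With this model, `linalg.dot` on two `memref<?xf32>` and one `memref<f32>` mangles to `linalg_dot_viewxf32_viewxf32_viewf32`, matching example 2.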

Definition at line 1520 of file LinalgOps.cpp.

References mlir::RewritePatternSet::add(), appendMangledType(), mlir::tensor::canFoldIntoConsumerOp(), mlir::tensor::canFoldIntoProducerOp(), mlir::Type::cast(), mlir::Value::cast(), mlir::Operation::clone(), mlir::OpBuilder::create(), mlir::RewriterBase::eraseOp(), mlir::failure(), mlir::IROperand< DerivedT, IRValueT >::get(), mlir::Value::getDefiningOp(), mlir::Operation::getName(), mlir::Operation::getNumResults(), mlir::OpOperand::getOperandNumber(), mlir::Operation::getOperandTypes(), mlir::AffineMap::getResult(), mlir::Operation::getResult(), mlir::OpResult::getResultNumber(), mlir::Operation::getResults(), mlir::OperationName::getStringRef(), mlir::Value::getType(), materializeConstant(), mlir::RewriterBase::replaceOp(), mlir::Operation::result_begin(), mlir::Operation::result_end(), mlir::OpBuilder::setInsertionPoint(), mlir::success(), and value.

◆ generateParallelLoopNest()

static void mlir::linalg::generateParallelLoopNest ( OpBuilder b,
Location  loc,
ValueRange  lbs,
ValueRange  ubs,
ValueRange  steps,
ArrayRef< Attribute >  iteratorTypes,
function_ref< void(OpBuilder &, Location, ValueRange)>  bodyBuilderFn,
SmallVectorImpl< Value > &  ivStorage,
ArrayRef< DistributionMethod >  distributionMethod = {} 
)
static

Generates a loop nest consisting of scf.parallel and scf.for, depending on the iteratorTypes.

Consecutive parallel loops create a single scf.parallel operation; each sequential loop creates a new scf.for operation. The body of the innermost loop is populated by bodyBuilderFn that accepts a range of induction variables for all loops. ivStorage is used to store the partial list of induction variables.
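
The grouping of iterator types into loop operations can be sketched as follows (illustrative Python, not the C++ implementation):

```python
def group_loops(iterator_types):
    """Group consecutive "parallel" iterators into one scf.parallel op;
    every sequential iterator (e.g. "reduction") gets its own scf.for.
    Returns (loop op name, number of fused dimensions) pairs."""
    groups = []
    for it in iterator_types:
        kind = "scf.parallel" if it == "parallel" else "scf.for"
        if kind == "scf.parallel" and groups and groups[-1][0] == "scf.parallel":
            groups[-1] = ("scf.parallel", groups[-1][1] + 1)
        else:
            groups.append((kind, 1))
    return groups
```

For iterator types ["parallel", "parallel", "reduction", "parallel"], this yields one 2-D scf.parallel, one scf.for, and one 1-D scf.parallel, in that order.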

Definition at line 575 of file Utils.cpp.

References mlir::ArithBuilder::_and(), mlir::scf::buildLoopNest(), mlir::OpBuilder::create(), mlir::isParallelIterator(), and mlir::ArithBuilder::slt().

Referenced by mlir::linalg::GenerateLoopNest< LoopTy >::doit().

◆ getCombinerOpKind()

llvm::Optional< vector::CombiningKind > mlir::linalg::getCombinerOpKind ( Operation combinerOp)

Return vector::CombiningKind for the given op.

Definition at line 116 of file Vectorization.cpp.

Referenced by bindShapeDims(), buildMultiDimReduce(), and reductionPreconditions().

◆ getConstantUpperBoundForIndex()

FailureOr< int64_t > mlir::linalg::getConstantUpperBoundForIndex ( Value  value)

Returns a constant upper bound for the result value of an index computation.

Calls getUpperBoundForIndex and returns a constant upper bound if boundMap evaluates to a constant expression, and failure otherwise.

Example:

%0 = affine.min affine.map<(d0) -> (40, d0)> (%d0)
%1 = affine.apply affine.map<(d0) -> (d0 + 2)> (%0)

getConstantUpperBoundForIndex(%1) returns 42 (boundMap = affine.map<() -> (42)>)
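
Spelling out the arithmetic of the example (illustration only):

```python
# %0 = affine.min affine.map<(d0) -> (40, d0)>(%d0)  =>  %0 <= 40
# %1 = affine.apply affine.map<(d0) -> (d0 + 2)>(%0) =>  %1 <= 40 + 2
upper_bound_of_min = 40
constant_upper_bound = upper_bound_of_min + 2  # 42, the returned bound
```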

Definition at line 280 of file Utils.cpp.

References mlir::failure(), mlir::AffineMap::getResults(), and getUpperBoundForIndex().

Referenced by padOperandToSmallestStaticBoundingBox(), and promoteSubviewAsNewBuffer().

◆ getDimsOfType()

void mlir::linalg::getDimsOfType ( Operation op,
StringRef  iteratorTypeName,
SmallVectorImpl< unsigned > &  res 
)

Return the dims that are iteratorTypeName loops in the LinalgOp op.

Assumes op is a LinalgOp.

Definition at line 1457 of file LinalgOps.cpp.

◆ getDynOperands()

SmallVector< Value, 4 > mlir::linalg::getDynOperands ( Location  loc,
Value  val,
OpBuilder b 
)

Given an operation, retrieves the value of each dynamic dimension by constructing the necessary DimOp operations.

Definition at line 169 of file Utils.cpp.

References mlir::Type::cast(), createOrFoldDimOp(), mlir::detail::enumerate(), and mlir::Value::getType().

◆ getLinalgTilingCanonicalizationPatterns()

RewritePatternSet mlir::linalg::getLinalgTilingCanonicalizationPatterns ( MLIRContext ctx)

Canonicalization patterns relevant to apply after tiling patterns.

These are applied automatically by the tiling pass but need to be applied manually when tiling is called programmatically.

Definition at line 390 of file Tiling.cpp.

References populateLinalgTilingCanonicalizationPatterns().

Referenced by applyExtractSliceOfPadTensorSwapPattern(), and mlir::linalg::LinalgTilingOptions::setPeeledLoops().

◆ getUpperBoundForIndex()

void mlir::linalg::getUpperBoundForIndex ( Value  value,
AffineMap &  boundMap,
SmallVectorImpl< Value > &  boundOperands,
bool  constantRequired = false 
)

Computes an upper bound for the result value of an index computation.

Translates AffineMinOps and AffineApplyOps along the use-def chains of the index computation to affine constraints and projects out intermediate values. The method sets boundMap to an affine map that, given boundOperands, evaluates to an upper bound for the index computation.

If constantRequired is true, only returns the constant bounds (potentially over-approximating) and fails when not possible.

Example:

%dim0 = dim %tensor, %c0
%dim1 = dim %tensor, %c1
%0 = affine.min affine.map<(d0) -> (40, d0)> (%dim0)
%1 = affine.apply affine.map<(d0, d1) -> (d0 + d1)> (%0, %dim1)

getUpperBoundForIndex(%1, boundMap, boundOperands) sets the output parameters to:

  • boundMap = affine.map<(d0) -> (d0 + 40)>
  • boundOperands = [dim1]
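
One way to model the substitution step for this example (a Python sketch assuming nonnegative coefficients; the real implementation works on FlatAffineValueConstraints, not on this toy representation):

```python
def substitute_upper_bound(expr, var, bound):
    """expr is (constant, {var: coeff}). Replacing a min-bounded variable
    by its constant upper bound yields an upper bound for the whole
    expression, provided its coefficient is nonnegative."""
    const, coeffs = expr
    coeffs = dict(coeffs)
    coeff = coeffs.pop(var, 0)
    assert coeff >= 0, "over-approximation only valid for coeff >= 0"
    return (const + coeff * bound, coeffs)

# %1 = %0 + %dim1, with %0 = affine.min (40, %dim0) bounded by 40:
bound_map = substitute_upper_bound((0, {"%0": 1, "%dim1": 1}), "%0", 40)
```

The result (40, {"%dim1": 1}) corresponds to boundMap = (d0) -> (d0 + 40) with boundOperands = [dim1], matching the example.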

Definition at line 179 of file Utils.cpp.

References mlir::FlatAffineValueConstraints::addBound(), mlir::FlatAffineValueConstraints::appendDimId(), mlir::canonicalizeMapAndOperands(), mlir::FlatAffineValueConstraints::computeAlignedMap(), mlir::FlatAffineValueConstraints::containsId(), mlir::presburger::IntegerRelation::EQ, mlir::failed(), mlir::FlatAffineValueConstraints::findId(), mlir::FlatAffineValueConstraints::getAllValues(), mlir::getBackwardSlice(), mlir::presburger::IntegerRelation::getConstantBound(), mlir::AffineMap::getConstantMap(), mlir::Value::getContext(), mlir::Value::getDefiningOp(), mlir::AffineMap::getMultiDimIdentityMap(), mlir::presburger::IntegerRelation::getNumDimIds(), mlir::Operation::getOperands(), mlir::Operation::getResults(), mlir::FlatAffineValueConstraints::getSliceBounds(), mlir::Value::getType(), mlir::Type::isIndex(), mlir::presburger::IntegerRelation::UB, and value.

Referenced by getConstantUpperBoundForIndex(), and HoistingAnalysis::getPackedTensorSizes().

◆ hoistPaddingOnTensors()

FailureOr< Value > mlir::linalg::hoistPaddingOnTensors ( tensor::PadOp  opToHoist,
int  numLoops,
ArrayRef< int64_t >  transposeVector,
tensor::PadOp &  hoistedOp,
SmallVectorImpl< GenericOp > &  transposeOps 
)

Mechanically hoist padding operations on tensors by numLoops into a new, generally larger tensor.

This achieves packing of multiple padding ops into a larger tensor. On success, opToHoist is replaced by the cloned version in the packing loop so the caller can continue reasoning about the padding operation. If transposeVector is non-empty, hoist padding introduces a GenericOp to transpose the padded tensor before inserting it into the packed tensor. A transposeVector can change the storage order of the padded tensor but does not change the order of the pack or compute loops.

Example in pseudo-mlir:

If hoistPaddingOnTensors is called with numLoops = 2 on the following IR:

scf.for (%i, %j, %k)
%st0 = tensor.extract_slice f(%i, %k) : ... to tensor<?x?xf32>
%0 = tensor.pad %st0 low[0, 0] high[...] {
^bb0( ... ):
linalg.yield %pad
} : tensor<?x?xf32> to tensor<4x8xf32>
compute(%0)

IR resembling the following is produced:

scf.for (%i) {
%packed_init = linalg.init_tensor range(%j) : tensor<?x4x8xf32>
%packed = scf.for (%k) iter_args(%p : %packed_init) {
%st0 = tensor.extract_slice f(%i, %k) : ... to tensor<?x?xf32>
%0 = tensor.pad %st0 low[0, 0] high[...] {
^bb0( ... ):
linalg.yield %pad
} : tensor<?x?xf32> to tensor<4x8xf32>
%1 = tensor.insert_slice %0 ...
: tensor<4x8xf32> to tensor<?x4x8xf32>
scf.yield %1: tensor<?x4x8xf32>
} -> tensor<?x4x8xf32>
scf.for (%j, %k) {
%st0 = tensor.extract_slice %packed [%k, 0, 0][1, 4, 8][1, 1, 1] :
tensor<?x4x8xf32> to tensor<4x8xf32>
compute(%st0)
}
}

Definition at line 397 of file HoistPadding.cpp.

References HoistingAnalysis::backwardSlice, buildLoopIterationCount(), computeTransposedType(), DBGS, mlir::failed(), mlir::failure(), mlir::scf::getForInductionVarOwner(), HoistingAnalysis::getPackedTensorSizes(), HoistingAnalysis::isValid(), mlir::BlockAndValueMapping::lookup(), mlir::BlockAndValueMapping::lookupOrDefault(), makeTransposeOp(), mlir::BlockAndValueMapping::map(), HoistingAnalysis::outermostEnclosingForOp, and HoistingAnalysis::packingLoops.

Referenced by mlir::linalg::LinalgPaddingPattern::returningMatchAndRewrite().

◆ hoistRedundantVectorTransfers()

void mlir::linalg::hoistRedundantVectorTransfers ( func::FuncOp  func)

Hoist pairs of vector.transfer_read/vector.transfer_write on buffers out of the immediately enclosing scf::ForOp, iteratively, if the following conditions are true:

  1. The two ops access the same memref with the same indices.
  2. All operands are invariant under the enclosing scf::ForOp.
  3. No uses of the memref either dominate the transfer_read or are dominated by the transfer_write (i.e. no aliasing between the write and the read across the loop).

To improve hoisting opportunities, call the moveLoopInvariantCode helper function on the candidate loop above which to hoist. Hoisting the transfers results in scf::ForOp yielding the value that originally transited through memory.

Definition at line 400 of file Hoisting.cpp.

References mlir::WalkResult::advance(), DBGS, mlir::getForwardSlice(), mlir::WalkResult::interrupt(), mlir::vector::isDisjointTransferSet(), mlir::moveLoopInvariantCode(), mlir::DominanceInfo::properlyDominates(), and mlir::replaceLoopWithNewYields().

◆ hoistRedundantVectorTransfersOnTensor()

void mlir::linalg::hoistRedundantVectorTransfersOnTensor ( func::FuncOp  func)

◆ interchangeGenericOp()

FailureOr< GenericOp > mlir::linalg::interchangeGenericOp ( RewriterBase rewriter,
GenericOp  genericOp,
ArrayRef< unsigned interchangeVector 
)

Interchanges the iterator_types and indexing_maps of op and adapts its index accesses.

This is an in-place transformation controlled by interchangeVector. An empty vector is interpreted as the identity permutation and the transformation returns early.

E.g., the permutation (i,j,k) -> (j,k,i) is expressed with interchangeVector = [1,2,0]. All values in interchangeVector must be integers in the range [0, op.rank) without duplicates (i.e. [1,1,2] is an invalid permutation).
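
The permutation semantics can be sketched in Python (an illustration; the real transformation rewrites affine maps and iterator type attributes in place):

```python
def apply_interchange(items, interchange_vector):
    """Reorder items so that position i of the result holds the item at
    position interchange_vector[i] of the input."""
    # an empty vector is the identity permutation (early return)
    if not interchange_vector:
        return list(items)
    rank = len(items)
    assert sorted(interchange_vector) == list(range(rank)), "invalid permutation"
    return [items[p] for p in interchange_vector]
```

apply_interchange(["i", "j", "k"], [1, 2, 0]) produces ["j", "k", "i"], i.e. the permutation (i,j,k) -> (j,k,i) from the text.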

Definition at line 51 of file Interchange.cpp.

References mlir::applyPermutationToVector(), mlir::AffineMap::compose(), mlir::OpBuilder::create(), mlir::failed(), mlir::RewriterBase::finalizeRootUpdate(), mlir::Builder::getAffineMapArrayAttr(), mlir::getIndexingMapsAttrName(), mlir::getIteratorTypesAttrName(), mlir::AffineMap::getPermutationMap(), mlir::AffineMap::getSubMap(), interchangeGenericOpPrecondition(), mlir::inversePermutation(), mlir::AffineMap::isEmpty(), mlir::RewriterBase::notifyMatchFailure(), mlir::RewriterBase::replaceOpWithNewOp(), mlir::OpBuilder::setInsertionPoint(), and mlir::RewriterBase::startRootUpdate().

Referenced by mlir::linalg::GenericOpInterchangePattern::returningMatchAndRewrite().

◆ isaContractionOpInterface()

bool mlir::linalg::isaContractionOpInterface ( LinalgOp  linalgOp)

Checks whether linalgOp conforms to ContractionOpInterface.

Definition at line 137 of file LinalgInterfaces.cpp.

References isContractionInterfaceImpl(), and Success.

◆ isFusableInto()

bool mlir::linalg::isFusableInto ( const LinalgDependenceGraph graph,
LinalgOp  consumer,
Value  consumedView,
LinalgOp  producer 
)

Checks whether fusing the specific producer of the consumedView is feasible.

This checks that producer is the last write of consumedView and that no interleaved dependence (RAW, WAR, or WAW) would be violated.

Definition at line 253 of file Fusion.cpp.

References mlir::linalg::LinalgDependenceGraph::findCoveringDependences(), and isProducerLastWriteOfView().

Referenced by findFusableProducer().

◆ isPermutation()

bool mlir::linalg::isPermutation ( ArrayRef< int64_t >  permutation)

Check if permutation is a permutation of the range [0, permutation.size()).
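
An equivalent check in Python (illustration, not the C++ implementation):

```python
def is_permutation(permutation):
    # true iff every index in [0, len(permutation)) occurs exactly once
    return sorted(permutation) == list(range(len(permutation)))
```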

Definition at line 144 of file Utils.cpp.

Referenced by computeTransposedType(), mlir::linalg::LinalgTileAndFuseTensorOpsPattern::returningMatchAndRewrite(), and tileConsumerAndFuseProducers().

◆ isProducerLastWriteOfView()

bool mlir::linalg::isProducerLastWriteOfView ( const LinalgDependenceGraph graph,
LinalgOp  consumer,
Value  consumedView,
LinalgOp  producer 
)

Checks whether the specific producer is the last write to exactly the whole consumedView.

This checks structural dominance and that the dependence is a RAW dependence without any interleaved write to any piece of consumedView.

Definition at line 229 of file Fusion.cpp.

References mlir::linalg::LinalgDependenceGraph::findCoveringWrites(), and isStructurallyFusableProducer().

Referenced by isFusableInto().

◆ linalgOpToAffineLoops()

FailureOr< LinalgLoops > mlir::linalg::linalgOpToAffineLoops ( PatternRewriter rewriter,
LinalgOp  linalgOp 
)

Emits a loop nest of affine.for with the proper body for linalgOp.

Definition at line 358 of file Loops.cpp.

Referenced by mlir::linalg::LinalgLoweringPattern< OpTy >::matchAndRewrite().

◆ linalgOpToLoops()

FailureOr< LinalgLoops > mlir::linalg::linalgOpToLoops ( PatternRewriter rewriter,
LinalgOp  linalgOp 
)

Emits a loop nest of scf.for with the proper body for linalgOp.

Definition at line 364 of file Loops.cpp.

Referenced by mlir::linalg::LinalgLoweringPattern< OpTy >::matchAndRewrite().

◆ linalgOpToParallelLoops()

FailureOr< LinalgLoops > mlir::linalg::linalgOpToParallelLoops ( PatternRewriter rewriter,
LinalgOp  linalgOp 
)

Emits a loop nest of scf.parallel with the proper body for linalgOp.

Definition at line 371 of file Loops.cpp.

Referenced by mlir::linalg::LinalgLoweringPattern< OpTy >::matchAndRewrite().

◆ makeAffineDimExprs()

SmallVector< AffineExpr, 4 > mlir::linalg::makeAffineDimExprs ( unsigned  num,
unsigned &  startIdx,
MLIRContext context 
)

Returns num AffineDimExpr dimensions at positions [startIdx, startIdx + num) and increments startIdx to startIdx + num.

Definition at line 1482 of file LinalgOps.cpp.

◆ makeComposedExtractSliceOp()

tensor::ExtractSliceOp mlir::linalg::makeComposedExtractSliceOp ( OpBuilder b,
Location  loc,
Value  source,
ArrayRef< OpFoldResult >  offsets,
ArrayRef< OpFoldResult >  sizes,
ArrayRef< OpFoldResult >  strides 
)

Create an ExtractSliceOp and, if source is defined by an ExtractSliceOp, fold it by adding the offsets.

Example:

%0 = tensor.extract_slice %arg0[3, 4][3, 32][1, 1] : tensor<64x64xf32> to
tensor<3x32xf32>
%1 = tensor.extract_slice %0[0, 5][3, 4][1, 1] : tensor<3x32xf32> to
tensor<3x4xf32>

folds into:

%1 = tensor.extract_slice %arg0[3, 9][3, 4][1, 1] : tensor<64x64xf32> to
tensor<3x4xf32>
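
The offset composition shown above can be verified numerically (an illustrative Python sketch; it assumes unit strides, as in the example):

```python
def compose_extract_slice(outer_offsets, inner_offsets, inner_sizes):
    # extract_slice(extract_slice(x)) folds by adding the offsets;
    # the composed slice keeps the inner (smaller) sizes
    offsets = [a + b for a, b in zip(outer_offsets, inner_offsets)]
    return offsets, inner_sizes
```

With outer offsets [3, 4], inner offsets [0, 5], and inner sizes [3, 4], the folded slice has offsets [3, 9] and sizes [3, 4], matching the example.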

Definition at line 299 of file Utils.cpp.

References mlir::bindDims(), mlir::OpBuilder::create(), mlir::detail::enumerate(), mlir::getConstantIntValue(), mlir::Builder::getContext(), mlir::Value::getDefiningOp(), mlir::getValueOrCreateConstantIndexOp(), and mlir::makeComposedAffineApply().

Referenced by makeTiledShape().

◆ makeComposedPadHighOp()

Value mlir::linalg::makeComposedPadHighOp ( OpBuilder b,
Location  loc,
RankedTensorType  type,
Value  source,
Value  pad,
bool  nofold 
)

Create a tensor::PadOp that pads source to the size of the statically sized type whose static sizes are assumed to be greater than the dynamic source size.

The padding introduces trailing pad values until the target size is met. If source is defined by one or more LinalgOps that have been padded with the same value and sizes, return their padded result instead of creating a tensor::PadOp.

Example:

%0 = tensor.extract_slice %arg0 [%iv0, %iv1] [%sz0, %sz1]
%1 = tensor.pad %0 low[0, 0] high[...] { tensor.yield %cst }
%2 = linalg.matmul ins(...) outs(%1)
%3 = tensor.extract_slice %2 [0, 0] [%sz0, %sz1]

makeComposedPadHighOp(source=%3, pad=%cst) returns %2.
makeComposedPadHighOp(source=%3, pad=%other_cst) returns %4, where:

%4 = tensor.pad %3 low[0, 0] high[...] { tensor.yield %other_cst }

Definition at line 341 of file Utils.cpp.

References mlir::Value::cast(), mlir::tensor::createPadHighOp(), mlir::getConstantIntValue(), mlir::Value::getDefiningOp(), mlir::OpResult::getResultNumber(), mlir::isEqualConstantIntOrValue(), mlir::m_Constant(), and mlir::matchPattern().

Referenced by padOperandToSmallestStaticBoundingBox().

◆ makeMemRefCopyOp()

GenericOp mlir::linalg::makeMemRefCopyOp ( OpBuilder b,
Location  loc,
Value  from,
Value  to 
)

Returns GenericOp that copies an n-D memref.

Unlike the current implementation of memref::CopyOp, this op can be further tiled, lowered to loops, or vectorized.

Definition at line 440 of file Utils.cpp.

References mlir::Type::cast(), mlir::OpBuilder::create(), mlir::Builder::getContext(), mlir::AffineMap::getMultiDimIdentityMap(), mlir::getParallelIteratorTypeName(), and mlir::Value::getType().

◆ makeTiledLoopRanges()

std::tuple< SmallVector< Range, 4 >, LoopIndexToRangeIndexMap > mlir::linalg::makeTiledLoopRanges ( RewriterBase b,
Location  loc,
AffineMap  map,
ValueRange  allShapeSizes,
ValueRange  allTileSizes 
)

Definition at line 44 of file Tiling.cpp.

References mlir::applyMapToValues(), mlir::AffineMap::getNumResults(), and isZero().

Referenced by tileLinalgOpImpl().

◆ makeTiledShape()

Value mlir::linalg::makeTiledShape ( OpBuilder builder,
Location  loc,
Value  valueToTile,
ValueRange  tileSizes,
AffineMap  map,
ValueRange  lbs,
ValueRange  ubs,
ValueRange  subShapeSizes,
bool  omitPartialTileCheck 
)

Creates an extract_slice/subview op for a single valueToTile with builder.

This new operation extracts a tile of valueToTile, starting at offsets lbs and with sizes subShapeSizes. omitPartialTileCheck controls whether to omit the partial/boundary tile condition check in cases where we statically know that it is unnecessary.

Definition at line 764 of file Utils.cpp.

References mlir::applyMapToValues(), mlir::bindDims(), mlir::canonicalizeMapAndOperands(), mlir::OpBuilder::create(), createOrFoldDimOp(), mlir::Type::dyn_cast(), mlir::fullyComposeAffineMapAndOperands(), fullyComposeAndAffineApply(), mlir::getAffineSymbolExpr(), mlir::getAsOpFoldResult(), mlir::Builder::getContext(), mlir::Builder::getIndexAttr(), mlir::Builder::getIndexType(), mlir::AffineMap::getSubMap(), mlir::Value::getType(), mlir::AffineMap::inferFromExprList(), isTiled(), mlir::makeComposedAffineApply(), makeComposedExtractSliceOp(), and mlir::arith::ConstantIndexOp::value().

Referenced by makeTiledShapes(), and tilePadOp().

◆ makeTiledShapes()

SmallVector< Value, 4 > mlir::linalg::makeTiledShapes ( OpBuilder builder,
Location  loc,
LinalgOp  linalgOp,
ArrayRef< Value >  valuesToTile,
ValueRange  ivs,
ValueRange  tileSizes,
ArrayRef< Value >  sizeBounds,
bool  omitPartialTileCheck 
)

Creates extract_slice/subview ops for all valuesToTile of the given linalgOp with builder, assuming linalgOp is being fused into a loop nest for tiling with the given induction variables ivs and tile sizes tileSizes.

sizeBounds are the iteration space bounds for all the implicit loops in linalgOp. omitPartialTileCheck controls whether to omit the partial/boundary tile condition check in cases where we statically know that it is unnecessary.

Note that a constant zero in tileSizes means no tiling at that implicit loop. The number of non-zero values in tileSizes should be equal to the number of values in ivs.
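
A small consistency check reflecting the note above (illustrative Python, not part of the API):

```python
def tile_sizes_match_ivs(tile_sizes, ivs):
    # a zero tile size means that implicit loop is not tiled, so it
    # needs no induction variable; each tiled loop needs exactly one
    num_tiled_loops = sum(1 for t in tile_sizes if t != 0)
    return num_tiled_loops == len(ivs)
```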

Definition at line 911 of file Utils.cpp.

References computeTileOffsets(), computeTileSizes(), isTiled(), isZero(), and makeTiledShape().

Referenced by fuse(), and getTiledProducer().

◆ makeTransposeOp()

GenericOp mlir::linalg::makeTransposeOp ( OpBuilder b,
Location  loc,
Value  inputTensor,
Value  outputTensor,
ArrayRef< int64_t >  transposeVector 
)

◆ peelTiledLinalgOp()

void mlir::linalg::peelTiledLinalgOp ( RewriterBase rewriter,
TiledLinalgOp res,
ArrayRef< int64_t >  peeledLoops,
LinalgTilingLoopType  loopType 
)

Peel the loops of a TiledLinalgOp.

Peel loops after tiling.

Definition at line 327 of file Transforms.cpp.

◆ populateBubbleUpExtractSliceOpPatterns()

void mlir::linalg::populateBubbleUpExtractSliceOpPatterns ( RewritePatternSet patterns)

Patterns that are used to bubble up extract_slice ops above linalg ops.

Definition at line 139 of file BubbleUpExtractSlice.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateConstantFoldLinalgOperations()

void mlir::linalg::populateConstantFoldLinalgOperations ( RewritePatternSet patterns,
const ControlFusionFn controlFn 
)

Patterns to constant fold Linalg operations.

Definition at line 304 of file ConstantFold.cpp.

References mlir::RewritePatternSet::getContext(), and mlir::RewritePatternSet::insert().

Referenced by populateElementwiseOpsFusionPatterns().

◆ populateConvolutionVectorizationPatterns()

void mlir::linalg::populateConvolutionVectorizationPatterns ( RewritePatternSet patterns,
PatternBenefit  benefit = 1 
)

Populate patterns for vectorizing low-D convolution ops.

This is a step in the progressive lowering of convolution ops; it assumes high-D convolution ops were decomposed previously.

Definition at line 1704 of file Vectorization.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateDecomposeConvolutionPatterns()

void mlir::linalg::populateDecomposeConvolutionPatterns ( RewritePatternSet patterns,
const LinalgTransformationFilter filter = LinalgTransformationFilter(),
PatternBenefit  benefit = 1 
)

Linalg decompose convolutions patterns.

Populates patterns to decompose high-D convolution ops into low-D ones. This is a step in progressive lowering for convolution ops, afterwards we can vectorize the low-D convolution ops.

Definition at line 1131 of file Transforms.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

Referenced by mlir::linalg::LinalgLoweringPattern< OpTy >::matchAndRewrite().

◆ populateElementwiseOpsFusionPatterns()

void mlir::linalg::populateElementwiseOpsFusionPatterns ( RewritePatternSet patterns,
const ControlFusionFn controlElementwiseOpFusion 
)

◆ populateElementwiseToLinalgConversionPatterns()

void mlir::linalg::populateElementwiseToLinalgConversionPatterns ( RewritePatternSet patterns)

◆ populateFoldReshapeOpsByCollapsingPatterns()

void mlir::linalg::populateFoldReshapeOpsByCollapsingPatterns ( RewritePatternSet patterns,
const ControlFusionFn controlFoldingReshapes 
)

Patterns to fold an expanding tensor.expand_shape operation with its producer generic operation by collapsing the dimensions of the generic op.

Definition at line 1677 of file ElementwiseOpFusion.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateFoldReshapeOpsByExpansionPatterns()

void mlir::linalg::populateFoldReshapeOpsByExpansionPatterns ( RewritePatternSet patterns,
const ControlFusionFn controlFoldingReshapes 
)

Patterns to fold an expanding (collapsing) tensor_reshape operation with its producer (consumer) generic operation by expanding the dimensionality of the loop in the generic op.

Definition at line 1668 of file ElementwiseOpFusion.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

Referenced by populateElementwiseOpsFusionPatterns().

◆ populateFoldUnitExtentDimsPatterns()

void mlir::linalg::populateFoldUnitExtentDimsPatterns ( RewritePatternSet patterns)

Patterns to fold unit-extent dimensions in operands/results of linalg ops on tensors.

Patterns that are used to canonicalize the use of unit-extent dims for broadcasting.

Definition at line 544 of file DropUnitDims.cpp.

References mlir::RewritePatternSet::add(), mlir::applyPatternsAndFoldGreedily(), mlir::Operation::getContext(), and mlir::RewritePatternSet::getContext().

◆ populateFuseTensorPadWithProducerLinalgOpPatterns()

void mlir::linalg::populateFuseTensorPadWithProducerLinalgOpPatterns ( RewritePatternSet patterns)

Pattern to fuse a tensor.pad operation with the producer of its source, if the producer is a linalg operation with all parallel iterator types.

Definition at line 124 of file PadOpInterchange.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateInlineConstantOperandsPatterns()

void mlir::linalg::populateInlineConstantOperandsPatterns ( RewritePatternSet patterns)

Patterns that are used to inline constant operands into linalg generic ops.

Definition at line 90 of file InlineScalarOperands.cpp.

References mlir::RewritePatternSet::add(), mlir::applyPatternsAndFoldGreedily(), and mlir::RewritePatternSet::getContext().

◆ populateLinalgNamedOpConversionPatterns()

void mlir::linalg::populateLinalgNamedOpConversionPatterns ( RewritePatternSet patterns)

Patterns to convert from one named op to another.

These can be seen as canonicalizations of named ops into another named op.

Definition at line 152 of file NamedOpConversions.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

Referenced by matchAndReplaceDepthwiseConv().

◆ populateLinalgNamedOpsGeneralizationPatterns()

void mlir::linalg::populateLinalgNamedOpsGeneralizationPatterns ( RewritePatternSet patterns,
const LinalgTransformationFilter filter = LinalgTransformationFilter() 
)

Linalg generalization patterns.

Populates patterns with patterns to convert spec-generated named ops to linalg.generic ops.

Definition at line 83 of file Generalization.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

Referenced by generalizeNamedOp(), and mlir::linalg::LinalgLoweringPattern< OpTy >::matchAndRewrite().

◆ populateLinalgTilingCanonicalizationPatterns()

void mlir::linalg::populateLinalgTilingCanonicalizationPatterns ( RewritePatternSet patterns)

◆ populateLinalgToStandardConversionPatterns()

void mlir::linalg::populateLinalgToStandardConversionPatterns ( RewritePatternSet patterns)

◆ populatePadOpVectorizationPatterns()

void mlir::linalg::populatePadOpVectorizationPatterns ( RewritePatternSet patterns,
PatternBenefit  baseBenefit = 1 
)

Populates patterns with patterns that vectorize tensor.pad.

These patterns are meant to apply in a complementary fashion. Benefits are used to encode a certain ordering of pattern application. To avoid scattering magic constants throughout the code base, the patterns must be added with this function. baseBenefit can be used to offset the benefit of all tensor::PadOp vectorization patterns by a certain value.

Definition at line 1114 of file Vectorization.cpp.

References mlir::RewritePatternSet::add(), mlir::PatternBenefit::getBenefit(), and mlir::RewritePatternSet::getContext().

◆ populatePadTensorTilingPatterns()

void mlir::linalg::populatePadTensorTilingPatterns ( RewritePatternSet patterns,
const LinalgTilingOptions options 
)

◆ populateSparseTensorRewriting()

void mlir::linalg::populateSparseTensorRewriting ( RewritePatternSet patterns)

Populate patterns that are only useful in the context of sparse tensors.

Definition at line 200 of file SparseTensorRewriting.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

Referenced by populateElementwiseOpsFusionPatterns().

◆ populateSplitReductionPattern()

void mlir::linalg::populateSplitReductionPattern ( RewritePatternSet patterns,
const ControlSplitReductionFn controlSplitReductionFn,
const LinalgTransformationFilter f = LinalgTransformationFilter() 
)

Patterns to apply splitReduction below.

Definition at line 228 of file SplitReduction.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ promoteSubviewAsNewBuffer()

FailureOr< PromotionInfo > mlir::linalg::promoteSubviewAsNewBuffer ( OpBuilder &  b,
Location  loc,
memref::SubViewOp  subView,
const AllocBufferCallbackFn &  allocationFn,
DataLayout &  layout 
)

◆ promoteSubViews()

FailureOr< LinalgOp > mlir::linalg::promoteSubViews ( OpBuilder &  b,
LinalgOp  op,
const LinalgPromotionOptions &  options 
)

Promote the subViews into a new buffer allocated at the insertion point b.

Promotion occurs in 3 steps:

  1. Create a new buffer for a full tile (i.e. not clipped at the boundary).
  2. Take a full view on the buffer.
  3. Take a partial slice of the full view in step 2 and copy into it.

Statically sized buffers are inferred from the subViews unless dynamicBuffers is true.

Return the modified linalg op (the modification happens in place) as well as all the copy ops created.
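A rough IR sketch of the three steps for one promoted subview %sv (the buffer size 32x32 and the names are hypothetical, and strided result layouts are omitted for brevity):

```mlir
// 1. Allocate a full-tile buffer (not clipped at the boundary).
%buf = memref.alloc() : memref<32x32xf32>
// 2. Take a full view on the buffer.
%full = memref.subview %buf[0, 0] [32, 32] [1, 1]
    : memref<32x32xf32> to memref<32x32xf32>
// 3. Take the partial view matching %sv and copy the data in.
%partial = memref.subview %full[0, 0] [%m, %n] [1, 1]
    : memref<32x32xf32> to memref<?x?xf32>
memref.copy %sv, %partial : memref<?x?xf32> to memref<?x?xf32>
```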

Definition at line 381 of file Promotion.cpp.

References mlir::DataLayout::closest().

Referenced by mlir::linalg::LinalgBasePromotionPattern::matchAndRewrite(), and promoteSubViews().

◆ promoteSubviewsPrecondition()

LogicalResult mlir::linalg::promoteSubviewsPrecondition ( Operation *  op,
LinalgPromotionOptions  options 
)

Check the preconditions for promoting memref.subviews feeding linalg-on-buffers operations.

Definition at line 359 of file Promotion.cpp.

References mlir::failure(), mlir::linalg::LinalgPromotionOptions::operandsToPromote, and mlir::success().

Referenced by mlir::linalg::LinalgBasePromotionPattern::matchAndRewrite().

◆ registerBufferizableOpInterfaceExternalModels()

void mlir::linalg::registerBufferizableOpInterfaceExternalModels ( DialectRegistry &  registry)

◆ registerTransformDialectExtension()

void mlir::linalg::registerTransformDialectExtension ( DialectRegistry &  registry)

Definition at line 193 of file LinalgTransformOps.cpp.

References mlir::DialectRegistry::addExtensions().

Referenced by mlir::registerAllDialects().

◆ rewriteAsPaddedOp()

FailureOr< SmallVector< Value > > mlir::linalg::rewriteAsPaddedOp ( OpBuilder &  b,
LinalgOp  opToPad,
ArrayRef< int64_t >  paddingDimensions,
ArrayRef< Attribute >  paddingValues,
ArrayRef< bool >  packPaddings,
LinalgOp &  paddedOp 
)

Pad the iterator dimensions paddingDimensions of all opToPad operands to a static bounding box.

Use paddingValues and packPaddings to set padding value and nofold attribute of the created tensor::PadOps, respectively. Update paddedOp to the cloned operation with statically shaped paddingDimensions and return the extracted dynamically shaped results. If padding fails, return failure.
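For intuition, a hypothetical matmul with dynamic sizes could be transformed along these lines (the 16x16 bounding box, value names, and pad amounts are illustrative):

```mlir
// Before: dynamically shaped operands.
%0 = linalg.matmul ins(%a, %b : tensor<?x?xf32>, tensor<?x?xf32>)
                   outs(%c : tensor<?x?xf32>) -> tensor<?x?xf32>

// After (sketch): pad each operand to a static bounding box, run the
// statically shaped op, then extract the dynamically shaped result.
%pa = tensor.pad %a low[0, 0] high[%h0, %h1] {
^bb0(%i: index, %j: index):
  tensor.yield %cst : f32
} : tensor<?x?xf32> to tensor<16x16xf32>
// %pb and %pc are padded analogously.
%p = linalg.matmul ins(%pa, %pb : tensor<16x16xf32>, tensor<16x16xf32>)
                   outs(%pc : tensor<16x16xf32>) -> tensor<16x16xf32>
%r = tensor.extract_slice %p[0, 0] [%d0, %d1] [1, 1]
    : tensor<16x16xf32> to tensor<?x?xf32>
```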

Definition at line 255 of file Transforms.cpp.

References mlir::Type::cast(), mlir::OpBuilder::create(), mlir::detail::enumerate(), mlir::failed(), mlir::failure(), mlir::getAsOpFoldResult(), mlir::Builder::getIndexAttr(), mlir::Value::getType(), padOperandToSmallestStaticBoundingBox(), and mlir::OpBuilder::setInsertionPointAfter().

Referenced by mlir::linalg::LinalgPaddingPattern::returningMatchAndRewrite().

◆ splitReduction()

FailureOr< LinalgOp > mlir::linalg::splitReduction ( PatternRewriter &  b,
LinalgOp  op,
const ControlSplitReductionFn &  controlSplitReductionFn,
const LinalgTransformationFilter &  f 
)

Apply a transformation that splits the single reduction dimension of a linalg op into a parallel dimension and a reduction dimension.

Then create a new linalg.generic op doing the rest of the reduction. Return the new linalg op with an extra parallel dimension or failure if the transformation didn't happen. Example:

%r = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>,
affine_map<(d0) -> ()>],
iterator_types = ["reduction"]}
ins(%in : tensor<32xf32>)
outs(%out : tensor<f32>) {
^bb0(%arg1: f32, %arg2: f32):
%y = arith.addf %arg1, %arg2 : f32
linalg.yield %y : f32
} -> tensor<f32>

To:

%cst = arith.constant 0.000000e+00 : f32
%0 = tensor.expand_shape %in [[0, 1]] : tensor<32xf32> into tensor<4x8xf32>
%1 = linalg.init_tensor [4] : tensor<4xf32>
%2 = linalg.fill ins(%cst : f32) outs(%1 : tensor<4xf32>) -> tensor<4xf32>
%3 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
affine_map<(d0, d1) -> (d0)>],
iterator_types = ["parallel", "reduction"]}
ins(%0 : tensor<4x8xf32>) outs(%2 : tensor<4xf32>) {
^bb0(%arg3: f32, %arg4: f32):
%5 = arith.addf %arg3, %arg4 : f32
linalg.yield %5 : f32
} -> tensor<4xf32>
%r = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>,
affine_map<(d0) -> ()>],
iterator_types = ["reduction"]}
ins(%3 : tensor<4xf32>) outs(%out : tensor<f32>) {
^bb0(%arg3: f32, %arg4: f32):
%5 = arith.addf %arg3, %arg4 : f32
linalg.yield %5 : f32
} -> tensor<f32>

Definition at line 59 of file SplitReduction.cpp.

References mlir::linalg::LinalgTransformationFilter::checkAndNotify(), mlir::OpBuilder::clone(), mlir::OpBuilder::create(), mlir::detail::enumerate(), mlir::failed(), mlir::AffineMap::get(), mlir::Builder::getAffineDimExpr(), mlir::AffineMap::getDimPosition(), getElementType(), getIdentity(), mlir::Builder::getMultiDimIdentityMap(), mlir::AffineMap::getNumDims(), mlir::AffineMap::getNumResults(), mlir::getParallelIteratorTypeName(), mlir::getReductionIteratorTypeName(), mlir::Operation::getResult(), mlir::Value::getType(), mlir::RewriterBase::inlineRegionBefore(), mlir::matchReduction(), mlir::RewriterBase::notifyMatchFailure(), mlir::linalg::LinalgTransformationFilter::replaceLinalgTransformationFilter(), and mlir::Operation::setOperand().

◆ tileAndFuseLinalgOps()

FailureOr< TiledAndFusedLinalgOps > mlir::linalg::tileAndFuseLinalgOps ( OpBuilder &  builder,
ArrayRef< LinalgOp >  ops,
const LinalgDependenceGraph &  dependenceGraph,
const LinalgTilingOptions &  tilingOptions 
)

◆ tileConsumerAndFuseProducers()

FailureOr< TileLoopNest > mlir::linalg::tileConsumerAndFuseProducers ( OpBuilder &  b,
LinalgOp  consumerOp,
ArrayRef< int64_t >  tileSizes,
ArrayRef< int64_t >  tileInterchange,
const Optional< LinalgLoopDistributionOptions > &  tileDistribution 
)

◆ tileLinalgOp()

FailureOr< TiledLinalgOp > mlir::linalg::tileLinalgOp ( RewriterBase &  b,
LinalgOp  op,
const LinalgTilingOptions &  options 
)

◆ transformIndexOps()

void mlir::linalg::transformIndexOps ( RewriterBase &  b,
LinalgOp  op,
SmallVectorImpl< Value > &  ivs,
const LoopIndexToRangeIndexMap &  loopIndexToRangeIndex 
)

All indices returned by IndexOp should be invariant with respect to tiling.

Therefore, if an operation is tiled, we have to transform the indices accordingly, i.e. offset them by the values of the corresponding induction variables that are captured implicitly in the body of the op.

Example. linalg.generic before tiling:

#id_2d = (i, j) -> (i, j)
#pointwise_2d_trait = {
  indexing_maps = [#id_2d, #id_2d],
  iterator_types = ["parallel", "parallel"]
}
linalg.generic #pointwise_2d_trait operand, result {
^bb0(operand_in: f32, result_in: f32):
  i = linalg.index 0 : index
  j = linalg.index 1 : index
  <some operations that use i, j>
}: memref<50x100xf32>, memref<50x100xf32>

After tiling pass with tiles sizes 10 and 25:

#strided = (i, j)[s0, s1, s2] -> (i * s1 + s0 + j * s2)

c1 = arith.constant 1 : index
c0 = arith.constant 0 : index
c25 = arith.constant 25 : index
c10 = arith.constant 10 : index
operand_dim_0 = dim operand, 0 : memref<50x100xf32>
operand_dim_1 = dim operand, 1 : memref<50x100xf32>
scf.for k = c0 to operand_dim_0 step c10 {
  scf.for l = c0 to operand_dim_1 step c25 {
    %4 = memref.subview operand[k, l][c10, c25][c1, c1]
      : memref<50x100xf32> to memref<?x?xf32, #strided>
    %5 = memref.subview result[k, l][c10, c25][c1, c1]
      : memref<50x100xf32> to memref<?x?xf32, #strided>
    linalg.generic pointwise_2d_trait %4, %5 {
    ^bb0(operand_in: f32, result_in: f32):
      i = linalg.index 0 : index
      j = linalg.index 1 : index
      // Indices k and l are implicitly captured in the body.
      transformed_i = arith.addi i, k : index // index i is offset by k
      transformed_j = arith.addi j, l : index // index j is offset by l
      // Every use of i, j is replaced with transformed_i, transformed_j
      <some operations that use transformed_i, transformed_j>
    }: memref<?x?xf32, #strided>, memref<?x?xf32, #strided>
  }
}

TODO: Investigate whether mixing implicit and explicit indices does not lead to losing information.

Definition at line 72 of file Tiling.cpp.

References addTileLoopIvsToIndexOpResults(), and mlir::detail::enumerate().

◆ updateBoundsForCyclicDistribution()

void mlir::linalg::updateBoundsForCyclicDistribution ( OpBuilder &  builder,
Location  loc,
Value  procId,
Value  nprocs,
Value &  lb,
Value &  ub,
Value &  step 
)

Update the lb, ub and step to get per processor lb, ub and step.
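This corresponds to a cyclic distribution. As a sketch (value names are hypothetical, and the upper bound typically stays unchanged), each processor procId in [0, nprocs) gets lb' = lb + procId * step and step' = nprocs * step:

```mlir
%lb_new = affine.apply
    affine_map<()[s0, s1, s2] -> (s0 + s1 * s2)>()[%lb, %procId, %step]
%step_new = affine.apply
    affine_map<()[s0, s1] -> (s0 * s1)>()[%nprocs, %step]
```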

Definition at line 555 of file Utils.cpp.

References mlir::bindDims(), mlir::getAffineSymbolExpr(), mlir::Builder::getContext(), and mlir::makeComposedAffineApply().

Referenced by mlir::linalg::GenerateLoopNest< LoopTy >::doit().

◆ vectorize()

LogicalResult mlir::linalg::vectorize ( RewriterBase &  builder,
LinalgOp  linalgOp 
)

◆ vectorizeCopy()

LogicalResult mlir::linalg::vectorizeCopy ( RewriterBase &  builder,
memref::CopyOp  copyOp 
)