MLIR  20.0.0git
mlir::linalg Namespace Reference

Namespaces

 detail
 

Classes

class  LinalgOpToLibraryCallRewrite
 
struct  ContractionDimensions
 Positions of a Linalg op's loops that correspond to the different kinds of contraction dimensions. More...
 
struct  ConvolutionDimensions
 Positions of a Linalg op's loops that correspond to the different kinds of convolution dimensions. More...
 
struct  BufferizeToAllocationOptions
 
struct  LinalgTilingOptions
 
struct  LinalgTilingAndFusionOptions
 
struct  LinalgPaddingOptions
 
struct  LinalgPromotionOptions
 
struct  SplitReductionOptions
 Split Reduction options. More...
 
struct  ControlDropUnitDims
 Transformation to drop unit-extent dimensions from linalg.generic operations. More...
 
struct  DropUnitDimsResult
 
struct  ElementwiseOpFusionResult
 Fuse two linalg.generic operations that have a producer-consumer relationship captured through fusedOperand. More...
 
struct  TiledLinalgOp
 Perform standalone tiling of a single LinalgOp by tileSizes. More...
 
struct  PromotionInfo
 Create a new buffer using the allocationFn provided. More...
 
struct  MultiSizeSpecification
 A description of a multi-size tiling comprising tile sizes and numbers of tiles, expressed as Values which may or may not be constant. More...
 
struct  StaticMultiSizeSpecification
 
struct  ContinuousTileSizeSpecification
 
struct  StaticContinuousTileSizeSpecification
 
struct  ForallReductionTilingResult
 Transformation information returned after reduction tiling. More...
 
struct  SplitReductionResult
 Apply transformation to split the single linalg op reduction into a parallel and reduction dimension. More...
 
struct  CollapseResult
 
struct  LowerPackResult
 
struct  LowerUnPackOpResult
 
struct  PackResult
 Struct to hold the result of a pack call. More...
 
struct  PackTransposeResult
 Struct to hold the result of a packTranspose call. More...
 
struct  BlockPackMatmulOptions
 
struct  DownscaleSizeOneWindowed2DConvolution
 Rewrites 2-D convolution ops with size-1 window dimensions into 1-D convolution ops. More...
 
struct  DownscaleDepthwiseConv2DNhwcHwcOp
 Rewrites 2-D depthwise convolution ops with size-1 (w, kw) or (h, kh) dimensions into 1-D depthwise convolution ops. More...
 
struct  DownscaleConv2DOp
 
struct  LinalgGeneralizationPattern
 Linalg generalization pattern. More...
 
struct  LinalgSpecializationPattern
 
struct  CopyVectorizationPattern
 Vectorization pattern for memref::CopyOp. More...
 
struct  GeneralizePadOpPattern
 Rewrite a tensor::PadOp into a sequence of EmptyOp, FillOp and InsertSliceOp. More...
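For illustration, a rewrite of this kind might look roughly as follows (shapes and value names are invented for the example):

```mlir
// Input: pad a 4x5 tensor with a constant up to 6x8.
%0 = tensor.pad %src low[0, 0] high[2, 3] {
^bb0(%i: index, %j: index):
  tensor.yield %cst : f32
} : tensor<4x5xf32> to tensor<6x8xf32>

// Rewritten (schematically) to empty + fill + insert_slice:
%empty = tensor.empty() : tensor<6x8xf32>
%fill  = linalg.fill ins(%cst : f32) outs(%empty : tensor<6x8xf32>) -> tensor<6x8xf32>
%1     = tensor.insert_slice %src into %fill[0, 0] [4, 5] [1, 1]
           : tensor<4x5xf32> into tensor<6x8xf32>
```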
 
struct  DecomposeOuterUnitDimsPackOpPattern
 Rewrites a tensor::PackOp into a sequence of: More...
 
struct  DecomposeOuterUnitDimsUnPackOpPattern
 Rewrites a tensor::UnPackOp into a sequence of rank-reduced extract_slice op. More...
 
struct  LinalgCopyVTRForwardingPattern
 Match and rewrite for the pattern: More...
 
struct  LinalgCopyVTWForwardingPattern
 Match and rewrite for the pattern: More...
 
struct  ExtractSliceOfPadTensorSwapPattern
 Rewrite extract_slice(tensor.pad(x)) into tensor.pad(extract_slice(x)). More...
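Schematically, with invented shapes, the swap trades a pad of the full tensor for a pad of only the extracted region:

```mlir
// Before: slice of a padded tensor.
%pad = tensor.pad %x low[0] high[5] {
^bb0(%i: index):
  tensor.yield %cst : f32
} : tensor<10xf32> to tensor<15xf32>
%s = tensor.extract_slice %pad[4] [8] [1] : tensor<15xf32> to tensor<8xf32>

// After: slice the source first, then pad only what is still needed.
%s0 = tensor.extract_slice %x[4] [6] [1] : tensor<10xf32> to tensor<6xf32>
%s  = tensor.pad %s0 low[0] high[2] {
^bb0(%i: index):
  tensor.yield %cst : f32
} : tensor<6xf32> to tensor<8xf32>
```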
 
struct  SliceParameters
 A struct containing offsets-sizes-strides arguments of the tiled shape. More...
 
struct  FusionInfo
 A struct containing the Linalg producer before and after fusion. More...
 
struct  ProcInfo
 Callback function type used to get the processor ID and the number of processors used for distribution of all generated parallel loops. More...
 
struct  LinalgLoopDistributionOptions
 Options that allow distribution of loops generated in Linalg transforms to processors while generating the loops. More...
 
struct  RegionMatcher
 A struct containing common matchers over linalg op's region. More...
 
struct  GenerateLoopNest
 Utility class used to generate nested loops with ranges described by loopRanges and loop type described by the iteratorTypes. More...
 

Typedefs

using TileSizeComputationFunction = std::function< SmallVector< Value, 4 >(OpBuilder &, Operation *)>
 
using AllocBufferCallbackFn = std::function< std::optional< Value >(OpBuilder &b, memref::SubViewOp subView, ArrayRef< Value > boundingSubViewSize, DataLayout &layout)>
 Callback function type used to perform the allocation for the promoted subView. More...
 
using DeallocBufferCallbackFn = std::function< LogicalResult(OpBuilder &b, Value buffer)>
 Callback function type used to deallocate the buffers used to hold the promoted subview. More...
 
using CopyCallbackFn = std::function< LogicalResult(OpBuilder &b, Value src, Value dst)>
 Callback function type used to insert copy from original subview to subview of the promoted region for the read operands/subview of promoted region to original subview for the results. More...
 
using ControlSplitReductionFn = std::function< SplitReductionOptions(LinalgOp op)>
 Function signature to control reduction splitting. More...
 
using LinalgLoops = SmallVector< Operation *, 4 >
 
using LoopIndexToRangeIndexMap = DenseMap< int, int >
 Creates a number of ranges equal to the number of non-zero entries in tileSizes. More...
 
using ControlBlockPackMatmulFn = std::function< std::optional< BlockPackMatmulOptions >(linalg::LinalgOp)>
 Function type which is used to control matmul packing. More...
 
using OptimizeCopyFn = std::function< LogicalResult(RewriterBase &, tensor::PadOp, Value)>
 
using ControlFusionFn = std::function< bool(OpOperand *fusedOperand)>
 Function type which is used to control when to stop fusion. More...
 
using ControlPropagationFn = std::function< bool(OpOperand *opOperand)>
 Function type which is used to control propagation of tensor.pack/unpack ops. More...
 
using GetCollapsableDimensionsFn = std::function< SmallVector< ReassociationIndices >(linalg::LinalgOp)>
 Function type to control generic op dimension collapsing. More...
 
using ProcInfoCallBackFn = std::function< SmallVector< ProcInfo >(OpBuilder &b, Location loc, ArrayRef< Range > parallelLoopRanges)>
 
using MeshAxis = mesh::MeshAxis
 
using ReductionKind = mesh::ReductionKind
 
using MeshSharding = mesh::MeshSharding
 
using ShardingArray = mesh::ShardingArray
 
using MeshOp = mesh::MeshOp
 

Enumerations

enum class  LinalgTilingLoopType { Loops = 0 , AffineLoops = 1 , ParallelLoops = 2 }
 The type of loops to be generated during tiling. More...
 
enum class  DistributionMethod { Cyclic = 0 , CyclicNumProcsGeNumIters = 1 , CyclicNumProcsEqNumIters = 2 , None = 3 }
 Scheme used to distribute loops to processors. More...
 

Functions

void populateLinalgToStandardConversionPatterns (RewritePatternSet &patterns)
 Populate the given list with patterns that convert from Linalg to Standard. More...
 
std::string generateLibraryCallName (Operation *op)
 Returns the name mangled library call name to disambiguate between different overloads at the C level. More...
 
SmallVector< AffineExpr, 4 > makeAffineDimExprs (unsigned num, unsigned &startIdx, MLIRContext *context)
 Returns num AffineDimExpr dimensions at positions [startIdx, startIdx + num) and increments startIdx to startIdx + num. More...
 
AffineMap extractOrIdentityMap (std::optional< AffineMap > maybeMap, unsigned rank, MLIRContext *context)
 Returns maybeMap.get() if maybeMap is set, otherwise returns the symbol-less identity map of rank. More...
 
SmallVector< AffineExpr, 4 > concat (ArrayRef< AffineExpr > a, ArrayRef< AffineExpr > b)
 Return the vector that is the concatenation of a and b. More...
 
Value createOrFoldDimOp (OpBuilder &b, Location loc, Value val, int64_t dim)
 Create one memref::DimOp or tensor::DimOp depending on the type of val. More...
 
OpFoldResult createFoldedDimOp (OpBuilder &b, Location loc, Value val, int64_t dim)
 Create one memref::DimOp or tensor::DimOp depending on the type of val. More...
 
FailureOr< ContractionDimensions > inferContractionDims (LinalgOp linalgOp)
 Find at least 2 parallel (m and n) and 1 reduction (k) dimension candidates that form a matmul subcomputation within linalgOp. More...
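As an illustration, for a plain matmul expressed through indexing maps, the classification would be as sketched below (hypothetical result, loop names invented):

```mlir
// C(m, n) += A(m, k) * B(k, n), with loops (d0, d1, d2) = (m, n, k):
#mapA = affine_map<(d0, d1, d2) -> (d0, d2)>  // A(m, k)
#mapB = affine_map<(d0, d1, d2) -> (d2, d1)>  // B(k, n)
#mapC = affine_map<(d0, d1, d2) -> (d0, d1)>  // C(m, n)
// For such an op, ContractionDimensions would classify
// loop 0 as m, loop 1 as n, and loop 2 as k.
```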
 
FailureOr< ContractionDimensions > inferContractionDims (ArrayRef< AffineMap > indexingMaps)
 
bool isaContractionOpInterface (LinalgOp linalgOp)
 Checks whether linalgOp conforms to ContractionOpInterface. More...
 
FailureOr< ConvolutionDimensions > inferConvolutionDims (LinalgOp linalgOp)
 Find at least 1 parallel (output_image) and reduction (filter_loop) dimension candidates that form a convolution subcomputation within linalgOp. More...
 
bool isaConvolutionOpInterface (LinalgOp linalgOp, bool allowEmptyConvolvedDims=false)
 Checks whether linalgOp conforms to ConvolutionOpInterface. More...
 
bool isaCopyOpInterface (LinalgOp linalgOp)
 Checks whether linalgOp is semantically equivalent to a linalg.copy op. More...
 
std::optional< SmallVector< int64_t > > isaBroadcastOpInterface (GenericOp genericOp)
 Checks whether genericOp is semantically equivalent to a linalg.broadcast. More...
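A generic op of roughly the following shape would qualify (illustrative sketch; under this assumption the returned vector would contain the broadcast dimensions, here [1]):

```mlir
#in  = affine_map<(d0, d1) -> (d0)>
#out = affine_map<(d0, d1) -> (d0, d1)>
// Broadcasts %arg along d1; semantically a linalg.broadcast.
%0 = linalg.generic {indexing_maps = [#in, #out],
                     iterator_types = ["parallel", "parallel"]}
    ins(%arg : tensor<8xf32>) outs(%init : tensor<8x16xf32>) {
^bb0(%a: f32, %b: f32):
  linalg.yield %a : f32
} -> tensor<8x16xf32>
```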
 
std::optional< SmallVector< int64_t > > isaTransposeOpInterface (GenericOp genericOp)
 Checks whether genericOp is semantically equivalent to a linalg.transpose. More...
 
bool isaElemwiseSingleUnaryOpInterface (GenericOp genericOp)
 Checks whether a given genericOp is semantically equivalent to a single linalg elementwise unary op. More...
 
bool isaElemwiseSingleBinaryOpInterface (GenericOp genericOp)
 Checks whether genericOp is semantically equivalent to a single linalg elementwise binary op e.g. More...
 
std::optional< Value > isaFillOpInterface (GenericOp genericOp)
 Checks whether genericOp is semantically equivalent to a linalg.fill. More...
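An illustrative sketch of a fill-like generic op; on success the returned value would presumably be the yielded scalar (%cst here):

```mlir
#scalar = affine_map<(d0, d1) -> ()>
#id     = affine_map<(d0, d1) -> (d0, d1)>
// Writes the scalar %cst to every element; semantically a linalg.fill.
%0 = linalg.generic {indexing_maps = [#scalar, #id],
                     iterator_types = ["parallel", "parallel"]}
    ins(%cst : f32) outs(%init : tensor<4x4xf32>) {
^bb0(%a: f32, %b: f32):
  linalg.yield %a : f32
} -> tensor<4x4xf32>
```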
 
void registerValueBoundsOpInterfaceExternalModels (DialectRegistry &registry)
 
void registerTransformDialectExtension (DialectRegistry &registry)
 
void registerAllDialectInterfaceImplementations (DialectRegistry &registry)
 
void registerBufferizableOpInterfaceExternalModels (DialectRegistry &registry)
 
void hoistRedundantVectorTransfers (Operation *root, bool verifyNonZeroTrip=false)
 Hoist vector.transfer_read/vector.transfer_write on buffers pairs out of immediately enclosing scf::ForOp iteratively, if the following conditions are true: More...
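A sketch of the transformation, with invented shapes and values:

```mlir
// Before: a loop-invariant transfer pair inside the loop.
scf.for %i = %c0 to %c128 step %c1 {
  %v = vector.transfer_read %buf[%c0], %pad : memref<16xf32>, vector<16xf32>
  %r = arith.addf %v, %v : vector<16xf32>
  vector.transfer_write %r, %buf[%c0] : vector<16xf32>, memref<16xf32>
}

// After: the value is carried through iter_args instead of memory.
%init = vector.transfer_read %buf[%c0], %pad : memref<16xf32>, vector<16xf32>
%res = scf.for %i = %c0 to %c128 step %c1
    iter_args(%acc = %init) -> (vector<16xf32>) {
  %r = arith.addf %acc, %acc : vector<16xf32>
  scf.yield %r : vector<16xf32>
}
vector.transfer_write %res, %buf[%c0] : vector<16xf32>, memref<16xf32>
```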
 
void hoistRedundantVectorBroadcasts (RewriterBase &rewriter, Operation *root)
 Hoist vector.extract/vector.broadcast pairs out of immediately enclosing scf::ForOp iteratively, if the following conditions are met: More...
 
void registerMeshShardingInterfaceExternalModels (DialectRegistry &registry)
 
void registerRuntimeVerifiableOpInterfaceExternalModels (DialectRegistry &registry)
 
void registerSubsetOpInterfaceExternalModels (DialectRegistry &registry)
 
void registerTilingInterfaceExternalModels (DialectRegistry &registry)
 
std::optional< vector::CombiningKind > getCombinerOpKind (Operation *combinerOp)
 Return vector::CombiningKind for the given op. More...
 
Value bufferizeToAllocation (RewriterBase &rewriter, const BufferizeToAllocationOptions &options, tensor::PadOp padOp, Attribute memorySpace={}, Operation *insertionPoint=nullptr)
 Materialize a buffer allocation for the given tensor.pad op and lower the op to linalg.fill/linalg.generic + bufferization.materialize_in_destination. More...
 
Value bufferizeToAllocation (RewriterBase &rewriter, const BufferizeToAllocationOptions &options, vector::MaskOp maskOp, Attribute memorySpace={}, Operation *insertionPoint=nullptr)
 Materialize a buffer allocation for the given vector.mask op and bufferize the op, including its region. More...
 
Value bufferizeToAllocation (RewriterBase &rewriter, const BufferizeToAllocationOptions &options, bufferization::AllocTensorOp allocTensorOp, Attribute memorySpace={}, Operation *insertionPoint=nullptr)
 Materialize a buffer allocation for the given bufferization.alloc_tensor op and lower the op to memref.alloc + memref.tensor_store. More...
 
Value bufferizeToAllocation (RewriterBase &rewriter, const BufferizeToAllocationOptions &options, Operation *op, Attribute memorySpace={}, Operation *insertionPoint=nullptr)
 Bufferize the given op with tensor semantics and materialize the result in a newly allocated buffer. More...
 
LogicalResult linalgOpAnchoredEmptyTensorEliminationStep (RewriterBase &rewriter, Operation *op, bufferization::OneShotAnalysisState &state)
 Try to eliminate tensor::EmptyOps inside op that are anchored on a LinalgOp. More...
 
bool areElementwiseOpsFusable (OpOperand *fusedOperand)
 Return true if two linalg.generic operations with producer/consumer relationship through fusedOperand can be fused using elementwise op fusion. More...
 
LogicalResult promoteSubviewsPrecondition (Operation *op, LinalgPromotionOptions options)
 Promote memref.subviews feeding linalg-on-buffers operations. More...
 
LogicalResult vectorizeOpPrecondition (Operation *op, ArrayRef< int64_t > inputVectorSizes={}, ArrayRef< bool > inputScalableVecDims={}, bool vectorizeNDExtract=false, bool flatten1DDepthwiseConv=false)
 Return success if the operation can be vectorized. More...
 
FailureOr< DropUnitDimsResult > dropUnitDims (RewriterBase &rewriter, GenericOp genericOp, const ControlDropUnitDims &options)
 
FailureOr< ElementwiseOpFusionResult > fuseElementwiseOps (RewriterBase &rewriter, OpOperand *fusedOperand)
 
llvm::SmallDenseSet< int > getPreservedProducerResults (GenericOp producer, GenericOp consumer, OpOperand *fusedOperand)
 Returns a set of indices of the producer's results which would be preserved after the fusion. More...
 
SmallVector< Value > peelLoop (RewriterBase &rewriter, Operation *op)
 Try to peel and canonicalize loop op and return the new result. More...
 
void peelLoops (RewriterBase &rewriter, ArrayRef< scf::ForOp > loops)
 Peel 'loops' and applies affine_min/max bounds simplification on the fly where relevant. More...
 
LogicalResult rewriteAsPaddedOp (RewriterBase &rewriter, LinalgOp opToPad, const LinalgPaddingOptions &options, LinalgOp &paddedOp, SmallVector< Value > &replacements, SmallVector< tensor::PadOp > &padOps)
 Pad the iterator dimensions paddingDimensions of all opToPad operands to a static bounding box. More...
 
FailureOr< Value > hoistPaddingOnTensors (RewriterBase &rewriter, tensor::PadOp opToHoist, int64_t numLoops, ArrayRef< int64_t > transposeVector, tensor::PadOp &hoistedOp, SmallVectorImpl< TransposeOp > &transposeOps)
 Mechanically hoist padding operations on tensors by numLoops into a new, generally larger tensor. More...
 
FailureOr< Value > hoistPaddingOnTensors (tensor::PadOp opToHoist, int64_t numLoops, ArrayRef< int64_t > transposeVector, tensor::PadOp &hoistedOp, SmallVectorImpl< TransposeOp > &transposeOps)
 Calls into hoistPaddingOnTensors with a local IRRewriter. More...
 
FailureOr< LinalgOp > padAndHoistLinalgOp (RewriterBase &rewriter, LinalgOp linalgOp, const LinalgPaddingOptions &options)
 Apply padding and hoisting to linalgOp according to the configuration specified in options. More...
 
std::pair< TilingInterface, TilingInterface > splitOp (RewriterBase &rewriter, TilingInterface op, unsigned dimension, OpFoldResult splitPoint)
 Split the given op into two parts along the given iteration space dimension at the specified splitPoint, and return the two parts. More...
 
FailureOr< TiledLinalgOp > tileLinalgOp (RewriterBase &b, LinalgOp op, const LinalgTilingOptions &options)
 
FailureOr< GenericOp > interchangeGenericOp (RewriterBase &rewriter, GenericOp genericOp, ArrayRef< unsigned > interchangeVector)
 Interchange the iterator_types and indexing_maps dimensions and adapt the index accesses of op. More...
 
FailureOr< GenericOp > generalizeNamedOp (RewriterBase &rewriter, LinalgOp namedOp)
 Create a GenericOp from the given named operation namedOp and replace namedOp. More...
 
FailureOr< LinalgOp > specializeGenericOp (RewriterBase &rewriter, GenericOp genericOp)
 Create a namedOp from the given GenericOp and replace the GenericOp. More...
 
FailureOr< PromotionInfo > promoteSubviewAsNewBuffer (OpBuilder &b, Location loc, memref::SubViewOp subView, const AllocBufferCallbackFn &allocationFn, DataLayout &layout)
 
FailureOr< LinalgOp > promoteSubViews (OpBuilder &b, LinalgOp op, const LinalgPromotionOptions &options)
 Promote the subViews into a new buffer allocated at the insertion point b. More...
 
std::optional< Value > allocateWorkgroupMemory (OpBuilder &builder, memref::SubViewOp subview, ArrayRef< Value > sizeBounds, DataLayout &)
 Allocate the subview in the GPU workgroup memory. More...
 
LogicalResult deallocateWorkgroupMemory (OpBuilder &, Value)
 In case of GPU group memory there is no need to deallocate. More...
 
LogicalResult copyToWorkgroupMemory (OpBuilder &b, Value src, Value dst)
 Create Memref copy operations and add gpu barrier guards before and after the copy operation to ensure data integrity. More...
 
std::optional< Value > allocateGPUPrivateMemory (OpBuilder &builder, memref::SubViewOp subview, ArrayRef< Value > sizeBounds, DataLayout &)
 Allocate the subview in the GPU private memory. More...
 
LogicalResult copyToGPUPrivateMemory (OpBuilder &b, Value src, Value dst)
 Normal copy between src and dst. More...
 
LogicalResult deallocateGPUPrivateMemory (OpBuilder &, Value)
 In case of GPU private memory there is no need to deallocate since the memory is freed when going outside of the scope. More...
 
bool hasVectorizationImpl (Operation *)
 Return true if there's dedicated logic in the Linalg Vectorizer to vectorize this Op, false otherwise. More...
 
LogicalResult vectorize (RewriterBase &rewriter, Operation *op, ArrayRef< int64_t > inputVectorSizes={}, ArrayRef< bool > inputScalableVecDims={}, bool vectorizeNDExtract=false, bool flatten1DDepthwiseConv=false)
 Emit a suitable vector form for an operation. More...
 
LogicalResult vectorizeCopy (RewriterBase &builder, memref::CopyOp copyOp)
 Emit a suitable vector form for a Copy op with fully static shape. More...
 
FailureOr< LinalgLoops > linalgOpToLoops (RewriterBase &rewriter, LinalgOp linalgOp)
 Emit a loop nest of scf.for with the proper body for linalgOp. More...
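For example, a linalg.matmul on buffers would lower to roughly the following loop nest (sketch with invented shapes):

```mlir
// Input: linalg.matmul ins(%A, %B : memref<4x8xf32>, memref<8x16xf32>)
//                      outs(%C : memref<4x16xf32>)
scf.for %i = %c0 to %c4 step %c1 {
  scf.for %j = %c0 to %c16 step %c1 {
    scf.for %k = %c0 to %c8 step %c1 {
      %a = memref.load %A[%i, %k] : memref<4x8xf32>
      %b = memref.load %B[%k, %j] : memref<8x16xf32>
      %c = memref.load %C[%i, %j] : memref<4x16xf32>
      %m = arith.mulf %a, %b : f32
      %s = arith.addf %c, %m : f32
      memref.store %s, %C[%i, %j] : memref<4x16xf32>
    }
  }
}
```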
 
FailureOr< LinalgLoops > linalgOpToParallelLoops (RewriterBase &rewriter, LinalgOp linalgOp)
 Emit a loop nest of scf.parallel with the proper body for linalgOp. More...
 
FailureOr< LinalgLoops > linalgOpToAffineLoops (RewriterBase &rewriter, LinalgOp linalgOp)
 Emit a loop nest of affine.for with the proper body for linalgOp. More...
 
std::tuple< SmallVector< Range, 4 >, LoopIndexToRangeIndexMap > makeTiledLoopRanges (RewriterBase &b, Location loc, AffineMap map, ArrayRef< OpFoldResult > allShapeSizes, ArrayRef< OpFoldResult > allTileSizes)
 
FailureOr< MultiSizeSpecification > computeMultiTileSizes (OpBuilder &builder, LinalgOp op, unsigned dimension, OpFoldResult targetSize, OpFoldResult divisor, bool emitAssertions=true)
 Emits the IR computing the multi-sized tiling specification with two tile sizes not exceeding targetSize, each divisible by divisor, such that there exist numbers of tiles with these sizes that fully cover the given iteration space dimension of the structured op. More...
 
FailureOr< StaticMultiSizeSpecification > computeStaticMultiTileSizes (LinalgOp op, unsigned dimension, int64_t targetSize, int64_t divisor)
 
FailureOr< StaticContinuousTileSizeSpecification > computeStaticContinuousTileSizes (LinalgOp op, unsigned dimension, unsigned targetSize)
 
FailureOr< ContinuousTileSizeSpecification > computeContinuousTileSizes (OpBuilder &builder, TilingInterface op, unsigned dimension, OpFoldResult targetSize, bool emitAssertions)
 
FailureOr< ForallReductionTilingResult > tileReductionUsingForall (RewriterBase &b, PartialReductionOpInterface op, ArrayRef< OpFoldResult > numThreads, ArrayRef< OpFoldResult > tileSizes={}, std::optional< ArrayAttr > mapping=std::nullopt)
 Method to tile a reduction to parallel iterations computing partial reductions. More...
 
void transformIndexOps (RewriterBase &b, LinalgOp op, SmallVectorImpl< Value > &ivs, const LoopIndexToRangeIndexMap &loopIndexToRangeIndex)
 All indices returned by IndexOp should be invariant with respect to tiling. More...
 
FailureOr< SplitReductionResult > splitReduction (RewriterBase &b, LinalgOp op, const ControlSplitReductionFn &controlSplitReductionFn, bool useAlloc=false)
 
FailureOr< SplitReductionResult > splitReductionByScaling (RewriterBase &b, LinalgOp op, const ControlSplitReductionFn &controlSplitReductionFn, bool useAlloc=false)
 Scaling-based implementation of the split reduction transformation. More...
 
bool isDimSequencePreserved (AffineMap map, ReassociationIndicesRef dimSequence)
 Return true if a given sequence of dimensions are contiguous in the range of the specified indexing map. More...
 
bool areDimSequencesPreserved (ArrayRef< AffineMap > maps, ArrayRef< ReassociationIndices > dimSequences)
 Return true if all sequences of dimensions specified in dimSequences are contiguous in all the ranges of the maps. More...
 
FailureOr< CollapseResult > collapseOpIterationDims (LinalgOp op, ArrayRef< ReassociationIndices > foldedIterationDims, RewriterBase &rewriter)
 Collapses dimensions of linalg.generic/linalg.copy operation. More...
 
FailureOr< LowerPackResult > lowerPack (RewriterBase &rewriter, tensor::PackOp packOp)
 Rewrite pack as pad + reshape + transpose. More...
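Schematically, for a pack whose tiles divide the source evenly (so no pad is needed; shapes invented):

```mlir
// Before: pack a 128x256 tensor into 8x16 inner tiles.
%p = tensor.pack %src inner_dims_pos = [0, 1] inner_tiles = [8, 16]
     into %dest : tensor<128x256xf32> -> tensor<16x16x8x16xf32>

// After (roughly): reshape to expose the tiles, then transpose
// them into the packed (outer..., inner...) layout.
%e = tensor.expand_shape %src [[0, 1], [2, 3]] output_shape [16, 8, 16, 16]
     : tensor<128x256xf32> into tensor<16x8x16x16xf32>
%t = linalg.transpose ins(%e : tensor<16x8x16x16xf32>)
     outs(%dest : tensor<16x16x8x16xf32>) permutation = [0, 2, 1, 3]
```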
 
FailureOr< LowerUnPackOpResult > lowerUnPack (RewriterBase &rewriter, tensor::UnPackOp unPackOp)
 Rewrite pack as empty + transpose + reshape + extract_slice. More...
 
FailureOr< PackResult > pack (RewriterBase &rewriter, linalg::LinalgOp linalgOp, ArrayRef< OpFoldResult > packedSizes)
 Implement packing of a single LinalgOp by packedSizes. More...
 
FailureOr< PackTransposeResult > packTranspose (RewriterBase &rewriter, tensor::PackOp packOp, linalg::LinalgOp linalgOp, tensor::UnPackOp maybeUnPackOp, ArrayRef< int64_t > outerPerm, ArrayRef< int64_t > innerPerm)
 Transpose a single PackOp -> LinalgOp -> UnPackOp chain and return the transposed PackOp -> LinalgOp -> UnPackOp chain after replacements. More...
 
FailureOr< PackResult > packMatmulGreedily (RewriterBase &rewriter, LinalgOp linalgOp, ArrayRef< OpFoldResult > mnkPackedSizes, ArrayRef< int64_t > mnkPaddedSizesNextMultipleOf, ArrayRef< int64_t > mnkOrder)
 Pack a LinalgOp by greedily inferring matmul dimensions (m, n, k) where m and n are proper parallel dimensions and k is a proper reduction dimension. More...
 
FailureOr< PackResult > blockPackMatmul (RewriterBase &rewriter, linalg::LinalgOp linalgOp, const ControlBlockPackMatmulFn &controlPackMatmul)
 Pack a matmul operation into blocked 4D layout. More...
 
FailureOr< Operation * > rewriteInDestinationPassingStyle (RewriterBase &rewriter, tensor::FromElementsOp fromElementsOp)
 Rewrite tensor.from_elements to linalg.generic. More...
 
FailureOr< Operation * > rewriteInDestinationPassingStyle (RewriterBase &rewriter, tensor::GenerateOp generateOp)
 Rewrite tensor.generate to linalg.generic. More...
 
FailureOr< Operation * > rewriteInDestinationPassingStyle (RewriterBase &rewriter, tensor::PadOp padOp)
 Rewrite tensor.pad to linalg.generic + tensor.insert_slice. More...
 
FailureOr< std::pair< Operation *, Operation * > > rewriteInIm2Col (RewriterBase &rewriter, linalg::Conv2DNhwcHwcfOp convOp)
 Convert linalg.conv_2d_nhwc_hwcf into linalg.generic (for img2col packing) and linalg.matmul. More...
 
FailureOr< std::pair< Operation *, Operation * > > rewriteInIm2Col (RewriterBase &rewriter, linalg::Conv2DNhwcFhwcOp convOp)
 Same as the above but for Fhwc channel orderings in the filter. More...
 
FailureOr< std::pair< Operation *, Operation * > > rewriteInIm2Col (RewriterBase &rewriter, linalg::DepthwiseConv2DNhwcHwcOp convOp)
 Similar to rewriteInIm2Col with linalg::Conv2DNhwcHwcfOp, except that there is no reduction among the input channels, so each convolution can be a matrix-vector product; by transposing both the input and the filter so that channels are outermost, the computation becomes a batched matrix-vector product. More...
 
FailureOr< std::pair< Operation *, Operation * > > rewriteInIm2Col (RewriterBase &rewriter, linalg::Conv2DNchwFchwOp convOp)
 Similar to rewriteInIm2Col with linalg::Conv2DNhwcHwcfOp except because the channels are to the left of the image shape dimensions, the position of the contraction dimension in the resulting matmul is reversed. More...
 
FailureOr< Operation * > transposeConv2D (RewriterBase &rewriter, linalg::Conv2DNhwcFhwcOp op)
 Convert linalg.conv_2d_nhwc_fhwc(_q) to linalg.conv_2d_nhwc_hwcf(_q) by materializing transpose. More...
 
FailureOr< Operation * > transposeConv2D (RewriterBase &rewriter, linalg::Conv2DNhwcFhwcQOp op)
 
FailureOr< Operation * > transposeMatmul (RewriterBase &rewriter, linalg::MatmulOp op, bool transposeLHS=true)
 Convert Linalg matmul ops to transposed variants. More...
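With transposeLHS=true, a matmul would be rewritten roughly as follows (sketch with invented shapes):

```mlir
// Before:
%0 = linalg.matmul ins(%A, %B : tensor<16x8xf32>, tensor<8x32xf32>)
                   outs(%C : tensor<16x32xf32>) -> tensor<16x32xf32>

// After: materialize the transpose of the LHS and use the
// transposed named variant.
%empty = tensor.empty() : tensor<8x16xf32>
%At = linalg.transpose ins(%A : tensor<16x8xf32>)
      outs(%empty : tensor<8x16xf32>) permutation = [1, 0]
%0 = linalg.matmul_transpose_a ins(%At, %B : tensor<8x16xf32>, tensor<8x32xf32>)
     outs(%C : tensor<16x32xf32>) -> tensor<16x32xf32>
```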
 
FailureOr< Operation * > transposeBatchMatmul (RewriterBase &rewriter, linalg::BatchMatmulOp op, bool transposeLHS=true)
 Pattern to replace. More...
 
FailureOr< Operation * > winogradConv2D (RewriterBase &rewriter, linalg::Conv2DNhwcFhwcOp op, int64_t m, int64_t r)
 Convert linalg.conv_2d_nhwc_fhwc to Winograd Conv2D algorithm F(m x m, r x r). More...
 
FailureOr< Operation * > decomposeWinogradFilterTransformOp (RewriterBase &rewriter, linalg::WinogradFilterTransformOp op)
 Rewrite linalg.winograd_filter_transform. More...
 
FailureOr< Operation * > decomposeWinogradInputTransformOp (RewriterBase &rewriter, linalg::WinogradInputTransformOp op)
 Rewrite linalg.winograd_input_transform. More...
 
FailureOr< Operation * > decomposeWinogradOutputTransformOp (RewriterBase &rewriter, linalg::WinogradOutputTransformOp op)
 Rewrite linalg.winograd_output_transform. More...
 
RewritePatternSet getLinalgTilingCanonicalizationPatterns (MLIRContext *ctx)
 Canonicalization patterns relevant to apply after tiling patterns. More...
 
void populateLinalgTilingCanonicalizationPatterns (RewritePatternSet &patterns)
 
void populateLinalgNamedOpsGeneralizationPatterns (RewritePatternSet &patterns)
 Linalg generalization patterns. More...
 
void populateLinalgGenericOpsSpecializationPatterns (RewritePatternSet &patterns)
 Populates patterns with patterns to convert linalg.generic ops to named ops where possible. More...
 
void populateDecomposeConvolutionPatterns (RewritePatternSet &patterns, PatternBenefit benefit=1)
 Linalg decompose convolutions patterns. More...
 
void populateDecomposePackUnpackPatterns (RewritePatternSet &patterns)
 Populates patterns to decompose tensor.pack and tensor.unpack Ops into e.g. More...
 
void populateConvertConv2DToImg2ColPatterns (RewritePatternSet &patterns)
 Populates patterns to transform linalg.conv_2d_xxx operations into linalg.generic (for img2col packing) and linalg.matmul. More...
 
void populateInsertSliceVectorizationPatterns (RewritePatternSet &patterns)
 Populates patterns with vectorization patterns for tensor.insert_slice. More...
 
void populatePadOpVectorizationPatterns (RewritePatternSet &patterns, PatternBenefit baseBenefit=1)
 Populates patterns with patterns that vectorize tensor.pad. More...
 
void populateDecomposeLinalgOpsPattern (RewritePatternSet &patterns, bool removeDeadArgsAndResults=true)
 Populate patterns for splitting a LinalgOp with multiple statements within its payload into multiple GenericOp that have a single statement. More...
 
void populateConvertToDestinationStylePatterns (RewritePatternSet &patterns)
 Populate patterns that convert non-destination-style ops to destination style ops. More...
 
void populateConvolutionVectorizationPatterns (RewritePatternSet &patterns, PatternBenefit benefit=1)
 Populate patterns for vectorizing low-D convolution ops. More...
 
void populateElementwiseToLinalgConversionPatterns (RewritePatternSet &patterns)
 Populate patterns that convert ElementwiseMappable ops to linalg parallel loops. More...
 
void populateSparseTensorRewriting (RewritePatternSet &patterns)
 Populate patterns that are only useful in the context of sparse tensors. More...
 
void populateElementwiseOpsFusionPatterns (RewritePatternSet &patterns, const ControlFusionFn &controlElementwiseOpFusion)
 Patterns for fusing linalg operations on tensors. More...
 
void populateDataLayoutPropagationPatterns (RewritePatternSet &patterns, const ControlPropagationFn &controlPackUnPackPropagation)
 Patterns to bubble up or down data layout ops across other operations. More...
 
void populateEraseUnusedOperandsAndResultsPatterns (RewritePatternSet &patterns)
 Pattern to remove dead operands and results of linalg.generic operations. More...
 
void populateEraseUnnecessaryInputsPatterns (RewritePatternSet &patterns)
 Patterns to promote inputs to outputs and remove unused inputs of linalg.generic ops. More...
 
void populateCollapseDimensions (RewritePatternSet &patterns, const GetCollapsableDimensionsFn &controlCollapseDimensions)
 Pattern to collapse dimensions in a linalg.generic op. More...
 
void populateFoldReshapeOpsByExpansionPatterns (RewritePatternSet &patterns, const ControlFusionFn &controlFoldingReshapes)
 Patterns to fold an expanding (collapsing) tensor_reshape operation with its producer (consumer) generic operation by expanding the dimensionality of the loop in the generic op. More...
 
void populateFoldReshapeOpsByCollapsingPatterns (RewritePatternSet &patterns, const ControlFusionFn &controlFoldingReshapes)
 Patterns to fold an expanding tensor.expand_shape operation with its producer generic operation by collapsing the dimensions of the generic op. More...
 
void populateConstantFoldLinalgOperations (RewritePatternSet &patterns, const ControlFusionFn &controlFn)
 Patterns to constant fold Linalg operations. More...
 
void populateFoldAddIntoDestPatterns (RewritePatternSet &patterns)
 Pattern to replace linalg.add when destination passing on a contraction op suffices for achieving the sum. More...
 
void populateFuseTensorPadWithProducerLinalgOpPatterns (RewritePatternSet &patterns)
 Pattern to fuse a tensor.pad operation with the producer of its source, if the producer is a linalg operation with all parallel iterator types. More...
 
void populateLinalgNamedOpConversionPatterns (RewritePatternSet &patterns)
 Patterns to convert from one named op to another. More...
 
void populateFoldUnitExtentDimsPatterns (RewritePatternSet &patterns, ControlDropUnitDims &options)
 Patterns to fold unit-extent dimensions in operands/results of linalg ops on tensors via reassociative reshape ops. More...
 
void populateMoveInitOperandsToInputPattern (RewritePatternSet &patterns)
 A pattern that converts init operands to input operands. More...
 
void populateInlineConstantOperandsPatterns (RewritePatternSet &patterns)
 Patterns that are used to inline constant operands into linalg generic ops. More...
 
void populateBubbleUpExtractSliceOpPatterns (RewritePatternSet &patterns)
 Patterns that are used to bubble up extract slice op above linalg op. More...
 
void populateSwapExtractSliceWithFillPatterns (RewritePatternSet &patterns)
 Adds patterns that swap tensor.extract_slice(linalg.fill(cst, init)) into linalg.fill(cst, tensor.extract_slice(init)). More...
 
void populateDecomposeProjectedPermutationPatterns (RewritePatternSet &patterns)
 Add patterns to make explicit broadcasts and transforms in the input operands of a genericOp. More...
 
void populateSplitReductionPattern (RewritePatternSet &patterns, const ControlSplitReductionFn &controlSplitReductionFn, bool useAlloc=false)
 Patterns to apply splitReduction below. More...
 
void populateTransposeMatmulPatterns (RewritePatternSet &patterns, bool transposeLHS=true)
 Patterns to convert Linalg matmul ops to transposed variants. More...
 
void populateBlockPackMatmulPatterns (RewritePatternSet &patterns, const ControlBlockPackMatmulFn &controlFn)
 Patterns to block pack Linalg matmul ops. More...
 
void populateWinogradConv2DPatterns (RewritePatternSet &patterns, int64_t m, int64_t r)
 Patterns to apply Winograd Conv2D algorithm F(m x m, r x r). More...
 
void populateDecomposeWinogradOpsPatterns (RewritePatternSet &patterns)
 Patterns to decompose Winograd operators. More...
 
void populateContractionOpRankReducingPatterns (RewritePatternSet &patterns)
 Adds patterns that reduce the rank of named contraction ops that have unit dimensions in the operand(s) by converting to a sequence of collapse_shape, <corresponding linalg named op>, expand_shape (if on tensors). More...
 
bool allIndexingsAreProjectedPermutation (LinalgOp op)
 Check if all indexing maps are projected permutations. More...
 
bool hasOnlyScalarElementwiseOp (Region &r)
 Detect whether r has only ConstantOp, ElementwiseMappable and YieldOp. More...
 
bool isElementwise (LinalgOp op)
 Check if a LinalgOp is an element-wise operation. More...
 
bool isParallelIterator (utils::IteratorType iteratorType)
 Check if iterator type has "parallel" semantics. More...
 
bool isReductionIterator (utils::IteratorType iteratorType)
 Check if iterator type has "reduction" semantics. More...
 
Value makeComposedPadHighOp (OpBuilder &b, Location loc, RankedTensorType type, Value source, Value pad, bool nofold)
 Create a tensor::PadOp that pads source to the size of the statically sized type whose static sizes are assumed to be greater than the dynamic source size. More...
 
GenericOp makeMemRefCopyOp (OpBuilder &b, Location loc, Value from, Value to)
 Returns GenericOp that copies an n-D memref. More...
 
std::optional< SmallVector< ReassociationIndices > > getReassociationMapForFoldingUnitDims (ArrayRef< OpFoldResult > mixedSizes)
 Get the reassociation maps to fold the result of an extract_slice (or source of an insert_slice) operation with the given offsets and sizes to its rank-reduced version. More...
 
SmallVector< OpFoldResultcomputeTileOffsets (OpBuilder &b, Location loc, ArrayRef< OpFoldResult > ivs, ArrayRef< OpFoldResult > tileSizes)
 Computes tile offsets, given a list of loop ivs and tileSizes. More...
 
SmallVector< OpFoldResultcomputeTileSizes (OpBuilder &b, Location loc, ArrayRef< OpFoldResult > tileSizes, ArrayRef< OpFoldResult > sizeBounds)
 Computes tile sizes, given a list of tileSizes and dimension sizes (sizeBounds). More...
 
SmallVector< TypegetTensorOutputTypes (LinalgOp op, ValueRange operands)
 Returns the list of tensor output types produced when the given structured operation op is applied to the given operands. More...
 
SmallVector< ValueinsertSlicesBack (OpBuilder &builder, Location loc, LinalgOp op, ValueRange operands, ValueRange results)
 Creates insert_slice ops that insert results back into larger tensors they were originally extracted from with extract_slice before being passed as operands to the given structured operation op or its clone. More...
 
SliceParameters computeSliceParameters (OpBuilder &builder, Location loc, Value valueToTile, ArrayRef< OpFoldResult > tileSizes, AffineMap map, ArrayRef< OpFoldResult > lbs, ArrayRef< OpFoldResult > ubs, ArrayRef< OpFoldResult > subShapeSizes, bool omitPartialTileCheck)
 Computes SliceParameters for a single valueToTile assuming that its user is being tiled with the given loop bounds lbs and ubs and the tile sizes tileSizes. More...
 
SmallVector< std::optional< SliceParameters > > computeAllSliceParameters (OpBuilder &builder, Location loc, LinalgOp linalgOp, ValueRange valuesToTile, ArrayRef< OpFoldResult > ivs, ArrayRef< OpFoldResult > tileSizes, ArrayRef< OpFoldResult > sizeBounds, bool omitPartialTileCheck)
 Computes SliceParameters for all valuesToTile of the given linalgOp, assuming linalgOp is being fused into a loop nest. More...
 
OperationmakeTiledShape (OpBuilder &builder, Location loc, Value valueToTile, ArrayRef< OpFoldResult > tileSizes, AffineMap map, ArrayRef< OpFoldResult > lbs, ArrayRef< OpFoldResult > ubs, ArrayRef< OpFoldResult > subShapeSizes, bool omitPartialTileCheck)
 Creates an extract_slice/subview op for a single valueToTile with builder. More...
 
SmallVector< ValuemakeTiledShapes (OpBuilder &builder, Location loc, LinalgOp linalgOp, ValueRange valuesToTile, ArrayRef< OpFoldResult > ivs, ArrayRef< OpFoldResult > tileSizes, ArrayRef< OpFoldResult > sizeBounds, bool omitPartialTileCheck)
 Creates extract_slice/subview ops for all valuesToTile of the given linalgOp with builder, assuming linalgOp is being fused into a loop nest for tiling with the given induction variables ivs and tile sizes tileSizes. More...
 
void offsetIndices (OpBuilder &b, LinalgOp linalgOp, ArrayRef< OpFoldResult > offsets)
 Add the specified offsets to any linalg.index ops contained in the given linalgOp. More...
 
void offsetIndices (RewriterBase &b, LinalgOp linalgOp, ArrayRef< OpFoldResult > offsets)
 
FailureOr< FusionInfofuseProducerOfTensor (OpBuilder &b, OpOperand &consumerOpOperand)
 This implements the fusion part of the "tileAndFuse on tensors" transformation and thus requires the consumerOpOperand to be an extract_slice op (generally obtained by applying the tiling transformation). More...
 
FailureOr< FusionInfofuseProducerOfTensor (OpBuilder &b, OpResult producerOpResult, OpOperand &consumerOpOperand)
 This implements the fusion part of the "tileAndFuse on tensors" transformation and thus requires the consumerOpOperand to be an extract_slice op (generally obtained by applying the tiling transformation). More...
 
void updateBoundsForCyclicDistribution (OpBuilder &builder, Location loc, Value procId, Value nprocs, Value &lb, Value &ub, Value &step)
 Update the lb, ub and step to get per processor lb, ub and step. More...
 
template<typename OpTy >
SmallVector< NamedAttributegetPrunedAttributeList (OpTy op)
 Returns an attribute list that excludes pre-defined attributes. More...
 
static bool hasAllOneValues (DenseIntElementsAttr attr)
 
static Value createAdd (Location loc, Value x, Value y, OpBuilder &builder)
 
static Value createMul (Location loc, Value x, Value y, Type accType, OpBuilder &builder)
 
static SmallVector< ValueunrollIndex (OpBuilder &b, Location loc, Value index, ArrayRef< int64_t > factors)
 
static Value getConvolvedIndex (OpBuilder &b, Location loc, Value oIndex, Value fIndex, int64_t stride)
 
static ReductionKind getReductionKind (Operation *op)
 
static std::optional< Operation * > getCombinerOp (LinalgOp op)
 
static ReductionKind getReductionKindOfLinalgOp (LinalgOp op)
 
static MeshOp getMesh (Operation *op, ArrayRef< MeshSharding > operandShardings, ArrayRef< MeshSharding > resultShardings, SymbolTableCollection &symbolTable)
 
static Value createDestinationPassingStyleInitOperand (LinalgOp op, Value spmdizedOperand, ArrayRef< MeshAxis > reductionMeshAxes, MeshOp meshOp, ImplicitLocOpBuilder &builder)
 
static SmallVector< ValuecreateDestinationPassingStyleInitOperands (LinalgOp op, MeshOp meshOp, ArrayRef< Value > spmdizedOperands, ArrayRef< MeshAxis > reductionMeshAxes, IRMapping &spmdizationMap, ImplicitLocOpBuilder &builder)
 
static void createAllReduceForResultWithoutPartialSharding (Value unshardedLinalgOpResult, ArrayRef< MeshAxis > opReductionMeshAxes, MeshSharding resultSharding, ReductionKind reductionKind, IRMapping &spmdizationMap, ImplicitLocOpBuilder &builder)
 
static void createAllReduceForResultsWithoutPartialShardings (LinalgOp unshardedOp, ArrayRef< MeshAxis > opReductionMeshAxes, ArrayRef< MeshSharding > resultShardings, IRMapping &spmdizationMap, ImplicitLocOpBuilder &builder)
 
static void spmdizeLinalgOpWithShardedReduction (LinalgOp op, ArrayRef< Value > spmdizedOperands, ArrayRef< MeshSharding > operandShardings, ArrayRef< MeshSharding > resultShardings, ArrayRef< utils::IteratorType > loopIteratorTypes, ArrayRef< SmallVector< MeshAxis >> meshAxisAssignmentForLoopIterators, IRMapping &spmdizationMap, SymbolTableCollection &symbolTable, ImplicitLocOpBuilder &builder)
 
template<typename OpType >
static void registerOne (MLIRContext *ctx)
 
template<typename... OpTypes>
static void registerAll (MLIRContext *ctx)
 Variadic helper function. More...
 
void populateTranposeConv2DPatterns (RewritePatternSet &patterns)
 
static void generateParallelLoopNest (OpBuilder &b, Location loc, ValueRange lbs, ValueRange ubs, ValueRange steps, ArrayRef< utils::IteratorType > iteratorTypes, ArrayRef< linalg::ProcInfo > procInfo, function_ref< void(OpBuilder &, Location, ValueRange)> bodyBuilderFn, SmallVectorImpl< Value > &ivStorage)
 Generates a loop nest consisting of scf.parallel and scf.for, depending on the iteratorTypes. More...
 
static OperationmaterializeTiledShape (OpBuilder &builder, Location loc, Value valueToTile, const SliceParameters &sliceParams)
 

Typedef Documentation

◆ AllocBufferCallbackFn

using mlir::linalg::AllocBufferCallbackFn = typedef std::function<std::optional<Value>( OpBuilder &b, memref::SubViewOp subView, ArrayRef<Value> boundingSubViewSize, DataLayout &layout)>

Callback function type used to perform the allocation for the promoted subView.

In boundingSubViewSize a best attempt is made to find the smallest constant value for the size of the buffer needed for each dimension. If that is not possible, it contains the dynamic size of the subview. The callback should return the buffer to use.

Definition at line 337 of file Transforms.h.

◆ ControlBlockPackMatmulFn

using mlir::linalg::ControlBlockPackMatmulFn = typedef std::function<std::optional<BlockPackMatmulOptions>(linalg::LinalgOp)>

Function type which is used to control matmul packing.

It is expected to return a valid packing configuration for each operation. Lack of packing options indicates that no valid configuration could be assigned and the operation will not be packed.

Definition at line 1214 of file Transforms.h.

◆ ControlFusionFn

using mlir::linalg::ControlFusionFn = typedef std::function<bool(OpOperand *fusedOperand)>

Function type which is used to control when to stop fusion.

It is expected that OpOperand is not modified in the callback. The OpOperand is not marked as const to allow callers to use non-const methods.

Definition at line 1739 of file Transforms.h.

◆ ControlPropagationFn

using mlir::linalg::ControlPropagationFn = typedef std::function<bool(OpOperand *opOperand)>

Function type which is used to control propagation of tensor.pack/unpack ops.

Definition at line 1751 of file Transforms.h.

◆ ControlSplitReductionFn

using mlir::linalg::ControlSplitReductionFn = typedef std::function<SplitReductionOptions(LinalgOp op)>

Function signature to control reduction splitting.

This returns SplitReductionOptions.

Definition at line 441 of file Transforms.h.

◆ CopyCallbackFn

using mlir::linalg::CopyCallbackFn = typedef std::function<LogicalResult(OpBuilder &b, Value src, Value dst)>

Callback function type used to insert a copy, either from the original subview to the subview of the promoted region (for the read operands), or from the subview of the promoted region back to the original subview (for the results).

The copy has to happen from src to dst.

Definition at line 350 of file Transforms.h.

◆ DeallocBufferCallbackFn

using mlir::linalg::DeallocBufferCallbackFn = typedef std::function<LogicalResult(OpBuilder &b, Value buffer)>

Callback function type used to deallocate the buffers used to hold the promoted subview.

Definition at line 343 of file Transforms.h.

◆ GetCollapsableDimensionsFn

using mlir::linalg::GetCollapsableDimensionsFn = typedef std::function<SmallVector<ReassociationIndices>(linalg::LinalgOp)>

Function type to control generic op dimension collapsing.

It is expected to return an array of ReassociationIndices representing dimensions that should be merged.

Definition at line 1769 of file Transforms.h.

◆ LinalgLoops

Definition at line 469 of file Transforms.h.

◆ LoopIndexToRangeIndexMap

Creates a number of ranges equal to the number of non-zero entries in tileSizes.

One for each loop of the LinalgOp that is tiled. The tileSizes argument has one entry per surrounding loop. It uses zero as the convention that a particular loop is not tiled. This convention simplifies implementations by avoiding affine map manipulations. The returned ranges correspond to the loop ranges, in the proper order, that are tiled and for which new loops will be created. The function also returns a map from loop indices of the LinalgOp to the corresponding non-empty range indices of the newly created loops.

Definition at line 809 of file Transforms.h.
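
As an illustrative sketch (plain Python, not MLIR API; the tile sizes below are hypothetical), the zero-means-untiled convention and the loop-index-to-range-index map can be modeled as:

```python
# Model of the convention: a zero tile size means "loop not tiled";
# tiled loops get consecutive range indices in loop order.
tile_sizes = [32, 0, 16, 0, 8]   # hypothetical, one entry per loop

loop_to_range = {}               # LoopIndexToRangeIndexMap analogue
tiled_ranges = []                # one range per tiled loop
for loop_idx, ts in enumerate(tile_sizes):
    if ts != 0:
        loop_to_range[loop_idx] = len(tiled_ranges)
        tiled_ranges.append(ts)

assert loop_to_range == {0: 0, 2: 1, 4: 2}
assert tiled_ranges == [32, 16, 8]
```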

◆ MeshAxis

Definition at line 44 of file MeshShardingInterfaceImpl.cpp.

◆ MeshOp

using mlir::linalg::MeshOp = typedef mesh::MeshOp

Definition at line 48 of file MeshShardingInterfaceImpl.cpp.

◆ MeshSharding

Definition at line 46 of file MeshShardingInterfaceImpl.cpp.

◆ OptimizeCopyFn

using mlir::linalg::OptimizeCopyFn = typedef std::function<LogicalResult(RewriterBase &, tensor::PadOp, Value)>

Definition at line 1501 of file Transforms.h.

◆ ProcInfoCallBackFn

using mlir::linalg::ProcInfoCallBackFn = typedef std::function<SmallVector<ProcInfo>( OpBuilder &b, Location loc, ArrayRef<Range> parallelLoopRanges)>

Definition at line 293 of file Utils.h.

◆ ReductionKind

using mlir::linalg::ReductionKind = typedef mesh::ReductionKind

Definition at line 45 of file MeshShardingInterfaceImpl.cpp.

◆ ShardingArray

Definition at line 47 of file MeshShardingInterfaceImpl.cpp.

◆ TileSizeComputationFunction

Definition at line 185 of file Transforms.h.

Enumeration Type Documentation

◆ DistributionMethod

Scheme used to distribute loops to processors.

Enumerator
Cyclic 

Cyclic distribution where no assumption is made about the dynamic relationship between number of processors and number of iterations of the distributed loop.

Distributes the following loop

scf.parallel (iv) = (lb) to (ub) step (step)

to

scf.parallel (iv) = (lb + procId * step) to (ub) step (step * nprocs)

CyclicNumProcsGeNumIters 

Cyclic distribution where the number of processors can be assumed to be more than or equal to the number of iterations of the distributed loop.

In such cases, a simple in-bounds check is enough (instead of materializing a loop). Distributes the following loop

scf.parallel (iv) = (lb) to (ub) step (step)

to

iv = lb + procId * step
cond = arith.cmpi "slt", iv, ub
scf.if cond { ... }

CyclicNumProcsEqNumIters 

Cyclic distribution where the number of processors can be assumed to be equal to the number of iterations of the distributed loop.

In such cases, no bounds check is needed. Distributes the following loop

scf.parallel (iv) = (lb) to (ub) step (step)

to

iv = lb + procId * step

None 

No Distribution.

Definition at line 238 of file Utils.h.
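
The three cyclic schemes can be illustrated with a small Python model (a sketch only; lb, ub, step, and the processor counts are hypothetical values, not MLIR API):

```python
lb, ub, step = 0, 23, 3
serial = list(range(lb, ub, step))   # iterations of the original loop

# Cyclic: no assumption about nprocs vs. iteration count, so each
# processor keeps a loop with its bound check.
nprocs = 4
distributed = []
for proc_id in range(nprocs):
    iv = lb + proc_id * step
    while iv < ub:
        distributed.append(iv)
        iv += step * nprocs
assert sorted(distributed) == serial

# CyclicNumProcsGeNumIters: nprocs >= #iterations, so a single
# in-bounds check per processor suffices (the scf.if form above).
nprocs_ge = 16
assert nprocs_ge >= len(serial)
covered = [lb + p * step for p in range(nprocs_ge) if lb + p * step < ub]
assert covered == serial

# CyclicNumProcsEqNumIters: nprocs == #iterations, no check needed.
nprocs_eq = len(serial)
assert [lb + p * step for p in range(nprocs_eq)] == serial
```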

◆ LinalgTilingLoopType

The type of loops to be generated during tiling.

Enumerator
Loops 
AffineLoops 
ParallelLoops 

Definition at line 97 of file Utils.h.

Function Documentation

◆ allIndexingsAreProjectedPermutation()

bool mlir::linalg::allIndexingsAreProjectedPermutation ( LinalgOp  op)

Check if all indexing maps are projected permutations.

Definition at line 149 of file Utils.cpp.

Referenced by isElementwise(), and vectorizeLinalgOpPrecondition().

◆ allocateGPUPrivateMemory()

std::optional< Value > mlir::linalg::allocateGPUPrivateMemory ( OpBuilder builder,
memref::SubViewOp  subview,
ArrayRef< Value sizeBounds,
DataLayout  
)

Allocate the subview in the GPU private memory.

Definition at line 495 of file Promotion.cpp.

References allocateSubviewGPUMemoryInAddressSpace().

◆ allocateWorkgroupMemory()

std::optional< Value > mlir::linalg::allocateWorkgroupMemory ( OpBuilder builder,
memref::SubViewOp  subview,
ArrayRef< Value sizeBounds,
DataLayout  
)

Allocate the subview in the GPU workgroup memory.

Definition at line 470 of file Promotion.cpp.

References allocateSubviewGPUMemoryInAddressSpace().

◆ areDimSequencesPreserved()

bool mlir::linalg::areDimSequencesPreserved ( ArrayRef< AffineMap maps,
ArrayRef< ReassociationIndices dimSequences 
)

Return true if all sequences of dimensions specified in dimSequences are contiguous in all the ranges of the maps.

Definition at line 1198 of file ElementwiseOpFusion.cpp.

References isDimSequencePreserved().

◆ areElementwiseOpsFusable()

bool mlir::linalg::areElementwiseOpsFusable ( OpOperand fusedOperand)

Return true if two linalg.generic operations with producer/consumer relationship through fusedOperand can be fused using elementwise op fusion.

Conditions for elementwise fusion of generic operations.

Definition at line 131 of file ElementwiseOpFusion.cpp.

References mlir::IROperand< DerivedT, IRValueT >::get(), mlir::Value::getDefiningOp(), getIndexingMapOfProducerOperandsInCoordinatesOfFusedOp(), mlir::AffineMap::getNumResults(), mlir::detail::IROperandBase::getOwner(), mlir::Value::getType(), and mlir::AffineMap::isPermutation().

Referenced by fuseElementwiseOps().

◆ blockPackMatmul()

FailureOr< PackResult > mlir::linalg::blockPackMatmul ( RewriterBase rewriter,
linalg::LinalgOp  linalgOp,
const ControlBlockPackMatmulFn controlPackMatmul 
)

Pack a matmul operation into blocked 4D layout.

Relayout a matmul operation into blocked layout with two levels of subdivision:

  • major 2D blocks - outer dimensions, consist of minor blocks
  • minor 2D blocks - inner dimensions, consist of scalar elements

A 2D matmul MxNxK gets reshaped into blocked 4D representation as: [MB][NB][mb][nb] += [MB][KB][mb][kb] * [NB][KB][nb][kb] where the (MB, NB, KB) dimensions represent the major blocks, and the (mb, nb, kb) are the minor blocks of their respective original 2D dimensions (M, N, K).

Depending on the initial operands' data layout and the specified packing options, the major blocks dimensions might get transposed e.g., [MB][KB] -> [KB][MB]. The minor blocks can also be transposed e.g., [mb][kb] -> [kb][mb]. Any present batch dimensions remain unchanged. The final result is unpacked back to the original shape.

Return failure if no valid packing options are provided.

Definition at line 139 of file BlockPackMatmul.cpp.

References mlir::getAsOpFoldResult(), mlir::Builder::getI64ArrayAttr(), inferContractionDims(), mlir::RewriterBase::notifyMatchFailure(), options, packMatmulGreedily(), mlir::OpBuilder::setInsertionPointAfter(), transposePackedMatmul(), and validateFullTilesOnDims().
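
The blocked layout above can be sketched in plain Python (hypothetical small sizes that divide evenly; minor blocks of B are kept k-innermost so both operands reduce over kb; this is an illustration, not the MLIR implementation):

```python
M, N, K = 4, 6, 8
mb, nb, kb = 2, 3, 4                 # minor block sizes
MB, NB, KB = M // mb, N // nb, K // kb

A = [[i * K + j for j in range(K)] for i in range(M)]        # MxK
B = [[(i + 2 * j) % 7 for j in range(N)] for i in range(K)]  # KxN

# Reference: plain MxNxK matmul.
C_ref = [[sum(A[i][k] * B[k][j] for k in range(K)) for j in range(N)]
         for i in range(M)]

# Pack A as [MB][KB][mb][kb] and B as [NB][KB][nb][kb].
A_pack = [[[[A[Mb * mb + m][Kb * kb + k] for k in range(kb)]
            for m in range(mb)] for Kb in range(KB)] for Mb in range(MB)]
B_pack = [[[[B[Kb * kb + k][Nb * nb + n] for k in range(kb)]
            for n in range(nb)] for Kb in range(KB)] for Nb in range(NB)]

# Blocked accumulation:
# [MB][NB][mb][nb] += [MB][KB][mb][kb] * [NB][KB][nb][kb]
C_pack = [[[[0] * nb for _ in range(mb)] for _ in range(NB)]
          for _ in range(MB)]
for Mb in range(MB):
    for Nb in range(NB):
        for Kb in range(KB):
            for m in range(mb):
                for n in range(nb):
                    for k in range(kb):
                        C_pack[Mb][Nb][m][n] += \
                            A_pack[Mb][Kb][m][k] * B_pack[Nb][Kb][n][k]

# Unpack back to the original MxN shape.
C = [[C_pack[i // mb][j // nb][i % mb][j % nb] for j in range(N)]
     for i in range(M)]
assert C == C_ref
```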

◆ bufferizeToAllocation() [1/4]

Value mlir::linalg::bufferizeToAllocation ( RewriterBase rewriter,
const BufferizeToAllocationOptions options,
bufferization::AllocTensorOp  allocTensorOp,
Attribute  memorySpace = {},
Operation insertionPoint = nullptr 
)

Materialize a buffer allocation for the given bufferization.alloc_tensor op and lower the op to memref.alloc + memref.tensor_store.

In addition to rewriting the IR, this function returns the newly allocated buffer. The insertionPoint parameter can be used to specify a custom insertion point for the buffer allocation.

Definition at line 323 of file ConvertToDestinationStyle.cpp.

References mlir::OpBuilder::create(), createAllocationForTensor(), options, mlir::RewriterBase::replaceOp(), and mlir::OpBuilder::setInsertionPoint().

◆ bufferizeToAllocation() [2/4]

Value mlir::linalg::bufferizeToAllocation ( RewriterBase rewriter,
const BufferizeToAllocationOptions options,
Operation op,
Attribute  memorySpace = {},
Operation insertionPoint = nullptr 
)

Bufferize the given op with tensor semantics and materialize the result in a newly allocated buffer.

Only bufferizable ops that bufferize to a memory write or have an aliasing OpOperand (and do not themselves bufferize to an allocation) are supported. They are bufferized using their BufferizableOpInterface implementation.

Selected ops that bufferize to an allocation (or need special handling) are also supported:

  • tensor.pad
  • vector.mask

This function returns the newly allocated buffer. The insertionPoint parameter can be used to specify a custom insertion point for the buffer allocation.

Definition at line 471 of file ConvertToDestinationStyle.cpp.

References bufferizeToAllocation(), and options.

◆ bufferizeToAllocation() [3/4]

Value mlir::linalg::bufferizeToAllocation ( RewriterBase rewriter,
const BufferizeToAllocationOptions options,
tensor::PadOp  padOp,
Attribute  memorySpace = {},
Operation insertionPoint = nullptr 
)

Materialize a buffer allocation for the given tensor.pad op and lower the op to linalg.fill/linalg.generic + bufferization.materialize_in_destination.

E.g.:

%0 = tensor.pad low[l] high[h] t ...

is lowered to:

alloc = memref.alloc
linalg.fill ... outs(alloc)
subview = memref.subview alloc [l] [...] [1]
bufferization.materialize_in_destination t in subview
%0 = bufferization.to_tensor alloc restrict writable

In addition to rewriting the IR as shown above, this function returns the newly allocated buffer. The insertionPoint parameter can be used to specify a custom insertion point for the buffer allocation.

Referenced by bufferizeToAllocation().

◆ bufferizeToAllocation() [4/4]

Value mlir::linalg::bufferizeToAllocation ( RewriterBase rewriter,
const BufferizeToAllocationOptions options,
vector::MaskOp  maskOp,
Attribute  memorySpace = {},
Operation insertionPoint = nullptr 
)

Materialize a buffer allocation for the given vector.mask op and bufferize the op, including its region.

E.g.:

%0 = vector.mask {
  vector.transfer_write v, t : vector<16xf32>, tensor<?xf32>
} : vector<16xi1> -> tensor<?xf32>

is lowered to:

alloc = memref.alloc
bufferization.materialize_in_destination t in subview
vector.mask {
  vector.transfer_write arg0, alloc : vector<16xf32>, memref<?xf32>
} : vector<16xi1>
%0 = bufferization.to_tensor alloc restrict writable

In addition to rewriting the IR as shown above, this function returns the newly allocated buffer. The insertionPoint parameter can be used to specify a custom insertion point for the buffer allocation.

Definition at line 261 of file ConvertToDestinationStyle.cpp.

References bufferizeToAllocation(), mlir::RewriterBase::eraseOp(), mlir::RewriterBase::modifyOpInPlace(), options, and mlir::OpBuilder::setInsertionPoint().

◆ collapseOpIterationDims()

FailureOr< CollapseResult > mlir::linalg::collapseOpIterationDims ( LinalgOp  op,
ArrayRef< ReassociationIndices foldedIterationDims,
RewriterBase rewriter 
)

Collapses dimensions of linalg.generic/linalg.copy operation.

Implementation of fusion with reshape operation by collapsing dimensions.

A precondition to calling this method is that for each list in foldedIterationDims, the sequence of dimensions is contiguous in the domains of all indexing_maps of the linalg op. This can be checked using the areDimSequencesPreserved method. When valid, the method also collapses the operands of the op. Returns replacement values for the results of the original linalg op by inserting reshapes to get back values of compatible types.

Definition at line 1668 of file ElementwiseOpFusion.cpp.

References mlir::OpBuilder::create(), createCollapsedOp(), mlir::detail::enumerate(), generateCollapsedIndexingRegion(), mlir::get(), getOperandReassociation(), mlir::Value::getType(), mlir::getValueOrCreateConstantIndexOp(), mlir::m_ConstantInt(), mlir::matchPattern(), mlir::RewriterBase::notifyMatchFailure(), mlir::Range::offset, mlir::OpBuilder::setInsertionPoint(), mlir::Range::size, and mlir::Range::stride.

◆ computeAllSliceParameters()

SmallVector< std::optional< SliceParameters > > mlir::linalg::computeAllSliceParameters ( OpBuilder builder,
Location  loc,
LinalgOp  linalgOp,
ValueRange  valuesToTile,
ArrayRef< OpFoldResult ivs,
ArrayRef< OpFoldResult tileSizes,
ArrayRef< OpFoldResult sizeBounds,
bool  omitPartialTileCheck 
)

Computes SliceParameters for all valuesToTile of the given linalgOp, assuming linalgOp is being fused into a loop nest.

Calls computeSliceParameters for every individual value.

Note that a constant zero in tileSizes means no tiling at that implicit loop. The number of non-zero values in tileSizes should be equal to the number of values in ivs.

Some of the valuesToTile won't be affected by tiling. For these values, std::nullopt will be returned.

Definition at line 743 of file Utils.cpp.

References computeSliceParameters(), computeTileOffsets(), computeTileSizes(), and isTiled().

Referenced by makeTiledShapes().

◆ computeContinuousTileSizes()

FailureOr< ContinuousTileSizeSpecification > mlir::linalg::computeContinuousTileSizes ( OpBuilder builder,
TilingInterface  op,
unsigned  dimension,
OpFoldResult  targetSize,
bool  emitAssertions 
)

◆ computeMultiTileSizes()

FailureOr< MultiSizeSpecification > mlir::linalg::computeMultiTileSizes ( OpBuilder builder,
LinalgOp  op,
unsigned  dimension,
OpFoldResult  targetSize,
OpFoldResult  divisor,
bool  emitAssertions = true 
)

Emits the IR computing the multi-sized tiling specification with two tile sizes not exceeding targetSize, each divisible by sizeDivisor, such that there exist numbers of tiles with these sizes that fully cover the given iteration space dimension of the structured op.

The computation is as follows:

b = originalTripCount floordiv sizeDivisor
t = (targetSize + sizeDivisor - 1) floordiv sizeDivisor
d = (b + t - 1) floordiv t
s = (b floordiv d) * sizeDivisor
v = b % d
u = d - v

where the tile sizes are s and s + sizeDivisor, and the numbers of the corresponding tiles are u and v, respectively. Alternatively,

s * u + (s + sizeDivisor) * v == original size, where s mod sizeDivisor = 0.

Expects all values to be positive. In some cases with the target tile size sufficiently close to the dimension shape and a non-unit divisor, it is impossible to compute such sizes. If emitAssertions is set, an assertion that the size computation succeeded is also emitted.

Returns the specification consisting of both tile values and the number of tiles of each size.

Definition at line 268 of file Tiling.cpp.

References mlir::ImplicitLocOpBuilder::create(), emitIsPositiveIndexAssertion(), mlir::AffineExpr::floorDiv(), mlir::Builder::getAffineSymbolExpr(), mlir::ImplicitLocOpBuilder::getLoc(), mlir::Builder::getStringAttr(), mlir::getValueOrCreateConstantIndexOp(), mlir::linalg::detail::MultiSizeSpecificationBase< T >::lowTileSize, mlir::affine::makeComposedAffineApply(), and mlir::affine::makeComposedFoldedMultiResultAffineApply().
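
The arithmetic above can be checked with a small Python sketch (hypothetical trip count, divisor, and target size):

```python
# Multi-size tiling arithmetic: trip count 60, divisor 4, target 16.
trip_count, divisor, target = 60, 4, 16

b = trip_count // divisor
t = (target + divisor - 1) // divisor
d = (b + t - 1) // t
s = (b // d) * divisor        # low tile size
v = b % d                     # number of high tiles (size s + divisor)
u = d - v                     # number of low tiles (size s)

# Both tile sizes are divisible by the divisor, bounded by the target,
# and together the tiles exactly cover the iteration space.
assert s % divisor == 0
assert s <= target and s + divisor <= target
assert s * u + (s + divisor) * v == trip_count
```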

◆ computeSliceParameters()

SliceParameters mlir::linalg::computeSliceParameters ( OpBuilder builder,
Location  loc,
Value  valueToTile,
ArrayRef< OpFoldResult tileSizes,
AffineMap  map,
ArrayRef< OpFoldResult lbs,
ArrayRef< OpFoldResult ubs,
ArrayRef< OpFoldResult subShapeSizes,
bool  omitPartialTileCheck 
)

Computes SliceParameters for a single valueToTile assuming that its user is being tiled with the given loop bounds lbs and ubs and the tile sizes tileSizes.

omitPartialTileCheck controls whether to omit the partial/boundary tile condition check in cases where we statically know that it is unnecessary.

Definition at line 567 of file Utils.cpp.

References mlir::Value::getType(), mlir::linalg::SliceParameters::offsets, mlir::linalg::SliceParameters::sizes, and mlir::linalg::SliceParameters::strides.

Referenced by computeAllSliceParameters(), and makeTiledShape().

◆ computeStaticContinuousTileSizes()

FailureOr< StaticContinuousTileSizeSpecification > mlir::linalg::computeStaticContinuousTileSizes ( LinalgOp  op,
unsigned  dimension,
unsigned  targetSize 
)

◆ computeStaticMultiTileSizes()

FailureOr< StaticMultiSizeSpecification > mlir::linalg::computeStaticMultiTileSizes ( LinalgOp  op,
unsigned  dimension,
int64_t  targetSize,
int64_t  divisor 
)

◆ computeTileOffsets()

SmallVector< OpFoldResult > mlir::linalg::computeTileOffsets ( OpBuilder b,
Location  loc,
ArrayRef< OpFoldResult ivs,
ArrayRef< OpFoldResult tileSizes 
)

Computes tile offsets, given a list of loop ivs and tileSizes.

In case a tile size is zero (i.e., no tiling), the corresponding offset is also zero.

Definition at line 675 of file Utils.cpp.

References mlir::Builder::getIndexAttr(), isTiled(), and mlir::isZeroIndex().

Referenced by computeAllSliceParameters().

◆ computeTileSizes()

SmallVector< OpFoldResult > mlir::linalg::computeTileSizes ( OpBuilder b,
Location  loc,
ArrayRef< OpFoldResult tileSizes,
ArrayRef< OpFoldResult sizeBounds 
)

Computes tile sizes, given a list of tileSizes and dimension sizes (sizeBounds).

In case a tile size is zero (i.e., no tiling), the corresponding result size is the corresponding value from sizeBounds. Note: The returned tile sizes are closed intervals.

Definition at line 689 of file Utils.cpp.

References mlir::getAffineDimExpr(), mlir::Builder::getContext(), isTiled(), mlir::isZeroIndex(), and mlir::affine::makeComposedFoldedAffineApply().

Referenced by computeAllSliceParameters().
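
A minimal Python model of the conventions shared by computeTileOffsets and computeTileSizes (a sketch under the assumption that the closed-interval note means each result size is the selected size minus one; the concrete values are hypothetical):

```python
ivs = [4, 8]                    # one iv per tiled loop, in order
tile_sizes = [32, 0, 16]        # zero means loop 1 is not tiled
size_bounds = [128, 64, 30]     # dimension sizes

offsets, sizes = [], []
iv_it = iter(ivs)
for ts, bound in zip(tile_sizes, size_bounds):
    if ts == 0:                 # untiled: offset 0, full extent
        offsets.append(0)
        sizes.append(bound - 1)   # closed-interval convention
    else:                       # tiled: offset is the loop iv
        offsets.append(next(iv_it))
        sizes.append(ts - 1)

assert offsets == [4, 0, 8]
assert sizes == [31, 63, 15]
```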

◆ concat()

SmallVector< AffineExpr, 4 > mlir::linalg::concat ( ArrayRef< AffineExpr a,
ArrayRef< AffineExpr b 
)

Return the vector that is the concatenation of a and b.

Definition at line 2299 of file LinalgOps.cpp.

Referenced by mlir::presburger::Simplex::makeProduct().

◆ copyToGPUPrivateMemory()

LogicalResult mlir::linalg::copyToGPUPrivateMemory ( OpBuilder b,
Value  src,
Value  dst 
)

Normal copy between src and dst.

Definition at line 503 of file Promotion.cpp.

References mlir::OpBuilder::create(), and mlir::Value::getLoc().

◆ copyToWorkgroupMemory()

LogicalResult mlir::linalg::copyToWorkgroupMemory ( OpBuilder b,
Value  src,
Value  dst 
)

Create Memref copy operations and add gpu barrier guards before and after the copy operation to ensure data integrity.

Definition at line 486 of file Promotion.cpp.

References mlir::OpBuilder::create(), and mlir::Value::getLoc().

◆ createAdd()

static Value mlir::linalg::createAdd ( Location  loc,
Value  x,
Value  y,
OpBuilder builder 
)
static

Definition at line 32 of file ConvertConv2DToImg2Col.cpp.

References mlir::OpBuilder::create(), and mlir::Value::getType().

Referenced by rewriteInIm2Col().

◆ createAllReduceForResultsWithoutPartialShardings()

static void mlir::linalg::createAllReduceForResultsWithoutPartialShardings ( LinalgOp  unshardedOp,
ArrayRef< MeshAxis opReductionMeshAxes,
ArrayRef< MeshSharding resultShardings,
IRMapping spmdizationMap,
ImplicitLocOpBuilder builder 
)
static

◆ createAllReduceForResultWithoutPartialSharding()

static void mlir::linalg::createAllReduceForResultWithoutPartialSharding ( Value  unshardedLinalgOpResult,
ArrayRef< MeshAxis opReductionMeshAxes,
MeshSharding  resultSharding,
ReductionKind  reductionKind,
IRMapping spmdizationMap,
ImplicitLocOpBuilder builder 
)
static

◆ createDestinationPassingStyleInitOperand()

static Value mlir::linalg::createDestinationPassingStyleInitOperand ( LinalgOp  op,
Value  spmdizedOperand,
ArrayRef< MeshAxis reductionMeshAxes,
MeshOp  meshOp,
ImplicitLocOpBuilder builder 
)
static

◆ createDestinationPassingStyleInitOperands()

static SmallVector<Value> mlir::linalg::createDestinationPassingStyleInitOperands ( LinalgOp  op,
MeshOp  meshOp,
ArrayRef< Value spmdizedOperands,
ArrayRef< MeshAxis reductionMeshAxes,
IRMapping spmdizationMap,
ImplicitLocOpBuilder builder 
)
static

◆ createFoldedDimOp()

OpFoldResult mlir::linalg::createFoldedDimOp ( OpBuilder b,
Location  loc,
Value  val,
int64_t  dim 
)

Create one memref::DimOp or tensor::DimOp depending on the type of val.

This is a polymorphic convenience function to abstract away the rank and concrete type of val. Asserts that val is a memref or tensor type.

Definition at line 105 of file LinalgOps.cpp.

References createOrFoldDimOp(), mlir::Builder::getIndexAttr(), and mlir::Value::getType().

Referenced by fuse().

◆ createMul()

static Value mlir::linalg::createMul ( Location  loc,
Value  x,
Value  y,
Type  accType,
OpBuilder builder 
)
static

Definition at line 40 of file ConvertConv2DToImg2Col.cpp.

References mlir::convertScalarToDtype(), and mlir::OpBuilder::create().

Referenced by rewriteInIm2Col().

◆ createOrFoldDimOp()

Value mlir::linalg::createOrFoldDimOp ( OpBuilder b,
Location  loc,
Value  val,
int64_t  dim 
)

Create one memref::DimOp or tensor::DimOp depending on the type of val.

This is a polymorphic convenience function to abstract away the rank and concrete type of val. Asserts that val is a memref or tensor type.

Definition at line 96 of file LinalgOps.cpp.

References mlir::OpBuilder::createOrFold(), and mlir::Value::getType().

Referenced by concatSizesFromInputs(), createFoldedDimOp(), and mlir::sparse_tensor::sizesFromSrc().

◆ deallocateGPUPrivateMemory()

LogicalResult mlir::linalg::deallocateGPUPrivateMemory(OpBuilder &, Value)

In case of GPU private memory there is no need to deallocate since the memory is freed when going outside of the scope.

Definition at line 511 of file Promotion.cpp.

◆ deallocateWorkgroupMemory()

LogicalResult mlir::linalg::deallocateWorkgroupMemory(OpBuilder &, Value)

In case of GPU group memory there is no need to deallocate.

Definition at line 479 of file Promotion.cpp.

◆ decomposeWinogradFilterTransformOp()

FailureOr<Operation *> mlir::linalg::decomposeWinogradFilterTransformOp(RewriterBase &rewriter, linalg::WinogradFilterTransformOp op)

Rewrite linalg.winograd_filter_transform.

The data layout of the filter is FHWC. The transformation matrix is 2-dimensional. We need to extract H x W from FHWC first. We generate 2 levels of loops to iterate on F and C. After the rewriting, we get

scf.for f = lo_f to hi_f step 1
  scf.for c = lo_c to hi_c step 1
    extracted = extract filter<h x w> from filter<f x h x w x c>
    ret = linalg.matmul G, extracted
    ret = linalg.matmul ret, GT
    inserted = insert ret into filter<h x w x c x f>

Definition at line 1191 of file WinogradConv2D.cpp.

◆ decomposeWinogradInputTransformOp()

FailureOr<Operation *> mlir::linalg::decomposeWinogradInputTransformOp(RewriterBase &rewriter, linalg::WinogradInputTransformOp op)

Rewrite linalg.winograd_input_transform.

The data layout of the input is NHWC. The transformation matrix is 2-dimensional. We need to extract H x W from NHWC first. We generate 4 levels of loops to iterate on N, C, tileH, and tileW. After the rewriting, we get

scf.for h = 0 to tileH step 1
  scf.for w = 0 to tileW step 1
    scf.for n = 0 to N step 1
      scf.for c = 0 to C step 1
        extracted = extract extracted<alphaH x alphaW> from input<N x H x W x C>
                    at [n, (h x m), (w x m), c]
        ret = linalg.matmul BT, extracted
        ret = linalg.matmul ret, B
        inserted = insert ret<alphaH x alphaW> into
                   output<alphaH x alphaW x tileH x tileW x N x C> at [0, 0, h, w, n, c]

Definition at line 1197 of file WinogradConv2D.cpp.

◆ decomposeWinogradOutputTransformOp()

FailureOr<Operation *> mlir::linalg::decomposeWinogradOutputTransformOp(RewriterBase &rewriter, linalg::WinogradOutputTransformOp op)

Rewrite linalg.winograd_output_transform.

The data layout of the output is HWNF. The transformation matrix is 2-dimensional. We need to extract H x W from HWNF first. We generate 4 levels of loops to iterate on N, F, tileH, and tileW. After the transformation, we get

scf.for h = 0 to tileH step 1
  scf.for w = 0 to tileW step 1
    scf.for n = 0 to N step 1
      scf.for f = 0 to F step 1
        extracted = extract extracted<alphaH x alphaW> from
                    input<alphaH x alphaW x tileH x tileW x N x F> at [0, 0, h, w, n, f]
        ret = linalg.matmul AT, extracted
        ret = linalg.matmul ret, A
        inserted = insert ret<alphaH x alphaW> into output<N x H x W x F>
                   at [n, (h x m), (w x m), f]

Definition at line 1203 of file WinogradConv2D.cpp.

◆ dropUnitDims()

FailureOr<DropUnitDimsResult> mlir::linalg::dropUnitDims(RewriterBase &rewriter, GenericOp genericOp, const ControlDropUnitDims &options)

◆ extractOrIdentityMap()

AffineMap mlir::linalg::extractOrIdentityMap(std::optional<AffineMap> maybeMap, unsigned rank, MLIRContext *context)

Returns maybeMap.get() if maybeMap is set, otherwise returns the symbol-less identity map of rank.

Definition at line 2279 of file LinalgOps.cpp.

References mlir::AffineMap::get(), and mlir::AffineMap::getMultiDimIdentityMap().
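
For illustration (a sketch, not from the source), with no map provided and rank = 3 the returned identity map would be:

```mlir
affine_map<(d0, d1, d2) -> (d0, d1, d2)>
```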

◆ fuseElementwiseOps()

FailureOr<mlir::linalg::ElementwiseOpFusionResult> mlir::linalg::fuseElementwiseOps(RewriterBase &rewriter, OpOperand *fusedOperand)

◆ fuseProducerOfTensor() [1/2]

FailureOr<FusionInfo> mlir::linalg::fuseProducerOfTensor(OpBuilder &b, OpOperand &consumerOpOperand)

This implements the fusion part of the "tileAndFuse on tensors" transformation and thus requires the consumerOpOperand to be an extract_slice op (generally obtained by applying the tiling transformation).

Definition at line 227 of file Fusion.cpp.

References mlir::IROperand< DerivedT, IRValueT >::get(), and getProducerOfTensor().

◆ fuseProducerOfTensor() [2/2]

FailureOr<FusionInfo> mlir::linalg::fuseProducerOfTensor(OpBuilder &b, OpResult producerOpResult, OpOperand &consumerOpOperand)

This implements the fusion part of the "tileAndFuse on tensors" transformation and thus requires the consumerOpOperand to be an extract_slice op (generally obtained by applying the tiling transformation).

Assumes producerOfTensor is a Linalg op that produces consumerOpOperand.

Definition at line 239 of file Fusion.cpp.

References mlir::OpBuilder::create(), fuse(), mlir::IROperand< DerivedT, IRValueT >::get(), mlir::Value::getDefiningOp(), mlir::detail::IROperandBase::getOwner(), mlir::OpResult::getOwner(), mlir::Value::getParentBlock(), mlir::OpResult::getResultNumber(), mlir::Value::getType(), mlir::IROperand< DerivedT, IRValueT >::set(), and mlir::OpBuilder::setInsertionPoint().

◆ generalizeNamedOp()

FailureOr<GenericOp> mlir::linalg::generalizeNamedOp(RewriterBase &rewriter, LinalgOp namedOp)

Create a GenericOp from the given named operation namedOp and replace namedOp.

Return failure if namedOp is a GenericOp or misses a region builder.

Definition at line 53 of file Generalization.cpp.

References mlir::OpBuilder::create(), generalizeNamedOpPrecondition(), mlir::RewriterBase::inlineRegionBefore(), mlir::RewriterBase::notifyMatchFailure(), and mlir::RewriterBase::replaceOp().

Referenced by packMatmulGreedily(), and mlir::linalg::LinalgGeneralizationPattern::returningMatchAndRewrite().
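
As a sketch of what generalization produces (illustrative; the exact attribute spelling may differ across MLIR versions), a linalg.matmul becomes an equivalent linalg.generic:

```mlir
// Named form:
%0 = linalg.matmul ins(%A, %B : tensor<4x8xf32>, tensor<8x16xf32>)
                   outs(%C : tensor<4x16xf32>) -> tensor<4x16xf32>

// Generalized form:
%0 = linalg.generic
       {indexing_maps = [affine_map<(m, n, k) -> (m, k)>,
                         affine_map<(m, n, k) -> (k, n)>,
                         affine_map<(m, n, k) -> (m, n)>],
        iterator_types = ["parallel", "parallel", "reduction"]}
       ins(%A, %B : tensor<4x8xf32>, tensor<8x16xf32>)
       outs(%C : tensor<4x16xf32>) {
     ^bb0(%a: f32, %b: f32, %c: f32):
       %1 = arith.mulf %a, %b : f32
       %2 = arith.addf %c, %1 : f32
       linalg.yield %2 : f32
     } -> tensor<4x16xf32>
```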

◆ generateLibraryCallName()

std::string mlir::linalg::generateLibraryCallName(Operation *op)

Returns the name mangled library call name to disambiguate between different overloads at the C level.

The name mangling scheme is basic and uses MLIR type names:

  1. form a string which is the concatenation of the linalg op name with all the operand type names, separated by underscores;
  2. drop the linalg. prefix, and the <, >, ? symbols from the type.

Assumes op is a LinalgOp.

Examples:

  1. linalg.fill(f, A) : f32, memref<f32> name mangles into linalg_fill_f32_viewf32
  2. linalg.dot A, B, C : (memref<?xf32, stride_specification>, memref<?xf32, stride_specification>, memref<f32>) name mangles into linalg_dot_viewxf32_viewxf32_viewf32
  3. linalg.matmul(...) : memref<?x?xf32, stride_specification>, memref<?x?xf32, stride_specification>, memref<?x?xf32, stride_specification> name mangles into linalg_matmul_viewxxf32_viewxxf32_viewxxf32

Definition at line 2340 of file LinalgOps.cpp.

References appendMangledType(), mlir::Operation::getAttrs(), mlir::Operation::getName(), mlir::Operation::getOperandTypes(), and mlir::OperationName::getStringRef().

◆ generateParallelLoopNest()

static void mlir::linalg::generateParallelLoopNest(OpBuilder &b, Location loc, ValueRange lbs, ValueRange ubs, ValueRange steps, ArrayRef<utils::IteratorType> iteratorTypes, ArrayRef<linalg::ProcInfo> procInfo, function_ref<void(OpBuilder &, Location, ValueRange)> bodyBuilderFn, SmallVectorImpl<Value> &ivStorage)

Generates a loop nest consisting of scf.parallel and scf.for, depending on the iteratorTypes.

Consecutive parallel loops create a single scf.parallel operation; each sequential loop creates a new scf.for operation. The body of the innermost loop is populated by bodyBuilderFn that accepts a range of induction variables for all loops. ivStorage is used to store the partial list of induction variables.

Definition at line 372 of file Utils.cpp.

References mlir::ArithBuilder::_and(), mlir::scf::buildLoopNest(), mlir::OpBuilder::create(), mlir::linalg::ProcInfo::distributionMethod, isParallelIterator(), None, and mlir::ArithBuilder::slt().

Referenced by mlir::linalg::GenerateLoopNest< LoopTy >::doit().
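
For example (an illustrative sketch, not taken from the implementation), iteratorTypes = [parallel, parallel, reduction] yields roughly:

```mlir
scf.parallel (%i, %j) = (%lb0, %lb1) to (%ub0, %ub1) step (%s0, %s1) {
  scf.for %k = %lb2 to %ub2 step %s2 {
    // bodyBuilderFn is invoked here with the induction variables (%i, %j, %k).
  }
  scf.reduce
}
```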

◆ getCombinerOp()

static std::optional<Operation *> mlir::linalg::getCombinerOp ( LinalgOp  op)
static

Definition at line 81 of file MeshShardingInterfaceImpl.cpp.

References mlir::matchReduction().

Referenced by getReductionKindOfLinalgOp().

◆ getCombinerOpKind()

std::optional<vector::CombiningKind> mlir::linalg::getCombinerOpKind(Operation *combinerOp)

Return vector::CombiningKind for the given op.

Definition at line 544 of file Vectorization.cpp.

References MINUI.

Referenced by buildMultiDimReduce(), and reductionPreconditions().

◆ getConvolvedIndex()

static Value mlir::linalg::getConvolvedIndex(OpBuilder &b, Location loc, Value oIndex, Value fIndex, int64_t stride)

◆ getLinalgTilingCanonicalizationPatterns()

RewritePatternSet mlir::linalg::getLinalgTilingCanonicalizationPatterns(MLIRContext *ctx)

Canonicalization patterns relevant to apply after tiling patterns.

These are applied automatically by the tiling pass but need to be applied manually when tiling is called programmatically.

Definition at line 858 of file Tiling.cpp.

References populateLinalgTilingCanonicalizationPatterns().

◆ getMesh()

static MeshOp mlir::linalg::getMesh(Operation *op, ArrayRef<MeshSharding> operandShardings, ArrayRef<MeshSharding> resultShardings, SymbolTableCollection &symbolTable)

Definition at line 105 of file MeshShardingInterfaceImpl.cpp.

References mlir::mesh::getMesh().

Referenced by spmdizeLinalgOpWithShardedReduction().

◆ getPreservedProducerResults()

llvm::SmallDenseSet<int> mlir::linalg::getPreservedProducerResults(GenericOp producer, GenericOp consumer, OpOperand *fusedOperand)

Returns a set of indices of the producer's results which would be preserved after the fusion.

  • There is a chance that the implementation of the transformation does not agree with the result of this method. This function gives a prediction based on an optimized fusion.

Definition at line 104 of file ElementwiseOpFusion.cpp.

References mlir::detail::enumerate(), and isOpOperandCanBeDroppedAfterFusedLinalgs().

Referenced by fuseElementwiseOps().

◆ getPrunedAttributeList()

template<typename OpTy >
SmallVector<NamedAttribute> mlir::linalg::getPrunedAttributeList ( OpTy  op)

Returns an attribute list that excludes pre-defined attributes.

Definition at line 364 of file Utils.h.

Referenced by matchAndReplaceDepthwiseConv().

◆ getReassociationMapForFoldingUnitDims()

std::optional<SmallVector<ReassociationIndices>> mlir::linalg::getReassociationMapForFoldingUnitDims(ArrayRef<OpFoldResult> mixedSizes)

Get the reassociation maps to fold the result of a extract_slice (or source of a insert_slice) operation with given offsets, and sizes to its rank-reduced version.

This is only done for the cases where the size is 1 and offset is 0. Strictly speaking the offset 0 is not required in general, but non-zero offsets are not handled by SPIR-V backend at this point (and potentially cannot be handled).

Definition at line 852 of file Utils.cpp.

References mlir::detail::enumerate().

◆ getReductionKind()

static ReductionKind mlir::linalg::getReductionKind(Operation *op)

Definition at line 51 of file MeshShardingInterfaceImpl.cpp.

Referenced by getReductionKindOfLinalgOp().

◆ getReductionKindOfLinalgOp()

static ReductionKind mlir::linalg::getReductionKindOfLinalgOp ( LinalgOp  op)
static

◆ getTensorOutputTypes()

SmallVector< Type > mlir::linalg::getTensorOutputTypes ( LinalgOp  op,
ValueRange  operands 
)

Returns the list of tensor output types produced when the given structured operation op is applied to the given operands.

Note that operands are not necessarily the actual operands of op.

Definition at line 705 of file Utils.cpp.

Referenced by tileLinalgOpImpl().

◆ hasAllOneValues()

static bool mlir::linalg::hasAllOneValues ( DenseIntElementsAttr  attr)
static

Definition at line 27 of file ConvertConv2DToImg2Col.cpp.

Referenced by rewriteInIm2Col().

◆ hasOnlyScalarElementwiseOp()

bool mlir::linalg::hasOnlyScalarElementwiseOp(Region &r)

Detect whether r has only ConstantOp, ElementwiseMappable and YieldOp.

Definition at line 155 of file Utils.cpp.

Referenced by isElementwise().

◆ hasVectorizationImpl()

bool mlir::linalg::hasVectorizationImpl(Operation *op)

Return true if there's dedicated logic in the Linalg Vectorizer to vectorize this Op, false otherwise.

Note that this helper merely implements a very high level check and that the vectorizer also requires various additional pre-conditions to be met for it to work (these are checked by the vectorizer itself).

Definition at line 2150 of file Vectorization.cpp.

Referenced by vectorizeOpPrecondition().

◆ hoistPaddingOnTensors() [1/2]

FailureOr<Value> mlir::linalg::hoistPaddingOnTensors(RewriterBase &rewriter, tensor::PadOp opToHoist, int64_t numLoops, ArrayRef<int64_t> transposeVector, tensor::PadOp &hoistedOp, SmallVectorImpl<TransposeOp> &transposeOps)

Mechanically hoist padding operations on tensors by numLoops into a new, generally larger tensor.

This achieves packing of multiple padding ops into a larger tensor. On success, opToHoist is replaced by the cloned version in the packing loop so the caller can continue reasoning about the padding operation. If transposeVector is non-empty, hoist padding introduces a TransposeOp to transpose the padded tensor before inserting it into the packed tensor. A transposeVector can change the storage order of the padded tensor but does not change the order of the pack or compute loops.

TODO: In the future, we should consider rewriting as a tensor.pack after hoisting since this abstraction is now available.

Example in pseudo-mlir:

If hoistPaddingOnTensors is called with numLoops = 2 on the following IR.

scf.for (%i, %j, %k)
%st0 = tensor.extract_slice f(%i, %k) : ... to tensor<?x?xf32>
%0 = tensor.pad %st0 low[0, 0] high[...] {
^bb0( ... ):
linalg.yield %pad
} : tensor<?x?xf32> to tensor<4x8xf32>
compute(%0)

IR resembling the following is produced:

scf.for (%i) {
%packed_init = tensor.empty range(%j) : tensor<?x4x8xf32>
%packed = scf.for (%k) iter_args(%p : %packed_init) {
%st0 = tensor.extract_slice f(%i, %k) : ... to tensor<?x?xf32>
%0 = tensor.pad %st0 low[0, 0] high[...] {
^bb0( ... ):
linalg.yield %pad
} : tensor<?x?xf32> to tensor<4x8xf32>
%1 = tensor.insert_slice %0 ...
: tensor<4x8xf32> to tensor<?x4x8xf32>
scf.yield %1: tensor<?x4x8xf32>
} -> tensor<?x4x8xf32>
scf.for (%j, %k) {
%st0 = tensor.extract_slice %packed [%k, 0, 0][1, 4, 8][1, 1, 1] :
tensor<?x4x8xf32> to tensor<4x8xf32>
compute(%st0)
}
}

Construct the packing loop nest.

Definition at line 938 of file HoistPadding.cpp.

References buildPackingLoopNestImpl(), mlir::tensor::computeTransposedType(), mlir::OpBuilder::create(), DBGS, mlir::Value::getDefiningOp(), mlir::Operation::getParentOfType(), mlir::Operation::getResult(), replaceByPackingResult(), and mlir::OpBuilder::setInsertionPointAfter().

Referenced by hoistPaddingOnTensors(), and padAndHoistLinalgOp().

◆ hoistPaddingOnTensors() [2/2]

FailureOr<Value> mlir::linalg::hoistPaddingOnTensors(tensor::PadOp opToHoist, int64_t numLoops, ArrayRef<int64_t> transposeVector, tensor::PadOp &hoistedOp, SmallVectorImpl<TransposeOp> &transposeOps)

Calls into hoistPaddingOnTensors with a local IRRewriter.

Definition at line 1002 of file HoistPadding.cpp.

References hoistPaddingOnTensors().

◆ hoistRedundantVectorBroadcasts()

void mlir::linalg::hoistRedundantVectorBroadcasts(RewriterBase &rewriter, Operation *root)

Hoist vector.extract/vector.broadcast pairs out of immediately enclosing scf::ForOp iteratively, if the following conditions are met:

  1. The vector.extract operation is applied on an iter_argument, and no other operator is using this argument in the body of the loop.
  2. The position of the vector.extract is either a static value, or defined outside of the loop.
  3. The vector.broadcast operation is yielded by the loop.

To improve hoisting opportunities, call the moveLoopInvariantCode helper function on the candidate loop above which to hoist.

Definition at line 97 of file Hoisting.cpp.

References mlir::WalkResult::advance(), broadcast(), DBGS, mlir::IROperand< DerivedT, IRValueT >::get(), mlir::Value::hasOneUse(), mlir::WalkResult::interrupt(), mlir::RewriterBase::modifyOpInPlace(), mlir::moveLoopInvariantCode(), mlir::RewriterBase::moveOpAfter(), mlir::RewriterBase::replaceAllUsesWith(), replaceWithDifferentYield(), and mlir::Operation::walk().

◆ hoistRedundantVectorTransfers()

void mlir::linalg::hoistRedundantVectorTransfers(Operation *root, bool verifyNonZeroTrip = false)

Hoist vector.transfer_read/vector.transfer_write on buffers pairs out of immediately enclosing scf::ForOp iteratively, if the following conditions are true:

  1. The two ops access the same memref with the same indices.
  2. All operands are invariant under the enclosing scf::ForOp.
  3. No uses of the memref either dominate the transfer_read or are dominated by the transfer_write (i.e. no aliasing between the write and the read across the loop)
  4. The source operands for vector.transfer_{read|write} do not originate from Ops implementing ViewLikeOpInterface (to reduce the risk of aliasing).
  5. If verifyNonZeroTrip is true, then the lower bound of the loop must be statically smaller than the upper bound of the loop, guaranteeing that the loop body will execute at least once.

To improve hoisting opportunities, call the moveLoopInvariantCode helper function on the candidate loop above which to hoist. Hoisting the transfers results in scf::ForOp yielding the value that originally transited through memory.

TODO: To further improve hoisting opportunities, fold aliasing memref operations into respective vector.transfer{read|write} operations and avoid using ops implementing ViewLikeOpInterface as the source for transfer Ops.

WARNING: This hoisting does not model parallelism and is generally incorrect when used on distributed loops with memref semantics! NOTE: Setting verifyNonZeroTrip = true makes this more stable for distributed loops with memref semantics, but there could still be some issues when loops are executed a different number of times for different threads.

Definition at line 202 of file Hoisting.cpp.

References mlir::WalkResult::advance(), mlir::ValueBoundsConstraintSet::computeConstantBound(), DBGS, mlir::getForwardSlice(), mlir::WalkResult::interrupt(), mlir::vector::isDisjointTransferSet(), mlir::presburger::LB, mlir::moveLoopInvariantCode(), noAliasingUseInLoop(), mlir::DominanceInfo::properlyDominates(), mlir::presburger::UB, and mlir::Operation::walk().
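
A minimal before/after sketch (hypothetical IR, assuming conditions 1-4 hold; "test.compute" stands in for arbitrary loop-body work):

```mlir
// Before: the value transits through memory on every iteration.
scf.for %i = %c0 to %ub step %c1 {
  %v = vector.transfer_read %buf[%c0], %pad : memref<8xf32>, vector<8xf32>
  %r = "test.compute"(%v) : (vector<8xf32>) -> vector<8xf32>
  vector.transfer_write %r, %buf[%c0] : vector<8xf32>, memref<8xf32>
}

// After hoisting: the loop yields the vector value instead.
%init = vector.transfer_read %buf[%c0], %pad : memref<8xf32>, vector<8xf32>
%res = scf.for %i = %c0 to %ub step %c1 iter_args(%acc = %init) -> (vector<8xf32>) {
  %r = "test.compute"(%acc) : (vector<8xf32>) -> vector<8xf32>
  scf.yield %r : vector<8xf32>
}
vector.transfer_write %res, %buf[%c0] : vector<8xf32>, memref<8xf32>
```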

◆ inferContractionDims() [1/2]

FailureOr<ContractionDimensions> mlir::linalg::inferContractionDims(ArrayRef<AffineMap> indexingMaps)

Definition at line 456 of file LinalgInterfaces.cpp.

References inferContractionDimsImpl(), and inferIteratorsFromOutMap().

◆ inferContractionDims() [2/2]

FailureOr< ContractionDimensions > mlir::linalg::inferContractionDims ( LinalgOp  linalgOp)

Find at least 2 parallel (m and n) and 1 reduction (k) dimension candidates that form a matmul subcomputation within linalgOp.

These dimensions are such that:

  1. The m dimension is involved in an outer-product along LHS (i.e. it is a permutation on RES and LHS and does not appear in RHS).
  2. The n dimension is involved in an outer-product along RHS (i.e. it is a permutation on RES and RHS and does not appear in LHS).
  3. The k dimension appears as a permutation on LHS and RHS.
  4. m, n and k appear only once in any given indexing.
  5. Optional batch dimensions that appear in all operands are captured.

This allows e.g. detecting that some contraction is embedded within linalgOp with some orthogonal heuristic. When multiple dimension occurrences exist that match batch, m, n, or k, indices are returned in sorted order. Returns a failure if any of m, n or k is empty.

Definition at line 448 of file LinalgInterfaces.cpp.

References inferContractionDimsImpl().

Referenced by blockPackMatmul(), packMatmulGreedily(), and validateFullTilesOnDims().
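
For a plain matmul, the inferred dimensions would be (illustrative sketch):

```mlir
// indexing_maps of a matmul:
//   LHS: affine_map<(m, n, k) -> (m, k)>
//   RHS: affine_map<(m, n, k) -> (k, n)>
//   RES: affine_map<(m, n, k) -> (m, n)>
// inferContractionDims returns batch = [], m = [0], n = [1], k = [2].
```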

◆ inferConvolutionDims()

FailureOr< ConvolutionDimensions > mlir::linalg::inferConvolutionDims ( LinalgOp  linalgOp)

Find at least 1 parallel (output_image) and reduction (filter_loop) dimension candidates that form a convolution subcomputation within linalgOp.

The LHS is assumed to be the convolution input while the RHS is assumed as the filter. These dimensions are such that:

  1. Optional batch dimensions that appear in the input and filter.
  2. The output_image dimension is involved in a cross-correlation along LHS (i.e. it is a permutation on RES and LHS and has an associated filter_loop in RHS).
  3. Optional output_channel dimension is involved in an outer-product along RHS (i.e. it is a permutation on RES and RHS and does not appear in LHS).
  4. Optional input_channel dimension appears as a permutation on LHS and RHS.
  5. The filter_loop dimension appears as a permutation on the RHS and represents the shape of the kernel cross-correlated along a corresponding output_image dim.
  6. The input_channel dimension appears as a permutation on LHS and RHS.
  7. All dimensions appear only once in any given indexing map.

This allows e.g. detecting that some convolution is embedded within linalgOp with some orthogonal heuristic. When multiple dimension occurrences exist that match any classification, indices are returned in sorted order. Returns a failure if output_image (and implicitly filter_loop) is empty.

Definition at line 817 of file LinalgInterfaces.cpp.

References inferConvolutionDimsImpl().

◆ insertSlicesBack()

SmallVector<Value> mlir::linalg::insertSlicesBack(OpBuilder &builder, Location loc, LinalgOp op, ValueRange operands, ValueRange results)

Creates insert_slice ops that insert results back into larger tensors they were originally extracted from with extract_slice before being passed as operands to the given structured operation op or its clone.

Note that operands are not necessarily the actual operands of op, the operation serves only as metadata container for operand types and positions.

Definition at line 714 of file Utils.cpp.

References mlir::OpBuilder::create(), and mlir::Value::getDefiningOp().

Referenced by tileLinalgOpImpl().

◆ interchangeGenericOp()

FailureOr<GenericOp> mlir::linalg::interchangeGenericOp(RewriterBase &rewriter, GenericOp genericOp, ArrayRef<unsigned> interchangeVector)

Interchange the iterator_types and indexing_maps dimensions and adapt the index accesses of op.

This is an in-place transformation controlled by interchangeVector. An empty vector is interpreted as the identity permutation and the transformation returns early.

E.g. the permutation (i,j,k) -> (j,k,i) is expressed with interchangeVector = [1,2,0]. All values in interchangeVector must be integers, in the range 0..op.rank without duplications (i.e. [1,1,2] is an invalid permutation).

Return failure if the permutation is not valid.

Definition at line 50 of file Interchange.cpp.

References mlir::RewriterBase::finalizeOpModification(), mlir::AffineMap::getPermutationMap(), interchangeGenericOpPrecondition(), mlir::inversePermutation(), mlir::RewriterBase::notifyMatchFailure(), and mlir::RewriterBase::startOpModification().

Referenced by packMatmulGreedily().
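
An illustrative sketch (not from the source) of the effect on a 2-D generic:

```mlir
// interchangeVector = [1, 0] permutes the loop order (i, j) -> (j, i);
// each indexing map is composed with the inverse permutation so the
// operand access pattern is unchanged:
//   before: affine_map<(d0, d1) -> (d0, d1)>
//   after:  affine_map<(d0, d1) -> (d1, d0)>
```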

◆ isaBroadcastOpInterface()

std::optional< SmallVector< int64_t > > mlir::linalg::isaBroadcastOpInterface ( GenericOp  genericOp)

Checks whether genericOp is semantically equivalent to a linalg.broadcast.

Returns broadcast dimensions if true.

Definition at line 102 of file LinalgInterfaces.cpp.

Referenced by specializeGenericOp().

◆ isaContractionOpInterface()

bool mlir::linalg::isaContractionOpInterface ( LinalgOp  linalgOp)

Checks whether linalgOp conforms to ContractionOpInterface.

Definition at line 529 of file LinalgInterfaces.cpp.

References mlir::linalg::detail::isContractionInterfaceImpl(), and mlir::linalg::detail::Success.

Referenced by FoldAddIntoDest::matchAndRewrite(), and specializeGenericOp().

◆ isaConvolutionOpInterface()

bool mlir::linalg::isaConvolutionOpInterface ( LinalgOp  linalgOp,
bool  allowEmptyConvolvedDims = false 
)

Checks whether linalgOp conforms to ConvolutionOpInterface.

By default, we require the linalgOp to have non-empty convolved dims (implicitly non-empty output_image and filter_loop). Users can loosen the constraint by setting allowEmptyConvolvedDims to true.

Definition at line 1006 of file LinalgInterfaces.cpp.

References mlir::linalg::detail::isConvolutionInterfaceImpl(), and mlir::linalg::detail::Success.

◆ isaCopyOpInterface()

bool mlir::linalg::isaCopyOpInterface ( LinalgOp  linalgOp)

Checks whether linalgOp is semantically equivalent to a linalg.copyOp.

Definition at line 64 of file LinalgInterfaces.cpp.

Referenced by specializeGenericOp().

◆ isaElemwiseSingleBinaryOpInterface()

bool mlir::linalg::isaElemwiseSingleBinaryOpInterface ( GenericOp  genericOp)

Checks whether genericOp is semantically equivalent to a single linalg elementwise binary op e.g.

linalg.sub.

Referenced by specializeGenericOp().

◆ isaElemwiseSingleUnaryOpInterface()

bool mlir::linalg::isaElemwiseSingleUnaryOpInterface ( GenericOp  genericOp)

Checks whether a given genericOp is semantically equivalent to a single linalg elementwise unary op.

e.g. linalg.exp. A linalg.generic body could be a series of unary elementwise ops, e.g. exp(neg(x)), such as formed by linalg op fusion. Here we restrict it to detecting cases where the body is a single computation op.

Referenced by specializeGenericOp().

◆ isaFillOpInterface()

std::optional< Value > mlir::linalg::isaFillOpInterface ( GenericOp  genericOp)

Checks whether genericOp is semantically equivalent to a linalg.fill.

Returns the scalar fill value if true.

Definition at line 81 of file LinalgInterfaces.cpp.

References mlir::IROperand< DerivedT, IRValueT >::get().

Referenced by specializeGenericOp().
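
A generic that would be recognized as a fill looks like this sketch (illustrative; assumes %cst is a scalar defined above and %init an init tensor):

```mlir
%0 = linalg.generic
       {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>],
        iterator_types = ["parallel", "parallel"]}
       outs(%init : tensor<4x8xf32>) {
     ^bb0(%out: f32):
       linalg.yield %cst : f32
     } -> tensor<4x8xf32>
// isaFillOpInterface returns %cst.
```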

◆ isaTransposeOpInterface()

std::optional< SmallVector< int64_t > > mlir::linalg::isaTransposeOpInterface ( GenericOp  genericOp)

Checks whether genericOp is semantically equivalent to a linalg.transpose.

Returns permuted dimensions if true.

Definition at line 152 of file LinalgInterfaces.cpp.

Referenced by specializeGenericOp().

◆ isDimSequencePreserved()

bool mlir::linalg::isDimSequencePreserved ( AffineMap  indexingMap,
ReassociationIndicesRef  dimSequence 
)

Return true if a given sequence of dimensions are contiguous in the range of the specified indexing map.

For a given dimSequence, check if the sequence is preserved in the indexingMap.

indexingMap is expected to be a projected permutation. If the sequence does not occur in the map at all, true is returned as well.

Definition at line 1157 of file ElementwiseOpFusion.cpp.

References mlir::detail::enumerate(), mlir::AffineMap::getNumResults(), mlir::AffineMap::getResult(), mlir::AffineMap::getResults(), and mlir::AffineMap::isProjectedPermutation().

Referenced by areDimSequencesPreserved().

◆ isElementwise()

bool mlir::linalg::isElementwise ( LinalgOp  op)

Check if a LinalgOp is an element-wise operation.

Definition at line 169 of file Utils.cpp.

References allIndexingsAreProjectedPermutation(), and hasOnlyScalarElementwiseOp().

Referenced by vectorizeDynamicLinalgOpPrecondition(), vectorizeLinalgOpPrecondition(), and vectorizeScalableVectorPrecondition().

◆ isParallelIterator()

bool mlir::linalg::isParallelIterator ( utils::IteratorType  iteratorType)

Check if iterator type has "parallel" semantics.

Definition at line 184 of file Utils.cpp.

Referenced by generateParallelLoopNest(), getCollapsableIterationSpaceDims(), and tileLinalgOpImpl().

◆ isReductionIterator()

bool mlir::linalg::isReductionIterator ( utils::IteratorType  iteratorType)

◆ linalgOpAnchoredEmptyTensorEliminationStep()

LogicalResult mlir::linalg::linalgOpAnchoredEmptyTensorEliminationStep(RewriterBase &rewriter, Operation *op, bufferization::OneShotAnalysisState &state)

Try to eliminate tensor::EmptyOps inside op that are anchored on a LinalgOp.

This transform looks for LinalgOps that have an unused output operand and an input operand that is rooted in a tensor::EmptyOp. The tensor::EmptyOp uses are replaced with the output operand and the two operands of the LinalgOp are swapped.

Example:

%0 = tensor.empty()
%1 = linalg.matmul ins(...) outs(%0)
%2 = linalg.generic ins(%1) outs(dest) {
^bb0(in: f32, out: f32):
  // out not used
}

The IR is transformed as follows:

%0 = tensor.empty()
%1 = linalg.matmul ins(...) outs(dest)
%2 = linalg.generic ins(%0) outs(%1) {
^bb0(in: f32, out: f32):
  // Use out instead of in
}

The "ins" operand has no uses inside the body of the LinalgOp and can be folded away with existing cleanup patterns. Afterwards, the tensor::EmptyOp can also fold away.

Definition at line 40 of file EliminateEmptyTensors.cpp.

◆ linalgOpToAffineLoops()

FailureOr<LinalgLoops> mlir::linalg::linalgOpToAffineLoops(RewriterBase &rewriter, LinalgOp linalgOp)

Emit a loop nest of affine.for with the proper body for linalgOp.

Definition at line 363 of file Loops.cpp.

◆ linalgOpToLoops()

FailureOr<LinalgLoops> mlir::linalg::linalgOpToLoops(RewriterBase &rewriter, LinalgOp linalgOp)

Emit a loop nest of scf.for with the proper body for linalgOp.

Definition at line 368 of file Loops.cpp.

◆ linalgOpToParallelLoops()

FailureOr<LinalgLoops> mlir::linalg::linalgOpToParallelLoops(RewriterBase &rewriter, LinalgOp linalgOp)

Emit a loop nest of scf.parallel with the proper body for linalgOp.

Definition at line 375 of file Loops.cpp.

◆ lowerPack()

FailureOr<LowerPackResult> mlir::linalg::lowerPack(RewriterBase &rewriter, tensor::PackOp packOp)

◆ lowerUnPack()

FailureOr<LowerUnPackOpResult> mlir::linalg::lowerUnPack(RewriterBase &rewriter, tensor::UnPackOp unPackOp)

◆ makeAffineDimExprs()

SmallVector<AffineExpr, 4> mlir::linalg::makeAffineDimExprs(unsigned num, unsigned &startIdx, MLIRContext *context)

Returns num AffineDimExpr dimensions at positions [startIdx, startIdx + num) and increments startIdx to startIdx + num.

Definition at line 2290 of file LinalgOps.cpp.

References mlir::getAffineDimExpr().

◆ makeComposedPadHighOp()

Value mlir::linalg::makeComposedPadHighOp(OpBuilder &b, Location loc, RankedTensorType type, Value source, Value pad, bool nofold)

Create a tensor::PadOp that pads source to the size of the statically sized type whose static sizes are assumed to be greater than the dynamic source size.

The padding introduces trailing pad values until the target size is met. If source is defined by one or more LinalgOps that have been padded with the same value and sizes, return their padded result instead of creating a tensor::PadOp.

Example:

%0 = tensor.extract_slice %arg0 [%iv0, %iv1] [%sz0, %sz1]
%1 = tensor.pad %0 low[0, 0] high[...] { tensor.yield %cst }
%2 = linalg.matmul ins(...) outs(%1)
%3 = tensor.extract_slice %2 [0, 0] [%sz0, %sz1]

makeComposedPadHighOp(source=%3, pad=cst) returns %2
makeComposedPadHighOp(source=%3, pad=other_cst) returns %4

%4 = tensor.pad %3 low[0, 0] high[...] { tensor.yield %other_cst }

Definition at line 192 of file Utils.cpp.

References mlir::tensor::createPadHighOp(), mlir::Value::getDefiningOp(), mlir::OpResult::getResultNumber(), mlir::m_Constant(), and mlir::matchPattern().

Referenced by padOperandToSmallestStaticBoundingBox().

◆ makeMemRefCopyOp()

GenericOp mlir::linalg::makeMemRefCopyOp(OpBuilder &b, Location loc, Value from, Value to)

Returns GenericOp that copies an n-D memref.

Unlike the current implementation of memref::CopyOp, this op can further tile, lower to loops or vectorize.

Definition at line 252 of file Utils.cpp.

References mlir::OpBuilder::create(), mlir::Builder::getContext(), mlir::AffineMap::getMultiDimIdentityMap(), and mlir::Value::getType().
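
A sketch of the op produced for 2-D memrefs (illustrative: identity maps on both operands, all-parallel iterators):

```mlir
linalg.generic
    {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                      affine_map<(d0, d1) -> (d0, d1)>],
     iterator_types = ["parallel", "parallel"]}
    ins(%from : memref<?x?xf32>) outs(%to : memref<?x?xf32>) {
  ^bb0(%in: f32, %out: f32):
    linalg.yield %in : f32
}
```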

◆ makeTiledLoopRanges()

std::tuple< SmallVector< Range, 4 >, LoopIndexToRangeIndexMap > mlir::linalg::makeTiledLoopRanges ( RewriterBase b,
Location  loc,
AffineMap  map,
ArrayRef< OpFoldResult allShapeSizes,
ArrayRef< OpFoldResult allTileSizes 
)

◆ makeTiledShape()

Operation * mlir::linalg::makeTiledShape ( OpBuilder builder,
Location  loc,
Value  valueToTile,
ArrayRef< OpFoldResult tileSizes,
AffineMap  map,
ArrayRef< OpFoldResult lbs,
ArrayRef< OpFoldResult ubs,
ArrayRef< OpFoldResult subShapeSizes,
bool  omitPartialTileCheck 
)

Creates an extract_slice/subview op for a single valueToTile with builder.

This new operation extracts a tile of valueToTile, starting at offsets lbs and with sizes subShapeSizes. omitPartialTileCheck controls whether to omit the partial/boundary tile condition check in cases where we statically know that it is unnecessary.

Definition at line 554 of file Utils.cpp.

References computeSliceParameters(), and materializeTiledShape().

◆ makeTiledShapes()

SmallVector< Value > mlir::linalg::makeTiledShapes ( OpBuilder builder,
Location  loc,
LinalgOp  linalgOp,
ValueRange  valuesToTile,
ArrayRef< OpFoldResult ivs,
ArrayRef< OpFoldResult tileSizes,
ArrayRef< OpFoldResult sizeBounds,
bool  omitPartialTileCheck 
)

Creates extract_slice/subview ops for all valuesToTile of the given linalgOp with builder, assuming linalgOp is being fused into a loop nest for tiling with the given induction variables ivs and tile sizes tileSizes.

sizeBounds are the iteration space bounds for all the implicit loops in linalgOp. omitPartialTileCheck controls whether to omit the partial/boundary tile condition check in cases where we statically know that it is unnecessary.

Note that a constant zero in tileSizes means no tiling at that implicit loop. The number of non-zero values in tileSizes should be equal to the number of values in ivs.
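For illustration, if only the first implicit loop of a matmul is tiled (tile size 4, with a zero tile size along the other dimensions), the slice created for the LHS operand looks roughly like the following sketch (shapes hypothetical):

```mlir
// %iv0 is the induction variable of the single tiled loop;
// the untiled dimensions keep their full extent.
%slice = tensor.extract_slice %lhs[%iv0, 0] [4, 32] [1, 1]
    : tensor<128x32xf32> to tensor<4x32xf32>
```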

Definition at line 794 of file Utils.cpp.

References computeAllSliceParameters(), and materializeTiledShape().

Referenced by fuse(), and tileLinalgOpImpl().

◆ materializeTiledShape()

static Operation* mlir::linalg::materializeTiledShape ( OpBuilder builder,
Location  loc,
Value  valueToTile,
const SliceParameters sliceParams 
)
static

◆ offsetIndices() [1/2]

void mlir::linalg::offsetIndices ( OpBuilder b,
LinalgOp  linalgOp,
ArrayRef< OpFoldResult offsets 
)

Add the specified offsets to any linalg.index ops contained in the given linalgOp.

The offsets are provided in the same order as iteration space dimensions. Null offsets are assumed to be zero.
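Conceptually, each linalg.index result inside the op body gets shifted by the offset for its dimension. A hedged sketch (names illustrative):

```mlir
// Before: uses of %i refer to the local iteration index.
%i = linalg.index 0 : index
// After: uses of %i are rewritten to the offset index.
%shifted = affine.apply affine_map<(d0)[s0] -> (d0 + s0)>(%i)[%offset0]
```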

Definition at line 816 of file Utils.cpp.

Referenced by fuse(), and transformIndexOps().

◆ offsetIndices() [2/2]

void mlir::linalg::offsetIndices ( RewriterBase b,
LinalgOp  linalgOp,
ArrayRef< OpFoldResult offsets 
)

◆ pack()

FailureOr< PackResult > mlir::linalg::pack ( RewriterBase rewriter,
linalg::LinalgOp  linalgOp,
ArrayRef< OpFoldResult packedSizes 
)

Implement packing of a single LinalgOp by packedSizes.

There must be one packedSizes entry per linalgOp iterator. Return the packed Linalg op on success, failure otherwise.

Definition at line 477 of file Transforms.cpp.

References mlir::OpBuilder::create(), DBGS, DBGSNL, mlir::getConstantIntValue(), mlir::getElementTypeOrSelf(), mlir::Operation::getRegion(), mlir::Value::getType(), mlir::ValueRange::getTypes(), mlir::Builder::getZeroAttr(), mlir::RewriterBase::notifyMatchFailure(), packLinalgMetadataOnce(), mlir::RewriterBase::replaceOp(), mlir::Region::takeBody(), and mlir::tile().

◆ packMatmulGreedily()

FailureOr< PackResult > mlir::linalg::packMatmulGreedily ( RewriterBase rewriter,
LinalgOp  linalgOp,
ArrayRef< OpFoldResult mnkPackedSizes,
ArrayRef< int64_t >  mnkPaddedSizesNextMultipleOf,
ArrayRef< int64_t >  mnkOrder 
)

Pack a LinalgOp by greedily inferring matmul dimensions (m, n, k) where m and n are proper parallel dimensions and k is a proper reduction dimension.

Packing occurs by rewriting the op as a linalg.generic and calling linalg::pack by mnkPackedSizes. The order of the packed dimensions is customizable: the mnkOrder is a permutation of {0, 1, 2} to reorder {m, n, k} into one of the 6 possible forms. The outer dimensions of the operands are not permuted at this time; this is left for future work.

Definition at line 766 of file Transforms.cpp.

References mlir::computePermutationVector(), DBGS, DBGSNL, generalizeNamedOp(), inferContractionDims(), interchangeGenericOp(), mlir::isPermutationVector(), and mlir::RewriterBase::notifyMatchFailure().

Referenced by blockPackMatmul().

◆ packTranspose()

FailureOr< PackTransposeResult > mlir::linalg::packTranspose ( RewriterBase rewriter,
tensor::PackOp  packOp,
linalg::LinalgOp  linalgOp,
tensor::UnPackOp  maybeUnPackOp,
ArrayRef< int64_t >  outerPerm,
ArrayRef< int64_t >  innerPerm 
)

Transpose a single PackOp -> LinalgOp -> UnPackOp chain and return the transposed PackOp -> LinalgOp -> UnPackOp chain after replacements.

Return failure if any of the following holds:

  1. packOp does not have linalgOp as its unique use.
  2. maybeUnPackOp, if specified, is not a consumer of the result tied to the unique packOp use.
  3. outerPerm (resp. innerPerm) is neither empty nor a valid permutation of packOp.getOuterDimsPerm (resp. packOp.getInnerDimsPerm).

Definition at line 675 of file Transforms.cpp.

References mlir::OpOperand::getOperandNumber(), mlir::detail::IROperandBase::getOwner(), mlir::isPermutationVector(), mlir::RewriterBase::notifyMatchFailure(), mlir::RewriterBase::replaceOp(), mlir::OpBuilder::setInsertionPoint(), and transposeOneLinalgOperandAndReplace().

Referenced by transposePackedMatmul().

◆ padAndHoistLinalgOp()

FailureOr< LinalgOp > mlir::linalg::padAndHoistLinalgOp ( RewriterBase rewriter,
LinalgOp  linalgOp,
const LinalgPaddingOptions options 
)

◆ peelLoop()

SmallVector< Value > mlir::linalg::peelLoop ( RewriterBase rewriter,
Operation op 
)

Try to peel and canonicalize loop op and return the new result.

Also applies affine_min/max bounds simplification on the fly where relevant.

Definition at line 59 of file Transforms.cpp.

References mlir::Operation::getResults(), and mlir::scf::peelForLoopAndSimplifyBounds().

Referenced by peelLoops().

◆ peelLoops()

void mlir::linalg::peelLoops ( RewriterBase rewriter,
ArrayRef< scf::ForOp >  loops 
)

Peel 'loops' and applies affine_min/max bounds simplification on the fly where relevant.

Definition at line 75 of file Transforms.cpp.

References peelLoop().

◆ populateBlockPackMatmulPatterns()

void mlir::linalg::populateBlockPackMatmulPatterns ( RewritePatternSet patterns,
const ControlBlockPackMatmulFn controlFn 
)

Patterns to block pack Linalg matmul ops.

Definition at line 310 of file BlockPackMatmul.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateBubbleUpExtractSliceOpPatterns()

void mlir::linalg::populateBubbleUpExtractSliceOpPatterns ( RewritePatternSet patterns)

Patterns that are used to bubble up extract slice op above linalg op.

Definition at line 134 of file BubbleUpExtractSlice.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateCollapseDimensions()

void mlir::linalg::populateCollapseDimensions ( RewritePatternSet patterns,
const GetCollapsableDimensionsFn controlCollapseDimensions 
)

Pattern to collapse dimensions in a linalg.generic op.

This will collapse tensor operands when needed and expand back the result tensors.

Definition at line 2147 of file ElementwiseOpFusion.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateConstantFoldLinalgOperations()

void mlir::linalg::populateConstantFoldLinalgOperations ( RewritePatternSet patterns,
const ControlFusionFn controlFn 
)

Patterns to constant fold Linalg operations.

Definition at line 306 of file ConstantFold.cpp.

References mlir::RewritePatternSet::getContext(), and mlir::RewritePatternSet::insert().

◆ populateContractionOpRankReducingPatterns()

void mlir::linalg::populateContractionOpRankReducingPatterns ( RewritePatternSet patterns)

Adds patterns that reduce the rank of named contraction ops that have unit dimensions in the operand(s) by converting to a sequence of collapse_shape, <corresponding linalg named op>, expand_shape (if on tensors).

For example, a linalg.batch_matmul with unit batch size will convert to linalg.matmul, and a linalg.matvec with a unit spatial dim in the lhs will convert to a linalg.dot.

Definition at line 1066 of file DropUnitDims.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateConvertConv2DToImg2ColPatterns()

void mlir::linalg::populateConvertConv2DToImg2ColPatterns ( RewritePatternSet patterns)

Populates patterns to transform linalg.conv_2d_xxx operations into linalg.generic (for img2col packing) and linalg.matmul.

See also
rewriteInIm2Col for more details.

Definition at line 687 of file ConvertConv2DToImg2Col.cpp.

References mlir::RewritePatternSet::getContext(), and mlir::RewritePatternSet::insert().

◆ populateConvertToDestinationStylePatterns()

void mlir::linalg::populateConvertToDestinationStylePatterns ( RewritePatternSet patterns)

Populate patterns that convert non-destination-style ops to destination style ops.

Definition at line 605 of file ConvertToDestinationStyle.cpp.

References mlir::RewritePatternSet::add().

◆ populateConvolutionVectorizationPatterns()

void mlir::linalg::populateConvolutionVectorizationPatterns ( RewritePatternSet patterns,
PatternBenefit  benefit = 1 
)

Populate patterns for vectorizing low-D convolution ops.

This is a step in progressive lowering for convolution ops; it assumes high-D convolution ops were decomposed previously.

Definition at line 3883 of file Vectorization.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateDataLayoutPropagationPatterns()

void mlir::linalg::populateDataLayoutPropagationPatterns ( RewritePatternSet patterns,
const ControlPropagationFn controlPackUnPackPropagation 
)

Patterns to bubble up or down data layout ops across other operations.

Definition at line 1206 of file DataLayoutPropagation.cpp.

References mlir::RewritePatternSet::getContext(), and mlir::RewritePatternSet::insert().

◆ populateDecomposeConvolutionPatterns()

void mlir::linalg::populateDecomposeConvolutionPatterns ( RewritePatternSet patterns,
PatternBenefit  benefit = 1 
)

Linalg decompose convolutions patterns.

Populates patterns to decompose high-D convolution ops into low-D ones. This is a step in progressive lowering for convolution ops; afterwards the low-D convolution ops can be vectorized.
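All of these populate* entry points follow the same usage pattern: collect the patterns into a RewritePatternSet, then hand them to a greedy rewrite driver. A minimal sketch inside a pass (error handling and pass boilerplate elided; this is an illustration, not a complete pass):

```cpp
// Inside runOnOperation() of a pass.
RewritePatternSet patterns(&getContext());
linalg::populateDecomposeConvolutionPatterns(patterns);
// Apply the collected patterns until fixpoint.
if (failed(applyPatternsAndFoldGreedily(getOperation(), std::move(patterns))))
  signalPassFailure();
```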

Definition at line 1601 of file Transforms.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateDecomposeLinalgOpsPattern()

void mlir::linalg::populateDecomposeLinalgOpsPattern ( RewritePatternSet patterns,
bool  removeDeadArgsAndResults = true 
)

Populate patterns for splitting a LinalgOp with multiple statements within its payload into multiple GenericOp that have a single statement.

The option removeDeadArgsAndResults adds patterns to remove dead arguments and results from the generated decomposed ops. This defaults to true since the core decomposition patterns rely on these cleanup patterns; it is set to false only for testing purposes.

Definition at line 384 of file DecomposeLinalgOps.cpp.

References mlir::RewritePatternSet::getContext(), mlir::RewritePatternSet::insert(), and populateEraseUnusedOperandsAndResultsPatterns().

◆ populateDecomposePackUnpackPatterns()

void mlir::linalg::populateDecomposePackUnpackPatterns ( RewritePatternSet patterns)

Populates patterns to decompose tensor.pack and tensor.unpack ops into simpler ops such as tensor.pad, linalg.transpose, and tensor.{insert|extract}_slice. Requires all outer dims to be unit.

Definition at line 1622 of file Transforms.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateDecomposeProjectedPermutationPatterns()

void mlir::linalg::populateDecomposeProjectedPermutationPatterns ( RewritePatternSet patterns)

Add patterns to make explicit broadcasts and transforms in the input operands of a genericOp.

Definition at line 246 of file DecomposeGenericByUnfoldingPermutation.cpp.

References mlir::RewritePatternSet::getContext(), and mlir::RewritePatternSet::insert().

◆ populateDecomposeWinogradOpsPatterns()

void mlir::linalg::populateDecomposeWinogradOpsPatterns ( RewritePatternSet patterns)

Patterns to decompose Winograd operators.

Definition at line 1215 of file WinogradConv2D.cpp.

References mlir::RewritePatternSet::getContext(), and mlir::RewritePatternSet::insert().

◆ populateElementwiseOpsFusionPatterns()

void mlir::linalg::populateElementwiseOpsFusionPatterns ( RewritePatternSet patterns,
const ControlFusionFn controlElementwiseOpFusion 
)

Patterns for fusing linalg operation on tensors.

Pattern to fuse linalg.generic -> linalg.generic operations when both operations are fusable elementwise operations.

Definition at line 2136 of file ElementwiseOpFusion.cpp.

References mlir::RewritePatternSet::add(), mlir::RewritePatternSet::getContext(), and populateEraseUnusedOperandsAndResultsPatterns().

◆ populateElementwiseToLinalgConversionPatterns()

void mlir::linalg::populateElementwiseToLinalgConversionPatterns ( RewritePatternSet patterns)

Populate patterns that convert ElementwiseMappable ops to linalg parallel loops.

Definition at line 115 of file ElementwiseToLinalg.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateEraseUnnecessaryInputsPatterns()

void mlir::linalg::populateEraseUnnecessaryInputsPatterns ( RewritePatternSet patterns)

Patterns to promote inputs to outputs and remove unused inputs of linalg.generic ops.

Definition at line 428 of file EraseUnusedOperandsAndResults.cpp.

References mlir::RewritePatternSet::getContext(), and mlir::RewritePatternSet::insert().

◆ populateEraseUnusedOperandsAndResultsPatterns()

void mlir::linalg::populateEraseUnusedOperandsAndResultsPatterns ( RewritePatternSet patterns)

Pattern to remove dead operands and results of linalg.generic operations.

This is effectively DCE for a linalg op.

Definition at line 421 of file EraseUnusedOperandsAndResults.cpp.

References mlir::RewritePatternSet::getContext(), and mlir::RewritePatternSet::insert().

Referenced by populateDecomposeLinalgOpsPattern(), and populateElementwiseOpsFusionPatterns().

◆ populateFoldAddIntoDestPatterns()

void mlir::linalg::populateFoldAddIntoDestPatterns ( RewritePatternSet patterns)

Pattern to replace a linalg.add when destination-passing style on a contraction op suffices to achieve the sum.

Definition at line 147 of file FoldAddIntoDest.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateFoldReshapeOpsByCollapsingPatterns()

void mlir::linalg::populateFoldReshapeOpsByCollapsingPatterns ( RewritePatternSet patterns,
const ControlFusionFn controlFoldingReshapes 
)

Patterns to fold an expanding tensor.expand_shape operation with its producer generic operation by collapsing the dimensions of the generic op.

Definition at line 2127 of file ElementwiseOpFusion.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateFoldReshapeOpsByExpansionPatterns()

void mlir::linalg::populateFoldReshapeOpsByExpansionPatterns ( RewritePatternSet patterns,
const ControlFusionFn controlFoldingReshapes 
)

Patterns to fold an expanding (collapsing) tensor_reshape operation with its producer (consumer) generic operation by expanding the dimensionality of the loop in the generic op.

Definition at line 2116 of file ElementwiseOpFusion.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateFoldUnitExtentDimsPatterns()

void mlir::linalg::populateFoldUnitExtentDimsPatterns ( RewritePatternSet patterns,
linalg::ControlDropUnitDims options 
)

Patterns to fold unit-extent dimensions in operands/results of linalg ops on tensors via reassociative reshape ops.

Definition at line 797 of file DropUnitDims.cpp.

References mlir::linalg::ControlDropUnitDims::ExtractInsertSlice, options, populateFoldUnitExtentDimsViaReshapesPatterns(), and populateFoldUnitExtentDimsViaSlicesPatterns().

◆ populateFuseTensorPadWithProducerLinalgOpPatterns()

void mlir::linalg::populateFuseTensorPadWithProducerLinalgOpPatterns ( RewritePatternSet patterns)

Pattern to fuse a tensor.pad operation with the producer of its source, if the producer is a linalg operation with all parallel iterator types.

Definition at line 121 of file FusePadOpWithLinalgProducer.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateInlineConstantOperandsPatterns()

void mlir::linalg::populateInlineConstantOperandsPatterns ( RewritePatternSet patterns)

Patterns that are used to inline constant operands into linalg generic ops.

Definition at line 98 of file InlineScalarOperands.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateInsertSliceVectorizationPatterns()

void mlir::linalg::populateInsertSliceVectorizationPatterns ( RewritePatternSet patterns)

Populates patterns with vectorization patterns for tensor.insert_slice.

TODO: Avoid having a dedicated populate{} for one pattern. Instead, either expand or merge with other populate{}.

Definition at line 2766 of file Vectorization.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateLinalgGenericOpsSpecializationPatterns()

void mlir::linalg::populateLinalgGenericOpsSpecializationPatterns ( RewritePatternSet patterns)

Populates patterns with patterns to convert linalg.generic ops to named ops where possible.

A linalg.generic can represent a wide range of complex computations for which no equivalent linalg named op exists, e.g. a linalg.generic that takes a tensor and computes a polynomial such as p(x) = a_n*x^n + ... + a_1*x + a_0. There is no equivalent named op to convert to; many such cases exist.

Definition at line 356 of file Specialize.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateLinalgNamedOpConversionPatterns()

void mlir::linalg::populateLinalgNamedOpConversionPatterns ( RewritePatternSet patterns)

Patterns to convert from one named op to another.

These can be seen as canonicalizations of named ops into another named op.

Definition at line 161 of file NamedOpConversions.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateLinalgNamedOpsGeneralizationPatterns()

void mlir::linalg::populateLinalgNamedOpsGeneralizationPatterns ( RewritePatternSet patterns)

Linalg generalization patterns.

Populates patterns with patterns to convert spec-generated named ops to linalg.generic ops.

Definition at line 95 of file Generalization.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateLinalgTilingCanonicalizationPatterns()

void mlir::linalg::populateLinalgTilingCanonicalizationPatterns ( RewritePatternSet patterns)

◆ populateLinalgToStandardConversionPatterns()

void mlir::linalg::populateLinalgToStandardConversionPatterns ( RewritePatternSet patterns)

Populate the given list with patterns that convert from Linalg to Standard.

Definition at line 127 of file LinalgToStandard.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateMoveInitOperandsToInputPattern()

void mlir::linalg::populateMoveInitOperandsToInputPattern ( RewritePatternSet patterns)

A pattern that converts init operands to input operands.

Definition at line 809 of file DropUnitDims.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populatePadOpVectorizationPatterns()

void mlir::linalg::populatePadOpVectorizationPatterns ( RewritePatternSet patterns,
PatternBenefit  baseBenefit = 1 
)

Populates patterns with patterns that vectorize tensor.pad.

These patterns are meant to apply in a complementary fashion. Benefits are used to encode a certain ordering of pattern application. To avoid scattering magic constants throughout the code base, the patterns must be added with this function. baseBenefit can be used to offset the benefit of all tensor::PadOp vectorization patterns by a certain value.

Definition at line 2771 of file Vectorization.cpp.

References mlir::RewritePatternSet::add(), mlir::PatternBenefit::getBenefit(), and mlir::RewritePatternSet::getContext().

◆ populateSparseTensorRewriting()

void mlir::linalg::populateSparseTensorRewriting ( RewritePatternSet patterns)

Populate patterns that are only useful in the context of sparse tensors.

◆ populateSplitReductionPattern()

void mlir::linalg::populateSplitReductionPattern ( RewritePatternSet patterns,
const ControlSplitReductionFn controlSplitReductionFn,
bool  useAlloc = false 
)

Patterns to apply splitReduction below.

Definition at line 448 of file SplitReduction.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateSwapExtractSliceWithFillPatterns()

void mlir::linalg::populateSwapExtractSliceWithFillPatterns ( RewritePatternSet patterns)

Adds patterns that swap tensor.extract_slice(linalg.fill(cst, init)) into linalg.fill(cst, tensor.extract_slice(init)).

Definition at line 38 of file SwapExtractSliceWithFillPatterns.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateTranposeConv2DPatterns()

void mlir::linalg::populateTranposeConv2DPatterns ( RewritePatternSet patterns)

◆ populateTransposeMatmulPatterns()

void mlir::linalg::populateTransposeMatmulPatterns ( RewritePatternSet patterns,
bool  transposeLHS = true 
)

Patterns to convert Linalg matmul ops to transposed variants.

Definition at line 164 of file TransposeMatmul.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateWinogradConv2DPatterns()

void mlir::linalg::populateWinogradConv2DPatterns ( RewritePatternSet patterns,
int64_t  m,
int64_t  r 
)

Patterns to apply Winograd Conv2D algorithm F(m x m, r x r).

Definition at line 1208 of file WinogradConv2D.cpp.

◆ promoteSubviewAsNewBuffer()

FailureOr< PromotionInfo > mlir::linalg::promoteSubviewAsNewBuffer ( OpBuilder b,
Location  loc,
memref::SubViewOp  subView,
const AllocBufferCallbackFn allocationFn,
DataLayout layout 
)

◆ promoteSubViews()

FailureOr< LinalgOp > mlir::linalg::promoteSubViews ( OpBuilder b,
LinalgOp  op,
const LinalgPromotionOptions options 
)

Promote the subViews into a new buffer allocated at the insertion point b.

Promotion occurs in 3 steps:

  1. Create a new buffer for a full tile (i.e. not clipped at the boundary).
  2. Take a full view on the buffer.
  3. Take a partial slice of the full view in step 2. and copy into it.

Return the modified linalg op (the modification happens in place) as well as all the copy ops created.
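The three steps above can be sketched in IR roughly as follows (buffer sizes, element type, and names are illustrative assumptions, not the exact output of the transformation):

```mlir
// 1. Allocate a buffer for a full, unclipped tile.
%buf = memref.alloc() : memref<4x8xf32>
// 2. Take a full view on the new buffer.
%full = memref.subview %buf[0, 0] [4, 8] [1, 1]
    : memref<4x8xf32> to memref<4x8xf32, strided<[8, 1]>>
// 3. Take a partial slice matching the (possibly clipped) original
//    subview, and copy the original data into it.
%partial = memref.subview %full[0, 0] [%sz0, %sz1] [1, 1]
    : memref<4x8xf32, strided<[8, 1]>>
    to memref<?x?xf32, strided<[8, 1]>>
memref.copy %origSubview, %partial
    : memref<?x?xf32, strided<[?, ?], offset: ?>>
    to memref<?x?xf32, strided<[8, 1]>>
```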

Definition at line 421 of file Promotion.cpp.

References options, and promoteSubViews().

◆ promoteSubviewsPrecondition()

LogicalResult mlir::linalg::promoteSubviewsPrecondition ( Operation op,
LinalgPromotionOptions  options 
)

Promote memref.subviews feeding linalg-on-buffers operations.

Definition at line 399 of file Promotion.cpp.

References options.

◆ registerAll()

template<typename... OpTypes>
static void mlir::linalg::registerAll ( MLIRContext ctx)
static

Variadic helper function.

Definition at line 344 of file MeshShardingInterfaceImpl.cpp.

Referenced by registerMeshShardingInterfaceExternalModels().

◆ registerAllDialectInterfaceImplementations()

void mlir::linalg::registerAllDialectInterfaceImplementations ( DialectRegistry registry)

◆ registerBufferizableOpInterfaceExternalModels()

void mlir::linalg::registerBufferizableOpInterfaceExternalModels ( DialectRegistry registry)

◆ registerMeshShardingInterfaceExternalModels()

void mlir::linalg::registerMeshShardingInterfaceExternalModels ( DialectRegistry registry)

◆ registerOne()

template<typename OpType >
static void mlir::linalg::registerOne ( MLIRContext ctx)
static

Definition at line 338 of file MeshShardingInterfaceImpl.cpp.

◆ registerRuntimeVerifiableOpInterfaceExternalModels()

void mlir::linalg::registerRuntimeVerifiableOpInterfaceExternalModels ( DialectRegistry registry)

◆ registerSubsetOpInterfaceExternalModels()

void mlir::linalg::registerSubsetOpInterfaceExternalModels ( DialectRegistry registry)

◆ registerTilingInterfaceExternalModels()

void mlir::linalg::registerTilingInterfaceExternalModels ( DialectRegistry registry)

◆ registerTransformDialectExtension()

void mlir::nvgpu::registerTransformDialectExtension ( DialectRegistry registry)

Definition at line 60 of file DialectExtension.cpp.

References mlir::DialectRegistry::addExtensions().

Referenced by mlir::registerAllExtensions().

◆ registerValueBoundsOpInterfaceExternalModels()

void mlir::linalg::registerValueBoundsOpInterfaceExternalModels ( DialectRegistry registry)

◆ rewriteAsPaddedOp()

LogicalResult mlir::linalg::rewriteAsPaddedOp ( RewriterBase rewriter,
LinalgOp  opToPad,
const LinalgPaddingOptions options,
LinalgOp &  paddedOp,
SmallVector< Value > &  replacements,
SmallVector< tensor::PadOp > &  padOps 
)

Pad the iterator dimensions paddingDimensions of all opToPad operands to a static bounding box.

The original opToPad is cloned and operates on the padded tensors.

  • "options.padToMultipleOf" indicates that each padding dimension should be padded to the specified multiple.
  • Use "options.paddingValues" and "options.nofoldFlags" to set padding value and nofold attribute of the created tensor::PadOps, respectively.
  • The unpadded results (extracted slice of the cloned operation) are returned via replacements.
  • The tensor::PadOps are returned via padOps.
  • "options.copyBackOp" specifies the op type for copying back the unpadded result to the original destination tensor.

Definition at line 153 of file Padding.cpp.

References mlir::clone(), mlir::OpBuilder::create(), DBGS, mlir::detail::enumerate(), mlir::get(), mlir::getElementTypeOrSelf(), mlir::Builder::getIndexAttr(), mlir::Value::getType(), mlir::ValueRange::getTypes(), mlir::Builder::getZeroAttr(), mlir::linalg::LinalgPaddingOptions::LinalgCopy, mlir::linalg::LinalgPaddingOptions::None, mlir::RewriterBase::notifyMatchFailure(), options, padOperandToSmallestStaticBoundingBox(), mlir::reifyResultShapes(), and mlir::OpBuilder::setInsertionPointAfter().

Referenced by padAndHoistLinalgOp().

◆ rewriteInDestinationPassingStyle() [1/3]

FailureOr< Operation * > mlir::linalg::rewriteInDestinationPassingStyle ( RewriterBase rewriter,
tensor::FromElementsOp  fromElementsOp 
)

Rewrite tensor.from_elements to linalg.generic.

Lower tensor.from_elements to a sequence of chained tensor.insert.

Definition at line 345 of file ConvertToDestinationStyle.cpp.

References mlir::OpBuilder::create(), createInserts(), mlir::Value::getDefiningOp(), mlir::RewriterBase::replaceOp(), and mlir::RewriterBase::replaceOpWithNewOp().

◆ rewriteInDestinationPassingStyle() [2/3]

FailureOr< Operation * > mlir::linalg::rewriteInDestinationPassingStyle ( RewriterBase rewriter,
tensor::GenerateOp  generateOp 
)

◆ rewriteInDestinationPassingStyle() [3/3]

FailureOr< Operation * > mlir::linalg::rewriteInDestinationPassingStyle ( RewriterBase rewriter,
tensor::PadOp  padOp 
)

◆ rewriteInIm2Col() [1/4]

FailureOr< std::pair< Operation *, Operation * > > mlir::linalg::rewriteInIm2Col ( RewriterBase rewriter,
linalg::Conv2DNchwFchwOp  convOp 
)

Similar to rewriteInIm2Col with linalg::Conv2DNhwcHwcfOp, except that because the channels are to the left of the image shape dimensions, the position of the contraction dimension in the resulting matmul is reversed.

This swaps the LHS and RHS of the matmul compared with the nhwc case, i.e. (D, C x Kh x Kw) * (C x Kh x Kw, Ho x Wo).

Definition at line 365 of file ConvertConv2DToImg2Col.cpp.

References mlir::bindDims(), mlir::OpBuilder::create(), createAdd(), createMul(), mlir::AffineMap::get(), mlir::get(), mlir::Builder::getContext(), getConvolvedIndex(), mlir::Value::getLoc(), mlir::AffineMap::getMultiDimIdentityMap(), mlir::Value::getType(), mlir::getType(), hasAllOneValues(), mlir::RewriterBase::notifyMatchFailure(), mlir::RewriterBase::replaceOp(), and unrollIndex().

◆ rewriteInIm2Col() [2/4]

FailureOr< std::pair< Operation *, Operation * > > mlir::linalg::rewriteInIm2Col ( RewriterBase rewriter,
linalg::Conv2DNhwcFhwcOp  convOp 
)

Same as the above but for Fhwc channel orderings in the filter.

In this case the matrix multiplication is actually a row-wise dot-product rather than a row-column dot-product. This is to avoid transposing the filter matrix which would be required for a regular matrix multiplication to produce the correct output dimensions.

Definition at line 498 of file ConvertConv2DToImg2Col.cpp.

References mlir::bindDims(), mlir::OpBuilder::create(), createAdd(), createMul(), mlir::AffineMap::get(), mlir::get(), mlir::Builder::getContext(), getConvolvedIndex(), mlir::AffineMap::getMultiDimIdentityMap(), mlir::Value::getType(), mlir::getType(), hasAllOneValues(), mlir::RewriterBase::notifyMatchFailure(), mlir::RewriterBase::replaceOp(), and unrollIndex().

◆ rewriteInIm2Col() [3/4]

FailureOr< std::pair< Operation *, Operation * > > mlir::linalg::rewriteInIm2Col ( RewriterBase rewriter,
linalg::Conv2DNhwcHwcfOp  convOp 
)

Convert linalg.conv_2d_nhwc_hwcf into linalg.generic (for img2col packing) and linalg.matmul.

A convolution operation can be written as a matrix-matrix multiplication by unfolding the cross-correlation between input and filter and explicitly copy overlapped sliding window inputs.

Consider 2D input X with a single input and output channel and a 2x2 filter W:

[x(0, 0)  , x(0, 1)  , ...,   x(0, n)  ]
[x(1, 0)  , x(1, 1)  , ...,   x(1, n)  ]
[.        , .        , .  ,   .        ]           [w(0, 0), w(0, 1)]
[.        , .        , .  ,   .        ]  (conv)   [w(1, 0), w(1, 1)]
[.        , .        , .  ,   .        ]
[x(n-1, 0), x(n-1, 1), ..., x(n-1, n-1)]

The packed input data (img2col) is a matrix with |rows| = output spatial size and |columns| = filter spatial size. To compute the output Y(i, j) we need to calculate the dot product between the filter window at input X(x, y) and the filter, which will look like the following, where the r.h.s. is the img2col matrix and the l.h.s. is the flattened filter:

[x(0,0), x(0,1), x(1,0), x(1,1)]
[x(0,1), x(1,1), x(0,2), x(1,2)]  (matmul)  [w(0,0), w(0,1), w(1,0), w(1,1)]
[x(0,1), x(1,1), x(0,2), x(1,2)]
[  .   ,   .   ,   .   ,   .   ]

In general, for the 2D case with (N, H, W, C) input, (Kh, Kw, C, D) filter, and (N, Ho, Wo, D) output, the convolution is the matrix-matrix multiplication (Ho x Wo, Kh x Kw x C) * (Kh x Kw x C, D) for each of the N inputs. For the case where N > 1 it is a batched matrix-matrix multiplication.

On success, return both the operation that produces the img2col tensor and the final operation of the sequence that replaces the original convolution.
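At the IR level, the rewrite produces an img2col linalg.generic followed by a contraction. A hedged sketch with hypothetical static shapes (N = 1 collapsed away, Ho = Wo = 14, Kh = Kw = 3, C = 4, D = 16; indexing maps elided):

```mlir
// img2col: gather sliding windows into (Ho*Wo, Kh*Kw*C) = (196, 36).
%col = linalg.generic ... ins(%input : tensor<16x16x4xf32>)
           outs(%colInit : tensor<196x36xf32>) { ... }
// Contraction with the flattened (Kh*Kw*C, D) = (36, 16) filter.
%res = linalg.matmul ins(%col, %filterFlat
           : tensor<196x36xf32>, tensor<36x16xf32>)
           outs(%out : tensor<196x16xf32>) -> tensor<196x16xf32>
```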

Definition at line 79 of file ConvertConv2DToImg2Col.cpp.

References mlir::bindDims(), mlir::OpBuilder::create(), createAdd(), createMul(), mlir::AffineMap::get(), mlir::get(), mlir::Builder::getContext(), getConvolvedIndex(), mlir::AffineMap::getMultiDimIdentityMap(), mlir::Value::getType(), mlir::getType(), hasAllOneValues(), mlir::RewriterBase::notifyMatchFailure(), mlir::RewriterBase::replaceOp(), and unrollIndex().

◆ rewriteInIm2Col() [4/4]

FailureOr< std::pair< Operation *, Operation * > > mlir::linalg::rewriteInIm2Col ( RewriterBase rewriter,
linalg::DepthwiseConv2DNhwcHwcOp  convOp 
)

Similar to rewriteInIm2Col with linalg::Conv2DNhwcHwcfOp, except that there is no reduction among the input channels, so each convolution can be a matrix-vector product; by transposing both the input and the filter so that channels are outermost, the computation becomes a batched matrix-vector product.

Definition at line 214 of file ConvertConv2DToImg2Col.cpp.

References mlir::bindDims(), mlir::OpBuilder::create(), mlir::AffineMap::get(), mlir::get(), mlir::Builder::getAffineConstantExpr(), mlir::Builder::getAffineDimExpr(), mlir::Builder::getContext(), mlir::AffineMap::getMultiDimIdentityMap(), mlir::Operation::getResult(), mlir::Value::getType(), hasAllOneValues(), mlir::inversePermutation(), mlir::RewriterBase::notifyMatchFailure(), and mlir::RewriterBase::replaceOp().

◆ specializeGenericOp()

FailureOr< LinalgOp > mlir::linalg::specializeGenericOp ( RewriterBase &  rewriter,
GenericOp  genericOp 
)

◆ splitOp()

std::pair< TilingInterface, TilingInterface > mlir::linalg::splitOp ( RewriterBase &  rewriter,
TilingInterface  op,
unsigned  dimension,
OpFoldResult  splitPoint 
)

Split the given op into two parts along the given iteration space dimension at the specified splitPoint, and return the two parts.

If the second part is statically known to be empty, do not create it and return nullptr instead. Error state is signalled by returning a pair of nullptrs.

For example, the following op:

linalg.matmul ins(%0, %1 : tensor<128x32xf32>, tensor<32x64xf32>)
              outs(%2 : tensor<128x64xf32>)

split along the first dimension at position 42 will result in:

%3 = tensor.extract_slice %0[0, 0][42, 32][1, 1]
%4 = tensor.extract_slice %2[0, 0][42, 64][1, 1]
%5 = linalg.matmul ins(%3, %1 : tensor<42x32xf32>, tensor<32x64xf32>)
                   outs(%4 : tensor<42x64xf32>)
%6 = tensor.insert_slice %5 into %2[0, 0][42, 64][1, 1]

%7 = tensor.extract_slice %0[42, 0][86, 32][1, 1]
%8 = tensor.extract_slice %6[42, 0][86, 64][1, 1]
%9 = linalg.matmul ins(%7, %1 : tensor<86x32xf32>, tensor<32x64xf32>)
                   outs(%8 : tensor<86x64xf32>)
tensor.insert_slice %9 into %6[42, 0][86, 64][1, 1]

Note that there is no simplification other than constant propagation applied to slice extraction and insertion.
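
The splitting invariant above can be sketched in plain Python (hypothetical helpers, not MLIR API): splitting the first iteration dimension at point s and stacking the two partial results reproduces the unsplit op.

```python
def matmul(a, b):
    """Plain nested-list matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def split_matmul_dim0(a, b, s):
    """Split the M dimension at s, like splitOp on dimension 0."""
    first = matmul(a[:s], b)    # rows [0, s): the first part (the 42-row slice)
    second = matmul(a[s:], b)   # rows [s, M): the second part (the 86-row slice)
    return first + second       # insert_slice: stitch the parts back together
```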

Definition at line 67 of file Split.cpp.

References mlir::bindDims(), createSplitPart(), mlir::Builder::getContext(), mlir::tensor::getOrCreateDestinations(), mlir::AffineMap::inferFromExprList(), mlir::affine::makeComposedFoldedAffineApply(), mlir::affine::makeComposedFoldedAffineMin(), mlir::RewriterBase::modifyOpInPlace(), mlir::Range::offset, mlir::RewriterBase::replaceOp(), and mlir::Range::size.

◆ splitReduction()

FailureOr< SplitReductionResult > mlir::linalg::splitReduction ( RewriterBase &  b,
LinalgOp  op,
const ControlSplitReductionFn &  controlSplitReductionFn,
bool  useAlloc = false 
)

◆ splitReductionByScaling()

FailureOr< SplitReductionResult > mlir::linalg::splitReductionByScaling ( RewriterBase &  b,
LinalgOp  op,
const ControlSplitReductionFn &  controlSplitReductionFn,
bool  useAlloc = false 
)

Scaling-based implementation of the split reduction transformation.

Core rewrite implementation.

Instead of introducing an ExpandShapeOp, this rewrites a reduction dimension k into k * scale + kk.

Example:

```
%0 = linalg.matmul ins(%A, %B : tensor<16x256xf32>, tensor<256x32xf32>)
                   outs(%C : tensor<16x32xf32>) -> tensor<16x32xf32>
```

Is transformed to:

```
#map0 = affine_map<(d0, d1, d2, d3) -> (d0, d2 * 4 + d3)>
#map1 = affine_map<(d0, d1, d2, d3) -> (d2 * 4 + d3, d1)>
#map2 = affine_map<(d0, d1, d2, d3) -> (d2, d3)>
#map3 = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2)>
#map4 = affine_map<(d0, d1, d2) -> (d0, d1, d2)>
#map5 = affine_map<(d0, d1, d2) -> (d0, d1)>
%0 = tensor.empty [16, 32, 64] : tensor<16x32x64xf32>
%cst = arith.constant 0.000000e+00 : f32
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<16x32x64xf32>)
       -> tensor<16x32x64xf32>
%2 = tensor.empty [64, 4] : tensor<64x4xi1>

%3 = linalg.generic
       {indexing_maps = [#map0, #map1, #map2, #map3],
        iterator_types = ["parallel", "parallel", "parallel", "reduction"]}
       ins(%A, %B, %2 : tensor<16x256xf32>, tensor<256x32xf32>, tensor<64x4xi1>)
       outs(%1 : tensor<16x32x64xf32>) {
     ^bb0(%arg3: f32, %arg4: f32, %arg5: i1, %arg6: f32):
       %5 = arith.mulf %arg3, %arg4 : f32
       %6 = arith.addf %arg6, %5 : f32
       linalg.yield %6 : f32
     } -> tensor<16x32x64xf32>

%4 = linalg.generic
       {indexing_maps = [#map4, #map5],
        iterator_types = ["parallel", "parallel", "reduction"]}
       ins(%3 : tensor<16x32x64xf32>)
       outs(%C : tensor<16x32xf32>) {
     ^bb0(%arg3: f32, %arg4: f32):
       %5 = arith.addf %arg3, %arg4 : f32
       linalg.yield %5 : f32
     } -> tensor<16x32xf32>

return %4 : tensor<16x32xf32>
```
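
The reindexing k = kk * ratio + kkk that drives this transformation can be checked with a small plain-Python sketch (illustrative names, not MLIR API), here for a dot product whose reduction dimension is split into a parallel outer dimension plus a smaller reduction:

```python
def dot_split_by_scaling(a, b, ratio):
    """Reduce over k in [0, K) as k = kk * ratio + kkk, then merge."""
    K = len(a)
    assert K % ratio == 0, "sketch assumes K divides evenly"
    partial = [0.0] * (K // ratio)      # extra parallel dimension (like %1)
    for kk in range(K // ratio):        # parallel
        for kkk in range(ratio):        # reduction
            k = kk * ratio + kkk        # k = kk * scale + kkk
            partial[kk] += a[k] * b[k]
    return sum(partial)                 # final merge reduction (like %4)
```

Every original index k is visited exactly once, so the split computation equals the unsplit reduction (up to floating-point reassociation).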

Definition at line 241 of file SplitReduction.cpp.

References mlir::Operation::clone(), mlir::OpBuilder::create(), mlir::tensor::createDynamicDimValues(), mlir::AffineMap::dropResult(), mlir::AffineMap::get(), mlir::getAffineDimExpr(), mlir::Value::getDefiningOp(), mlir::Builder::getIntegerType(), mlir::Builder::getMultiDimIdentityMap(), mlir::arith::getNeutralElement(), mlir::Operation::getResult(), mlir::Value::getType(), mlir::ValueRange::getTypes(), mlir::linalg::SplitReductionOptions::index, mlir::RewriterBase::inlineRegionBefore(), mlir::linalg::SplitReductionOptions::innerParallel, mlir::RankedTensorType::Builder::insertDim(), insertParallelDim(), mlir::matchReduction(), mlir::RewriterBase::notifyMatchFailure(), mlir::linalg::SplitReductionOptions::ratio, mlir::RewriterBase::replaceOp(), scaleReductionDim(), mlir::OpBuilder::setInsertionPoint(), and mlir::Operation::setOperand().

◆ spmdizeLinalgOpWithShardedReduction()

static void mlir::linalg::spmdizeLinalgOpWithShardedReduction ( LinalgOp  op,
ArrayRef< Value >  spmdizedOperands,
ArrayRef< MeshSharding >  operandShardings,
ArrayRef< MeshSharding >  resultShardings,
ArrayRef< utils::IteratorType >  loopIteratorTypes,
ArrayRef< SmallVector< MeshAxis >>  meshAxisAssignmentForLoopIterators,
IRMapping &  spmdizationMap,
SymbolTableCollection &  symbolTable,
ImplicitLocOpBuilder &  builder 
)
static

◆ tileLinalgOp()

FailureOr< TiledLinalgOp > mlir::linalg::tileLinalgOp ( RewriterBase &  b,
LinalgOp  op,
const LinalgTilingOptions &  options 
)

Definition at line 824 of file Tiling.cpp.

References options.

◆ tileReductionUsingForall()

FailureOr< linalg::ForallReductionTilingResult > mlir::linalg::tileReductionUsingForall ( RewriterBase &  b,
PartialReductionOpInterface  op,
ArrayRef< OpFoldResult >  numThreads,
ArrayRef< OpFoldResult >  tileSizes = {},
std::optional< ArrayAttr >  mapping = std::nullopt 
)

Method to tile a reduction to parallel iterations computing partial reductions.

After the loop, all the partial reductions are merged into a final reduction. For example, this transforms the following sequence

%0 = linalg.generic %in ["parallel", "reduction"]
: tensor<7x9xf32> -> tensor<7xf32>

into:

%0 = linalg.fill ... : tensor<7x4xf32>
%1 = scf.forall (%iv) in (%c4) shared_outs(%arg0 = %0)
-> (tensor<7x4xf32>) {
%2 = tensor.extract_slice %arg0 : tensor<7x4xf32> to tensor<7xf32>
%3 = tensor.extract_slice %in : tensor<7x9xf32> -> tensor<7x?xf32>
%4 = linalg.generic %2, %3 ["parallel", "reduction"]
: tensor<7x?xf32> -> tensor<7xf32>
%5 = tensor.insert_slice %4 into %arg0[0, %iv] : tensor<7x4xf32>
}
%6 = linalg.generic %1 ["parallel", "reduction"]
: tensor<7x4xf32> -> tensor<7xf32>
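
The partial-reduction tiling above can be sketched in plain Python (illustrative names) for a <7x9> -> <7> row-sum: each of num_threads workers reduces its chunk of the reduction dimension into one column of a <7 x num_threads> buffer, which a final reduction then merges.

```python
def row_sum_tiled(x, num_threads):
    rows, cols = len(x), len(x[0])
    tile = -(-cols // num_threads)        # ceil(cols / num_threads)
    # Init buffer with the reduction's neutral element (like the linalg.fill).
    partial = [[0.0] * num_threads for _ in range(rows)]
    for iv in range(num_threads):         # scf.forall over threads
        lo, hi = iv * tile, min((iv + 1) * tile, cols)
        for i in range(rows):
            partial[i][iv] = sum(x[i][lo:hi])   # partial reduction per thread
    return [sum(p) for p in partial]      # final merge (like %6)
```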

Definition at line 596 of file Tiling.cpp.

References calculateTileOffsetsAndSizes(), mlir::OpBuilder::clone(), mlir::OpBuilder::create(), mlir::Operation::emitError(), mlir::RewriterBase::eraseOp(), mlir::getAsOpFoldResult(), mlir::Builder::getIndexAttr(), mlir::tensor::getOrCreateDestinations(), mlir::getValueOrCreateConstantIndexOp(), mlir::linalg::ForallReductionTilingResult::initialValues, mlir::isConstantIntValue(), mlir::linalg::ForallReductionTilingResult::loops, mlir::affine::mapLoopToProcessorIds(), mlir::linalg::ForallReductionTilingResult::mergeOps, mlir::RewriterBase::modifyOpInPlace(), mlir::RewriterBase::notifyMatchFailure(), options, mlir::linalg::ForallReductionTilingResult::parallelTiledOps, mlir::RewriterBase::replaceOp(), mlir::OpBuilder::setInsertionPoint(), mlir::OpBuilder::setInsertionPointAfter(), and mlir::OpBuilder::setInsertionPointToEnd().

◆ transformIndexOps()

void mlir::linalg::transformIndexOps ( RewriterBase &  b,
LinalgOp  op,
SmallVectorImpl< Value > &  ivs,
const LoopIndexToRangeIndexMap &  loopIndexToRangeIndex 
)

All indices returned by IndexOp should be invariant with respect to tiling.

Therefore, if an operation is tiled, we have to transform the indices accordingly, i.e. offset them by the values of the corresponding induction variables that are captured implicitly in the body of the op.

Example. linalg.generic before tiling:

#id_2d = (i, j) -> (i, j)
#pointwise_2d_trait = {
  indexing_maps = [#id_2d, #id_2d],
  iterator_types = ["parallel", "parallel"]
}
linalg.generic #pointwise_2d_trait %operand, %result {
  ^bb0(%operand_in: f32, %result_in: f32):
    %i = linalg.index 0 : index
    %j = linalg.index 1 : index
    <some operations that use %i, %j>
}: memref<50x100xf32>, memref<50x100xf32>

After a tiling pass with tile sizes 10 and 25:

#strided = (i, j)[s0, s1, s2] -> (i * s1 + s0 + j * s2)

%c1 = arith.constant 1 : index
%c0 = arith.constant 0 : index
%c25 = arith.constant 25 : index
%c10 = arith.constant 10 : index
%operand_dim_0 = dim %operand, 0 : memref<50x100xf32>
%operand_dim_1 = dim %operand, 1 : memref<50x100xf32>
scf.for %k = %c0 to %operand_dim_0 step %c10 {
  scf.for %l = %c0 to %operand_dim_1 step %c25 {
    %4 = memref.subview %operand[%k, %l][%c10, %c25][%c1, %c1]
      : memref<50x100xf32> to memref<?x?xf32, #strided>
    %5 = memref.subview %result[%k, %l][%c10, %c25][%c1, %c1]
      : memref<50x100xf32> to memref<?x?xf32, #strided>
    linalg.generic #pointwise_2d_trait %4, %5 {
      ^bb0(%operand_in: f32, %result_in: f32):
        %i = linalg.index 0 : index
        %j = linalg.index 1 : index
        // Indices %k and %l are implicitly captured in the body.
        %transformed_i = arith.addi %i, %k : index // index i offset by %k
        %transformed_j = arith.addi %j, %l : index // index j offset by %l
        // Every use of %i, %j is replaced with %transformed_i, %transformed_j.
        <some operations that use %transformed_i, %transformed_j>
    }: memref<?x?xf32, #strided>, memref<?x?xf32, #strided>
  }
}

TODO: Investigate whether mixing implicit and explicit indices can lead to losing information.

Definition at line 78 of file Tiling.cpp.

References mlir::detail::enumerate(), mlir::getAsOpFoldResult(), and offsetIndices().

Referenced by tileLinalgOpImpl().

◆ transposeBatchMatmul()

FailureOr< Operation * > mlir::linalg::transposeBatchMatmul ( RewriterBase &  rewriter,
linalg::BatchMatmulOp  batchMatmulOp,
bool  transposeLHS = true 
)

Pattern to replace:

linalg.batch_matmul(a, b)

with

linalg.batch_matmul_transpose_a(linalg.transpose(a), b)

Only the non-batch dimensions are transposed. By default the LHS is transposed. Set transposeLHS=false to transpose RHS instead.

Definition at line 88 of file TransposeMatmul.cpp.

References mlir::OpBuilder::create(), mlir::Value::getType(), mlir::bufferization::hasTensorSemantics(), mlir::RewriterBase::notifyMatchFailure(), and mlir::RewriterBase::replaceOp().

◆ transposeConv2D() [1/2]

FailureOr< Operation * > mlir::linalg::transposeConv2D ( RewriterBase &  rewriter,
linalg::Conv2DNhwcFhwcOp  op 
)

Convert linalg.conv_2d_nhwc_fhwc(_q) to linalg.conv_2d_nhwc_hwcf(_q) by materializing transpose.

Definition at line 127 of file TransposeConv2D.cpp.

◆ transposeConv2D() [2/2]

FailureOr< Operation * > mlir::linalg::transposeConv2D ( RewriterBase &  rewriter,
linalg::Conv2DNhwcFhwcQOp  op 
)

Definition at line 134 of file TransposeConv2D.cpp.

◆ transposeMatmul()

FailureOr< Operation * > mlir::linalg::transposeMatmul ( RewriterBase &  rewriter,
linalg::MatmulOp  matmulOp,
bool  transposeLHS = true 
)

Convert Linalg matmul ops to transposed variants.

Pattern to replace:

linalg.matmul(a, b)

with

linalg.matmul_transpose_a(linalg.transpose(a), b)

By default the LHS is transposed. Set transposeLHS=false to transpose RHS instead.
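
The rewrite is sound because (Aᵀ)ᵀ · B = A · B: the materialized linalg.transpose is undone by the *_transpose_a semantics. A plain-Python sketch of the equivalence (illustrative, not the MLIR implementation):

```python
def transpose(a):
    """Transpose a nested-list matrix."""
    return [list(col) for col in zip(*a)]

def matmul(a, b):
    """Plain matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matmul_transpose_a(a_t, b):
    """Consumes a pre-transposed LHS, transposing it back internally."""
    return matmul(transpose(a_t), b)
```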

Definition at line 31 of file TransposeMatmul.cpp.

References mlir::OpBuilder::create(), mlir::Value::getType(), mlir::bufferization::hasTensorSemantics(), mlir::RewriterBase::notifyMatchFailure(), and mlir::RewriterBase::replaceOp().

◆ unrollIndex()

static SmallVector<Value> mlir::linalg::unrollIndex ( OpBuilder &  b,
Location  loc,
Value  index,
ArrayRef< int64_t >  factors 
)
static

◆ updateBoundsForCyclicDistribution()

void mlir::linalg::updateBoundsForCyclicDistribution ( OpBuilder &  builder,
Location  loc,
Value  procId,
Value  nprocs,
Value &  lb,
Value &  ub,
Value &  step 
)

Update the lb, ub and step to get per processor lb, ub and step.
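
A plain-Python sketch of the cyclic-distribution bound update (illustrative names): each processor starts at lb + procId * step and strides by nprocs * step, so the union of all per-processor iterations is exactly the original range.

```python
def cyclic_bounds(lb, ub, step, proc_id, nprocs):
    """Per-processor (lb, ub, step) for a cyclic distribution."""
    return lb + proc_id * step, ub, nprocs * step

def distributed_iterations(lb, ub, step, nprocs):
    """Collect every iteration executed by any processor."""
    seen = []
    for p in range(nprocs):
        plb, pub, pstep = cyclic_bounds(lb, ub, step, p, nprocs)
        seen.extend(range(plb, pub, pstep))
    return sorted(seen)
```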

Definition at line 351 of file Utils.cpp.

References mlir::bindDims(), mlir::getAffineSymbolExpr(), mlir::Builder::getContext(), and mlir::affine::makeComposedAffineApply().

Referenced by mlir::linalg::GenerateLoopNest< LoopTy >::doit().

◆ vectorize()

LogicalResult mlir::linalg::vectorize ( RewriterBase &  rewriter,
Operation *  op,
ArrayRef< int64_t >  inputVectorSizes = {},
ArrayRef< bool >  inputScalableVecDims = {},
bool  vectorizeNDExtract = false,
bool  flatten1DDepthwiseConv = false 
)

Emit a suitable vector form for an operation.

If provided, inputVectorSizes are used to vectorize this operation. inputVectorSizes must match the rank of the iteration space of the operation, and each input vector size must be greater than or equal to its counterpart iteration space size, if that size is static. inputVectorSizes also allows the vectorization of operations with dynamic shapes.

Definition at line 2161 of file Vectorization.cpp.

References LDBG, and vectorizeOpPrecondition().

◆ vectorizeCopy()

LogicalResult mlir::linalg::vectorizeCopy ( RewriterBase &  builder,
memref::CopyOp  copyOp 
)

◆ vectorizeOpPrecondition()

LogicalResult mlir::linalg::vectorizeOpPrecondition ( Operation *  op,
ArrayRef< int64_t >  inputVectorSizes = {},
ArrayRef< bool >  inputScalableVecDims = {},
bool  vectorizeNDExtract = false,
bool  flatten1DDepthwiseConv = false 
)

◆ winogradConv2D()

FailureOr< Operation * > mlir::linalg::winogradConv2D ( RewriterBase &  rewriter,
linalg::Conv2DNhwcFhwcOp  op,
int64_t  m,
int64_t  r 
)

Convert linalg.conv_2d_nhwc_fhwc to Winograd Conv2D algorithm F(m x m, r x r).

m is the dimension size of the output tile and r is the dimension size of the filter.
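
Winograd F(m, r) computes an m-length output tile from an (m + r - 1)-length input tile with fewer multiplications than direct convolution; the 2D F(m x m, r x r) case applies the same transforms along both dimensions. A 1D F(2, 3) sketch using the standard transform matrices (plain Python, not the MLIR implementation):

```python
def winograd_f23(d, g):
    """F(2, 3): d = 4 input samples, g = 3 filter taps -> 2 outputs,
    using only 4 multiplications instead of 6."""
    # Input transform B^T d.
    td = [d[0] - d[2], d[1] + d[2], -d[1] + d[2], d[1] - d[3]]
    # Filter transform G g.
    tg = [g[0], (g[0] + g[1] + g[2]) / 2, (g[0] - g[1] + g[2]) / 2, g[2]]
    # Elementwise product in the transformed domain.
    m = [a * b for a, b in zip(td, tg)]
    # Output transform A^T m.
    return [m[0] + m[1] + m[2], m[1] - m[2] - m[3]]
```

The result matches direct correlation of the filter over the input tile.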

Definition at line 1184 of file WinogradConv2D.cpp.