MLIR  16.0.0git
mlir::bufferization Namespace Reference

Namespaces

 detail
 
 func_ext
 

Classes

class  AnalysisState
 AnalysisState provides a variety of helper functions for dealing with tensor values. More...
 
class  BufferizationAliasInfo
 The BufferizationAliasInfo class maintains a list of buffer aliases and equivalence classes to support bufferization. More...
 
struct  BufferizationOptions
 Options for BufferizableOpInterface-based bufferization. More...
 
class  BufferizeTypeConverter
 A helper type converter class that automatically populates the relevant materializations and type conversions for bufferization. More...
 
class  BufferPlacementAllocs
 A simple analysis that detects allocation operations. More...
 
class  BufferPlacementTransformationBase
 The base class for all BufferPlacement transformations. More...
 
struct  DialectAnalysisState
 Dialect-specific analysis state. More...
 
class  OneShotAnalysisState
 State for analysis-enabled bufferization. More...
 
struct  OneShotBufferizationOptions
 Options for analysis-enabled bufferization. More...
 
class  OpFilter
 

Typedefs

using AnchorMatchFn = std::function< bool(OpOperand &, SmallVector< Value > &)>
 A function that matches anchor OpOperands for AllocTensorOp elimination. More...
 
using RewriteFn = std::function< Value(OpBuilder &, Location, OpOperand &)>
 A function that rewrites matched anchors. More...
 

Enumerations

enum  BufferRelation { BufferRelation::None, BufferRelation::Equivalent }
 Specify fine-grain relationship between buffers to enable more analysis. More...
 

Functions

bool isFunctionArgument (Value value)
 Return true if the given value is a BlockArgument of a func::FuncOp. More...
 
FailureOr< Value > allocateTensorForShapedValue (OpBuilder &b, Location loc, Value shapedValue, bool escape, const BufferizationOptions &options, bool copy=true)
 Create an AllocTensorOp for the given shaped value (memref or tensor). More...
 
bool allocationDoesNotEscape (OpResult opResult)
 Return true if the allocation of the given op is guaranteed to not escape the containing block. More...
 
FailureOr< Value > getBuffer (RewriterBase &rewriter, Value value, const BufferizationOptions &options)
 Lookup the buffer for the given value. More...
 
FailureOr< BaseMemRefType > getBufferType (Value value, const BufferizationOptions &options)
 Return the buffer type for a given Value (tensor) after bufferization without bufferizing any IR. More...
 
FailureOr< BaseMemRefType > getBufferType (Value value, const BufferizationOptions &options, const DenseMap< Value, BaseMemRefType > &fixedTypes)
 Return the buffer type for a given Value (tensor) after bufferization without bufferizing any IR. More...
 
void replaceOpWithBufferizedValues (RewriterBase &rewriter, Operation *op, ValueRange values)
 Replace an op with replacement values. More...
 
template<typename OpTy , typename... Args>
OpTy replaceOpWithNewBufferizedOp (RewriterBase &rewriter, Operation *op, Args &&...args)
 Replace an op with a new op. More...
 
bool shouldDeallocateOpResult (OpResult opResult, const BufferizationOptions &options)
 Return true if the buffer of given OpResult should be deallocated. More...
 
BaseMemRefType getMemRefType (Value value, const BufferizationOptions &options, MemRefLayoutAttrInterface layout={}, unsigned memorySpace=0)
 Return a MemRefType to which the type of the given value can be bufferized. More...
 
BaseMemRefType getMemRefTypeWithFullyDynamicLayout (TensorType tensorType, unsigned memorySpace=0)
 Return a MemRef type with fully dynamic layout. More...
 
BaseMemRefType getMemRefTypeWithStaticIdentityLayout (TensorType tensorType, unsigned memorySpace=0)
 Return a MemRef type with a static identity layout (i.e., no layout map). More...
 
Operation * getOwnerOfValue (Value value)
 Return the owner of the given value. More...
 
void populateDynamicDimSizes (OpBuilder &b, Location loc, Value shapedValue, SmallVector< Value > &dynamicDims)
 Populate dynamicDims with tensor::DimOp / memref::DimOp results for all dynamic dimensions of the given shaped value. More...
 
FailureOr< Value > castOrReallocMemRefValue (OpBuilder &b, Value value, MemRefType type)
 Try to cast the given ranked MemRef-typed value to the given ranked MemRef type. More...
 
LogicalResult foldToMemrefToTensorPair (RewriterBase &rewriter, ToMemrefOp toMemref)
 Try to fold to_memref(to_tensor(x)). More...
 
void registerTransformDialectExtension (DialectRegistry &registry)
 
LogicalResult eliminateAllocTensors (RewriterBase &rewriter, Operation *op, bufferization::AnalysisState &state, AnchorMatchFn anchorMatchFunc, RewriteFn rewriteFunc)
 Try to eliminate AllocTensorOps inside op. More...
 
LogicalResult insertSliceAnchoredAllocTensorEliminationStep (RewriterBase &rewriter, Operation *op, bufferization::AnalysisState &state)
 Try to eliminate AllocTensorOps inside op that are anchored on an InsertSliceOp, i.e., if it is eventually inserted into another tensor (and some other conditions are met). More...
 
void populateBufferizeMaterializationLegality (ConversionTarget &target)
 Marks ops used by bufferization for type conversion materializations as "legal" in the given ConversionTarget. More...
 
void populateEliminateBufferizeMaterializationsPatterns (BufferizeTypeConverter &typeConverter, RewritePatternSet &patterns)
 Populate patterns to eliminate bufferize materializations. More...
 
LogicalResult bufferizeOp (Operation *op, const BufferizationOptions &options, bool copyBeforeWrite=true, const OpFilter *opFilter=nullptr)
 Bufferize op and its nested ops that implement BufferizableOpInterface. More...
 
BufferizationOptions getPartialBufferizationOptions ()
 
FailureOr< memref::GlobalOp > getGlobalFor (arith::ConstantOp constantOp, uint64_t alignment)
 
LogicalResult analyzeOp (Operation *op, OneShotAnalysisState &state)
 Analyze op and its nested ops. More...
 
LogicalResult runOneShotBufferize (Operation *op, const OneShotBufferizationOptions &options)
 Run One-Shot Bufferize on the given op: Analysis + Bufferization. More...
 
LogicalResult analyzeModuleOp (ModuleOp moduleOp, OneShotAnalysisState &state)
 Analyze moduleOp and its nested ops. More...
 
LogicalResult bufferizeModuleOp (ModuleOp moduleOp, const OneShotBufferizationOptions &options)
 Bufferize op and its nested ops that implement BufferizableOpInterface. More...
 
LogicalResult runOneShotModuleBufferize (ModuleOp moduleOp, const bufferization::OneShotBufferizationOptions &options)
 Run One-Shot Module Bufferization on the given module. More...
 
std::unique_ptr< Pass > createBufferDeallocationPass ()
 Creates an instance of the BufferDeallocation pass to free all allocated buffers. More...
 
LogicalResult deallocateBuffers (Operation *op)
 Run buffer deallocation. More...
 
std::unique_ptr< Pass > createBufferHoistingPass ()
 Creates a pass that moves allocations upwards to reduce the number of required copies that are inserted during the BufferDeallocation pass. More...
 
std::unique_ptr< Pass > createBufferLoopHoistingPass ()
 Creates a pass that moves allocations upwards out of loops. More...
 
std::unique_ptr< Pass > createBufferResultsToOutParamsPass ()
 Creates a pass that converts memref function results to out-params. More...
 
LogicalResult promoteBufferResultsToOutParams (ModuleOp module)
 Replace buffers that are returned from a function with an out parameter. More...
 
std::unique_ptr< Pass > createDropEquivalentBufferResultsPass ()
 Creates a pass that drops memref function results that are equivalent to a function argument. More...
 
std::unique_ptr< Pass > createEmptyTensorToAllocTensorPass ()
 Create a pass that rewrites tensor.empty to bufferization.alloc_tensor. More...
 
LogicalResult dropEquivalentBufferResults (ModuleOp module)
 Drop all memref function results that are equivalent to a function argument. More...
 
std::unique_ptr< OperationPass< func::FuncOp > > createFinalizingBufferizePass ()
 Creates a pass that finalizes a partial bufferization by removing remaining bufferization.to_tensor and bufferization.to_memref operations. More...
 
std::unique_ptr< Pass > createOneShotBufferizePass ()
 Create a pass that bufferizes all ops that implement BufferizableOpInterface with One-Shot Bufferize. More...
 
std::unique_ptr< Pass > createOneShotBufferizePass (const OneShotBufferizationOptions &options)
 Create a pass that bufferizes all ops that implement BufferizableOpInterface with One-Shot Bufferize and the specified bufferization options. More...
 
std::unique_ptr< Pass > createPromoteBuffersToStackPass (unsigned maxAllocSizeInBytes=1024, unsigned maxRankOfAllocatedMemRef=1)
 Creates a pass that promotes heap-based allocations to stack-based ones. More...
 
std::unique_ptr< Pass > createPromoteBuffersToStackPass (std::function< bool(Value)> isSmallAlloc)
 Creates a pass that promotes heap-based allocations to stack-based ones. More...
 
std::unique_ptr< Pass > createAllocTensorEliminationPass ()
 Create a pass that tries to eliminate alloc_tensor ops that are anchored on insert_slice ops. More...
 
std::unique_ptr< Pass > createBufferizationBufferizePass ()
 Create a pass that bufferizes ops from the bufferization dialect. More...
 
std::unique_ptr< Pass > createTensorCopyInsertionPass ()
 Create a pass that resolves out-of-place tensor OpOperands with copies. More...
 
std::unique_ptr< Pass > createTensorCopyInsertionPass (const OneShotBufferizationOptions &options)
 
void registerAllocationOpInterfaceExternalModels (DialectRegistry &registry)
 Register external models for AllocationOpInterface. More...
 
LogicalResult insertTensorCopies (Operation *op, const OneShotBufferizationOptions &options)
 
LogicalResult insertTensorCopies (Operation *op, const AnalysisState &state)
 

Typedef Documentation

◆ AnchorMatchFn

using mlir::bufferization::AnchorMatchFn = typedef std::function<bool(OpOperand &, SmallVector<Value> &)>

A function that matches anchor OpOperands for AllocTensorOp elimination.

If an OpOperand is matched, the function should populate the SmallVector with all values that are needed during RewriteFn to produce the replacement value.

Definition at line 21 of file AllocTensorElimination.h.

◆ RewriteFn

using mlir::bufferization::RewriteFn = typedef std::function<Value(OpBuilder &, Location, OpOperand &)>

A function that rewrites matched anchors.

Definition at line 24 of file AllocTensorElimination.h.

Enumeration Type Documentation

◆ BufferRelation

Specify fine-grain relationship between buffers to enable more analysis.

Enumerator
None 
Equivalent 

Definition at line 315 of file BufferizableOpInterface.h.

Function Documentation

◆ allocateTensorForShapedValue()

FailureOr< Value > mlir::bufferization::allocateTensorForShapedValue (OpBuilder &b, Location loc, Value shapedValue, bool escape, const BufferizationOptions &options, bool copy = true)

◆ allocationDoesNotEscape()

bool mlir::bufferization::allocationDoesNotEscape ( OpResult  opResult)

Return true if the allocation of the given op is guaranteed to not escape the containing block.

Definition at line 48 of file BufferizableOpInterface.cpp.

References mlir::Operation::getAttrOfType(), mlir::Value::getDefiningOp(), and mlir::OpResult::getResultNumber().

Referenced by mlir::bufferization::AnalysisState::getOptions().

◆ analyzeModuleOp()

LogicalResult mlir::bufferization::analyzeModuleOp (ModuleOp moduleOp, OneShotAnalysisState &state)

◆ analyzeOp()

LogicalResult mlir::bufferization::analyzeOp (Operation *op, OneShotAnalysisState &state)

◆ bufferizeModuleOp()

LogicalResult mlir::bufferization::bufferizeModuleOp (ModuleOp moduleOp, const OneShotBufferizationOptions &options)

Bufferize op and its nested ops that implement BufferizableOpInterface.

Note: This function does not run One-Shot Analysis. No buffer copies are inserted unless options.copyBeforeWrite is set, in which case buffers are copied before every write.

Definition at line 403 of file OneShotModuleBufferize.cpp.

References mlir::bufferization::BufferizationOptions::bufferizeFunctionBoundaries, bufferizeOp(), mlir::bufferization::BufferizationOptions::copyBeforeWrite, mlir::failed(), mlir::failure(), foldMemRefCasts(), mlir::bufferization::BufferizationOptions::functionBoundaryTypeConversion, getFuncOpsOrderedByCalls(), mlir::bufferization::BufferizationOptions::InferLayoutMap, removeBufferizationAttributes(), and mlir::success().

Referenced by runOneShotModuleBufferize().

◆ bufferizeOp()

LogicalResult mlir::bufferization::bufferizeOp (Operation *op, const BufferizationOptions &options, bool copyBeforeWrite = true, const OpFilter *opFilter = nullptr)

Bufferize op and its nested ops that implement BufferizableOpInterface.

If copyBeforeWrite is set, buffers are duplicated and copied before any tensor use that bufferizes to a memory write.

Note: In the general case, it is unsafe to run with copyBeforeWrite = false because read-after-write conflicts may materialize during bufferization. copyBeforeWrite = false is safe only if the input IR is guaranteed not to require any out-of-place bufferization.

Note: This function bufferizes ops without utilizing analysis results. It can be used to implement partial bufferization passes.

Check the result of bufferization. Return an error if an op was not bufferized, unless partial bufferization is allowed.

Definition at line 388 of file Bufferize.cpp.

References mlir::Operation::emitError(), mlir::failed(), mlir::failure(), foldToMemrefToTensorPair(), mlir::Operation::getContext(), mlir::Operation::getUses(), hasTensorSemantics(), insertTensorCopies(), mlir::bufferization::OpFilter::isOpAllowed(), options, mlir::PostOrder, mlir::success(), and mlir::Operation::walk().

Referenced by bufferizeModuleOp(), populateEliminateBufferizeMaterializationsPatterns(), runOneShotBufferize(), and mlir::sparse_tensor::BufferizeDenseOpsPass::runOnOperation().
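A minimal usage sketch (not taken from the sources referenced above): it assumes a func::FuncOp named funcOp that is already constructed, uses the partial-bufferization defaults from getPartialBufferizationOptions(), and restricts bufferization to the bufferization dialect via an OpFilter; the dialect choice is illustrative only.

    // Assumes `using namespace mlir; using namespace mlir::bufferization;`
    // and that this code runs inside a function returning LogicalResult.
    BufferizationOptions options = getPartialBufferizationOptions();
    OpFilter filter;
    filter.allowDialect<BufferizationDialect>(); // illustrative filter choice
    if (failed(bufferizeOp(funcOp, options, /*copyBeforeWrite=*/true, &filter)))
      return failure();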

◆ castOrReallocMemRefValue()

FailureOr< Value > mlir::bufferization::castOrReallocMemRefValue (OpBuilder &b, Value value, MemRefType type)

Try to cast the given ranked MemRef-typed value to the given ranked MemRef type.

Insert a reallocation + copy if it cannot be statically guaranteed that a direct cast would be valid.

E.g., when casting from a ranked MemRef type with dynamic layout to a ranked MemRef type with static layout, it is not statically known whether the cast will succeed or not. Such memref.cast ops may fail at runtime. This function never generates such casts and conservatively inserts a copy.

This function returns failure() in case of unsupported casts. E.g., casts with differing element types or memory spaces.

Definition at line 27 of file BufferizationOps.cpp.

References mlir::Type::cast(), copy(), mlir::OpBuilder::create(), mlir::OpBuilder::createOrFold(), mlir::failed(), mlir::failure(), mlir::Value::getLoc(), mlir::getStridesAndOffset(), mlir::Value::getType(), and value.

Referenced by mlir::bufferization::BufferizeTypeConverter::BufferizeTypeConverter(), and foldToMemrefToTensorPair().

◆ createAllocTensorEliminationPass()

std::unique_ptr< Pass > mlir::bufferization::createAllocTensorEliminationPass ( )

Create a pass that tries to eliminate alloc_tensor ops that are anchored on insert_slice ops.

Definition at line 271 of file AllocTensorElimination.cpp.

◆ createBufferDeallocationPass()

std::unique_ptr< Pass > mlir::bufferization::createBufferDeallocationPass ( )

Creates an instance of the BufferDeallocation pass to free all allocated buffers.

Definition at line 711 of file BufferDeallocation.cpp.

◆ createBufferHoistingPass()

std::unique_ptr< Pass > mlir::bufferization::createBufferHoistingPass ( )

Creates a pass that moves allocations upwards to reduce the number of required copies that are inserted during the BufferDeallocation pass.

Definition at line 435 of file BufferOptimizations.cpp.

◆ createBufferizationBufferizePass()

std::unique_ptr< Pass > mlir::bufferization::createBufferizationBufferizePass ( )

Create a pass that bufferizes ops from the bufferization dialect.

Definition at line 285 of file Bufferize.cpp.

◆ createBufferLoopHoistingPass()

std::unique_ptr< Pass > mlir::bufferization::createBufferLoopHoistingPass ( )

Creates a pass that moves allocations upwards out of loops.

This avoids reallocations inside of loops.

Definition at line 439 of file BufferOptimizations.cpp.

◆ createBufferResultsToOutParamsPass()

std::unique_ptr< Pass > mlir::bufferization::createBufferResultsToOutParamsPass ( )

Creates a pass that converts memref function results to out-params.

Definition at line 199 of file BufferResultsToOutParams.cpp.

◆ createDropEquivalentBufferResultsPass()

std::unique_ptr< Pass > mlir::bufferization::createDropEquivalentBufferResultsPass ( )

Creates a pass that drops memref function results that are equivalent to a function argument.

Definition at line 157 of file DropEquivalentBufferResults.cpp.

◆ createEmptyTensorToAllocTensorPass()

std::unique_ptr< Pass > mlir::bufferization::createEmptyTensorToAllocTensorPass ( )

Create a pass that rewrites tensor.empty to bufferization.alloc_tensor.

Definition at line 62 of file EmptyTensorToAllocTensor.cpp.

◆ createFinalizingBufferizePass()

std::unique_ptr< OperationPass< func::FuncOp > > mlir::bufferization::createFinalizingBufferizePass ( )

Creates a pass that finalizes a partial bufferization by removing remaining bufferization.to_tensor and bufferization.to_memref operations.

Definition at line 299 of file Bufferize.cpp.

Referenced by mlir::sparse_tensor::buildSparseCompiler().

◆ createOneShotBufferizePass() [1/2]

std::unique_ptr< Pass > mlir::bufferization::createOneShotBufferizePass ( )

Create a pass that bufferizes all ops that implement BufferizableOpInterface with One-Shot Bufferize.

Definition at line 289 of file Bufferize.cpp.

◆ createOneShotBufferizePass() [2/2]

std::unique_ptr< Pass > mlir::bufferization::createOneShotBufferizePass (const OneShotBufferizationOptions &options)

Create a pass that bufferizes all ops that implement BufferizableOpInterface with One-Shot Bufferize and the specified bufferization options.

Definition at line 293 of file Bufferize.cpp.

References options.
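A hedged sketch of adding this pass to a pipeline; the specific option values (allowReturnAllocs, bufferizeFunctionBoundaries) and the surrounding PassManager setup are example choices, not requirements of the API.

    // Assumes `using namespace mlir; using namespace mlir::bufferization;`,
    // an MLIRContext `context`, and a ModuleOp `moduleOp`.
    OneShotBufferizationOptions options;
    options.allowReturnAllocs = true;           // example choice
    options.bufferizeFunctionBoundaries = true; // example choice
    PassManager pm(&context);
    pm.addPass(createOneShotBufferizePass(options));
    if (failed(pm.run(moduleOp)))
      return failure();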

◆ createPromoteBuffersToStackPass() [1/2]

std::unique_ptr< Pass > mlir::bufferization::createPromoteBuffersToStackPass (unsigned maxAllocSizeInBytes = 1024, unsigned maxRankOfAllocatedMemRef = 1)

Creates a pass that promotes heap-based allocations to stack-based ones.

Only buffers smaller than the provided size are promoted. Dynamically shaped buffers are promoted up to the given rank.

Definition at line 443 of file BufferOptimizations.cpp.

◆ createPromoteBuffersToStackPass() [2/2]

std::unique_ptr< Pass > mlir::bufferization::createPromoteBuffersToStackPass ( std::function< bool(Value)>  isSmallAlloc)

Creates a pass that promotes heap-based allocations to stack-based ones.

Only buffers with isSmallAlloc(alloc) == true are promoted.

Definition at line 449 of file BufferOptimizations.cpp.
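A hedged sketch of a custom predicate, assuming a PassManager `pm` is in scope; the 1024-byte threshold and the static-shape requirement are example policy, not part of the API.

    // Assumes `using namespace mlir; using namespace mlir::bufferization;`.
    pm.addPass(createPromoteBuffersToStackPass([](Value alloc) {
      auto type = alloc.getType().dyn_cast<MemRefType>();
      if (!type || !type.hasStaticShape())
        return false;
      // Promote only small, statically sized buffers (example: <= 1024 bytes).
      return type.getNumElements() * type.getElementTypeBitWidth() <= 1024 * 8;
    }));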

◆ createTensorCopyInsertionPass() [1/2]

std::unique_ptr< Pass > mlir::bufferization::createTensorCopyInsertionPass ( )

Create a pass that resolves out-of-place tensor OpOperands with copies.

Definition at line 198 of file TensorCopyInsertion.cpp.

Referenced by mlir::sparse_tensor::buildSparseCompiler().

◆ createTensorCopyInsertionPass() [2/2]

std::unique_ptr< Pass > mlir::bufferization::createTensorCopyInsertionPass (const OneShotBufferizationOptions &options)

Definition at line 202 of file TensorCopyInsertion.cpp.

References options.

◆ deallocateBuffers()

LogicalResult mlir::bufferization::deallocateBuffers (Operation *op)

◆ dropEquivalentBufferResults()

LogicalResult mlir::bufferization::dropEquivalentBufferResults ( ModuleOp  module)

◆ eliminateAllocTensors()

LogicalResult mlir::bufferization::eliminateAllocTensors (RewriterBase &rewriter, Operation *op, bufferization::AnalysisState &state, AnchorMatchFn anchorMatchFunc, RewriteFn rewriteFunc)

Try to eliminate AllocTensorOps inside op.

  • rewriteFunc generates the replacement for the AllocTensorOp.
  • Only AllocTensorOps that are anchored on a matching OpOperand as per anchorMatchFunc are considered. "Anchored" means that there is a path on the reverse SSA use-def chain, starting from the OpOperand and always following the aliasing OpOperand, that eventually ends at a single AllocTensorOp.

Definition at line 107 of file AllocTensorElimination.cpp.

References mlir::WalkResult::advance(), mlir::bufferization::AnalysisState::areEquivalentBufferizedValues(), mlir::Value::dyn_cast(), mlir::failure(), findValidInsertionPoint(), mlir::bufferization::AnalysisState::findValueInReverseUseDefChain(), mlir::IROperand< DerivedT, IRValueT >::get(), mlir::bufferization::AnalysisState::getAliasingOpOperand(), mlir::Value::getDefiningOp(), mlir::Value::getLoc(), mlir::Operation::getOpOperands(), mlir::Value::getType(), mlir::bufferization::AnalysisState::isInPlace(), mlir::RewriterBase::replaceOp(), mlir::OpBuilder::setInsertionPoint(), mlir::WalkResult::skip(), mlir::Operation::walk(), and mlir::WalkResult::wasInterrupted().

Referenced by insertSliceAnchoredAllocTensorEliminationStep().
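A hedged sketch of the two callbacks; the anchoring condition (operands of a hypothetical "my.consume" op) and the empty rewrite body are purely illustrative.

    // Assumes `using namespace mlir; using namespace mlir::bufferization;`
    // and that `rewriter`, `op`, and `state` are in scope.
    AnchorMatchFn matchFn = [](OpOperand &operand,
                               SmallVector<Value> &neededValues) {
      // Hypothetical anchor: any operand of an op named "my.consume".
      if (operand.getOwner()->getName().getStringRef() != "my.consume")
        return false;
      neededValues.push_back(operand.get());
      return true;
    };
    RewriteFn rewriteFn = [](OpBuilder &b, Location loc,
                             OpOperand &operand) -> Value {
      // Build the replacement value for the matched AllocTensorOp here,
      // e.g., a tensor.extract_slice of the destination tensor. This stub
      // returns a null Value, i.e., no replacement is produced.
      return Value();
    };
    if (failed(eliminateAllocTensors(rewriter, op, state, matchFn, rewriteFn)))
      return failure();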

◆ foldToMemrefToTensorPair()

LogicalResult mlir::bufferization::foldToMemrefToTensorPair (RewriterBase &rewriter, ToMemrefOp toMemref)

Try to fold to_memref(to_tensor(x)).

If x's type and the result type of the to_memref op are different, a memref.cast is needed.

Definition at line 88 of file BufferizationOps.cpp.

References castOrReallocMemRefValue(), mlir::Type::dyn_cast(), mlir::failed(), mlir::failure(), mlir::RewriterBase::replaceOp(), mlir::RewriterBase::replaceOpWithNewOp(), and mlir::success().

Referenced by bufferizeOp(), and populateDynamicDimSizes().

◆ getBuffer()

FailureOr< Value > mlir::bufferization::getBuffer (RewriterBase &rewriter, Value value, const BufferizationOptions &options)

Lookup the buffer for the given value.

If the value was not bufferized yet, wrap it in a ToMemrefOp. Otherwise, it is the result of a ToTensorOp, from which the memref operand is returned.

Definition at line 548 of file BufferizableOpInterface.cpp.

References mlir::OpBuilder::create(), mlir::Type::dyn_cast(), ensureToMemrefOpIsValid(), mlir::failed(), mlir::failure(), getBufferType(), mlir::Value::getDefiningOp(), mlir::Value::getLoc(), mlir::Value::getType(), setInsertionPointAfter(), and value.

Referenced by mlir::bufferization::func_ext::CallOpInterface::bufferize(), mlir::bufferization::AnalysisState::getOptions(), and populateDynamicDimSizes().
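A hedged sketch of the typical call pattern from inside a BufferizableOpInterface::bufferize implementation for a hypothetical single-operand, single-result op; the method signature shown is an assumption based on this release of the interface, and the memref-level rewriting is left as a comment.

    // Assumes `using namespace mlir; using namespace mlir::bufferization;`.
    LogicalResult bufferize(Operation *op, RewriterBase &rewriter,
                            const BufferizationOptions &options) const {
      FailureOr<Value> srcBuffer = getBuffer(rewriter, op->getOperand(0), options);
      if (failed(srcBuffer))
        return failure();
      // ... create memref-based ops that operate on *srcBuffer ...
      replaceOpWithBufferizedValues(rewriter, op, *srcBuffer);
      return success();
    }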

◆ getBufferType() [1/2]

FailureOr< BaseMemRefType > mlir::bufferization::getBufferType (Value value, const BufferizationOptions &options)

Return the buffer type for a given Value (tensor) after bufferization without bufferizing any IR.

Note: It should be sufficient to call getBuffer()->getType() in most cases. However, when a buffer type should be predicted without modifying any IR, this function can be used.

This function is a wrapper around BufferizableOpInterface::getBufferType.

Definition at line 606 of file BufferizableOpInterface.cpp.

Referenced by allocateTensorForShapedValue(), mlir::bufferization::detail::defaultGetBufferType(), getBuffer(), mlir::bufferization::AnalysisState::getOptions(), and populateDynamicDimSizes().

◆ getBufferType() [2/2]

FailureOr< BaseMemRefType > mlir::bufferization::getBufferType (Value value, const BufferizationOptions &options, const DenseMap< Value, BaseMemRefType > &fixedTypes)

Return the buffer type for a given Value (tensor) after bufferization without bufferizing any IR.

If at any point during the type computation the type of a value in fixedTypes is required, the mapped type is used.

Note: It should be sufficient to call getBuffer()->getType() in most cases. However, when a buffer type should be predicted without modifying any IR, this function can be used.

This function is a wrapper around BufferizableOpInterface::getBufferType.

Definition at line 612 of file BufferizableOpInterface.cpp.

References mlir::bufferization::BufferizationOptions::defaultMemorySpace, mlir::bufferization::BufferizationOptions::dynCastBufferizableOp(), getMemRefType(), getOwnerOfValue(), mlir::Value::getType(), and mlir::Type::isa().

◆ getGlobalFor()

FailureOr< memref::GlobalOp > mlir::bufferization::getGlobalFor (arith::ConstantOp constantOp, uint64_t alignment)

◆ getMemRefType()

BaseMemRefType mlir::bufferization::getMemRefType (Value value, const BufferizationOptions &options, MemRefLayoutAttrInterface layout = {}, unsigned memorySpace = 0)

Return a MemRefType to which the type of the given value can be bufferized.

If possible, op bufferization implementations should not use this function and instead infer precise memref types for tensor results by themselves.

Unless a layout map was specified, options.unknownTypeConverterFn determines what kind of layout map will be used. For best composability (without copies), the fully dynamic layout map is used by default.

Note: Canonicalization patterns could clean up layout maps and infer more precise layout maps after bufferization. However, many possible canonicalizations are currently not implemented.

Definition at line 719 of file BufferizableOpInterface.cpp.

References mlir::Type::cast(), mlir::Value::getType(), and mlir::bufferization::BufferizationOptions::unknownTypeConverterFn.

Referenced by composeSetAndOperands(), mlir::bufferization::detail::defaultGetBufferType(), foldTransferInBoundsAttribute(), getBufferType(), CollapseShapeOpMemRefCastFolder::matchAndRewrite(), CanonicalizeSingleResultAffineMinMaxOp< T >::matchAndRewrite(), parseGlobalMemrefOpTypeAndInitialValue(), replaceOpWithNewBufferizedOp(), verifyMemoryOpIndexing(), verifyMultShape(), and verifyVectorMemoryOp().

◆ getMemRefTypeWithFullyDynamicLayout()

BaseMemRefType mlir::bufferization::getMemRefTypeWithFullyDynamicLayout (TensorType tensorType, unsigned memorySpace = 0)

◆ getMemRefTypeWithStaticIdentityLayout()

BaseMemRefType mlir::bufferization::getMemRefTypeWithStaticIdentityLayout (TensorType tensorType, unsigned memorySpace = 0)
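A hedged contrast of the two helpers, inferred from their names rather than from this page: for the same ranked tensor type, the static-identity variant yields a memref with no layout map, while the fully-dynamic variant yields a memref with a fully dynamic strided layout.

    // Illustrative only; assumes `using namespace mlir; using namespace
    // mlir::bufferization;` and an MLIRContext `ctx` in scope.
    auto tensorType = RankedTensorType::get({4, 8}, FloatType::getF32(&ctx));
    BaseMemRefType identity =
        getMemRefTypeWithStaticIdentityLayout(tensorType); // e.g. memref<4x8xf32>
    BaseMemRefType dynamic =
        getMemRefTypeWithFullyDynamicLayout(tensorType);   // e.g. memref<4x8xf32, strided<[?, ?], offset: ?>>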

◆ getOwnerOfValue()

Operation * mlir::bufferization::getOwnerOfValue ( Value  value)

Return the owner of the given value.

In the case of a BlockArgument, that is the owner of the block. In the case of an OpResult, that is the defining op.

Definition at line 42 of file BufferizableOpInterface.cpp.

References mlir::Value::cast(), and mlir::Value::dyn_cast().

Referenced by mlir::bufferization::detail::defaultGetBufferType(), getBufferType(), and replaceOpWithNewBufferizedOp().

◆ getPartialBufferizationOptions()

BufferizationOptions mlir::bufferization::getPartialBufferizationOptions ( )

◆ insertSliceAnchoredAllocTensorEliminationStep()

LogicalResult mlir::bufferization::insertSliceAnchoredAllocTensorEliminationStep (RewriterBase &rewriter, Operation *op, bufferization::AnalysisState &state)

Try to eliminate AllocTensorOps inside op that are anchored on an InsertSliceOp, i.e., if it is eventually inserted into another tensor (and some other conditions are met).

An AllocTensorOp can be eliminated if it is eventually inserted into another tensor (and some other conditions are met).

E.g.:
%0 = linalg.alloc_tensor
%1 = linalg.fill(cst, %0) {inplace = [true]}
%2 = tensor.insert_slice %1 into t[10][20][1]

AllocTensorOp elimination will try to fill t inplace instead of filling a new allocation %0 and inserting it into t. This is done by replacing the AllocTensorOp with:

%0 = tensor.extract_slice t[10][20][1]

The analysis looks for matching ExtractSliceOp/InsertSliceOp pairs and lets those bufferize inplace in the absence of other conflicts.

Starting from an InsertSliceOp, an AllocTensorOp at the end of the insert source's reverse use-def chain is eliminated if:

  • On the reverse use-def chain path from the InsertSliceOp to the AllocTensorOp, all ops were decided to bufferize inplace and the buffer relation is "equivalent" (TODO: can be relaxed if needed).
  • The reverse use-def chain has exactly one end, which is the AllocTensorOp.

Definition at line 206 of file AllocTensorElimination.cpp.

References analyzeOp(), mlir::OpBuilder::create(), eliminateAllocTensors(), mlir::failed(), mlir::Operation::getContext(), mlir::detail::IROperandBase::getOwner(), mlir::Operation::getResult(), mlir::DialectRegistry::insert(), and options.

◆ insertTensorCopies() [1/2]

LogicalResult mlir::bufferization::insertTensorCopies (Operation *op, const OneShotBufferizationOptions &options)

◆ insertTensorCopies() [2/2]

LogicalResult mlir::bufferization::insertTensorCopies (Operation *op, const AnalysisState &state)

◆ isFunctionArgument()

bool mlir::bufferization::isFunctionArgument ( Value  value)

Return true if the given value is a BlockArgument of a func::FuncOp.

Definition at line 712 of file BufferizableOpInterface.cpp.

References mlir::Value::dyn_cast().

◆ populateBufferizeMaterializationLegality()

void mlir::bufferization::populateBufferizeMaterializationLegality (ConversionTarget &target)

Marks ops used by bufferization for type conversion materializations as "legal" in the given ConversionTarget.

This function should be called by all bufferization passes using BufferizeTypeConverter so that materializations work properly. One exception is bufferization passes doing "full" conversions, where it can be desirable for even the materializations to remain illegal so that they are eliminated, such as via the patterns in populateEliminateBufferizeMaterializationsPatterns.

Definition at line 88 of file Bufferize.cpp.

References mlir::ConversionTarget::addLegalOp(), mlir::OpConversionPattern< SourceOp >::OpConversionPattern(), mlir::ConversionPatternRewriter::replaceOp(), and mlir::success().
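A hedged sketch of how a dialect-specific bufferization pass body might combine this with a BufferizeTypeConverter and dialect conversion; populateMyDialectBufferizePatterns is a hypothetical helper, and the structure follows the general dialect-conversion recipe rather than any specific pass.

    // Assumes `using namespace mlir; using namespace mlir::bufferization;`
    // inside a Pass subclass.
    void runOnOperation() override {
      BufferizeTypeConverter typeConverter;
      RewritePatternSet patterns(&getContext());
      ConversionTarget target(getContext());
      populateBufferizeMaterializationLegality(target);
      populateMyDialectBufferizePatterns(typeConverter, patterns); // hypothetical
      if (failed(applyPartialConversion(getOperation(), target,
                                        std::move(patterns))))
        signalPassFailure();
    }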

◆ populateDynamicDimSizes()

void mlir::bufferization::populateDynamicDimSizes (OpBuilder &b, Location loc, Value shapedValue, SmallVector< Value > &dynamicDims)

Populate dynamicDims with tensor::DimOp / memref::DimOp results for all dynamic dimensions of the given shaped value.

Definition at line 132 of file BufferizationOps.cpp.

References mlir::RewritePatternSet::add(), mlir::OperationState::addAttribute(), mlir::OperationState::addTypes(), mlir::OperationState::attributes, mlir::Attribute::cast(), mlir::Type::cast(), copy(), mlir::OpBuilder::create(), mlir::bufferization::BufferizationOptions::createAlloc(), mlir::bufferization::BufferizationOptions::createDealloc(), mlir::bufferization::BufferizationOptions::createMemCpy(), mlir::bufferization::BufferizationOptions::defaultMemorySpace, mlir::emitError(), mlir::RewriterBase::eraseOp(), mlir::failed(), mlir::failure(), mlir::memref::findDealloc(), mlir::memref::foldMemRefCast(), foldToMemrefToTensorPair(), mlir::SideEffects::Effect::Base< DerivedEffect, BaseEffect >::get(), mlir::SideEffects::Resource::Base< DefaultResource >::get(), mlir::Operation::getBlock(), getBuffer(), getBufferType(), mlir::AsmParser::getBuilder(), mlir::Value::getDefiningOp(), mlir::Builder::getDenseI32ArrayAttr(), mlir::Builder::getIndexType(), mlir::OpBuilder::getInsertionBlock(), mlir::Value::getLoc(), getMemRefTypeWithStaticIdentityLayout(), mlir::OpOperand::getOperandNumber(), mlir::sparse_tensor::getSparseTensorEncoding(), mlir::Block::getTerminator(), mlir::Value::getType(), isMemoryWrite(), mlir::m_ConstantInt(), mlir::matchPattern(), mlir::OperationState::operands, options, mlir::AsmParser::parseColon(), mlir::AsmParser::parseCustomTypeWithFallback(), mlir::AsmParser::parseLParen(), mlir::OpAsmParser::parseOperand(), mlir::OpAsmParser::parseOperandList(), mlir::AsmParser::parseOptionalAttrDict(), mlir::AsmParser::parseOptionalKeyword(), mlir::AsmParser::parseRParen(), print(), mlir::OpAsmPrinter::printOptionalAttrDict(), mlir::AsmPrinter::printStrippedAttrOrType(), mlir::RewriterBase::replaceOp(), replaceOpWithBufferizedValues(), mlir::RewriterBase::replaceOpWithNewOp(), mlir::OpAsmParser::resolveOperand(), mlir::OpAsmParser::resolveOperands(), mlir::OpBuilder::setInsertionPoint(), shouldDeallocateOpResult(), mlir::LogicalResult::succeeded(), mlir::succeeded(), mlir::success(), value, and mlir::verify().

Referenced by allocateTensorForShapedValue().

◆ populateEliminateBufferizeMaterializationsPatterns()

void mlir::bufferization::populateEliminateBufferizeMaterializationsPatterns (BufferizeTypeConverter &typeConverter, RewritePatternSet &patterns)

Populate patterns to eliminate bufferize materializations.

In particular, these are the tensor_load/buffer_cast ops.

Definition at line 125 of file Bufferize.cpp.

References mlir::RewritePatternSet::add(), mlir::OpPassManager::addPass(), mlir::bufferization::OpFilter::allowDialect(), mlir::bufferization::OpFilter::allowOperation(), mlir::bufferization::OneShotBufferizationOptions::allowReturnAllocs, mlir::bufferization::BufferizationOptions::allowUnknownOps, mlir::bufferization::BufferizationOptions::analysisFuzzerSeed, mlir::applyFullConversion(), mlir::bufferization::BufferizationOptions::bufferizeFunctionBoundaries, bufferizeOp(), mlir::Type::cast(), mlir::bufferization::BufferizationOptions::copyBeforeWrite, mlir::createCanonicalizerPass(), mlir::createCSEPass(), mlir::bufferization::BufferizationOptions::createDeallocs, mlir::createLoopInvariantCodeMotionPass(), mlir::bufferization::BufferizationOptions::defaultMemorySpace, mlir::failed(), mlir::bufferization::BufferizationOptions::FullyDynamicLayoutMap, mlir::bufferization::BufferizationOptions::functionBoundaryTypeConversion, mlir::RewritePatternSet::getContext(), getMemRefTypeWithFullyDynamicLayout(), getMemRefTypeWithStaticIdentityLayout(), getPartialBufferizationOptions(), mlir::Value::getType(), mlir::bufferization::BufferizationOptions::IdentityLayoutMap, mlir::bufferization::BufferizationOptions::InferLayoutMap, mlir::DialectRegistry::insert(), mlir::TypeConverter::isLegal(), mlir::ConversionTarget::markUnknownOpDynamicallyLegal(), None, mlir::bufferization::BufferizationOptions::opFilter, options, mlir::bufferization::BufferizationOptions::printConflicts, registerAllocationOpInterfaceExternalModels(), runOneShotBufferize(), runOneShotModuleBufferize(), mlir::bufferization::BufferizationOptions::testAnalysisOnly, mlir::bufferization::BufferizationOptions::unknownTypeConverterFn, and value.

◆ promoteBufferResultsToOutParams()

LogicalResult mlir::bufferization::promoteBufferResultsToOutParams ( ModuleOp  module)

Replace buffers that are returned from a function with an out parameter.

Also update all call sites.

Definition at line 173 of file BufferResultsToOutParams.cpp.

References mlir::failed(), mlir::failure(), mlir::success(), updateCalls(), updateFuncOp(), and updateReturnOps().

◆ registerAllocationOpInterfaceExternalModels()

void mlir::bufferization::registerAllocationOpInterfaceExternalModels (DialectRegistry &registry)

Register external models for AllocationOpInterface.

Definition at line 700 of file BufferDeallocation.cpp.

References mlir::DialectRegistry::addExtension().

Referenced by populateEliminateBufferizeMaterializationsPatterns(), and validateSupportedControlFlow().

◆ registerTransformDialectExtension()

void mlir::bufferization::registerTransformDialectExtension (DialectRegistry &registry)

◆ replaceOpWithBufferizedValues()

void mlir::bufferization::replaceOpWithBufferizedValues (RewriterBase &rewriter, Operation *op, ValueRange values)

◆ replaceOpWithNewBufferizedOp()

template<typename OpTy , typename... Args>
OpTy mlir::bufferization::replaceOpWithNewBufferizedOp (RewriterBase &rewriter, Operation *op, Args &&... args)

Replace an op with a new op.

The new op must have the same number of results as the replaced op. The new op may not return any tensor values.

Definition at line 523 of file BufferizableOpInterface.h.

References mlir::OpBuilder::create(), mlir::Operation::getLoc(), getMemRefType(), getMemRefTypeWithFullyDynamicLayout(), getMemRefTypeWithStaticIdentityLayout(), getOwnerOfValue(), mlir::Operation::getResults(), replaceOpWithBufferizedValues(), and shouldDeallocateOpResult().

◆ runOneShotBufferize()

LogicalResult mlir::bufferization::runOneShotBufferize (Operation *op, const OneShotBufferizationOptions &options)

◆ runOneShotModuleBufferize()

LogicalResult mlir::bufferization::runOneShotModuleBufferize (ModuleOp moduleOp, const bufferization::OneShotBufferizationOptions &options)

Run One-Shot Module Bufferization on the given module.

Performs a simple function call analysis to determine which function arguments are inplaceable. Then analyzes and bufferizes FuncOps one-by-one with One-Shot Bufferize.

Definition at line 439 of file OneShotModuleBufferize.cpp.

References mlir::bufferization::BufferizationOptions::bufferizeFunctionBoundaries, bufferizeModuleOp(), mlir::bufferization::BufferizationOptions::copyBeforeWrite, mlir::failed(), mlir::failure(), insertTensorCopies(), mlir::success(), and mlir::bufferization::BufferizationOptions::testAnalysisOnly.

Referenced by populateEliminateBufferizeMaterializationsPatterns().
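A hedged sketch of invoking this entry point from a pass, typically with function-boundary bufferization enabled; the option values shown are example choices.

    // Assumes `using namespace mlir; using namespace mlir::bufferization;`
    // inside a pass whose getOperation() yields a ModuleOp.
    OneShotBufferizationOptions options;
    options.bufferizeFunctionBoundaries = true;
    options.allowReturnAllocs = true; // example choice
    if (failed(runOneShotModuleBufferize(getOperation(), options)))
      signalPassFailure();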

◆ shouldDeallocateOpResult()

bool mlir::bufferization::shouldDeallocateOpResult (OpResult opResult, const BufferizationOptions &options)

Return true if the buffer of given OpResult should be deallocated.

This function should be called during BufferizableOpInterface::bufferize implementations that allocate a new buffer for the given OpResult.

Definition at line 209 of file BufferizableOpInterface.cpp.

References mlir::Attribute::cast(), mlir::bufferization::BufferizationOptions::createDeallocs, mlir::bufferization::BufferizationOptions::dynCastBufferizableOp(), mlir::Operation::getAttr(), mlir::OpResult::getOwner(), and mlir::Operation::hasAttr().

Referenced by populateDynamicDimSizes(), and replaceOpWithNewBufferizedOp().