MLIR  20.0.0git
Classes | Functions
mlir::memref Namespace Reference

Classes

struct  LinearizedMemRefInfo
 For a memref with offset, sizes and strides, returns the offset, size, and potentially the size padded at the front to use for the linearized memref. More...
 

Functions

LogicalResult foldMemRefCast (Operation *op, Value inner=nullptr)
 This is a common utility used for patterns of the form "someop(memref.cast) -> someop". More...
 
Type getTensorTypeFromMemRefType (Type type)
 Return an unranked/ranked tensor type for the given unranked/ranked memref type. More...
 
std::optional< Operation * > findDealloc (Value allocValue)
 Finds a single dealloc operation for the given allocated value. More...
 
OpFoldResult getMixedSize (OpBuilder &builder, Location loc, Value value, int64_t dim)
 Return the dimension of the given memref value. More...
 
SmallVector< OpFoldResult > getMixedSizes (OpBuilder &builder, Location loc, Value value)
 Return the dimensions of the given memref value. More...
 
Value createCanonicalRankReducingSubViewOp (OpBuilder &b, Location loc, Value memref, ArrayRef< int64_t > targetShape)
 Create a rank-reducing SubViewOp @[0 .. 0]. More...
 
void registerMemorySlotExternalModels (DialectRegistry &registry)
 
void registerValueBoundsOpInterfaceExternalModels (DialectRegistry &registry)
 
void registerTransformDialectExtension (DialectRegistry &registry)
 
void registerAllocationOpInterfaceExternalModels (DialectRegistry &registry)
 
void registerBufferViewFlowOpInterfaceExternalModels (DialectRegistry &registry)
 
void populateComposeSubViewPatterns (RewritePatternSet &patterns, MLIRContext *context)
 
std::unique_ptr< Pass > createExpandOpsPass ()
 Creates an instance of the ExpandOps pass that legalizes memref dialect ops to be convertible to LLVM. More...
 
std::unique_ptr< Pass > createFoldMemRefAliasOpsPass ()
 Creates an operation pass to fold memref aliasing ops into consumer load/store ops. More...
 
std::unique_ptr< OperationPass< ModuleOp > > createNormalizeMemRefsPass ()
 Creates an interprocedural pass to normalize memrefs to have a trivial (identity) layout map. More...
 
std::unique_ptr< Pass > createResolveRankedShapeTypeResultDimsPass ()
 Creates an operation pass to resolve memref.dim operations with values that are defined by operations that implement the ReifyRankedShapedTypeOpInterface, in terms of shapes of its input operands. More...
 
std::unique_ptr< Pass > createResolveShapedTypeResultDimsPass ()
 Creates an operation pass to resolve memref.dim operations with values that are defined by operations that implement the InferShapedTypeOpInterface or the ReifyRankedShapedTypeOpInterface, in terms of shapes of its input operands. More...
 
std::unique_ptr< Pass > createExpandStridedMetadataPass ()
 Creates an operation pass to expand some memref operations into operations that are easier to reason about. More...
 
std::unique_ptr< Pass > createExpandReallocPass (bool emitDeallocs=true)
 Creates an operation pass to expand memref.realloc operations into their components. More...
 
void registerRuntimeVerifiableOpInterfaceExternalModels (DialectRegistry &registry)
 
void populateExpandOpsPatterns (RewritePatternSet &patterns)
 Collects a set of patterns to rewrite ops within the memref dialect. More...
 
void populateFoldMemRefAliasOpPatterns (RewritePatternSet &patterns)
 Appends patterns for folding memref aliasing ops into consumer load/store ops. More...
 
void populateResolveRankedShapedTypeResultDimsPatterns (RewritePatternSet &patterns)
 Appends patterns that resolve memref.dim operations with values that are defined by operations that implement the ReifyRankedShapedTypeOpInterface, in terms of shapes of its input operands. More...
 
void populateResolveShapedTypeResultDimsPatterns (RewritePatternSet &patterns)
 Appends patterns that resolve memref.dim operations with values that are defined by operations that implement the InferShapedTypeOpInterface, in terms of shapes of its input operands. More...
 
void populateExpandStridedMetadataPatterns (RewritePatternSet &patterns)
 Appends patterns for expanding memref operations that modify the metadata (sizes, offset, strides) of a memref into easier to analyze constructs. More...
 
void populateResolveExtractStridedMetadataPatterns (RewritePatternSet &patterns)
 Appends patterns for resolving memref.extract_strided_metadata into memref.extract_strided_metadata of its source. More...
 
void populateExpandReallocPatterns (RewritePatternSet &patterns, bool emitDeallocs=true)
 Appends patterns for expanding memref.realloc operations. More...
 
void populateMemRefWideIntEmulationPatterns (const arith::WideIntEmulationConverter &typeConverter, RewritePatternSet &patterns)
 Appends patterns for emulating wide integer memref operations with ops over narrower integer types. More...
 
void populateMemRefWideIntEmulationConversions (arith::WideIntEmulationConverter &typeConverter)
 Appends type conversions for emulating wide integer memref operations with ops over narrower integer types. More...
 
void populateMemRefNarrowTypeEmulationPatterns (const arith::NarrowTypeEmulationConverter &typeConverter, RewritePatternSet &patterns)
 Appends patterns for emulating memref operations over narrow types with ops over wider types. More...
 
void populateMemRefNarrowTypeEmulationConversions (arith::NarrowTypeEmulationConverter &typeConverter)
 Appends type conversions for emulating memref operations over narrow types with ops over wider types. More...
 
FailureOr< memref::AllocOp > multiBuffer (RewriterBase &rewriter, memref::AllocOp allocOp, unsigned multiplier, bool skipOverrideAnalysis=false)
 Transformation to do multi-buffering/array expansion to remove dependencies on the temporary allocation between consecutive loop iterations. More...
 
FailureOr< memref::AllocOp > multiBuffer (memref::AllocOp allocOp, unsigned multiplier, bool skipOverrideAnalysis=false)
 Call into multiBuffer with locally constructed IRRewriter. More...
 
void populateExtractAddressComputationsPatterns (RewritePatternSet &patterns)
 Appends patterns for extracting address computations from the instructions with memory accesses such that these memory accesses use only a base pointer. More...
 
FailureOr< Value > buildIndependentOp (OpBuilder &b, AllocaOp allocaOp, ValueRange independencies)
 Build a new memref::AllocaOp whose dynamic sizes are independent of all given independencies. More...
 
FailureOr< Value > replaceWithIndependentOp (RewriterBase &rewriter, memref::AllocaOp allocaOp, ValueRange independencies)
 Build a new memref::AllocaOp whose dynamic sizes are independent of all given independencies. More...
 
memref::AllocaOp allocToAlloca (RewriterBase &rewriter, memref::AllocOp alloc, function_ref< bool(memref::AllocOp, memref::DeallocOp)> filter=nullptr)
 Replaces the given alloc with the corresponding alloca and returns it if the following conditions are met: More...
 
bool isStaticShapeAndContiguousRowMajor (MemRefType type)
 Returns true, if the memref type has static shapes and represents a contiguous chunk of memory. More...
 
std::pair< LinearizedMemRefInfo, OpFoldResult > getLinearizedMemRefOffsetAndSize (OpBuilder &builder, Location loc, int srcBits, int dstBits, OpFoldResult offset, ArrayRef< OpFoldResult > sizes, ArrayRef< OpFoldResult > strides, ArrayRef< OpFoldResult > indices={})
 
LinearizedMemRefInfo getLinearizedMemRefOffsetAndSize (OpBuilder &builder, Location loc, int srcBits, int dstBits, OpFoldResult offset, ArrayRef< OpFoldResult > sizes)
 For a memref with offset and sizes, returns the offset and size to use for the linearized memref, assuming that the strides are computed from a row-major ordering of the sizes. More...
 
void eraseDeadAllocAndStores (RewriterBase &rewriter, Operation *parentOp)
 Track temporary allocations that are never read from. More...
 
SmallVector< OpFoldResult > computeSuffixProductIRBlock (Location loc, OpBuilder &builder, ArrayRef< OpFoldResult > sizes)
 Given a set of sizes, return the suffix product. More...
 
SmallVector< OpFoldResult > computeStridesIRBlock (Location loc, OpBuilder &builder, ArrayRef< OpFoldResult > sizes)
 
MemrefValue skipFullyAliasingOperations (MemrefValue source)
 Walk up the source chain until an operation that changes/defines the view of memory is found (i.e. More...
 
bool isSameViewOrTrivialAlias (MemrefValue a, MemrefValue b)
 Checks if two (memref) values are the same or statically known to alias the same region of memory. More...
 
MemrefValue skipViewLikeOps (MemrefValue source)
 Walk up the source chain until we find an operation that is not a view of the source memref (i.e. More...
 
static bool resultIsNotRead (Operation *op, std::vector< Operation * > &uses)
 Returns true if all the uses of op are not read/load. More...
 
static SmallVector< OpFoldResult > computeSuffixProductIRBlockImpl (Location loc, OpBuilder &builder, ArrayRef< OpFoldResult > sizes, OpFoldResult unit)
 

Function Documentation

◆ allocToAlloca()

memref::AllocaOp mlir::memref::allocToAlloca ( RewriterBase &  rewriter,
memref::AllocOp  alloc,
function_ref< bool(memref::AllocOp, memref::DeallocOp)>  filter = nullptr 
)

Replaces the given alloc with the corresponding alloca and returns it if the following conditions are met:

  • the corresponding dealloc is available in the same block as the alloc;
  • the filter, if provided, succeeds on the alloc/dealloc pair. Otherwise returns nullptr and leaves the IR unchanged.

Definition at line 181 of file IndependenceTransforms.cpp.

References mlir::RewriterBase::eraseOp(), mlir::RewriterBase::replaceOpWithNewOp(), and mlir::OpBuilder::setInsertionPoint().

◆ buildIndependentOp()

FailureOr<Value> mlir::memref::buildIndependentOp ( OpBuilder &  b,
AllocaOp  allocaOp,
ValueRange  independencies 
)

Build a new memref::AllocaOp whose dynamic sizes are independent of all given independencies.

If the op is already independent of all independencies, the same AllocaOp result is returned.

Failure indicates that no suitable upper bound for the dynamic sizes could be found.

Referenced by replaceWithIndependentOp().

◆ computeStridesIRBlock()

SmallVector<OpFoldResult> mlir::memref::computeStridesIRBlock ( Location  loc,
OpBuilder &  builder,
ArrayRef< OpFoldResult >  sizes 
)
inline

Definition at line 100 of file MemRefUtils.h.

References computeSuffixProductIRBlock().

◆ computeSuffixProductIRBlock()

SmallVector< OpFoldResult > mlir::memref::computeSuffixProductIRBlock ( Location  loc,
OpBuilder &  builder,
ArrayRef< OpFoldResult >  sizes 
)

Given a set of sizes, return the suffix product.

When applied to slicing, this is the calculation needed to derive the strides (i.e. the number of linear indices to skip along the (k-1) most minor dimensions to get the next k-slice).

This is the basis to linearize an n-D offset confined to [0 ... sizes].

Assuming sizes is [s0, .. sn], return the vector<Value> [s1 * ... * sn, s2 * ... * sn, ..., sn, 1].

It is the caller's responsibility to provide valid OpFoldResult type values and construct valid IR in the end.

sizes elements are asserted to be non-negative.

Return an empty vector if sizes is empty.

The function emits an IR block which computes suffix product for provided sizes.

Definition at line 177 of file MemRefUtils.cpp.

References computeSuffixProductIRBlockImpl(), and mlir::Builder::getIndexAttr().

Referenced by computeStridesIRBlock(), and resolveSourceIndicesExpandShape().

◆ computeSuffixProductIRBlockImpl()

static SmallVector<OpFoldResult> mlir::memref::computeSuffixProductIRBlockImpl ( Location  loc,
OpBuilder &  builder,
ArrayRef< OpFoldResult >  sizes,
OpFoldResult  unit 
)
static

Definition at line 162 of file MemRefUtils.cpp.

References mlir::bindSymbols(), and mlir::Builder::getContext().

Referenced by computeSuffixProductIRBlock().

◆ createCanonicalRankReducingSubViewOp()

Value mlir::memref::createCanonicalRankReducingSubViewOp ( OpBuilder &  b,
Location  loc,
Value  memref,
ArrayRef< int64_t >  targetShape 
)

Create a rank-reducing SubViewOp @[0 .. 0] with strides [1 .. 1] and appropriate sizes (i.e. memref.getSizes()) to reduce the rank of memref to that of targetShape.

Definition at line 3101 of file MemRefOps.cpp.

References mlir::OpBuilder::createOrFold(), mlir::Builder::getIndexAttr(), getMixedSizes(), and mlir::Value::getType().

◆ createExpandOpsPass()

std::unique_ptr< Pass > mlir::memref::createExpandOpsPass ( )

Creates an instance of the ExpandOps pass that legalizes memref dialect ops to be convertible to LLVM.

For example, memref.reshape gets converted to memref.reinterpret_cast.

Definition at line 165 of file ExpandOps.cpp.

◆ createExpandReallocPass()

std::unique_ptr< Pass > mlir::memref::createExpandReallocPass ( bool  emitDeallocs = true)

Creates an operation pass to expand memref.realloc operations into their components.

Definition at line 173 of file ExpandRealloc.cpp.

Referenced by mlir::bufferization::buildBufferDeallocationPipeline(), and mlir::sparse_tensor::buildSparsifier().

◆ createExpandStridedMetadataPass()

std::unique_ptr<Pass> mlir::memref::createExpandStridedMetadataPass ( )

Creates an operation pass to expand some memref operations into operations that are easier to reason about.

Referenced by mlir::sparse_tensor::buildSparsifier().

◆ createFoldMemRefAliasOpsPass()

std::unique_ptr< Pass > mlir::memref::createFoldMemRefAliasOpsPass ( )

Creates an operation pass to fold memref aliasing ops into consumer load/store ops into patterns.

Definition at line 863 of file FoldMemRefAliasOps.cpp.

◆ createNormalizeMemRefsPass()

std::unique_ptr< OperationPass< ModuleOp > > mlir::memref::createNormalizeMemRefsPass ( )

Creates an interprocedural pass to normalize memrefs to have a trivial (identity) layout map.

Definition at line 57 of file NormalizeMemRefs.cpp.

◆ createResolveRankedShapeTypeResultDimsPass()

std::unique_ptr< Pass > mlir::memref::createResolveRankedShapeTypeResultDimsPass ( )

Creates an operation pass to resolve memref.dim operations with values that are defined by operations that implement the ReifyRankedShapedTypeOpInterface, in terms of shapes of its input operands.

Definition at line 214 of file ResolveShapedTypeResultDims.cpp.

◆ createResolveShapedTypeResultDimsPass()

std::unique_ptr< Pass > mlir::memref::createResolveShapedTypeResultDimsPass ( )

Creates an operation pass to resolve memref.dim operations with values that are defined by operations that implement the InferShapedTypeOpInterface or the ReifyRankedShapedTypeOpInterface, in terms of shapes of its input operands.

Definition at line 210 of file ResolveShapedTypeResultDims.cpp.

◆ eraseDeadAllocAndStores()

void mlir::memref::eraseDeadAllocAndStores ( RewriterBase &  rewriter,
Operation *  parentOp 
)

Track temporary allocations that are never read from.

If this is the case it means both the allocations and associated stores can be removed.

Definition at line 148 of file MemRefUtils.cpp.

References mlir::RewriterBase::eraseOp(), resultIsNotRead(), and mlir::Operation::walk().

◆ findDealloc()

std::optional< Operation * > mlir::memref::findDealloc ( Value  allocValue)

Finds a single dealloc operation for the given allocated value.

Finds the unique dealloc operation (if one exists) for allocValue.

If there is more than one dealloc for allocValue, returns std::nullopt; otherwise returns the single dealloc if it exists, or nullptr.

Definition at line 61 of file MemRefDialect.cpp.

References mlir::Value::getUsers().

◆ foldMemRefCast()

LogicalResult mlir::memref::foldMemRefCast ( Operation *  op,
Value  inner = nullptr 
)

This is a common utility used for patterns of the form "someop(memref.cast) -> someop".

It folds the source of any memref.cast into the root operation directly.

Definition at line 44 of file MemRefOps.cpp.

References mlir::Operation::getOpOperands().

Referenced by mlir::affine::AffineDmaStartOp::fold(), and mlir::affine::AffineDmaWaitOp::fold().

◆ getLinearizedMemRefOffsetAndSize() [1/2]

LinearizedMemRefInfo mlir::memref::getLinearizedMemRefOffsetAndSize ( OpBuilder &  builder,
Location  loc,
int  srcBits,
int  dstBits,
OpFoldResult  offset,
ArrayRef< OpFoldResult >  sizes 
)

For a memref with offset and sizes, returns the offset and size to use for the linearized memref, assuming that the strides are computed from a row-major ordering of the sizes:

  • If the linearization is done for emulating load/stores of element type with bitwidth srcBits using element type with bitwidth dstBits, the linearized offset and size are scaled down by dstBits/srcBits.

Definition at line 105 of file MemRefUtils.cpp.

References mlir::bindSymbols(), mlir::Builder::getContext(), mlir::Builder::getIndexAttr(), getLinearizedMemRefOffsetAndSize(), and mlir::affine::makeComposedFoldedAffineApply().
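
A scalar sketch of that scaling rule for static shapes (the names `LinearizedInfo` and `linearizeStatic` are hypothetical, not MLIR API; the real function builds affine-apply ops instead):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Hypothetical constant-folded model of the linearization: the linearized
// size is the product of the sizes, and both offset and size are scaled
// down by dstBits/srcBits when packing srcBits-wide elements into
// dstBits-wide storage.
struct LinearizedInfo {
  int64_t offset;
  int64_t size;
};

LinearizedInfo linearizeStatic(int64_t offset,
                               const std::vector<int64_t> &sizes,
                               int srcBits, int dstBits) {
  int64_t numElements = std::accumulate(sizes.begin(), sizes.end(),
                                        int64_t{1}, std::multiplies<>());
  int scale = dstBits / srcBits; // e.g. 8 / 4 = two i4 values per i8
  return {offset / scale, numElements / scale};
}
```

For example, a memref<6x4xi4> emulated over i8 (srcBits = 4, dstBits = 8) has 24 logical i4 elements that occupy 12 i8 elements of backing storage.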

◆ getLinearizedMemRefOffsetAndSize() [2/2]

std::pair< LinearizedMemRefInfo, OpFoldResult > mlir::memref::getLinearizedMemRefOffsetAndSize ( OpBuilder &  builder,
Location  loc,
int  srcBits,
int  dstBits,
OpFoldResult  offset,
ArrayRef< OpFoldResult >  sizes,
ArrayRef< OpFoldResult >  strides,
ArrayRef< OpFoldResult >  indices = {} 
)

◆ getMixedSize()

OpFoldResult mlir::memref::getMixedSize ( OpBuilder &  builder,
Location  loc,
Value  value,
int64_t  dim 
)

Return the dimension of the given memref value.

Definition at line 67 of file MemRefOps.cpp.

References mlir::OpBuilder::createOrFold(), mlir::Builder::getIndexAttr(), and mlir::Value::getType().

Referenced by createInBoundsCond(), and getMixedSizes().

◆ getMixedSizes()

SmallVector< OpFoldResult > mlir::memref::getMixedSizes ( OpBuilder &  builder,
Location  loc,
Value  value 
)

◆ getTensorTypeFromMemRefType()

Type mlir::memref::getTensorTypeFromMemRefType ( Type  type)

Return an unranked/ranked tensor type for the given unranked/ranked memref type.

Definition at line 59 of file MemRefOps.cpp.

References mlir::get(), and mlir::Type::getContext().

Referenced by parseGlobalMemrefOpTypeAndInitialValue().

◆ isSameViewOrTrivialAlias()

bool mlir::memref::isSameViewOrTrivialAlias ( MemrefValue  a,
MemrefValue  b 
)
inline

Checks if two (memref) values are the same or statically known to alias the same region of memory.

Definition at line 111 of file MemRefUtils.h.

References skipFullyAliasingOperations().

◆ isStaticShapeAndContiguousRowMajor()

bool mlir::memref::isStaticShapeAndContiguousRowMajor ( MemRefType  type)

Returns true, if the memref type has static shapes and represents a contiguous chunk of memory.

Definition at line 24 of file MemRefUtils.cpp.

References mlir::getStridesAndOffset().

◆ multiBuffer() [1/2]

FailureOr< memref::AllocOp > mlir::memref::multiBuffer ( memref::AllocOp  allocOp,
unsigned  multiplier,
bool  skipOverrideAnalysis = false 
)

Call into multiBuffer with locally constructed IRRewriter.

Definition at line 246 of file MultiBuffer.cpp.

References multiBuffer().

◆ multiBuffer() [2/2]

FailureOr< memref::AllocOp > mlir::memref::multiBuffer ( RewriterBase &  rewriter,
memref::AllocOp  allocOp,
unsigned  multiplier,
bool  skipOverrideAnalysis = false 
)

Transformation to do multi-buffering/array expansion to remove dependencies on the temporary allocation between consecutive loop iterations.

It returns the new allocation if the original allocation was multi-buffered and returns failure() otherwise. When skipOverrideAnalysis is true, the transformation is applied without checking that the buffer is overwritten at the beginning of each iteration. This implies that the user knows that no data is carried across loop iterations. Example:

%0 = memref.alloc() : memref<4x128xf32>
scf.for %iv = %c1 to %c1024 step %c3 {
memref.copy %1, %0 : memref<4x128xf32> to memref<4x128xf32>
"some_use"(%0) : (memref<4x128xf32>) -> ()
}

into:

%0 = memref.alloc() : memref<5x4x128xf32>
scf.for %iv = %c1 to %c1024 step %c3 {
%s = arith.subi %iv, %c1 : index
%d = arith.divsi %s, %c3 : index
%i = arith.remsi %d, %c5 : index
%sv = memref.subview %0[%i, 0, 0] [1, 4, 128] [1, 1, 1] :
memref<5x4x128xf32> to memref<4x128xf32, strided<[128, 1], offset: ?>>
memref.copy %1, %sv : memref<4x128xf32> to memref<4x128xf32, strided<...>>
"some_use"(%sv) : (memref<4x128xf32, strided<...>) -> ()
}

Make sure there is no loop-carried dependency on the allocation.

Definition at line 99 of file MultiBuffer.cpp.

References mlir::bindDims(), mlir::OpBuilder::create(), DBGS, mlir::DominanceInfo::dominates(), mlir::RewriterBase::eraseOp(), mlir::Builder::getContext(), mlir::Builder::getIndexAttr(), mlir::Operation::getUsers(), mlir::getValueOrCreateConstantIndexOp(), mlir::affine::makeComposedAffineApply(), overrideBuffer(), replaceUsesAndPropagateType(), mlir::OpBuilder::setInsertionPoint(), mlir::OpBuilder::setInsertionPointToStart(), and mlir::MemRefType::Builder::setShape().

Referenced by multiBuffer().
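
The subi/divsi/remsi chain in the rewritten loop selects which slice of the expanded allocation each iteration uses; a scalar sketch of that arithmetic (hypothetical helper name):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical scalar version of the buffer-index arithmetic emitted by
// multiBuffer: for a loop with lower bound lb and step, iteration iv
// writes into slice ((iv - lb) / step) % multiplier of the expanded
// allocation.
int64_t multiBufferIndex(int64_t iv, int64_t lb, int64_t step,
                         unsigned multiplier) {
  return ((iv - lb) / step) % multiplier;
}
```

With lb = 1, step = 3, and multiplier = 5 (as in the example above), iterations 1, 4, 7, ... cycle through slices 0, 1, 2, 3, 4, 0, ...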

◆ populateComposeSubViewPatterns()

void mlir::memref::populateComposeSubViewPatterns ( RewritePatternSet &  patterns,
MLIRContext *  context 
)

Definition at line 144 of file ComposeSubView.cpp.

References mlir::RewritePatternSet::add().

◆ populateExpandOpsPatterns()

void mlir::memref::populateExpandOpsPatterns ( RewritePatternSet &  patterns)

Collects a set of patterns to rewrite ops within the memref dialect.

Definition at line 160 of file ExpandOps.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateExpandReallocPatterns()

void mlir::memref::populateExpandReallocPatterns ( RewritePatternSet &  patterns,
bool  emitDeallocs = true 
)

Appends patterns for expanding memref.realloc operations.

Definition at line 168 of file ExpandRealloc.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateExpandStridedMetadataPatterns()

void mlir::memref::populateExpandStridedMetadataPatterns ( RewritePatternSet &  patterns)

Appends patterns for expanding memref operations that modify the metadata (sizes, offset, strides) of a memref into easier to analyze constructs.

◆ populateExtractAddressComputationsPatterns()

void mlir::memref::populateExtractAddressComputationsPatterns ( RewritePatternSet &  patterns)

Appends patterns for extracting address computations from the instructions with memory accesses such that these memory accesses use only a base pointer.

For instance,

memref.load %base[%off0, ...]

will be rewritten into:

%new_base = memref.subview %base[%off0,...][1,...][1,...]
memref.load %new_base[%c0,...]

Definition at line 287 of file ExtractAddressComputations.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateFoldMemRefAliasOpPatterns()

void mlir::memref::populateFoldMemRefAliasOpPatterns ( RewritePatternSet &  patterns)

Appends patterns for folding memref aliasing ops into consumer load/store ops.

Definition at line 810 of file FoldMemRefAliasOps.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateMemRefNarrowTypeEmulationConversions()

void mlir::memref::populateMemRefNarrowTypeEmulationConversions ( arith::NarrowTypeEmulationConverter &  typeConverter)

Appends type conversions for emulating memref operations over narrow types with ops over wider types.

Definition at line 619 of file EmulateNarrowType.cpp.

References mlir::TypeConverter::addConversion(), mlir::get(), getLinearizedShape(), mlir::arith::NarrowTypeEmulationConverter::getLoadStoreBitwidth(), and mlir::getStridesAndOffset().

◆ populateMemRefNarrowTypeEmulationPatterns()

void mlir::memref::populateMemRefNarrowTypeEmulationPatterns ( const arith::NarrowTypeEmulationConverter &  typeConverter,
RewritePatternSet &  patterns 
)

Appends patterns for emulating memref operations over narrow types with ops over wider types.

Definition at line 586 of file EmulateNarrowType.cpp.

References mlir::RewritePatternSet::add(), mlir::RewritePatternSet::getContext(), and populateResolveExtractStridedMetadataPatterns().

◆ populateMemRefWideIntEmulationConversions()

void mlir::memref::populateMemRefWideIntEmulationConversions ( arith::WideIntEmulationConverter &  typeConverter)

Appends type conversions for emulating wide integer memref operations with ops over narrower integer types.

Definition at line 148 of file EmulateWideInt.cpp.

References mlir::TypeConverter::addConversion(), mlir::TypeConverter::convertType(), and mlir::arith::WideIntEmulationConverter::getMaxTargetIntBitWidth().

◆ populateMemRefWideIntEmulationPatterns()

void mlir::memref::populateMemRefWideIntEmulationPatterns ( const arith::WideIntEmulationConverter &  typeConverter,
RewritePatternSet &  patterns 
)

Appends patterns for emulating wide integer memref operations with ops over narrower integer types.

Definition at line 140 of file EmulateWideInt.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

◆ populateResolveExtractStridedMetadataPatterns()

void mlir::memref::populateResolveExtractStridedMetadataPatterns ( RewritePatternSet &  patterns)

Appends patterns for resolving memref.extract_strided_metadata into memref.extract_strided_metadata of its source.

Referenced by populateMemRefNarrowTypeEmulationPatterns().

◆ populateResolveRankedShapedTypeResultDimsPatterns()

void mlir::memref::populateResolveRankedShapedTypeResultDimsPatterns ( RewritePatternSet &  patterns)

Appends patterns that resolve memref.dim operations with values that are defined by operations that implement the ReifyRankedShapedTypeOpInterface, in terms of shapes of its input operands.

Definition at line 180 of file ResolveShapedTypeResultDims.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

Referenced by populateFoldUnitExtentDimsViaReshapesPatterns(), and populateFoldUnitExtentDimsViaSlicesPatterns().

◆ populateResolveShapedTypeResultDimsPatterns()

void mlir::memref::populateResolveShapedTypeResultDimsPatterns ( RewritePatternSet &  patterns)

Appends patterns that resolve memref.dim operations with values that are defined by operations that implement the InferShapedTypeOpInterface, in terms of shapes of its input operands.

Definition at line 187 of file ResolveShapedTypeResultDims.cpp.

References mlir::RewritePatternSet::add(), and mlir::RewritePatternSet::getContext().

Referenced by populateFoldUnitExtentDimsViaReshapesPatterns(), and populateFoldUnitExtentDimsViaSlicesPatterns().

◆ registerAllocationOpInterfaceExternalModels()

void mlir::memref::registerAllocationOpInterfaceExternalModels ( DialectRegistry &  registry)

◆ registerBufferViewFlowOpInterfaceExternalModels()

void mlir::memref::registerBufferViewFlowOpInterfaceExternalModels ( DialectRegistry &  registry)

◆ registerMemorySlotExternalModels()

void mlir::memref::registerMemorySlotExternalModels ( DialectRegistry &  registry)

Definition at line 349 of file MemRefMemorySlot.cpp.

References mlir::DialectRegistry::addExtension().

Referenced by mlir::registerAllDialects().

◆ registerRuntimeVerifiableOpInterfaceExternalModels()

void mlir::memref::registerRuntimeVerifiableOpInterfaceExternalModels ( DialectRegistry &  registry)

◆ registerTransformDialectExtension()

void mlir::memref::registerTransformDialectExtension ( DialectRegistry &  registry)

◆ registerValueBoundsOpInterfaceExternalModels()

void mlir::memref::registerValueBoundsOpInterfaceExternalModels ( DialectRegistry &  registry)

◆ replaceWithIndependentOp()

FailureOr< Value > mlir::memref::replaceWithIndependentOp ( RewriterBase &  rewriter,
memref::AllocaOp  allocaOp,
ValueRange  independencies 
)

Build a new memref::AllocaOp whose dynamic sizes are independent of all given independencies.

If the op is already independent of all independencies, the same AllocaOp result is returned.

The original AllocaOp is replaced with the new one, wrapped in a SubviewOp. The result type of the replacement is different from the original allocation type: it has the same shape, but a different layout map. This function updates all users that do not have a memref result or memref region block argument, and some frequently used memref dialect ops (such as memref.subview). It does not update other uses such as the init_arg of an scf.for op. Such uses are wrapped in unrealized_conversion_cast.

Failure indicates that no suitable upper bound for the dynamic sizes could be found.

Example (make independent of iv):

scf.for %iv = %c0 to %sz step %c1 {
%0 = memref.alloca(%iv) : memref<?xf32>
%1 = memref.subview %0[0][5][1] : ...
linalg.generic outs(%1 : ...) ...
%2 = scf.for ... iter_arg(%arg0 = %0) ...
...
}

The above IR is rewritten to:

scf.for %iv = %c0 to %sz step %c1 {
%0 = memref.alloca(%sz - 1) : memref<?xf32>
%0_subview = memref.subview %0[0][%iv][1]
: memref<?xf32> to memref<?xf32, #map>
%1 = memref.subview %0_subview[0][5][1] : ...
linalg.generic outs(%1 : ...) ...
%cast = unrealized_conversion_cast %0_subview
: memref<?xf32, #map> to memref<?xf32>
%2 = scf.for ... iter_arg(%arg0 = %cast) ...
...
}

Definition at line 169 of file IndependenceTransforms.cpp.

References buildIndependentOp(), and replaceAndPropagateMemRefType().

◆ resultIsNotRead()

static bool mlir::memref::resultIsNotRead ( Operation *  op,
std::vector< Operation * > &  uses 
)
static

Returns true if all the uses of op are not read/load.

There can be SubviewOp users as long as all of their users are also StoreOp/transfer_write. If it returns true, it also fills out uses; if it returns false, uses is unchanged.

Definition at line 131 of file MemRefUtils.cpp.

References mlir::Operation::getNumRegions(), mlir::Operation::getNumResults(), and mlir::Operation::getUses().

Referenced by eraseDeadAllocAndStores().

◆ skipFullyAliasingOperations()

MemrefValue mlir::memref::skipFullyAliasingOperations ( MemrefValue  source)

Walk up the source chain until an operation that changes/defines the view of memory is found (i.e. skip operations that alias the entire view).

Definition at line 183 of file MemRefUtils.cpp.

Referenced by isSameViewOrTrivialAlias().

◆ skipViewLikeOps()

MemrefValue mlir::memref::skipViewLikeOps ( MemrefValue  source)

Walk up the source chain until we find an operation that is not a view of the source memref (i.e. skip operations that implement ViewLikeOpInterface).

Definition at line 200 of file MemRefUtils.cpp.