MLIR 22.0.0git
mlir::memref Namespace Reference

Namespaces

namespace  impl

Classes

struct  ExpandReallocPassOptions
struct  LinearizedMemRefInfo
 For a memref with offset, sizes and strides, returns the offset, size, and potentially the size padded at the front to use for the linearized memref. More...
struct  MemRefEmulateWideIntOptions
struct  ResolveRankedShapeTypeResultDimsPassOptions
struct  ResolveShapedTypeResultDimsPassOptions

Functions

LogicalResult foldMemRefCast (Operation *op, Value inner=nullptr)
 This is a common utility used for patterns of the form "someop(memref.cast) -> someop".
Type getTensorTypeFromMemRefType (Type type)
 Return an unranked/ranked tensor type for the given unranked/ranked memref type.
std::optional< Operation * > findDealloc (Value allocValue)
 Finds a single dealloc operation for the given allocated value.
OpFoldResult getMixedSize (OpBuilder &builder, Location loc, Value value, int64_t dim)
 Return the dimension of the given memref value.
SmallVector< OpFoldResult > getMixedSizes (OpBuilder &builder, Location loc, Value value)
 Return the dimensions of the given memref value.
Value createCanonicalRankReducingSubViewOp (OpBuilder &b, Location loc, Value memref, ArrayRef< int64_t > targetShape)
 Create a rank-reducing SubViewOp @[0 .. 0] with strides [1 .. 1] and appropriate sizes to reduce the rank of memref to that of targetShape.
void registerMemorySlotExternalModels (DialectRegistry &registry)
void registerValueBoundsOpInterfaceExternalModels (DialectRegistry &registry)
void registerTransformDialectExtension (DialectRegistry &registry)
void registerAllocationOpInterfaceExternalModels (DialectRegistry &registry)
void registerBufferViewFlowOpInterfaceExternalModels (DialectRegistry &registry)
void populateComposeSubViewPatterns (RewritePatternSet &patterns, MLIRContext *context)
std::unique_ptr<::mlir::Pass > createExpandOpsPass ()
std::unique_ptr<::mlir::Pass > createExpandReallocPass ()
std::unique_ptr<::mlir::Pass > createExpandReallocPass (ExpandReallocPassOptions options)
std::unique_ptr<::mlir::Pass > createExpandStridedMetadataPass ()
std::unique_ptr<::mlir::Pass > createFlattenMemrefsPass ()
std::unique_ptr<::mlir::Pass > createFoldMemRefAliasOpsPass ()
std::unique_ptr<::mlir::Pass > createMemRefEmulateWideInt ()
std::unique_ptr<::mlir::Pass > createMemRefEmulateWideInt (MemRefEmulateWideIntOptions options)
std::unique_ptr<::mlir::Pass > createNormalizeMemRefsPass ()
std::unique_ptr<::mlir::Pass > createReifyResultShapesPass ()
std::unique_ptr<::mlir::Pass > createResolveRankedShapeTypeResultDimsPass ()
std::unique_ptr<::mlir::Pass > createResolveRankedShapeTypeResultDimsPass (ResolveRankedShapeTypeResultDimsPassOptions options)
std::unique_ptr<::mlir::Pass > createResolveShapedTypeResultDimsPass ()
std::unique_ptr<::mlir::Pass > createResolveShapedTypeResultDimsPass (ResolveShapedTypeResultDimsPassOptions options)
void registerExpandOpsPass ()
void registerExpandOpsPassPass ()
void registerExpandReallocPass ()
void registerExpandReallocPassPass ()
void registerExpandStridedMetadataPass ()
void registerExpandStridedMetadataPassPass ()
void registerFlattenMemrefsPass ()
void registerFlattenMemrefsPassPass ()
void registerFoldMemRefAliasOpsPass ()
void registerFoldMemRefAliasOpsPassPass ()
void registerMemRefEmulateWideInt ()
void registerMemRefEmulateWideIntPass ()
void registerNormalizeMemRefsPass ()
void registerNormalizeMemRefsPassPass ()
void registerReifyResultShapesPass ()
void registerReifyResultShapesPassPass ()
void registerResolveRankedShapeTypeResultDimsPass ()
void registerResolveRankedShapeTypeResultDimsPassPass ()
void registerResolveShapedTypeResultDimsPass ()
void registerResolveShapedTypeResultDimsPassPass ()
void registerMemRefPasses ()
void registerRuntimeVerifiableOpInterfaceExternalModels (DialectRegistry &registry)
void populateExpandOpsPatterns (RewritePatternSet &patterns)
 Collects a set of patterns to rewrite ops within the memref dialect.
void populateFoldMemRefAliasOpPatterns (RewritePatternSet &patterns)
 Appends patterns that fold memref aliasing ops into their consumer load/store ops to patterns.
void populateResolveRankedShapedTypeResultDimsPatterns (RewritePatternSet &patterns)
 Appends patterns that resolve memref.dim operations, whose source values are defined by operations implementing the ReifyRankedShapedTypeOpInterface, in terms of the shapes of their input operands.
void populateResolveShapedTypeResultDimsPatterns (RewritePatternSet &patterns)
 Appends patterns that resolve memref.dim operations, whose source values are defined by operations implementing the InferShapedTypeOpInterface, in terms of the shapes of their input operands.
void populateExpandStridedMetadataPatterns (RewritePatternSet &patterns)
 Appends patterns for expanding memref operations that modify the metadata (sizes, offset, strides) of a memref into easier to analyze constructs.
void populateResolveExtractStridedMetadataPatterns (RewritePatternSet &patterns)
 Appends patterns for resolving memref.extract_strided_metadata into memref.extract_strided_metadata of its source.
void populateExpandReallocPatterns (RewritePatternSet &patterns, bool emitDeallocs=true)
 Appends patterns for expanding memref.realloc operations.
void populateMemRefWideIntEmulationPatterns (const arith::WideIntEmulationConverter &typeConverter, RewritePatternSet &patterns)
 Appends patterns for emulating wide integer memref operations with ops over narrower integer types.
void populateMemRefWideIntEmulationConversions (arith::WideIntEmulationConverter &typeConverter)
 Appends type conversions for emulating wide integer memref operations with ops over narrower integer types.
void populateMemRefNarrowTypeEmulationPatterns (const arith::NarrowTypeEmulationConverter &typeConverter, RewritePatternSet &patterns)
 Appends patterns for emulating memref operations over narrow types with ops over wider types.
void populateMemRefNarrowTypeEmulationConversions (arith::NarrowTypeEmulationConverter &typeConverter)
 Appends type conversions for emulating memref operations over narrow types with ops over wider types.
FailureOr< memref::AllocOp > multiBuffer (RewriterBase &rewriter, memref::AllocOp allocOp, unsigned multiplier, bool skipOverrideAnalysis=false)
 Transformation to do multi-buffering/array expansion to remove dependencies on the temporary allocation between consecutive loop iterations.
FailureOr< memref::AllocOp > multiBuffer (memref::AllocOp allocOp, unsigned multiplier, bool skipOverrideAnalysis=false)
 Call into multiBuffer with a locally constructed IRRewriter.
void populateExtractAddressComputationsPatterns (RewritePatternSet &patterns)
 Appends patterns for extracting address computations from operations that access memory so that these accesses use only a base pointer.
void populateFlattenVectorOpsOnMemrefPatterns (RewritePatternSet &patterns)
 Patterns for flattening multi-dimensional memref operations into one-dimensional memref operations.
void populateFlattenMemrefOpsPatterns (RewritePatternSet &patterns)
void populateFlattenMemrefsPatterns (RewritePatternSet &patterns)
FailureOr< Value > buildIndependentOp (OpBuilder &b, AllocaOp allocaOp, ValueRange independencies)
 Build a new memref::AllocaOp whose dynamic sizes are independent of all given independencies.
FailureOr< Value > replaceWithIndependentOp (RewriterBase &rewriter, memref::AllocaOp allocaOp, ValueRange independencies)
 Build a new memref::AllocaOp whose dynamic sizes are independent of all given independencies.
memref::AllocaOp allocToAlloca (RewriterBase &rewriter, memref::AllocOp alloc, function_ref< bool(memref::AllocOp, memref::DeallocOp)> filter=nullptr)
 Replaces the given alloc with the corresponding alloca and returns it if the following conditions are met:
bool isStaticShapeAndContiguousRowMajor (MemRefType type)
 Returns true if the memref type has a static shape and represents a contiguous chunk of memory.
std::pair< LinearizedMemRefInfo, OpFoldResult > getLinearizedMemRefOffsetAndSize (OpBuilder &builder, Location loc, int srcBits, int dstBits, OpFoldResult offset, ArrayRef< OpFoldResult > sizes, ArrayRef< OpFoldResult > strides, ArrayRef< OpFoldResult > indices={})
LinearizedMemRefInfo getLinearizedMemRefOffsetAndSize (OpBuilder &builder, Location loc, int srcBits, int dstBits, OpFoldResult offset, ArrayRef< OpFoldResult > sizes)
 For a memref with offset and sizes, returns the offset and size to use for the linearized memref, assuming that the strides are computed from a row-major ordering of the sizes.
void eraseDeadAllocAndStores (RewriterBase &rewriter, Operation *parentOp)
 Track temporary allocations that are never read from.
SmallVector< OpFoldResult > computeSuffixProductIRBlock (Location loc, OpBuilder &builder, ArrayRef< OpFoldResult > sizes)
 Given a set of sizes, return the suffix product.
SmallVector< OpFoldResult > computeStridesIRBlock (Location loc, OpBuilder &builder, ArrayRef< OpFoldResult > sizes)
MemrefValue skipFullyAliasingOperations (MemrefValue source)
 Walk up the source chain until an operation that changes/defines the view of memory is found (i.e. skip operations that alias the entire view).
bool isSameViewOrTrivialAlias (MemrefValue a, MemrefValue b)
 Checks if two (memref) values are the same or statically known to alias the same region of memory.
MemrefValue skipViewLikeOps (MemrefValue source)
 Walk up the source chain, skipping operations that are views of the source memref (i.e. that implement ViewLikeOpInterface).
LogicalResult resolveSourceIndicesExpandShape (Location loc, PatternRewriter &rewriter, memref::ExpandShapeOp expandShapeOp, ValueRange indices, SmallVectorImpl< Value > &sourceIndices, bool startsInbounds)
 Given the 'indices' of a load/store operation where the memref is the result of an expand_shape op, returns the indices w.r.t. the source memref of the expand_shape op.
LogicalResult resolveSourceIndicesCollapseShape (Location loc, PatternRewriter &rewriter, memref::CollapseShapeOp collapseShapeOp, ValueRange indices, SmallVectorImpl< Value > &sourceIndices)
 Given the 'indices' of a load/store operation where the memref is the result of a collapse_shape op, returns the indices w.r.t. the source memref of the collapse_shape op.
static bool resultIsNotRead (Operation *op, std::vector< Operation * > &uses)
 Returns true if none of the uses of op read/load the result.
static SmallVector< OpFoldResult > computeSuffixProductIRBlockImpl (Location loc, OpBuilder &builder, ArrayRef< OpFoldResult > sizes, OpFoldResult unit)

Function Documentation

◆ allocToAlloca()

memref::AllocaOp mlir::memref::allocToAlloca ( RewriterBase & rewriter,
memref::AllocOp alloc,
function_ref< bool(memref::AllocOp, memref::DeallocOp)> filter = nullptr )

Replaces the given alloc with the corresponding alloca and returns it if the following conditions are met:

  • the corresponding dealloc is available in the same block as the alloc;
  • the filter, if provided, succeeds on the alloc/dealloc pair.

Otherwise returns nullptr and leaves the IR unchanged.

Definition at line 178 of file IndependenceTransforms.cpp.

References mlir::RewriterBase::eraseOp(), mlir::getType(), mlir::RewriterBase::replaceOpWithNewOp(), and mlir::OpBuilder::setInsertionPoint().

◆ buildIndependentOp()

FailureOr< Value > mlir::memref::buildIndependentOp ( OpBuilder & b,
AllocaOp allocaOp,
ValueRange independencies )

Build a new memref::AllocaOp whose dynamic sizes are independent of all given independencies.

If the op is already independent of all independencies, the same AllocaOp result is returned.

Failure indicates that no suitable upper bound for the dynamic sizes could be found.

References b.

Referenced by replaceWithIndependentOp().

◆ computeStridesIRBlock()

SmallVector< OpFoldResult > mlir::memref::computeStridesIRBlock ( Location loc,
OpBuilder & builder,
ArrayRef< OpFoldResult > sizes )
inline

Definition at line 100 of file MemRefUtils.h.

References computeSuffixProductIRBlock().

◆ computeSuffixProductIRBlock()

SmallVector< OpFoldResult > mlir::memref::computeSuffixProductIRBlock ( Location loc,
OpBuilder & builder,
ArrayRef< OpFoldResult > sizes )

Given a set of sizes, return the suffix product.

When applied to slicing, this is the calculation needed to derive the strides (i.e. the number of linear indices to skip along the (k-1) most minor dimensions to get the next k-slice). This is the basis to linearize an n-D offset confined to [0 ... sizes]. Assuming sizes is [s0, .. sn], return the vector<Value> [s1 * ... * sn, s2 * ... * sn, ..., sn, 1].

It is the caller's responsibility to provide valid OpFoldResult type values and construct valid IR in the end. sizes elements are asserted to be non-negative. Return an empty vector if sizes is empty.

The function emits an IR block which computes the suffix product for the provided sizes.
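
As a minimal usage sketch (namespace qualifiers omitted as in the MLIR sources, and assuming an OpBuilder b and a Location loc are in scope), the row-major strides of an illustrative 2x6x42 shape can be built as follows:

  SmallVector<OpFoldResult> sizes = {b.getIndexAttr(2), b.getIndexAttr(6),
                                     b.getIndexAttr(42)};
  // The suffix product of [2, 6, 42] is [252, 42, 1], i.e. the row-major
  // strides; dynamic sizes would instead materialize the multiplications in IR.
  SmallVector<OpFoldResult> strides =
      memref::computeSuffixProductIRBlock(loc, b, sizes);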

Definition at line 190 of file MemRefUtils.cpp.

References computeSuffixProductIRBlockImpl(), and mlir::Builder::getIndexAttr().

Referenced by computeStridesIRBlock().

◆ computeSuffixProductIRBlockImpl()

SmallVector< OpFoldResult > mlir::memref::computeSuffixProductIRBlockImpl ( Location loc,
OpBuilder & builder,
ArrayRef< OpFoldResult > sizes,
OpFoldResult unit )
static

◆ createCanonicalRankReducingSubViewOp()

Value mlir::memref::createCanonicalRankReducingSubViewOp ( OpBuilder & b,
Location loc,
Value memref,
ArrayRef< int64_t > targetShape )

Create a rank-reducing SubViewOp @[0 .. 0] with strides [1 .. 1] and appropriate sizes (i.e. memref.getSizes()) to reduce the rank of memref to that of targetShape.

Definition at line 3247 of file MemRefOps.cpp.

References b, and mlir::tensor::getMixedSizes().

◆ createExpandOpsPass()

std::unique_ptr<::mlir::Pass > mlir::memref::createExpandOpsPass ( )

We declare an explicit private instantiation because Pass classes should only be visible to the current library.

Definition at line 88 of file ExpandOps.cpp.

References getContext(), mlir::patterns, populateExpandOpsPatterns(), and target.

◆ createExpandReallocPass() [1/2]

std::unique_ptr<::mlir::Pass > mlir::memref::createExpandReallocPass ( )

◆ createExpandReallocPass() [2/2]

std::unique_ptr<::mlir::Pass > mlir::memref::createExpandReallocPass ( ExpandReallocPassOptions options)

Definition at line 185 of file ExpandRealloc.cpp.

◆ createExpandStridedMetadataPass()

std::unique_ptr<::mlir::Pass > mlir::memref::createExpandStridedMetadataPass ( )

We declare an explicit private instantiation because Pass classes should only be visible to the current library.

Definition at line 261 of file ExpandStridedMetadata.cpp.

References mlir::AffineExpr::floorDiv(), mlir::Builder::getAffineSymbolExpr(), mlir::Builder::getIndexAttr(), and mlir::affine::makeComposedFoldedAffineApply().

Referenced by mlir::sparse_tensor::buildSparsifier().

◆ createFlattenMemrefsPass()

std::unique_ptr<::mlir::Pass > mlir::memref::createFlattenMemrefsPass ( )

We declare an explicit private instantiation because Pass classes should only be visible to the current library.

Definition at line 338 of file FlattenMemRefs.cpp.

◆ createFoldMemRefAliasOpsPass()

std::unique_ptr<::mlir::Pass > mlir::memref::createFoldMemRefAliasOpsPass ( )

We declare an explicit private instantiation because Pass classes should only be visible to the current library.

Definition at line 415 of file FoldMemRefAliasOps.cpp.

◆ createMemRefEmulateWideInt() [1/2]

std::unique_ptr<::mlir::Pass > mlir::memref::createMemRefEmulateWideInt ( )

Definition at line 506 of file EmulateWideInt.cpp.

◆ createMemRefEmulateWideInt() [2/2]

std::unique_ptr<::mlir::Pass > mlir::memref::createMemRefEmulateWideInt ( MemRefEmulateWideIntOptions options)

Definition at line 510 of file EmulateWideInt.cpp.

◆ createNormalizeMemRefsPass()

std::unique_ptr<::mlir::Pass > mlir::memref::createNormalizeMemRefsPass ( )

We declare an explicit private instantiation because Pass classes should only be visible to the current library.

Definition at line 585 of file NormalizeMemRefs.cpp.

◆ createReifyResultShapesPass()

std::unique_ptr<::mlir::Pass > mlir::memref::createReifyResultShapesPass ( )

We declare an explicit private instantiation because Pass classes should only be visible to the current library.

Definition at line 662 of file ReifyResultShapes.cpp.

◆ createResolveRankedShapeTypeResultDimsPass() [1/2]

std::unique_ptr<::mlir::Pass > mlir::memref::createResolveRankedShapeTypeResultDimsPass ( )

Definition at line 754 of file ResolveShapedTypeResultDims.cpp.

◆ createResolveRankedShapeTypeResultDimsPass() [2/2]

std::unique_ptr<::mlir::Pass > mlir::memref::createResolveRankedShapeTypeResultDimsPass ( ResolveRankedShapeTypeResultDimsPassOptions options)

Definition at line 758 of file ResolveShapedTypeResultDims.cpp.

◆ createResolveShapedTypeResultDimsPass() [1/2]

std::unique_ptr<::mlir::Pass > mlir::memref::createResolveShapedTypeResultDimsPass ( )

Definition at line 851 of file ResolveShapedTypeResultDims.cpp.

◆ createResolveShapedTypeResultDimsPass() [2/2]

std::unique_ptr<::mlir::Pass > mlir::memref::createResolveShapedTypeResultDimsPass ( ResolveShapedTypeResultDimsPassOptions options)

Definition at line 855 of file ResolveShapedTypeResultDims.cpp.

◆ eraseDeadAllocAndStores()

void mlir::memref::eraseDeadAllocAndStores ( RewriterBase & rewriter,
Operation * parentOp )

Track temporary allocations that are never read from.

If this is the case, both the allocations and the associated stores can be removed.

Definition at line 159 of file MemRefUtils.cpp.

References mlir::RewriterBase::eraseOp(), resultIsNotRead(), and mlir::Operation::walk().

◆ findDealloc()

std::optional< Operation * > mlir::memref::findDealloc ( Value allocValue)

Finds a single dealloc operation for the given allocated value.

Finds the unique dealloc operation (if one exists) for allocValue.

If there is more than one dealloc for allocValue, returns std::nullopt; otherwise returns the single dealloc if it exists, or nullptr if there is none.
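
A minimal sketch of interpreting the tri-state result (assuming allocValue holds the result of an allocation op):

  std::optional<mlir::Operation *> dealloc = mlir::memref::findDealloc(allocValue);
  if (!dealloc.has_value()) {
    // More than one dealloc was found; conservatively give up.
  } else if (*dealloc == nullptr) {
    // No dealloc exists for this allocation.
  } else {
    mlir::Operation *uniqueDealloc = *dealloc; // The single dealloc operation.
    (void)uniqueDealloc;
  }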

Definition at line 64 of file MemRefDialect.cpp.

References mlir::Value::getUsers(), and mlir::hasEffect().

◆ foldMemRefCast()

LogicalResult mlir::memref::foldMemRefCast ( Operation * op,
Value inner = nullptr )

This is a common utility used for patterns of the form "someop(memref.cast) -> someop".

It folds the source of any memref.cast into the root operation directly.
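
As an illustration, a hypothetical op (MyDmaLikeOp, not part of the memref dialect) could delegate its generic fold hook to this utility; the ODS fold signature shown is assumed, and namespace qualifiers are omitted as in the MLIR sources:

  LogicalResult MyDmaLikeOp::fold(FoldAdaptor /*adaptor*/,
                                  SmallVectorImpl<OpFoldResult> &/*results*/) {
    // Succeeds, after rewriting the operands in place, if at least one operand
    // produced by a memref.cast was replaced with the cast's source.
    return memref::foldMemRefCast(*this);
  }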

Definition at line 45 of file MemRefOps.cpp.

References mlir::Operation::getOpOperands(), and success().

Referenced by mlir::affine::AffineDmaStartOp::fold(), and mlir::affine::AffineDmaWaitOp::fold().

◆ getLinearizedMemRefOffsetAndSize() [1/2]

LinearizedMemRefInfo mlir::memref::getLinearizedMemRefOffsetAndSize ( OpBuilder & builder,
Location loc,
int srcBits,
int dstBits,
OpFoldResult offset,
ArrayRef< OpFoldResult > sizes )

For a memref with offset and sizes, returns the offset and size to use for the linearized memref, assuming that the strides are computed from a row-major ordering of the sizes.

  • If the linearization is done for emulating load/stores of element type with bitwidth srcBits using element type with bitwidth dstBits, the linearized offset and size are scaled down by dstBits/srcBits.
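
A minimal sketch of the emulation use case (assuming an OpBuilder b and Location loc are in scope): linearizing a 4x8 memref of i4 values (srcBits = 4) backed by an i8 container (dstBits = 8) scales the 32-element linearized size down by dstBits / srcBits = 2, i.e. to 16 container elements.

  OpFoldResult offset = b.getIndexAttr(0);
  SmallVector<OpFoldResult> sizes = {b.getIndexAttr(4), b.getIndexAttr(8)};
  memref::LinearizedMemRefInfo info = memref::getLinearizedMemRefOffsetAndSize(
      b, loc, /*srcBits=*/4, /*dstBits=*/8, offset, sizes);
  // `info` carries the scaled linearized offset and size as OpFoldResults.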

Definition at line 113 of file MemRefUtils.cpp.

References mlir::bindSymbols(), mlir::Builder::getContext(), mlir::Builder::getIndexAttr(), getLinearizedMemRefOffsetAndSize(), and mlir::affine::makeComposedFoldedAffineApply().

◆ getLinearizedMemRefOffsetAndSize() [2/2]

std::pair< LinearizedMemRefInfo, OpFoldResult > mlir::memref::getLinearizedMemRefOffsetAndSize ( OpBuilder & builder,
Location loc,
int srcBits,
int dstBits,
OpFoldResult offset,
ArrayRef< OpFoldResult > sizes,
ArrayRef< OpFoldResult > strides,
ArrayRef< OpFoldResult > indices = {} )

◆ getMixedSize()

OpFoldResult mlir::memref::getMixedSize ( OpBuilder & builder,
Location loc,
Value value,
int64_t dim )

Return the dimension of the given memref value.

Definition at line 68 of file MemRefOps.cpp.

References mlir::OpBuilder::createOrFold(), mlir::Builder::getIndexAttr(), and mlir::Value::getType().

Referenced by createInBoundsCond(), and getMixedSizes().

◆ getMixedSizes()

SmallVector< OpFoldResult > mlir::memref::getMixedSizes ( OpBuilder & builder,
Location loc,
Value value )

Return the dimensions of the given memref value.

◆ getTensorTypeFromMemRefType()

Type mlir::memref::getTensorTypeFromMemRefType ( Type type)

Return an unranked/ranked tensor type for the given unranked/ranked memref type.

Definition at line 60 of file MemRefOps.cpp.

References mlir::Type::getContext().

Referenced by parseGlobalMemrefOpTypeAndInitialValue().

◆ isSameViewOrTrivialAlias()

bool mlir::memref::isSameViewOrTrivialAlias ( MemrefValue a,
MemrefValue b )
inline

Checks if two (memref) values are the same or statically known to alias the same region of memory.

Definition at line 111 of file MemRefUtils.h.

References b, and skipFullyAliasingOperations().

◆ isStaticShapeAndContiguousRowMajor()

bool mlir::memref::isStaticShapeAndContiguousRowMajor ( MemRefType type)

Returns true if the memref type has a static shape and represents a contiguous chunk of memory.
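
A minimal sketch, assuming an MLIRContext *ctx is available; a statically shaped identity-layout memref qualifies, while a dynamically shaped one does not:

  mlir::Builder b(ctx);
  auto staticRowMajor = mlir::MemRefType::get({4, 8}, b.getF32Type());
  auto dynamicShape =
      mlir::MemRefType::get({4, mlir::ShapedType::kDynamic}, b.getF32Type());
  // Fully static shape with the default identity layout: contiguous row-major.
  bool yes = mlir::memref::isStaticShapeAndContiguousRowMajor(staticRowMajor);
  // A dynamic dimension makes the shape non-static, so this returns false.
  bool no = mlir::memref::isStaticShapeAndContiguousRowMajor(dynamicShape);
  (void)yes;
  (void)no;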

Definition at line 23 of file MemRefUtils.cpp.

◆ multiBuffer() [1/2]

FailureOr< memref::AllocOp > mlir::memref::multiBuffer ( memref::AllocOp allocOp,
unsigned multiplier,
bool skipOverrideAnalysis = false )

Call into multiBuffer with a locally constructed IRRewriter.

Definition at line 228 of file MultiBuffer.cpp.

References multiBuffer().

◆ multiBuffer() [2/2]

FailureOr< memref::AllocOp > mlir::memref::multiBuffer ( RewriterBase & rewriter,
memref::AllocOp allocOp,
unsigned multiplier,
bool skipOverrideAnalysis = false )

Transformation to do multi-buffering/array expansion to remove dependencies on the temporary allocation between consecutive loop iterations.

It returns the new allocation if the original allocation was multi-buffered and returns failure() otherwise. When skipOverrideAnalysis is set, the transformation is applied without checking that the buffer is overwritten at the beginning of each iteration; this implies that the user knows there is no data carried across loop iterations. Example:

  %0 = memref.alloc() : memref<4x128xf32>
  scf.for %iv = %c1 to %c1024 step %c3 {
    "some_use"(%0) : (memref<4x128xf32>) -> ()
  }

into:

  %0 = memref.alloc() : memref<5x4x128xf32>
  scf.for %iv = %c1 to %c1024 step %c3 {
    %s = arith.subi %iv, %c1 : index
    %d = arith.divsi %s, %c3 : index
    %i = arith.remsi %d, %c5 : index
    %sv = memref.subview %0[%i, 0, 0] [1, 4, 128] [1, 1, 1] :
        memref<5x4x128xf32> to memref<4x128xf32, strided<...>>
    memref.copy %1, %sv : memref<4x128xf32> to memref<4x128xf32, strided<...>>
    "some_use"(%sv) : (memref<4x128xf32, strided<...>>) -> ()
  }

Make sure there is no loop-carried dependency on the allocation.
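
A minimal call-site sketch matching the example above (multiplier of 5); rewriter and allocOp are assumed to be in scope, and namespace qualifiers are omitted as in the MLIR sources:

  FailureOr<memref::AllocOp> newAlloc =
      memref::multiBuffer(rewriter, allocOp, /*multiplier=*/5,
                          /*skipOverrideAnalysis=*/false);
  if (failed(newAlloc)) {
    // The allocation could not be proven safe to multi-buffer; the IR is left
    // unchanged.
  }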

Definition at line 82 of file MultiBuffer.cpp.

References mlir::bindDims(), DBGS, mlir::DominanceInfo::dominates(), mlir::RewriterBase::eraseOp(), mlir::Builder::getContext(), mlir::Builder::getIndexAttr(), mlir::Operation::getUsers(), mlir::getValueOrCreateConstantIndexOp(), mlir::affine::makeComposedAffineApply(), overrideBuffer(), replaceUsesAndPropagateType(), mlir::OpBuilder::setInsertionPoint(), mlir::OpBuilder::setInsertionPointToStart(), mlir::MemRefType::Builder::setLayout(), and mlir::MemRefType::Builder::setShape().

Referenced by multiBuffer().

◆ populateComposeSubViewPatterns()

void mlir::memref::populateComposeSubViewPatterns ( RewritePatternSet & patterns,
MLIRContext * context )

Definition at line 139 of file ComposeSubView.cpp.

References mlir::patterns.

◆ populateExpandOpsPatterns()

void mlir::memref::populateExpandOpsPatterns ( RewritePatternSet & patterns)

Collects a set of patterns to rewrite ops within the memref dialect.

Definition at line 107 of file ExpandOps.cpp.

References mlir::patterns.

Referenced by createExpandOpsPass().

◆ populateExpandReallocPatterns()

void mlir::memref::populateExpandReallocPatterns ( RewritePatternSet & patterns,
bool emitDeallocs = true )

Appends patterns for expanding memref.realloc operations.

Definition at line 168 of file ExpandRealloc.cpp.

◆ populateExpandStridedMetadataPatterns()

void mlir::memref::populateExpandStridedMetadataPatterns ( RewritePatternSet & patterns)

Appends patterns for expanding memref operations that modify the metadata (sizes, offset, strides) of a memref into easier to analyze constructs.

Definition at line 1116 of file ExpandStridedMetadata.cpp.

References mlir::patterns.

◆ populateExtractAddressComputationsPatterns()

void mlir::memref::populateExtractAddressComputationsPatterns ( RewritePatternSet & patterns)

Appends patterns for extracting address computations from operations that access memory so that these accesses use only a base pointer.

For instance,

memref.load %base[%off0, ...]

Will be rewritten in:

%new_base = memref.subview %base[%off0,...][1,...][1,...]
memref.load %new_base[%c0,...]

Definition at line 282 of file ExtractAddressComputations.cpp.

References mlir::patterns.

◆ populateFlattenMemrefOpsPatterns()

void mlir::memref::populateFlattenMemrefOpsPatterns ( RewritePatternSet & patterns)

◆ populateFlattenMemrefsPatterns()

void mlir::memref::populateFlattenMemrefsPatterns ( RewritePatternSet & patterns)

Definition at line 293 of file FlattenMemRefs.cpp.

References mlir::patterns.

◆ populateFlattenVectorOpsOnMemrefPatterns()

void mlir::memref::populateFlattenVectorOpsOnMemrefPatterns ( RewritePatternSet & patterns)

Patterns for flattening multi-dimensional memref operations into one-dimensional memref operations.

Definition at line 274 of file FlattenMemRefs.cpp.

References mlir::patterns.

Referenced by mlir::memref::impl::FlattenMemrefsPassBase< DerivedT >::getArgumentName().

◆ populateFoldMemRefAliasOpPatterns()

void mlir::memref::populateFoldMemRefAliasOpPatterns ( RewritePatternSet & patterns)

Appends patterns that fold memref aliasing ops into their consumer load/store ops to patterns.
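
These populate* entry points are usually paired with the greedy pattern driver; the sketch below shows that common wiring, with the header locations and driver entry point (applyPatternsGreedily) being assumptions about the surrounding MLIR version rather than part of this function's contract:

  #include "mlir/Dialect/MemRef/Transforms/Transforms.h"
  #include "mlir/Transforms/GreedyPatternRewriteDriver.h"

  static void foldMemRefAliasOpsIn(mlir::Operation *root) {
    mlir::RewritePatternSet patterns(root->getContext());
    mlir::memref::populateFoldMemRefAliasOpPatterns(patterns);
    // Drive the patterns to a fixed point over the whole region tree.
    if (mlir::failed(mlir::applyPatternsGreedily(root, std::move(patterns))))
      root->emitError("failed to fold memref alias ops");
  }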

Definition at line 644 of file FoldMemRefAliasOps.cpp.

References mlir::patterns.

◆ populateMemRefNarrowTypeEmulationConversions()

void mlir::memref::populateMemRefNarrowTypeEmulationConversions ( arith::NarrowTypeEmulationConverter & typeConverter)

Appends type conversions for emulating memref operations over narrow types with ops over wider types.

Definition at line 635 of file EmulateNarrowType.cpp.

References mlir::Type::getIntOrFloatBitWidth(), getLinearizedShape(), mlir::arith::NarrowTypeEmulationConverter::getLoadStoreBitwidth(), mlir::Type::isInteger(), and mlir::Type::isIntOrFloat().

◆ populateMemRefNarrowTypeEmulationPatterns()

void mlir::memref::populateMemRefNarrowTypeEmulationPatterns ( const arith::NarrowTypeEmulationConverter & typeConverter,
RewritePatternSet & patterns )

Appends patterns for emulating memref operations over narrow types with ops over wider types.

Definition at line 602 of file EmulateNarrowType.cpp.

References mlir::patterns, and populateResolveExtractStridedMetadataPatterns().

◆ populateMemRefWideIntEmulationConversions()

void mlir::memref::populateMemRefWideIntEmulationConversions ( arith::WideIntEmulationConverter & typeConverter)

Appends type conversions for emulating wide integer memref operations with ops over narrower integer types.

Definition at line 148 of file EmulateWideInt.cpp.

References mlir::arith::WideIntEmulationConverter::getMaxTargetIntBitWidth().

◆ populateMemRefWideIntEmulationPatterns()

void mlir::memref::populateMemRefWideIntEmulationPatterns ( const arith::WideIntEmulationConverter & typeConverter,
RewritePatternSet & patterns )

Appends patterns for emulating wide integer memref operations with ops over narrower integer types.
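
A minimal sketch of wiring the type conversions and patterns into a partial dialect conversion; the converter constructor argument (the widest integer bitwidth the target supports) and the legality callback are assumptions about the usual setup, not guarantees of this API:

  void emulateWideMemRefInts(mlir::Operation *root, unsigned maxIntBitWidth) {
    mlir::MLIRContext *ctx = root->getContext();
    mlir::arith::WideIntEmulationConverter typeConverter(maxIntBitWidth);
    mlir::memref::populateMemRefWideIntEmulationConversions(typeConverter);

    mlir::ConversionTarget target(*ctx);
    // Ops whose operand/result types already satisfy the converter are legal.
    target.markUnknownOpDynamicallyLegal(
        [&](mlir::Operation *op) { return typeConverter.isLegal(op); });

    mlir::RewritePatternSet patterns(ctx);
    mlir::memref::populateMemRefWideIntEmulationPatterns(typeConverter, patterns);
    (void)mlir::applyPartialConversion(root, target, std::move(patterns));
  }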

Definition at line 140 of file EmulateWideInt.cpp.

References mlir::patterns.

◆ populateResolveExtractStridedMetadataPatterns()

void mlir::memref::populateResolveExtractStridedMetadataPatterns ( RewritePatternSet & patterns)

Appends patterns for resolving memref.extract_strided_metadata into memref.extract_strided_metadata of its source.

Definition at line 1137 of file ExpandStridedMetadata.cpp.

References mlir::patterns.

Referenced by populateMemRefNarrowTypeEmulationPatterns().

◆ populateResolveRankedShapedTypeResultDimsPatterns()

void mlir::memref::populateResolveRankedShapedTypeResultDimsPatterns ( RewritePatternSet & patterns)

Appends patterns that resolve memref.dim operations, whose source values are defined by operations implementing the ReifyRankedShapedTypeOpInterface, in terms of the shapes of their input operands.

Definition at line 181 of file ResolveShapedTypeResultDims.cpp.

References mlir::patterns.

Referenced by populateFoldUnitExtentDimsViaReshapesPatterns(), and populateFoldUnitExtentDimsViaSlicesPatterns().

◆ populateResolveShapedTypeResultDimsPatterns()

void mlir::memref::populateResolveShapedTypeResultDimsPatterns ( RewritePatternSet & patterns)

Appends patterns that resolve memref.dim operations, whose source values are defined by operations implementing the InferShapedTypeOpInterface, in terms of the shapes of their input operands.

Definition at line 188 of file ResolveShapedTypeResultDims.cpp.

References mlir::patterns.

Referenced by populateFoldUnitExtentDimsViaReshapesPatterns(), and populateFoldUnitExtentDimsViaSlicesPatterns().

◆ registerAllocationOpInterfaceExternalModels()

void mlir::memref::registerAllocationOpInterfaceExternalModels ( DialectRegistry & registry)

◆ registerBufferViewFlowOpInterfaceExternalModels()

void mlir::memref::registerBufferViewFlowOpInterfaceExternalModels ( DialectRegistry & registry)

◆ registerExpandOpsPass()

void mlir::memref::registerExpandOpsPass ( )
inline

Definition at line 879 of file Passes.h.

◆ registerExpandOpsPassPass()

void mlir::memref::registerExpandOpsPassPass ( )
inline

Definition at line 886 of file Passes.h.

◆ registerExpandReallocPass()

void mlir::memref::registerExpandReallocPass ( )
inline

Definition at line 900 of file Passes.h.

◆ registerExpandReallocPassPass()

void mlir::memref::registerExpandReallocPassPass ( )
inline

Definition at line 907 of file Passes.h.

◆ registerExpandStridedMetadataPass()

void mlir::memref::registerExpandStridedMetadataPass ( )
inline

Definition at line 921 of file Passes.h.

◆ registerExpandStridedMetadataPassPass()

void mlir::memref::registerExpandStridedMetadataPassPass ( )
inline

Definition at line 928 of file Passes.h.

◆ registerFlattenMemrefsPass()

void mlir::memref::registerFlattenMemrefsPass ( )
inline

Definition at line 942 of file Passes.h.

◆ registerFlattenMemrefsPassPass()

void mlir::memref::registerFlattenMemrefsPassPass ( )
inline

Definition at line 949 of file Passes.h.

◆ registerFoldMemRefAliasOpsPass()

void mlir::memref::registerFoldMemRefAliasOpsPass ( )
inline

Definition at line 963 of file Passes.h.

◆ registerFoldMemRefAliasOpsPassPass()

void mlir::memref::registerFoldMemRefAliasOpsPassPass ( )
inline

Definition at line 970 of file Passes.h.

◆ registerMemorySlotExternalModels()

void mlir::memref::registerMemorySlotExternalModels ( DialectRegistry & registry)

Definition at line 332 of file MemRefMemorySlot.cpp.

References mlir::DialectRegistry::addExtension().

Referenced by mlir::registerAllDialects().

◆ registerMemRefEmulateWideInt()

void mlir::memref::registerMemRefEmulateWideInt ( )
inline

Definition at line 984 of file Passes.h.

◆ registerMemRefEmulateWideIntPass()

void mlir::memref::registerMemRefEmulateWideIntPass ( )
inline

Definition at line 991 of file Passes.h.

◆ registerMemRefPasses()

void mlir::memref::registerMemRefPasses ( )
inline

Definition at line 1089 of file Passes.h.

Referenced by mlir::registerAllPasses().

◆ registerNormalizeMemRefsPass()

void mlir::memref::registerNormalizeMemRefsPass ( )
inline

Definition at line 1005 of file Passes.h.

◆ registerNormalizeMemRefsPassPass()

void mlir::memref::registerNormalizeMemRefsPassPass ( )
inline

Definition at line 1012 of file Passes.h.

◆ registerReifyResultShapesPass()

void mlir::memref::registerReifyResultShapesPass ( )
inline

Definition at line 1026 of file Passes.h.

◆ registerReifyResultShapesPassPass()

void mlir::memref::registerReifyResultShapesPassPass ( )
inline

Definition at line 1033 of file Passes.h.

◆ registerResolveRankedShapeTypeResultDimsPass()

void mlir::memref::registerResolveRankedShapeTypeResultDimsPass ( )
inline

Definition at line 1047 of file Passes.h.

◆ registerResolveRankedShapeTypeResultDimsPassPass()

void mlir::memref::registerResolveRankedShapeTypeResultDimsPassPass ( )
inline

Definition at line 1054 of file Passes.h.

◆ registerResolveShapedTypeResultDimsPass()

void mlir::memref::registerResolveShapedTypeResultDimsPass ( )
inline

Definition at line 1068 of file Passes.h.

◆ registerResolveShapedTypeResultDimsPassPass()

void mlir::memref::registerResolveShapedTypeResultDimsPassPass ( )
inline

Definition at line 1075 of file Passes.h.

◆ registerRuntimeVerifiableOpInterfaceExternalModels()

void mlir::memref::registerRuntimeVerifiableOpInterfaceExternalModels ( DialectRegistry & registry)

◆ registerTransformDialectExtension()

void mlir::memref::registerTransformDialectExtension ( DialectRegistry & registry)

◆ registerValueBoundsOpInterfaceExternalModels()

void mlir::memref::registerValueBoundsOpInterfaceExternalModels ( DialectRegistry & registry)

◆ replaceWithIndependentOp()

FailureOr< Value > mlir::memref::replaceWithIndependentOp ( RewriterBase & rewriter,
memref::AllocaOp allocaOp,
ValueRange independencies )

Build a new memref::AllocaOp whose dynamic sizes are independent of all given independencies.

If the op is already independent of all independencies, the same AllocaOp result is returned.

The original AllocaOp is replaced with the new one, wrapped in a SubviewOp. The result type of the replacement is different from the original allocation type: it has the same shape, but a different layout map. This function updates all users that do not have a memref result or memref region block argument, and some frequently used memref dialect ops (such as memref.subview). It does not update other uses such as the init_arg of an scf.for op. Such uses are wrapped in unrealized_conversion_cast.

Failure indicates that no suitable upper bound for the dynamic sizes could be found.

Example (make independent of iv):

  scf.for %iv = %c0 to %sz step %c1 {
    %0 = memref.alloca(%iv) : memref<?xf32>
    %1 = memref.subview %0[0][5][1] : ...
    linalg.generic outs(%1 : ...) ...
    %2 = scf.for ... iter_arg(%arg0 = %0) ...
    ...
  }

The above IR is rewritten to:

  scf.for %iv = %c0 to %sz step %c1 {
    %0 = memref.alloca(%sz - 1) : memref<?xf32>
    %0_subview = memref.subview %0[0][%iv][1]
    %1 = memref.subview %0_subview[0][5][1] : ...
    linalg.generic outs(%1 : ...) ...
    %cast = unrealized_conversion_cast %0_subview
    %2 = scf.for ... iter_arg(%arg0 = %cast) ...
    ...
  }

Definition at line 166 of file IndependenceTransforms.cpp.

References buildIndependentOp(), replaceAndPropagateMemRefType(), and replacement().

◆ resolveSourceIndicesCollapseShape()

LogicalResult mlir::memref::resolveSourceIndicesCollapseShape ( Location loc,
PatternRewriter & rewriter,
memref::CollapseShapeOp collapseShapeOp,
ValueRange indices,
SmallVectorImpl< Value > & sourceIndices )

Given the 'indices' of a load/store operation where the memref is the result of a collapse_shape op, returns the indices w.r.t. the source memref of the collapse_shape op.

For example

%0 = ... : memref<2x6x42xf32>
%1 = memref.collapse_shape %0 [[0, 1], [2]] : memref<2x6x42xf32> into memref<12x42xf32>
%2 = load %1[i1, i2] : memref<12x42xf32>

could be folded into

%2 = load %0[i1 / 6, i1 % 6, i2] : memref<2x6x42xf32>

Definition at line 253 of file MemRefUtils.cpp.

References mlir::delinearize(), mlir::Builder::getConstantAffineMap(), mlir::getValueOrCreateConstantIndexOp(), indices, mlir::affine::makeComposedFoldedAffineApply(), and success().

Referenced by mlir::amdgpu::foldMemrefViewOp(), and mlir::memref::impl::FoldMemRefAliasOpsPassBase< DerivedT >::getDependentDialects().

◆ resolveSourceIndicesExpandShape()

LogicalResult mlir::memref::resolveSourceIndicesExpandShape ( Location loc,
PatternRewriter & rewriter,
memref::ExpandShapeOp expandShapeOp,
ValueRange indices,
SmallVectorImpl< Value > & sourceIndices,
bool startsInbounds )

Given the 'indices' of a load/store operation where the memref is the result of an expand_shape op, returns the indices w.r.t. the source memref of the expand_shape op.

For example

%0 = ... : memref<12x42xf32>
%1 = memref.expand_shape %0 [[0, 1], [2]] : memref<12x42xf32> into memref<2x6x42xf32>
%2 = load %1[i1, i2, i3] : memref<2x6x42xf32>

could be folded into

%2 = load %0[6 * i1 + i2, i3] : memref<12x42xf32>

Definition at line 226 of file MemRefUtils.cpp.

References indices, and success().

Referenced by mlir::amdgpu::foldMemrefViewOp().

◆ resultIsNotRead()

bool mlir::memref::resultIsNotRead ( Operation * op,
std::vector< Operation * > & uses )
static

Returns true if none of the uses of op read/load the result.

There can be view-like-op users as long as all of their users are also StoreOp/transfer_write. If it returns true, it also fills out uses; if it returns false, uses is unchanged.

Definition at line 139 of file MemRefUtils.cpp.

References mlir::Operation::getNumRegions(), mlir::Operation::getNumResults(), mlir::Operation::getUses(), mlir::hasEffect(), mlir::Operation::mightHaveTrait(), and resultIsNotRead().

Referenced by eraseDeadAllocAndStores(), and resultIsNotRead().

◆ skipFullyAliasingOperations()

MemrefValue mlir::memref::skipFullyAliasingOperations ( MemrefValue source)

Walk up the source chain until an operation that changes/defines the view of memory is found (i.e. skip operations that alias the entire view).

Definition at line 196 of file MemRefUtils.cpp.

Referenced by isSameViewOrTrivialAlias().

◆ skipViewLikeOps()

MemrefValue mlir::memref::skipViewLikeOps ( MemrefValue source)

Walk up the source chain, skipping operations that are views of the source memref (i.e. that implement ViewLikeOpInterface), and return the first value that is not produced by such a view.

Definition at line 213 of file MemRefUtils.cpp.