MLIR 22.0.0git
mlir::vector Namespace Reference

Namespaces

    detail

Classes

    struct ScalableValueBoundsConstraintSet
        A version of ValueBoundsConstraintSet that can solve for scalable bounds.
    struct VectorDim
    struct LowerVectorsOptions
        Helper structure used to hold the different options of LowerVectorsOp.
    struct WarpExecuteOnLane0LoweringOptions
    struct UnrollVectorOptions
        Options that control the vector unrolling.
    struct VectorTransformsOptions
        Structure to control the behavior of vector transform patterns.
    struct VscaleRange
    struct MaskableOpRewritePattern
        A pattern for ops that implement MaskableOpInterface and that might be masked (i.e. …).
Typedefs

    using ConstantOrScalableBound = ScalableValueBoundsConstraintSet::ConstantOrScalableBound
    using DistributionMapFn = std::function<AffineMap(Value)>
    using UnrollVectorOpFn = function_ref<Value(PatternRewriter &, Location, VectorType, int64_t)>
        Generic utility for unrolling n-D vector operations to (n-1)-D operations.

Enumerations

    enum class ConstantMaskKind { AllFalse = 0, AllTrue }
        Predefined constant_mask kinds.
    enum class BroadcastableToResult { Success = 0, SourceRankHigher = 1, DimensionMismatch = 2, SourceTypeNotAVector = 3 }
        Return whether srcType can be broadcast to dstVectorType under the semantics of the vector.broadcast op.
Functions

    void registerConvertVectorToLLVMInterface(DialectRegistry &registry)
    void registerValueBoundsOpInterfaceExternalModels(DialectRegistry &registry)
    void buildTerminatedBody(OpBuilder &builder, Location loc)
        Default callback to build a region with a 'vector.yield' terminator with no arguments.
    BroadcastableToResult isBroadcastableTo(Type srcType, VectorType dstVectorType, std::pair<VectorDim, VectorDim> *mismatchingDims = nullptr)
    void populateVectorToVectorCanonicalizationPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Collect a set of vector-to-vector canonicalization patterns.
    void populateFoldArithExtensionPatterns(RewritePatternSet &patterns)
        Collect a set of patterns that fold arithmetic extension on floating point into vector contract for the backends with native support.
    void populateElementwiseToVectorOpsPatterns(RewritePatternSet &patterns)
        Collect a set of patterns that fold elementwise op on vectors to the vector dialect.
    IntegerType getVectorSubscriptType(Builder &builder)
        Returns the integer type required for subscripts in the vector dialect.
    ArrayAttr getVectorSubscriptAttr(Builder &b, ArrayRef<int64_t> values)
        Returns an integer array attribute containing the given values using the integer type required for subscripts in the vector dialect.
    Value getVectorReductionOp(arith::AtomicRMWKind op, OpBuilder &builder, Location loc, Value vector)
        Returns the value obtained by reducing the vector into a scalar using the operation kind associated with a binary AtomicRMWKind op.
    AffineMap getTransferMinorIdentityMap(ShapedType shapedType, VectorType vectorType)
        Build the default minor identity map suitable for a vector transfer.
    bool checkSameValueRAW(TransferWriteOp defWrite, TransferReadOp read)
        Return true if the transfer_write fully writes the data accessed by the transfer_read.
    bool checkSameValueWAW(TransferWriteOp write, TransferWriteOp priorWrite)
        Return true if the write op fully over-writes the priorWrite transfer_write op.
    bool isDisjointTransferIndices(VectorTransferOpInterface transferA, VectorTransferOpInterface transferB, bool testDynamicValueUsingBounds = false)
        Return true if we can prove that the transfer operations access disjoint memory, without requiring the accessed tensor/memref to be the same.
    bool isDisjointTransferSet(VectorTransferOpInterface transferA, VectorTransferOpInterface transferB, bool testDynamicValueUsingBounds = false)
        Return true if we can prove that the transfer operations access disjoint memory, requiring the operations to access the same tensor/memref.
    Value makeArithReduction(OpBuilder &b, Location loc, CombiningKind kind, Value v1, Value acc, arith::FastMathFlagsAttr fastmath = nullptr, Value mask = nullptr)
        Returns the result value of reducing two scalar/vector values with the corresponding arith operation.
    bool isParallelIterator(Attribute attr)
        Returns true if attr has "parallel" iterator type semantics.
    bool isReductionIterator(Attribute attr)
        Returns true if attr has "reduction" iterator type semantics.
    SmallVector<int64_t> getAsIntegers(ArrayRef<Value> values)
        Returns the integer numbers in values.
    SmallVector<int64_t> getAsIntegers(ArrayRef<OpFoldResult> foldResults)
        Returns the integer numbers in foldResults.
    SmallVector<Value> getAsValues(OpBuilder &builder, Location loc, ArrayRef<OpFoldResult> foldResults)
        Convert foldResults into Values.
    std::optional<int64_t> getConstantVscaleMultiplier(Value value)
        If value is a constant multiple of vector.vscale (e.g. cst * vector.vscale), return the multiplier (cst).
    VectorType inferTransferOpMaskType(VectorType vecType, AffineMap permMap)
        Infers the mask type for a transfer op given its vector type and permutation map.
    void createMaskOpRegion(OpBuilder &builder, Operation *maskableOp)
        Create the vector.yield-ended region of a vector.mask op with maskableOp as masked operation.
    Operation *maskOperation(OpBuilder &builder, Operation *maskableOp, Value mask, Value passthru = Value())
        Creates a vector.mask operation around a maskable operation.
    Value selectPassthru(OpBuilder &builder, Value mask, Value newValue, Value passthru)
        Creates a vector select operation that picks values from newValue or passthru for each result vector lane based on mask.
    void registerTransformDialectExtension(DialectRegistry &registry)
    void registerBufferizableOpInterfaceExternalModels(DialectRegistry &registry)
    void populateVectorContractLoweringPatterns(RewritePatternSet &patterns, VectorContractLowering vectorContractLoweringOption, PatternBenefit benefit = 1, bool disableOuterProductLowering = false)
        Populate the pattern set with the following patterns:
    void populateVectorOuterProductLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorMultiReductionLoweringPatterns(RewritePatternSet &patterns, VectorMultiReductionLowering options, PatternBenefit benefit = 1)
        Collect a set of patterns to convert vector.multi_reduction op into a sequence of vector.reduction ops.
    void populateVectorBroadcastLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorMaskOpLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateScalarVectorTransferLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit, bool allowMultipleUses)
        Collects patterns that lower scalar vector transfer ops to memref loads and stores when beneficial.
    void populateVectorShapeCastLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorTransposeLoweringPatterns(RewritePatternSet &patterns, VectorTransposeLowering vectorTransposeLowering, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorTransferLoweringPatterns(RewritePatternSet &patterns, std::optional<unsigned> maxTransferRank = std::nullopt, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorTransferPermutationMapLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Collect a set of transfer read/write lowering patterns that simplify the permutation map (e.g., converting it to a minor identity map) by inserting broadcasts and transposes.
    void populateVectorScanLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorStepLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorGatherLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorGatherToConditionalLoadPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorMaskLoweringPatternsForSideEffectingOps(RewritePatternSet &patterns)
        Populates instances of MaskOpRewritePattern to lower masked operations with vector.mask.
    void populateVectorMaskedLoadStoreEmulationPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorInterleaveLoweringPatterns(RewritePatternSet &patterns, int64_t targetRank = 1, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorInterleaveToShufflePatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
    void populateVectorBitCastLoweringPatterns(RewritePatternSet &patterns, int64_t targetRank = 1, PatternBenefit benefit = 1)
        Populates the pattern set with the following patterns:
    void populateVectorRankReducingFMAPattern(RewritePatternSet &patterns)
        Populates a pattern that rank-reduces n-D FMAs into (n-1)-D FMAs where n > 1.
    void populateVectorToFromElementsToShuffleTreePatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate patterns to rewrite sequences of vector.to_elements + vector.from_elements operations into a tree of vector.shuffle operations.
    void populateVectorFromElementsLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Populate the pattern set with the following patterns:
    void populateVectorContractToMatrixMultiply(RewritePatternSet &patterns, PatternBenefit benefit = 100)
        Populate the pattern set with the following patterns:
    void populateVectorTransposeToFlatTranspose(RewritePatternSet &patterns, PatternBenefit benefit = 100)
        Populate the pattern set with the following patterns:
    std::unique_ptr<Pass> createLowerVectorMaskPass()
        Creates an instance of the vector.mask lowering pass.
    std::unique_ptr<Pass> createLowerVectorMultiReductionPass(VectorMultiReductionLowering option = VectorMultiReductionLowering::InnerParallel)
        Creates an instance of the vector.multi_reduction lowering pass.
    void registerSubsetOpInterfaceExternalModels(DialectRegistry &registry)
    void populateWarpExecuteOnLane0OpToScfForPattern(RewritePatternSet &patterns, const WarpExecuteOnLane0LoweringOptions &options, PatternBenefit benefit = 1)
    void populateVectorContractCanonicalizeMatmulToMMT(RewritePatternSet &patterns, std::function<LogicalResult(vector::ContractionOp)> constraint = [](vector::ContractionOp) { return success(); }, PatternBenefit = 1)
        Canonicalization of a vector.contraction a, b, c with row-major matmul semantics to a contraction with MMT semantics (matrix matrix multiplication with the RHS transposed).
    void populateVectorReductionToContractPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Collect patterns to convert reduction op to vector.contract and fold transpose/broadcast ops into the contract.
    void populateVectorTransferFullPartialPatterns(RewritePatternSet &patterns, const VectorTransformsOptions &options)
        Populate patterns with the following patterns.
    void populateDropInnerMostUnitDimsXferOpPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Collect a set of patterns to collapse the most inner unit dims in xfer Ops.
    void populateSinkVectorOpsPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
        Patterns that remove redundant Vector Ops by re-ordering them with e.g. elementwise Ops.
    LogicalResult splitFullAndPartialTransfer(RewriterBase &b, VectorTransferOpInterface xferOp, VectorTransformsOptions options = VectorTransformsOptions(), scf::IfOp *ifOp = nullptr)
        Split a vector.transfer operation into an in-bounds (i.e., no out-of-bounds masking) fastpath and a slowpath.
    void transferOpflowOpt(RewriterBase &rewriter, Operation *rootOp)
        Implements transfer op write to read forwarding and dead transfer write optimizations.
    FailureOr<Value> castAwayContractionLeadingOneDim(vector::ContractionOp contractOp, MaskingOpInterface maskingOp, RewriterBase &rewriter)
        Cast away the leading unit dim, if it exists, for the given contract op.
    void eliminateVectorMasks(IRRewriter &rewriter, FunctionOpInterface function, std::optional<VscaleRange> vscaleRange = {})
        Attempts to eliminate redundant vector masks by replacing them with all-true constants at the top of the function (which results in the masks folding away).
    Value createOrFoldDimOp(OpBuilder &b, Location loc, Value source, int64_t dim)
        Helper function that creates a memref::DimOp or tensor::DimOp depending on the type of source.
    FailureOr<std::pair<int, int>> isTranspose2DSlice(vector::TransposeOp op)
        Returns two dims that are greater than one if the transposition is applied on a 2D slice.
    bool isContiguousSlice(MemRefType memrefType, VectorType vectorType)
        Return true if vectorType is a contiguous slice of memrefType, in the sense that it can be read/written from/to a contiguous area of the memref.
    std::optional<StaticTileOffsetRange> createUnrollIterator(VectorType vType, int64_t targetRank = 1)
        Returns an iterator for all positions in the leading dimensions of vType up to the targetRank.
    auto makeVscaleConstantBuilder(PatternRewriter &rewriter, Location loc)
        Returns a functor (int64_t -> Value) which returns a constant vscale multiple.
    auto getDims(VectorType vType)
        Returns a range over the dims (size and scalability) of a VectorType.
    SmallVector<OpFoldResult> getMixedSizesXfer(bool hasTensorSemantics, Operation *xfer, RewriterBase &rewriter)
        A wrapper for getMixedSizes for vector.transfer_read and vector.transfer_write Ops (for source and destination, respectively).
    bool isLinearizableVector(VectorType type)
        Returns true if the input Vector type can be linearized.
    Value createReadOrMaskedRead(OpBuilder &builder, Location loc, Value source, ArrayRef<int64_t> inputVectorSizes, Value padValue, bool useInBoundsInsteadOfMasking = false, ArrayRef<bool> inputScalableVecDims = {})
        Creates a TransferReadOp from source.
    LogicalResult isValidMaskedInputVector(ArrayRef<int64_t> shape, ArrayRef<int64_t> inputVectorSizes)
        Returns success if inputVectorSizes is a valid masking configuration for the given shape.
    LogicalResult unrollVectorOp(Operation *op, PatternRewriter &rewriter, UnrollVectorOpFn unrollFn)
using mlir::vector::ConstantOrScalableBound = ScalableValueBoundsConstraintSet::ConstantOrScalableBound
Definition at line 103 of file ScalableValueBoundsConstraintSet.h.
using mlir::vector::DistributionMapFn = std::function<AffineMap(Value)>
Definition at line 44 of file VectorDistribution.h.
using mlir::vector::UnrollVectorOpFn = function_ref<Value(PatternRewriter &, Location, VectorType, int64_t)>

Generic utility for unrolling n-D vector operations to (n-1)-D operations. This handles the common pattern of unrolling along the leading vector dimension and delegating the rewrite of each (n-1)-D slice to a callback (see unrollVectorOp()).

Definition at line 252 of file VectorUtils.h.
enum class mlir::vector::BroadcastableToResult

Return whether srcType can be broadcast to dstVectorType under the semantics of the vector.broadcast op.

Enumerator:
    Success
    SourceRankHigher
    DimensionMismatch
    SourceTypeNotAVector

Definition at line 70 of file VectorOps.h.
enum class mlir::vector::ConstantMaskKind

Predefined constant_mask kinds.

Enumerator:
    AllFalse
    AllTrue

Definition at line 62 of file VectorOps.h.
void mlir::vector::buildTerminatedBody(OpBuilder &builder, Location loc)

Default callback to build a region with a 'vector.yield' terminator with no arguments.

Definition at line 126 of file VectorOps.cpp.
FailureOr<Value> mlir::vector::castAwayContractionLeadingOneDim(vector::ContractionOp contractOp, MaskingOpInterface maskingOp, RewriterBase &rewriter)
Cast away the leading unit dim, if it exists, for the given contract op.
Return success if the transformation applies; return failure otherwise.
Definition at line 334 of file VectorDropLeadUnitDim.cpp.
References mlir::OpBuilder::createOrFold(), mlir::detail::enumerate(), mlir::AffineMap::get(), mlir::Builder::getAffineDimExpr(), mlir::Builder::getAffineMapArrayAttr(), mlir::Builder::getArrayAttr(), mlir::Operation::getResults(), isParallelIterator(), maskOperation(), and splatZero().
bool mlir::vector::checkSameValueRAW(TransferWriteOp defWrite, TransferReadOp read)
Return true if the transfer_write fully writes the data accessed by the transfer_read.
bool mlir::vector::checkSameValueWAW(TransferWriteOp write, TransferWriteOp priorWrite)

Return true if the write op fully over-writes the priorWrite transfer_write op.
std::unique_ptr<Pass> mlir::vector::createLowerVectorMaskPass()

Creates an instance of the vector.mask lowering pass.
Definition at line 310 of file LowerVectorMask.cpp.
std::unique_ptr<Pass> mlir::vector::createLowerVectorMultiReductionPass(VectorMultiReductionLowering option = VectorMultiReductionLowering::InnerParallel)

Creates an instance of the vector.multi_reduction lowering pass.

void mlir::vector::createMaskOpRegion(OpBuilder &builder, Operation *maskableOp)

Create the vector.yield-ended region of a vector.mask op with maskableOp as masked operation.

Value mlir::vector::createOrFoldDimOp(OpBuilder &b, Location loc, Value source, int64_t dim)

Helper function that creates a memref::DimOp or tensor::DimOp depending on the type of source.
Definition at line 39 of file VectorUtils.cpp.
References mlir::OpBuilder::createOrFold(), and mlir::Value::getType().
Value mlir::vector::createReadOrMaskedRead(OpBuilder &builder, Location loc, Value source, ArrayRef<int64_t> inputVectorSizes, Value padValue, bool useInBoundsInsteadOfMasking = false, ArrayRef<bool> inputScalableVecDims = {})

Creates a TransferReadOp from source.

The shape of the vector to read is specified via inputVectorSizes. If the shape of the output vector differs from the shape of the value being read, masking is used to avoid out-of-bounds accesses. Set useInBoundsInsteadOfMasking to true to use the "in_bounds" attribute instead of explicit masks.
Note: all read offsets are set to 0.
Definition at line 319 of file VectorUtils.cpp.
References mlir::arith::ConstantIndexOp::create(), mlir::get(), mlir::Builder::getI1Type(), mlir::memref::getMixedSizes(), mlir::tensor::getMixedSizes(), mlir::Operation::getResult(), mlir::Value::getType(), and maskOperation().
Referenced by vectorizeAsInsertSliceOp(), vectorizeAsTensorPackOp(), vectorizeAsTensorPadOp(), and vectorizeAsTensorUnpackOp().
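A minimal usage sketch (not part of the upstream docs), assuming the declaration lives in mlir/Dialect/Vector/Utils/VectorUtils.h; the wrapper name and the 8x16 vector shape are illustrative only:

```
#include "mlir/Dialect/Vector/Utils/VectorUtils.h"

// Sketch: read `src` as a vector<8x16xf32>, padding out-of-bounds lanes with
// `pad`. Only createReadOrMaskedRead itself is the documented API.
static mlir::Value readAsVector8x16(mlir::OpBuilder &builder,
                                    mlir::Location loc, mlir::Value src,
                                    mlir::Value pad) {
  // All read offsets are 0; a mask is emitted when the source's static sizes
  // may be smaller than the requested vector sizes.
  return mlir::vector::createReadOrMaskedRead(builder, loc, src,
                                              /*inputVectorSizes=*/{8, 16},
                                              /*padValue=*/pad);
}
```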
std::optional<StaticTileOffsetRange> mlir::vector::createUnrollIterator(VectorType vType, int64_t targetRank = 1)

Returns an iterator for all positions in the leading dimensions of vType up to the targetRank.

If any leading dimension before the targetRank is scalable (so cannot be unrolled), it will return an iterator for positions up to the first scalable dimension.

If no leading dimensions can be unrolled an empty optional will be returned.
Examples:
For vType = vector<2x3x4> and targetRank = 1
The resulting iterator will yield: [0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2]
For vType = vector<3x[4]x5> and targetRank = 0
The scalable dimension blocks unrolling so the iterator yields only: [0], [1], [2]
Definition at line 276 of file VectorUtils.cpp.
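A minimal sketch (not from the upstream docs) of consuming the returned range; the header path and helper name are assumptions:

```
#include "mlir/Dialect/Vector/Utils/VectorUtils.h"

static void visitLeadingPositions(mlir::VectorType vType) {
  // An empty optional is returned when no leading dimension can be unrolled.
  if (auto positions = mlir::vector::createUnrollIterator(vType,
                                                          /*targetRank=*/1)) {
    for (auto position : *positions) {
      // For vector<2x3x4xf32>, `position` takes the values [0, 0], [0, 1], ...
      // exactly as in the example above.
      (void)position;
    }
  }
}
```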
void mlir::vector::eliminateVectorMasks(IRRewriter &rewriter, FunctionOpInterface function, std::optional<VscaleRange> vscaleRange = {})

Attempts to eliminate redundant vector masks by replacing them with all-true constants at the top of the function (which results in the masks folding away).

Note: Currently, this only runs for vector.create_mask ops and requires vscaleRange. If vscaleRange is not provided this transform does nothing. This is because these redundant masks are much more likely for scalable code which requires memref/tensor dynamic sizes, whereas fixed-size code has static sizes, so simpler folds remove the masks.
Definition at line 95 of file VectorMaskElimination.cpp.
References mlir::OpBuilder::setInsertionPointToStart().
SmallVector<int64_t> mlir::vector::getAsIntegers(ArrayRef<OpFoldResult> foldResults)

Returns the integer numbers in foldResults.

foldResults are expected to be constant operations.
Definition at line 357 of file VectorOps.cpp.
SmallVector<int64_t> mlir::vector::getAsIntegers(ArrayRef<Value> values)

Returns the integer numbers in values.

values are expected to be constant operations.
Definition at line 345 of file VectorOps.cpp.
References mlir::Value::getDefiningOp().
SmallVector<Value> mlir::vector::getAsValues(OpBuilder &builder, Location loc, ArrayRef<OpFoldResult> foldResults)

Convert foldResults into Values.

Integer attributes are converted to constant ops.
Definition at line 369 of file VectorOps.cpp.
References mlir::arith::ConstantIndexOp::create().
std::optional<int64_t> mlir::vector::getConstantVscaleMultiplier(Value value)

If value is a constant multiple of vector.vscale (e.g. cst * vector.vscale), return the multiplier (cst). Otherwise, return std::nullopt.
Definition at line 384 of file VectorOps.cpp.
References mlir::getConstantIntValue(), and mlir::Value::getDefiningOp().
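A small sketch (not from the upstream docs) of the typical query; the wrapper name is illustrative:

```
#include "mlir/Dialect/Vector/IR/VectorOps.h"

#include <cstdint>
#include <optional>

static bool isVscaleMultipleOf(mlir::Value value, int64_t expected) {
  if (std::optional<int64_t> cst =
          mlir::vector::getConstantVscaleMultiplier(value))
    return *cst == expected; // `value` is `*cst * vector.vscale`.
  return false; // Not a recognizable constant multiple of vector.vscale.
}
```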
auto mlir::vector::getDims(VectorType vType)
Returns a range over the dims (size and scalability) of a VectorType.
Definition at line 130 of file VectorUtils.h.
SmallVector<OpFoldResult> mlir::vector::getMixedSizesXfer(bool hasTensorSemantics, Operation *xfer, RewriterBase &rewriter)

A wrapper for getMixedSizes for vector.transfer_read and vector.transfer_write Ops (for source and destination, respectively).

Tensor and MemRef types implement their own, very similar version of getMixedSizes. This method will call the appropriate version (depending on hasTensorSemantics). It will also automatically extract the operand for which to call it on (source for "read" and destination for "write" ops).
Definition at line 298 of file VectorUtils.cpp.
References mlir::Operation::getLoc(), mlir::memref::getMixedSizes(), mlir::tensor::getMixedSizes(), and mlir::bufferization::hasTensorSemantics().
AffineMap mlir::vector::getTransferMinorIdentityMap(ShapedType shapedType, VectorType vectorType)
Build the default minor identity map suitable for a vector transfer.
This also handles the case memref<... x vector<...>> -> vector<...> in which the rank of the identity map must take the vector element type into account.
Definition at line 188 of file VectorOps.cpp.
References mlir::AffineMap::get(), mlir::getAffineConstantExpr(), getEffectiveVectorRankForXferOp(), and mlir::AffineMap::getMinorIdentityMap().
Value mlir::vector::getVectorReductionOp(arith::AtomicRMWKind op, OpBuilder &builder, Location loc, Value vector)
Returns the value obtained by reducing the vector into a scalar using the operation kind associated with a binary AtomicRMWKind op.
Definition at line 668 of file VectorOps.cpp.
References mlir::emitOptionalError(), mlir::Value::getLoc(), and MINUI.
ArrayAttr mlir::vector::getVectorSubscriptAttr(Builder &b, ArrayRef<int64_t> values)

Returns an integer array attribute containing the given values using the integer type required for subscripts in the vector dialect.
Definition at line 492 of file VectorOps.cpp.
References mlir::Builder::getI64ArrayAttr().
IntegerType mlir::vector::getVectorSubscriptType(Builder &builder)
Returns the integer type required for subscripts in the vector dialect.
Definition at line 488 of file VectorOps.cpp.
References mlir::Builder::getIntegerType().
VectorType mlir::vector::inferTransferOpMaskType(VectorType vecType, AffineMap permMap)

Infers the mask type for a transfer op given its vector type and permutation map.

The mask in a transfer op applies to the tensor/buffer part of it, and its type should match the vector shape before any permutation or broadcasting. For example,
vecType = vector<1x2x3xf32>, permMap = affine_map<(d0, d1, d2) -> (d1, d0)>
Has inferred mask type:
maskType = vector<2x1xi1>
Definition at line 4685 of file VectorOps.cpp.
References mlir::applyPermutationMap(), mlir::AffineMap::compose(), mlir::compressUnusedDims(), mlir::get(), mlir::AffineMap::getContext(), and mlir::inversePermutation().
BroadcastableToResult mlir::vector::isBroadcastableTo(Type srcType, VectorType dstVectorType, std::pair<VectorDim, VectorDim> *mismatchingDims = nullptr)
Definition at line 2781 of file VectorOps.cpp.
References mlir::getElementTypeOrSelf().
Referenced by broadcastIfNeeded(), and foldBroadcastOfShapeCast().
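A small sketch (not from the upstream docs) showing the usual legality check before building a vector.broadcast; the wrapper name is illustrative:

```
#include "mlir/Dialect/Vector/IR/VectorOps.h"

static bool canBroadcastTo(mlir::Type srcType, mlir::VectorType dstType) {
  // Pass a non-null pair pointer as the third argument to learn which
  // dimensions mismatch when the result is DimensionMismatch.
  return mlir::vector::isBroadcastableTo(srcType, dstType) ==
         mlir::vector::BroadcastableToResult::Success;
}
```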
bool mlir::vector::isContiguousSlice(MemRefType memrefType, VectorType vectorType)

Return true if vectorType is a contiguous slice of memrefType, in the sense that it can be read/written from/to a contiguous area of the memref.

The leading unit dimensions of the vector type are ignored as they are not relevant to the result. Let N be the number of the vector dimensions after ignoring a leading sequence of unit ones.

For vectorType to be a contiguous slice of memrefType:
a) the N trailing dimensions of memrefType must be contiguous, and
b) the N-1 trailing dimensions of vectorType and memrefType must match.

Examples:

Ex.1 contiguous slice, perfect match
    vector<4x3x2xi32> from memref<5x4x3x2xi32>
Ex.2 contiguous slice, the leading dim does not match (2 != 4)
    vector<2x3x2xi32> from memref<5x4x3x2xi32>
Ex.3 non-contiguous slice, 2 != 3
    vector<2x2x2xi32> from memref<5x4x3x2xi32>
Ex.4 contiguous slice, leading unit dimension of the vector ignored, 2 != 3 (allowed)
    vector<1x2x2xi32> from memref<5x4x3x2xi32>
Ex.5 contiguous slice, leading two unit dims of the vector ignored, 2 != 3 (allowed)
    vector<1x1x2x2xi32> from memref<5x4x3x2xi32>
Ex.6 non-contiguous slice, 2 != 3, no leading sequence of unit dims
    vector<2x1x2x2xi32> from memref<5x4x3x2xi32>
Ex.7 contiguous slice, memref needs to be contiguous only in the last dimension
    vector<1x1x2xi32> from memref<2x2x2xi32, strided<[8, 4, 1]>>
Ex.8 non-contiguous slice, memref needs to be contiguous in the last two dimensions, and it isn't
    vector<1x2x2xi32> from memref<2x2x2xi32, strided<[8, 4, 1]>>
Definition at line 255 of file VectorUtils.cpp.
References vectorShape().
bool mlir::vector::isDisjointTransferIndices(VectorTransferOpInterface transferA, VectorTransferOpInterface transferB, bool testDynamicValueUsingBounds = false)

Return true if we can prove that the transfer operations access disjoint memory, without requiring the accessed tensor/memref to be the same.

If testDynamicValueUsingBounds is true, tries to test dynamic values via ValueBoundsOpInterface.
Definition at line 250 of file VectorOps.cpp.
References mlir::presburger::abs(), mlir::ValueBoundsConstraintSet::areEqual(), mlir::ValueBoundsConstraintSet::computeConstantDelta(), mlir::affine::fullyComposeAndComputeConstantDelta(), and mlir::getConstantIntValue().
Referenced by isDisjointTransferSet().
bool mlir::vector::isDisjointTransferSet(VectorTransferOpInterface transferA, VectorTransferOpInterface transferB, bool testDynamicValueUsingBounds = false)

Return true if we can prove that the transfer operations access disjoint memory, requiring the operations to access the same tensor/memref.

If testDynamicValueUsingBounds is true, tries to test dynamic values via ValueBoundsOpInterface.
Definition at line 314 of file VectorOps.cpp.
References isDisjointTransferIndices().
Referenced by mlir::linalg::hoistRedundantVectorTransfers().
bool mlir::vector::isLinearizableVector(VectorType type)

Returns true if the input Vector type can be linearized.

Linearization is meant in the sense of flattening a vector to a 1-D shape, e.g. vector<2x4xf32> to vector<8xf32>.
Definition at line 315 of file VectorUtils.cpp.
bool mlir::vector::isParallelIterator(Attribute attr)

Returns true if attr has "parallel" iterator type semantics.
Definition at line 149 of file VectorOps.h.
Referenced by castAwayContractionLeadingOneDim(), contractSupportsMMAMatrixType(), and gpuMmaUnrollOrder().
bool mlir::vector::isReductionIterator(Attribute attr)

Returns true if attr has "reduction" iterator type semantics.
Definition at line 154 of file VectorOps.h.
Referenced by contractSupportsMMAMatrixType(), getReductionIndex(), and gpuMmaUnrollOrder().
FailureOr<std::pair<int, int>> mlir::vector::isTranspose2DSlice(vector::TransposeOp op)
Returns two dims that are greater than one if the transposition is applied on a 2D slice.
Otherwise, returns a failure.
Definition at line 82 of file VectorUtils.cpp.
References areDimsTransposedIn2DSlice(), and mlir::detail::enumerate().
Referenced by TransposeOpLowering::matchAndRewrite().
LogicalResult mlir::vector::isValidMaskedInputVector(ArrayRef<int64_t> shape, ArrayRef<int64_t> inputVectorSizes)

Returns success if inputVectorSizes is a valid masking configuration for the given shape, i.e., it meets:

- inputVectorSizes does not have dynamic dimensions.
- All values in inputVectorSizes are greater than or equal to the static sizes in shape.

Definition at line 370 of file VectorUtils.cpp.
Referenced by vectorizeLinalgOpPrecondition(), vectorizePackOpPrecondition(), vectorizePadOpPrecondition(), and vectorizeUnPackOpPrecondition().
Value mlir::vector::makeArithReduction(OpBuilder &b, Location loc, CombiningKind kind, Value v1, Value acc, arith::FastMathFlagsAttr fastmath = nullptr, Value mask = nullptr)
Returns the result value of reducing two scalar/vector values with the corresponding arith operation.
Referenced by createContractArithOp().
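A minimal sketch (not from the upstream docs); CombiningKind::ADD is one of the documented kinds, and the wrapper name is illustrative:

```
#include "mlir/Dialect/Vector/IR/VectorOps.h"

static mlir::Value addIntoAccumulator(mlir::OpBuilder &b, mlir::Location loc,
                                      mlir::Value value, mlir::Value acc) {
  // Emits the arith op matching ADD for the element type; the optional
  // fastmath flags and mask arguments are left at their defaults.
  return mlir::vector::makeArithReduction(
      b, loc, mlir::vector::CombiningKind::ADD, value, acc);
}
```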
auto mlir::vector::makeVscaleConstantBuilder(PatternRewriter &rewriter, Location loc)

Returns a functor (int64_t -> Value) which returns a constant vscale multiple.
Definition at line 118 of file VectorUtils.h.
References mlir::arith::ConstantIndexOp::create().
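The original example was lost in extraction; the following sketch is a reconstruction under the assumption that the returned functor is simply invoked with the desired multiplier:

```
#include "mlir/Dialect/Vector/Utils/VectorUtils.h"
#include "mlir/IR/PatternMatch.h"

static mlir::Value eightTimesVscale(mlir::PatternRewriter &rewriter,
                                    mlir::Location loc) {
  auto createVscaleMultiple =
      mlir::vector::makeVscaleConstantBuilder(rewriter, loc);
  // Builds an index Value equal to 8 * vector.vscale.
  return createVscaleMultiple(8);
}
```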
Operation* mlir::vector::maskOperation | ( | OpBuilder & | builder, |
Operation * | maskableOp, | ||
Value | mask, | ||
Value | passthru = Value() |
||
) |
Creates a vector.mask operation around a maskable operation.
Returns the vector.mask operation if the mask provided is valid. Otherwise, returns the maskable operation itself.
Referenced by castAwayContractionLeadingOneDim(), createReadOrMaskedRead(), and VectorizationState::maskOperation().
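A minimal sketch (not from the upstream docs); `maskableOp` and `mask` stand in for values produced elsewhere:

```
#include "mlir/Dialect/Vector/IR/VectorOps.h"

static mlir::Operation *maskIfNeeded(mlir::OpBuilder &builder,
                                     mlir::Operation *maskableOp,
                                     mlir::Value mask) {
  // Returns the new vector.mask op when `mask` is valid; otherwise returns
  // `maskableOp` unchanged.
  return mlir::vector::maskOperation(builder, maskableOp, mask);
}
```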
void mlir::vector::populateDropInnerMostUnitDimsXferOpPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)

Collect a set of patterns to collapse the most inner unit dims in xfer Ops.

These patterns reduce the rank of the operands of vector transfer ops so that they operate on vectors without trailing unit dims. Reducing the operand rank this way can be helpful when lowering to dialects that only support 1-D vector types, such as LLVM.
Definition at line 2377 of file VectorTransforms.cpp.
References mlir::patterns.
void mlir::vector::populateElementwiseToVectorOpsPatterns(RewritePatternSet &patterns)
Collect a set of patterns that fold elementwise op on vectors to the vector dialect.
Definition at line 2412 of file VectorTransforms.cpp.
References mlir::patterns.
void mlir::vector::populateFoldArithExtensionPatterns(RewritePatternSet &patterns)
Collect a set of patterns that fold arithmetic extension on floating point into vector contract for the backends with native support.
Definition at line 2324 of file VectorTransforms.cpp.
References mlir::patterns.
void mlir::vector::populateScalarVectorTransferLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit, bool allowMultipleUses)

Collects patterns that lower scalar vector transfer ops to memref loads and stores when beneficial.

If allowMultipleUses is set to true, the patterns are applied to vector transfer reads with any number of uses. Otherwise, only vector transfer reads with a single use will be lowered.
Definition at line 915 of file VectorTransferOpTransforms.cpp.
References mlir::patterns.
void mlir::vector::populateSinkVectorOpsPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)

Patterns that remove redundant Vector Ops by re-ordering them with e.g. elementwise Ops. For example, an elementwise op applied to the results of vector.transpose ops can be rewritten as a single vector.transpose applied to the result of the elementwise op on the untransposed operands. At the moment, these patterns are limited to vector.broadcast and vector.transpose.
Definition at line 2384 of file VectorTransforms.cpp.
References mlir::patterns.
void mlir::vector::populateVectorBitCastLoweringPatterns(RewritePatternSet &patterns, int64_t targetRank = 1, PatternBenefit benefit = 1)

Populates the pattern set with the following patterns:

[UnrollBitCastOp] A one-shot unrolling of BitCastOp to (one or more) ExtractOp + BitCastOp (of targetRank) + InsertOp.
Definition at line 87 of file LowerVectorBitCast.cpp.
References mlir::patterns.
void mlir::vector::populateVectorBroadcastLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Populate the pattern set with the following patterns:
[BroadcastOpLowering] Progressive lowering of BroadcastOp to ExtractOp + InsertOp + lower-D BroadcastOp until dim 1.
Definition at line 145 of file LowerVectorBroadcast.cpp.
References mlir::patterns.
void mlir::vector::populateVectorContractCanonicalizeMatmulToMMT(RewritePatternSet &patterns, std::function<LogicalResult(vector::ContractionOp)> constraint = [](vector::ContractionOp) { return success(); }, PatternBenefit = 1)

Canonicalization of a vector.contraction a, b, c with row-major matmul semantics to a contraction with MMT semantics (matrix matrix multiplication with the RHS transposed).

This specific form is meant to have the vector operands organized such that the reduction dimension is contiguous. The constraint predicate is used to decide which vector.contraction ops to filter out.
Definition at line 2362 of file VectorTransforms.cpp.
References mlir::patterns.
Referenced by mlir::populatePrepareVectorToMMAPatterns().
void mlir::vector::populateVectorContractLoweringPatterns(RewritePatternSet &patterns, VectorContractLowering vectorContractLoweringOption, PatternBenefit benefit = 1, bool disableOuterProductLowering = false)
Populate the pattern set with the following patterns:
[OuterProductOpLowering] Progressively lower a vector.outerproduct to linearized vector.extract + vector.fma + vector.insert.

[ContractionOpLowering] Progressive lowering of ContractionOp.
  One:
    x = vector.contract with at least one free/batch dimension
  is replaced by:
    a = vector.contract with one less free/batch dimension
    b = vector.contract with one less free/batch dimension

[ContractionOpToMatmulOpLowering] Progressively lower a vector.contract with row-major matmul semantics to linearized vector.shape_cast + vector.matmul on the way to llvm.matrix.multiply.

[ContractionOpToDotLowering] Progressively lower a vector.contract with row-major matmul semantics to linearized vector.extract + vector.reduce + vector.insert.

[ContractionOpToOuterProductOpLowering] Progressively lower a vector.contract with row-major matmul semantics to linearized vector.extract + vector.outerproduct + vector.insert.
Definition at line 1222 of file LowerVectorContract.cpp.
References mlir::patterns.
void mlir::vector::populateVectorContractToMatrixMultiply(RewritePatternSet &patterns, PatternBenefit benefit = 100)

Populate the pattern set with the following patterns:

[ContractionOpToMatmulOpLowering] Lowers vector.contract to llvm.intr.matrix.multiply.

Given the high benefit, this will be prioritised over other contract-lowering patterns. As such, the convert-vector-to-llvm pass will only run this registration conditionally.
Definition at line 2161 of file ConvertVectorToLLVM.cpp.
References mlir::patterns.
void mlir::vector::populateVectorFromElementsLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)

Populate the pattern set with the following patterns:

[UnrollFromElements] Unrolls 2 or more dimensional vector.from_elements ops by unrolling the outermost dimension.
Definition at line 62 of file LowerVectorFromElements.cpp.
References mlir::patterns.
void mlir::vector::populateVectorGatherLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)

Populate the pattern set with the following patterns:

[UnrollGather] Unrolls 2 or more dimensional vector.gather ops by unrolling the outermost dimension.
Definition at line 246 of file LowerVectorGather.cpp.
References mlir::patterns.
void mlir::vector::populateVectorGatherToConditionalLoadPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)

Populate the pattern set with the following patterns:

[Gather1DToConditionalLoads] Turns 1-d vector.gather into a scalarized sequence of vector.loads or tensor.extracts. To avoid out-of-bounds memory accesses, these loads/extracts are made conditional using scf.if ops.
Definition at line 251 of file LowerVectorGather.cpp.
References mlir::patterns.
void mlir::vector::populateVectorInterleaveLoweringPatterns(RewritePatternSet &patterns, int64_t targetRank = 1, PatternBenefit benefit = 1)

Populate the pattern set with the following patterns:

[UnrollInterleaveOp] A one-shot unrolling of InterleaveOp to (one or more) ExtractOp + InterleaveOp (of targetRank) + InsertOp.
Definition at line 185 of file LowerVectorInterleave.cpp.
References mlir::patterns.
void mlir::vector::populateVectorInterleaveToShufflePatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Definition at line 191 of file LowerVectorInterleave.cpp.
References mlir::patterns.
void mlir::vector::populateVectorMaskedLoadStoreEmulationPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Populate the pattern set with the following patterns:
[VectorMaskedLoadOpConverter] Turns vector.maskedload to scf.if + memref.load
[VectorMaskedStoreOpConverter] Turns vector.maskedstore to scf.if + memref.store
Definition at line 159 of file VectorEmulateMaskedLoadStore.cpp.
References mlir::patterns.
void mlir::vector::populateVectorMaskLoweringPatternsForSideEffectingOps(RewritePatternSet &patterns)

Populates instances of MaskOpRewritePattern to lower masked operations with vector.mask.

Patterns should rewrite the vector.mask operation and not its nested MaskableOpInterface.
Definition at line 304 of file LowerVectorMask.cpp.
References mlir::patterns.
void mlir::vector::populateVectorMaskOpLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Populate the pattern set with the following patterns:
[CreateMaskOp] Progressive lowering of CreateMaskOp to lower-D CreateMaskOp until dim 1.
[ConstantMaskOp] Progressive lowering of ConstantMaskOp to lower-D ConstantMaskOp until dim 1.
Definition at line 163 of file LowerVectorMask.cpp.
References mlir::patterns.
void mlir::vector::populateVectorMultiReductionLoweringPatterns(RewritePatternSet &patterns, VectorMultiReductionLowering options, PatternBenefit benefit = 1)
Collect a set of patterns to convert vector.multi_reduction op into a sequence of vector.reduction ops.
The patterns comprise:
[InnerOuterDimReductionConversion] Rewrites vector.multi_reduction such that all reduction dimensions are either innermost or outermost, by adding the proper vector.transpose operations.
[ReduceMultiDimReductionRank] Once in innermost or outermost reduction form, rewrites n-D vector.multi_reduction into 2-D vector.multi_reduction, by introducing vector.shape_cast ops to collapse + multi-reduce + expand back.
[TwoDimMultiReductionToElementWise] Once in 2-D vector.multi_reduction form, with an outermost reduction dimension, unroll the outer dimension to obtain a sequence of 1-D vector ops. This also has an opportunity for tree-reduction (in the future).
[TwoDimMultiReductionToReduction] Once in 2-D vector.multi_reduction form, with an innermost reduction dimension, unroll the outer dimension to obtain a sequence of extract + vector.reduction + insert. This can further lower to horizontal reduction ops.
[OneDimMultiReductionToTwoDim] For cases that reduce to 1-D vector<k> reduction (and are thus missing either a parallel or a reduction), we lift them back up to 2-D with a simple vector.shape_cast to vector<1xk> so that the other patterns can kick in, thus fully exiting out of the vector.multi_reduction abstraction.
Definition at line 514 of file LowerVectorMultiReduction.cpp.
References options, and mlir::patterns.
void mlir::vector::populateVectorOuterProductLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)

Populate the pattern set with the following patterns:

[OuterProductOpLowering] Progressively lower a vector.outerproduct to linearized vector.extract + vector.fma + vector.insert.
Definition at line 1232 of file LowerVectorContract.cpp.
References mlir::patterns.
void mlir::vector::populateVectorRankReducingFMAPattern(RewritePatternSet &patterns)
Populates a pattern that rank-reduces n-D FMAs into (n-1)-D FMAs where n > 1.
Definition at line 2156 of file ConvertVectorToLLVM.cpp.
References mlir::patterns.
void mlir::vector::populateVectorReductionToContractPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Collect patterns to convert reduction op to vector.contract and fold transpose/broadcast ops into the contract.
Definition at line 2370 of file VectorTransforms.cpp.
References mlir::patterns.
void mlir::vector::populateVectorScanLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Populate the pattern set with the following patterns:
[ScanToArithOps] Convert vector.scan op into arith ops and vector.insert_strided_slice / vector.extract_strided_slice.
Definition at line 181 of file LowerVectorScan.cpp.
References mlir::patterns.
void mlir::vector::populateVectorShapeCastLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Populate the pattern set with the following patterns:
[ShapeCastOp2DDownCastRewritePattern] ShapeOp 2D -> 1D downcast serves the purpose of flattening 2-D to 1-D vectors progressively.
[ShapeCastOp2DUpCastRewritePattern] ShapeOp 1D -> 2D upcast serves the purpose of unflattening 2-D from 1-D vectors progressively.
[ShapeCastOpRewritePattern] Reference lowering to fully unrolled sequences of single element ExtractOp + InsertOp. Note that applying this pattern can almost always be considered a performance bug.
Definition at line 474 of file LowerVectorShapeCast.cpp.
References mlir::patterns.
Referenced by mlir::spirv::unrollVectorsInFuncBodies().
void mlir::vector::populateVectorStepLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Populate the pattern set with the following patterns:
[StepToArithConstantOp] Convert vector.step op into arith ops if not using scalable vectors
Definition at line 46 of file LowerVectorStep.cpp.
References mlir::patterns.
Referenced by mlir::populateSparseVectorizationPatterns().
void mlir::vector::populateVectorToFromElementsToShuffleTreePatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)

Populate patterns to rewrite sequences of vector.to_elements + vector.from_elements operations into a tree of vector.shuffle operations.
Definition at line 740 of file LowerVectorToFromElementsToShuffleTree.cpp.
References mlir::patterns.
void mlir::vector::populateVectorToVectorCanonicalizationPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Collect a set of vector-to-vector canonicalization patterns.
void mlir::vector::populateVectorTransferFullPartialPatterns(RewritePatternSet &patterns, const VectorTransformsOptions &options)

Populate patterns with the following patterns.

Split a vector.transfer operation into an in-bounds (i.e., no out-of-bounds masking) fast path and a slow path.

Example (a 2-D vector.transfer_read):

```
%1 = vector.transfer_read %0[...], pad : memref<A...>, vector<...>
```

is transformed into:

```
%1:3 = scf.if (inBounds) {
  // fast path, direct cast
  memref.cast A: memref<A...> to compatibleMemRefType
  scf.yield view : compatibleMemRefType, index, index
} else {
  // slow path, not in-bounds vector.transfer or linalg.copy.
  memref.cast alloc: memref<B...> to compatibleMemRefType
  scf.yield %4 : compatibleMemRefType, index, index
}
%0 = vector.transfer_read %1#0[%1#1, %1#2] {in_bounds = [true ... true]}
```

where alloc is a top-of-the-function alloca'ed buffer of one vector.

Preconditions:

- xferOp.permutation_map() must be a minor identity map.
- The rank of xferOp.memref() and the rank of xferOp.vector() must be equal. This will be relaxed in the future but requires rank-reducing subviews.

Definition at line 661 of file VectorTransferSplitRewritePatterns.cpp.
References options, and mlir::patterns.
void mlir::vector::populateVectorTransferLoweringPatterns(RewritePatternSet &patterns, std::optional<unsigned> maxTransferRank = std::nullopt, PatternBenefit benefit = 1)

Populate the pattern set with the following patterns:

[TransferReadToVectorLoadLowering] Progressive lowering of transfer_read. This pattern supports lowering of vector.transfer_read to a combination of vector.load and vector.broadcast.

[TransferWriteToVectorStoreLowering] Progressive lowering of transfer_write. This pattern supports lowering of vector.transfer_write to vector.store.

These patterns lower transfer ops to simpler ops like vector.load, vector.store and vector.broadcast. Only transfers with a transfer rank of at most maxTransferRank are lowered. This is useful when combined with VectorToSCF, which reduces the rank of vector transfer ops.
Definition at line 585 of file LowerVectorTransfer.cpp.
References mlir::patterns.
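A sketch (not from the upstream docs) of how such populate* entry points are typically driven; the header paths and the greedy-driver call are assumptions about standard MLIR usage rather than part of this API:

```
#include "mlir/Dialect/Vector/Transforms/LoweringPatterns.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

static mlir::LogicalResult lowerTransfers(mlir::Operation *root) {
  mlir::RewritePatternSet patterns(root->getContext());
  // Only lower transfers of rank <= 1; higher-rank transfers would first be
  // reduced, e.g. by VectorToSCF.
  mlir::vector::populateVectorTransferLoweringPatterns(patterns,
                                                       /*maxTransferRank=*/1);
  return mlir::applyPatternsGreedily(root, std::move(patterns));
}
```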
void mlir::vector::populateVectorTransferPermutationMapLoweringPatterns(RewritePatternSet &patterns, PatternBenefit benefit = 1)
Collect a set of transfer read/write lowering patterns that simplify the permutation map (e.g., converting it to a minor identity map) by inserting broadcasts and transposes.
More specifically:
- [TransferReadPermutationLowering] Lower transfer_read op with permutation into a transfer_read with a permutation map composed of leading zeros followed by a minor identity + vector.transpose op.
  Ex:
    vector.transfer_read ... permutation_map: (d0, d1, d2) -> (0, d1)
  into:
    v = vector.transfer_read ... permutation_map: (d0, d1, d2) -> (d1, 0)
    vector.transpose v, [1, 0]

    vector.transfer_read ... permutation_map: (d0, d1, d2, d3) -> (0, 0, 0, d1, d3)
  into:
    v = vector.transfer_read ... permutation_map: (d0, d1, d2, d3) -> (0, 0, d1, 0, d3)
    vector.transpose v, [0, 1, 3, 2, 4]
  Note that an alternative is to transform it to linalg.transpose + vector.transfer_read to do the transpose in memory instead.

- [TransferWritePermutationLowering] Lower transfer_write op with permutation into a transfer_write with a minor identity permutation map. (transfer_write ops cannot have broadcasts.)
  Ex:
    vector.transfer_write v ... permutation_map: (d0, d1, d2) -> (d2, d0, d1)
  into:
    tmp = vector.transpose v, [2, 0, 1]
    vector.transfer_write tmp ... permutation_map: (d0, d1, d2) -> (d0, d1, d2)

    vector.transfer_write v ... permutation_map: (d0, d1, d2, d3) -> (d3, d2)
  into:
    tmp = vector.transpose v, [1, 0]
    v = vector.transfer_write tmp ... permutation_map: (d0, d1, d2, d3) -> (d2, d3)

- [TransferOpReduceRank] Lower transfer_read op with broadcast in the leading dimensions into transfer_read of lower rank + vector.broadcast.
  Ex:
    vector.transfer_read ... permutation_map: (d0, d1, d2, d3) -> (0, d1, 0, d3)
  into:
    v = vector.transfer_read ... permutation_map: (d0, d1, d2, d3) -> (d1, 0, d3)
    vector.broadcast v
Definition at line 382 of file LowerVectorTransfer.cpp.
References mlir::patterns.
void mlir::vector::populateVectorTransposeLoweringPatterns(RewritePatternSet &patterns, VectorTransposeLowering vectorTransposeLowering, PatternBenefit benefit = 1)
Populate the pattern set with the following patterns:
[TransposeOp2DToShuffleLowering]
Definition at line 494 of file LowerVectorTranspose.cpp.
References mlir::patterns.
Referenced by mlir::spirv::unrollVectorsInFuncBodies().
void mlir::vector::populateVectorTransposeToFlatTranspose(RewritePatternSet &patterns, PatternBenefit benefit = 100)

Populate the pattern set with the following patterns:

[TransposeOpLowering] Lowers vector.transpose to llvm.intr.matrix.flat_transpose.

Given the high benefit, this will be prioritised over other transpose-lowering patterns. As such, the convert-vector-to-llvm pass will only run this registration conditionally.
Definition at line 2166 of file ConvertVectorToLLVM.cpp.
References mlir::patterns.
void mlir::vector::populateWarpExecuteOnLane0OpToScfForPattern(RewritePatternSet &patterns, const WarpExecuteOnLane0LoweringOptions &options, PatternBenefit benefit = 1)
Definition at line 2043 of file VectorDistribute.cpp.
References options, and mlir::patterns.
void mlir::vector::registerBufferizableOpInterfaceExternalModels(DialectRegistry &registry)
Definition at line 330 of file BufferizableOpInterfaceImpl.cpp.
References mlir::DialectRegistry::addExtension().
void mlir::vector::registerConvertVectorToLLVMInterface(DialectRegistry &registry)
Definition at line 2218 of file ConvertVectorToLLVM.cpp.
References mlir::DialectRegistry::addExtension().
Referenced by mlir::registerAllExtensions().
void mlir::vector::registerSubsetOpInterfaceExternalModels(DialectRegistry &registry)
Definition at line 70 of file SubsetOpInterfaceImpl.cpp.
References mlir::DialectRegistry::addExtension().
void mlir::vector::registerTransformDialectExtension(DialectRegistry &registry)
Definition at line 247 of file VectorTransformOps.cpp.
References mlir::DialectRegistry::addExtensions().
void mlir::vector::registerValueBoundsOpInterfaceExternalModels(DialectRegistry &registry)
Definition at line 45 of file ValueBoundsOpInterfaceImpl.cpp.
References mlir::DialectRegistry::addExtension().
Value mlir::vector::selectPassthru(OpBuilder &builder, Value mask, Value newValue, Value passthru)

Creates a vector select operation that picks values from newValue or passthru for each result vector lane based on mask.

This utility is used to propagate the pass-thru value for masked-out or speculatively executed lanes. VP intrinsics do not support pass-thru values and every masked-out lane is set to poison. LLVM backends are usually able to match op + select patterns and fold them into native target instructions.
Referenced by createContractArithOp().
LogicalResult mlir::vector::splitFullAndPartialTransfer(RewriterBase &b, VectorTransferOpInterface xferOp, VectorTransformsOptions options = VectorTransformsOptions(), scf::IfOp *ifOp = nullptr)
Split a vector.transfer operation into an in-bounds (i.e., no out-of-bounds masking) fastpath and a slowpath.

If ifOp is not null and the result is success, ifOp points to the newly created conditional upon function return. To accommodate the fact that the original vector.transfer indexing may be arbitrary and that the slow path indexes @[0...0] in the temporary buffer, the scf.if op returns a view and values of type index.

For vector.transfer_read:

Example (a 2-D vector.transfer_read):

```
%1 = vector.transfer_read %0[...], pad : memref<A...>, vector<...>
```

is transformed into:

```
%1:3 = scf.if (inBounds) {
  // fastpath, direct cast
  memref.cast A: memref<A...> to compatibleMemRefType
  scf.yield view : compatibleMemRefType, index, index
} else {
  // slowpath, not in-bounds vector.transfer or linalg.copy.
  memref.cast alloc: memref<B...> to compatibleMemRefType
  scf.yield %4 : compatibleMemRefType, index, index
}
%0 = vector.transfer_read %1#0[%1#1, %1#2] {in_bounds = [true ... true]}
```

where alloc is a top-of-the-function alloca'ed buffer of one vector.

For vector.transfer_write: There are 2 conditional blocks. First a block to decide which memref and indices to use for an unmasked, in-bounds write. Then a conditional block to further copy a partial buffer into the final result in the slow path case.

Example (a 2-D vector.transfer_write):

```
vector.transfer_write arg, %0[...], pad : memref<A...>, vector<...>
```

is transformed into:

```
%1:3 = scf.if (inBounds) {
  memref.cast A: memref<A...> to compatibleMemRefType
  scf.yield view : compatibleMemRefType, index, index
} else {
  memref.cast alloc: memref<B...> to compatibleMemRefType
  scf.yield %4 : compatibleMemRefType, index, index
}
%0 = vector.transfer_write arg, %1#0[%1#1, %1#2] {in_bounds = [true ... true]}
scf.if (notInBounds) {
  // slowpath: not in-bounds vector.transfer or linalg.copy.
}
```

where alloc is a top-of-the-function alloca'ed buffer of one vector.

Preconditions:

- xferOp.getPermutationMap() must be a minor identity map.
- The rank of xferOp.getBase() and the rank of xferOp.getVector() must be equal. This will be relaxed in the future but requires rank-reducing subviews.

Definition at line 509 of file VectorTransferSplitRewritePatterns.cpp.
References mlir::clone(), mlir::OpBuilder::clone(), createFullPartialLinalgCopy(), createFullPartialVectorTransferRead(), createFullPartialVectorTransferWrite(), createInBoundsCond(), mlir::RewriterBase::eraseOp(), mlir::Region::front(), mlir::get(), getAutomaticAllocationScope(), mlir::Builder::getBoolArrayAttr(), getCastCompatibleMemRefType(), mlir::Builder::getI64IntegerAttr(), mlir::Builder::getIndexType(), mlir::Operation::getLoc(), getLocationToWriteFullVec(), mlir::Operation::getNumRegions(), mlir::Operation::getRegion(), mlir::Value::getType(), mlir::IRMapping::map(), mlir::RewriterBase::modifyOpInPlace(), None, options, mlir::Operation::setAttr(), mlir::OpBuilder::setInsertionPoint(), mlir::OpBuilder::setInsertionPointToStart(), and splitFullAndPartialTransferPrecondition().
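A sketch (not from the upstream docs) of calling the splitting utility directly with default options; the header paths are assumptions:

```
#include "mlir/Dialect/SCF/IR/SCF.h"
#include "mlir/Dialect/Vector/Transforms/VectorTransforms.h"

static mlir::LogicalResult splitTransfer(mlir::RewriterBase &rewriter,
                                         mlir::VectorTransferOpInterface xferOp) {
  mlir::scf::IfOp ifOp;
  // On success, `ifOp` points at the newly created conditional.
  return mlir::vector::splitFullAndPartialTransfer(
      rewriter, xferOp, mlir::vector::VectorTransformsOptions(), &ifOp);
}
```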
void mlir::vector::transferOpflowOpt(RewriterBase &rewriter, Operation *rootOp)
Implements transfer op write to read forwarding and dead transfer write optimizations.
Definition at line 898 of file VectorTransferOpTransforms.cpp.
References mlir::Operation::walk().
LogicalResult mlir::vector::unrollVectorOp(Operation *op, PatternRewriter &rewriter, vector::UnrollVectorOpFn unrollFn)
Definition at line 396 of file VectorUtils.cpp.
References mlir::VectorType::Builder::dropDim(), mlir::Operation::getLoc(), mlir::Operation::getNumResults(), mlir::Operation::getResult(), mlir::Value::getType(), mlir::RewriterBase::notifyMatchFailure(), and mlir::RewriterBase::replaceOp().
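A sketch (not from the upstream docs) of plugging a callback into unrollVectorOp; interpreting the int64_t parameter as the position along the unrolled leading dimension is an assumption based on the descriptions above:

```
#include "mlir/Dialect/Vector/Utils/VectorUtils.h"

static mlir::LogicalResult unrollLeadingDim(mlir::Operation *op,
                                            mlir::PatternRewriter &rewriter) {
  return mlir::vector::unrollVectorOp(
      op, rewriter,
      [&](mlir::PatternRewriter &b, mlir::Location loc, mlir::VectorType subTy,
          int64_t index) -> mlir::Value {
        // Build and return the (n-1)-D result of type `subTy` for slice
        // `index`; this placeholder body is not functional.
        return mlir::Value();
      });
}
```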