MLIR 19.0.0git
Utils.h File Reference
#include "mlir/Dialect/Affine/Analysis/AffineAnalysis.h"
#include "mlir/Dialect/Affine/IR/AffineOps.h"
#include "mlir/IR/OpDefinition.h"
#include <optional>


Classes

struct  mlir::affine::VectorizationStrategy
 Holds parameters to perform n-D vectorization on a single loop nest. More...
 
struct  mlir::affine::DivModValue
 Holds the result of (div a, b) and (mod a, b). More...
 
struct  mlir::affine::AffineValueExpr
 
struct  mlir::affine::AffineBuilder
 Helper struct to build simple AffineValueExprs with minimal type inference support. More...
 

Namespaces

 mlir
 Include the generated interface declarations.
 
 mlir::func
 
 mlir::memref
 
 mlir::affine
 

Typedefs

using mlir::affine::ReductionLoopMap = DenseMap< Operation *, SmallVector< LoopReduction, 2 > >
 

Functions

LogicalResult mlir::affine::affineParallelize (AffineForOp forOp, ArrayRef< LoopReduction > parallelReductions={}, AffineParallelOp *resOp=nullptr)
 Replaces a parallel affine.for op with a 1-d affine.parallel op. More...
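
 A minimal sketch of driving this utility over a function (the include path mlir/Dialect/Affine/Utils.h, the helper name, and the use of isLoopParallel from AffineAnalysis.h are assumptions, not part of this header's contract):

  #include "mlir/Dialect/Affine/Analysis/AffineAnalysis.h"
  #include "mlir/Dialect/Affine/IR/AffineOps.h"
  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/Dialect/Func/IR/FuncOps.h"

  using namespace mlir;
  using namespace mlir::affine;

  // Turns every parallel affine.for in `func` into an affine.parallel.
  static void parallelizeLoops(func::FuncOp func) {
    // Collect candidates first: affineParallelize replaces the affine.for op.
    SmallVector<std::pair<AffineForOp, SmallVector<LoopReduction>>> candidates;
    func.walk([&](AffineForOp forOp) {
      SmallVector<LoopReduction> reductions;
      if (isLoopParallel(forOp, &reductions))
        candidates.emplace_back(forOp, std::move(reductions));
    });
    for (auto &[forOp, reductions] : candidates)
      (void)affineParallelize(forOp, reductions);
  }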
 
LogicalResult mlir::affine::hoistAffineIfOp (AffineIfOp ifOp, bool *folded=nullptr)
 Hoists out affine.if/else ops as high as possible, i.e., past all affine.for/affine.parallel ops in which the if condition is invariant. More...
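
 A short sketch (the include path and the helper name are assumptions):

  #include "mlir/Dialect/Affine/IR/AffineOps.h"
  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header

  using namespace mlir;
  using namespace mlir::affine;

  // Attempts to hoist one affine.if out of its enclosing loops.
  static void hoistIf(AffineIfOp ifOp) {
    bool folded = false;
    if (failed(hoistAffineIfOp(ifOp, &folded)))
      return;
    // If `folded` is true, the affine.if was presumably folded away during
    // hoisting and `ifOp` must not be used anymore.
  }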
 
void mlir::affine::affineScalarReplace (func::FuncOp f, DominanceInfo &domInfo, PostDominanceInfo &postDomInfo)
 Replace affine store and load accesses with scalars by forwarding stores to loads, eliminate invariant affine loads, and consequently eliminate dead allocs. More...
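
 A usage sketch (include paths and the standalone construction of dominance info are assumptions; inside a pass one would typically use getAnalysis<DominanceInfo>() instead):

  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/Dialect/Func/IR/FuncOps.h"
  #include "mlir/IR/Dominance.h"

  using namespace mlir;

  // Forwards stores to loads and removes loads/allocs that become dead.
  static void runScalarReplacement(func::FuncOp func) {
    DominanceInfo domInfo(func);
    PostDominanceInfo postDomInfo(func);
    affine::affineScalarReplace(func, domInfo, postDomInfo);
  }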
 
void mlir::affine::vectorizeAffineLoops (Operation *parentOp, llvm::DenseSet< Operation *, DenseMapInfo< Operation * >> &loops, ArrayRef< int64_t > vectorSizes, ArrayRef< int64_t > fastestVaryingPattern, const ReductionLoopMap &reductionLoops=ReductionLoopMap())
 Vectorizes affine loops in 'loops' using the n-D vectorization factors in 'vectorSizes'. More...
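
 A sketch that super-vectorizes innermost loops by a factor of 8 (the include path, the helper name, and the innermost-loop filter are assumptions):

  #include "mlir/Dialect/Affine/IR/AffineOps.h"
  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/Dialect/Func/IR/FuncOps.h"

  using namespace mlir;
  using namespace mlir::affine;

  static void vectorizeInnermostLoops(func::FuncOp func) {
    // Seed the driver with the innermost affine.for ops only.
    llvm::DenseSet<Operation *, DenseMapInfo<Operation *>> loops;
    func.walk([&](AffineForOp forOp) {
      if (forOp.getBody()->getOps<AffineForOp>().empty())
        loops.insert(forOp);
    });
    int64_t vectorSizes[] = {8};
    int64_t fastestVarying[] = {0}; // vectorize along the fastest-varying dim
    vectorizeAffineLoops(func, loops, vectorSizes, fastestVarying);
  }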
 
LogicalResult mlir::affine::vectorizeAffineLoopNest (std::vector< SmallVector< AffineForOp, 2 >> &loops, const VectorizationStrategy &strategy)
 External utility to vectorize affine loops from a single loop nest using an n-D vectorization strategy (see doc in VectorizationStrategy definition). More...
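
 A sketch for a 2-d nest (the VectorizationStrategy field names vectorSizes and loopToVectorDim, and the depth-indexed layout of `loops`, are assumptions based on the strategy's documentation):

  #include "mlir/Dialect/Affine/IR/AffineOps.h"
  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header

  using namespace mlir;
  using namespace mlir::affine;

  // Vectorizes the nest `outer { inner { ... } }` with a 4x8 vector shape.
  static LogicalResult vectorizeNest(AffineForOp outer, AffineForOp inner) {
    VectorizationStrategy strategy;
    strategy.vectorSizes.assign({4, 8});      // field name assumed
    strategy.loopToVectorDim[outer] = 0;      // outer loop -> vector dim 0
    strategy.loopToVectorDim[inner] = 1;      // inner loop -> vector dim 1

    // loops[i] is assumed to hold the loops at nesting depth i of the nest.
    std::vector<SmallVector<AffineForOp, 2>> loops = {{outer}, {inner}};
    return vectorizeAffineLoopNest(loops, strategy);
  }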
 
void mlir::affine::normalizeAffineParallel (AffineParallelOp op)
 Normalize an affine.parallel op so that lower bounds are 0 and steps are 1. More...
 
LogicalResult mlir::affine::normalizeAffineFor (AffineForOp op, bool promoteSingleIter=false)
 Normalize an affine.for op. More...
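
 A combined sketch for both normalization entry points (include paths and the helper name are assumptions):

  #include "mlir/Dialect/Affine/IR/AffineOps.h"
  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/Dialect/Func/IR/FuncOps.h"

  using namespace mlir;
  using namespace mlir::affine;

  // Rewrites every affine.for / affine.parallel in `func` so that lower
  // bounds become 0 and steps become 1.
  static void normalizeLoops(func::FuncOp func) {
    func.walk([](Operation *op) {
      if (auto forOp = dyn_cast<AffineForOp>(op))
        (void)normalizeAffineFor(forOp);
      else if (auto parOp = dyn_cast<AffineParallelOp>(op))
        normalizeAffineParallel(parOp);
    });
  }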
 
AffineExpr mlir::affine::substWithMin (AffineExpr e, AffineExpr dim, AffineExpr min, AffineExpr max, bool positivePath=true)
 Traverse e and return an AffineExpr where all occurrences of dim have been replaced by min if positivePath is true when the occurrence is reached, or by max otherwise; positivePath is negated each time the traversal crosses a multiplication or division by a negative constant. More...
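
 A small expression-level example (names are illustrative only):

  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/IR/AffineExpr.h"

  using namespace mlir;

  // d0 occurs on a positive path in `d0 * 2 + s0` (no negative multiplicative
  // coefficient is crossed), so every occurrence of d0 is replaced by `min`.
  static AffineExpr substituteLowerBound(MLIRContext *ctx) {
    AffineExpr d0 = getAffineDimExpr(0, ctx);
    AffineExpr s0 = getAffineSymbolExpr(0, ctx);
    AffineExpr min = getAffineConstantExpr(0, ctx);
    AffineExpr max = getAffineSymbolExpr(1, ctx);
    return affine::substWithMin(d0 * 2 + s0, d0, min, max);
  }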
 
LogicalResult mlir::affine::replaceAllMemRefUsesWith (Value oldMemRef, Value newMemRef, ArrayRef< Value > extraIndices={}, AffineMap indexRemap=AffineMap(), ArrayRef< Value > extraOperands={}, ArrayRef< Value > symbolOperands={}, Operation *domOpFilter=nullptr, Operation *postDomOpFilter=nullptr, bool allowNonDereferencingOps=false, bool replaceInDeallocOp=false)
 Replaces all "dereferencing" uses of oldMemRef with newMemRef while optionally remapping the old memref's indices using the supplied affine map, indexRemap. More...
 
LogicalResult mlir::affine::replaceAllMemRefUsesWith (Value oldMemRef, Value newMemRef, Operation *op, ArrayRef< Value > extraIndices={}, AffineMap indexRemap=AffineMap(), ArrayRef< Value > extraOperands={}, ArrayRef< Value > symbolOperands={}, bool allowNonDereferencingOps=false)
 Performs the same replacement as the overload above, but only for the dereferencing uses of oldMemRef in op; when allowNonDereferencingOps is true, non-dereferencing uses are replaced as well. More...
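
 A sketch that switches buffers while remapping 1-d accesses by a symbolic offset (the scenario, the helper name, and the exact remap conventions are assumptions):

  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/IR/AffineMap.h"

  using namespace mlir;

  // Redirects dereferencing uses of `oldMemRef` (1-d) to `newMemRef`, turning
  // each access index d0 into d0 + offset.
  static LogicalResult shiftAccesses(Value oldMemRef, Value newMemRef,
                                     Value offset, MLIRContext *ctx) {
    // (d0)[s0] -> (d0 + s0)
    AffineMap indexRemap = AffineMap::get(
        /*dimCount=*/1, /*symbolCount=*/1,
        getAffineDimExpr(0, ctx) + getAffineSymbolExpr(0, ctx));
    return affine::replaceAllMemRefUsesWith(oldMemRef, newMemRef,
                                            /*extraIndices=*/{}, indexRemap,
                                            /*extraOperands=*/{},
                                            /*symbolOperands=*/{offset});
  }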
 
LogicalResult mlir::affine::normalizeMemRef (memref::AllocOp *op)
 Rewrites the memref defined by this alloc op to have an identity layout map and updates all its indexing uses. More...
 
MemRefType mlir::affine::normalizeMemRefType (MemRefType memrefType)
 Normalizes memrefType so that its affine layout map becomes an identity map, computes the new shape of the normalized memref type, and returns the result. More...
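
 A sketch that applies normalizeMemRef to all allocs in a function (the helper name and the walk-then-rewrite structure are assumptions); normalizeMemRefType can be used to query the normalized type without touching the IR:

  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/Dialect/Func/IR/FuncOps.h"
  #include "mlir/Dialect/MemRef/IR/MemRef.h"

  using namespace mlir;

  // Gives every alloc'ed memref in `func` an identity layout, updating users.
  static void normalizeAllocs(func::FuncOp func) {
    SmallVector<memref::AllocOp> allocs;
    func.walk([&](memref::AllocOp alloc) { allocs.push_back(alloc); });
    for (memref::AllocOp alloc : allocs)
      (void)affine::normalizeMemRef(&alloc); // this overload takes a pointer
  }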
 
void mlir::affine::createAffineComputationSlice (Operation *opInst, SmallVectorImpl< AffineApplyOp > *sliceOps)
 Given an operation, inserts one or more single-result affine apply operations whose results are exclusively used by this operation. More...
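
 A short sketch (the helper name is hypothetical):

  #include "mlir/Dialect/Affine/IR/AffineOps.h"
  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header

  using namespace mlir;
  using namespace mlir::affine;

  // Gives `op` private affine.apply ops for its index computations so they
  // can be rewritten without affecting other users of the original results.
  static void sliceIndexComputations(Operation *op) {
    SmallVector<AffineApplyOp, 4> sliceOps;
    createAffineComputationSlice(op, &sliceOps);
    // sliceOps now holds the newly created single-result affine.apply ops
    // (it stays empty if there was nothing to slice).
  }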
 
Value mlir::affine::expandAffineExpr (OpBuilder &builder, Location loc, AffineExpr expr, ValueRange dimValues, ValueRange symbolValues)
 Emit code that computes the given affine expression using standard arithmetic operations applied to the provided dimension and symbol values. More...
 
std::optional< SmallVector< Value, 8 > > mlir::affine::expandAffineMap (OpBuilder &builder, Location loc, AffineMap affineMap, ValueRange operands)
 Create a sequence of operations that implement the affineMap applied to the given operands (as if it were an AffineApplyOp). More...
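
 A sketch that lowers a map application to explicit arithmetic right before `op` (the helper and its use site are assumptions):

  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/IR/Builders.h"
  #include <optional>

  using namespace mlir;
  using namespace mlir::affine;

  // Materializes `map`(operands), returning one Value per map result, or
  // std::nullopt if expansion fails.
  static std::optional<SmallVector<Value, 8>>
  lowerMap(Operation *op, AffineMap map, ValueRange operands) {
    OpBuilder builder(op); // insertion point: right before `op`
    return expandAffineMap(builder, op->getLoc(), map, operands);
  }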
 
DivModValue mlir::affine::getDivMod (OpBuilder &b, Location loc, Value lhs, Value rhs)
 Create IR to calculate (div lhs, rhs) and (mod lhs, rhs). More...
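
 A sketch (the DivModValue field names quotient/remainder are assumptions based on the struct's brief above):

  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/IR/Builders.h"

  using namespace mlir;
  using namespace mlir::affine;

  // Emits IR computing lhs floordiv rhs and lhs mod rhs right before `op`.
  static std::pair<Value, Value> emitDivMod(Operation *op, Value lhs,
                                            Value rhs) {
    OpBuilder b(op);
    DivModValue dm = getDivMod(b, op->getLoc(), lhs, rhs);
    return {dm.quotient, dm.remainder}; // field names assumed
  }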
 
FailureOr< SmallVector< Value > > mlir::affine::delinearizeIndex (OpBuilder &b, Location loc, Value linearIndex, ArrayRef< Value > basis)
 Generate the IR to delinearize linearIndex given the basis and return the multi-index. More...
 
OpFoldResult mlir::affine::linearizeIndex (ArrayRef< OpFoldResult > multiIndex, ArrayRef< OpFoldResult > basis, ImplicitLocOpBuilder &builder)
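
 A round-trip sketch covering both utilities (the helper name and the OpFoldResult conversions are illustrative):

  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/IR/Builders.h"
  #include "mlir/IR/ImplicitLocOpBuilder.h"

  using namespace mlir;
  using namespace mlir::affine;

  // Splits `linearIndex` into multi-dimensional coordinates for `basis`
  // (e.g. {M, N}), then linearizes the coordinates back into a single index.
  static FailureOr<OpFoldResult> roundTripIndex(Operation *op,
                                                Value linearIndex,
                                                ArrayRef<Value> basis) {
    OpBuilder b(op);
    Location loc = op->getLoc();
    FailureOr<SmallVector<Value>> coords =
        delinearizeIndex(b, loc, linearIndex, basis);
    if (failed(coords))
      return failure();

    SmallVector<OpFoldResult> multiIndex(coords->begin(), coords->end());
    SmallVector<OpFoldResult> basisOfr(basis.begin(), basis.end());
    ImplicitLocOpBuilder ib(loc, b);
    return linearizeIndex(multiIndex, basisOfr, ib);
  }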
 
template<typename EffectType , typename T >
bool mlir::affine::hasNoInterveningEffect (Operation *start, T memOp)
 Returns true if no operation that could be executed after start (noninclusive) and prior to memOp (e.g., on a control flow path between the two operations) has the memory effect EffectType on memOp. More...
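
 A sketch mirroring a store-to-load forwarding check (the interface types and the choice of MemoryEffects::Write are assumptions about typical use, not requirements of the template):

  #include "mlir/Dialect/Affine/IR/AffineOps.h"
  #include "mlir/Dialect/Affine/Utils.h" // assumed location of this header
  #include "mlir/Interfaces/SideEffectInterfaces.h"

  using namespace mlir;
  using namespace mlir::affine;

  // Returns true if no write to the memory read by `loadOp` can occur between
  // `storeOp` and `loadOp` -- a necessary condition for forwarding the stored
  // value to the load.
  static bool canForwardStore(AffineWriteOpInterface storeOp,
                              AffineReadOpInterface loadOp) {
    return hasNoInterveningEffect<MemoryEffects::Write>(storeOp.getOperation(),
                                                        loadOp);
  }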