LogicalResult mlir::affine::loopUnrollFull(AffineForOp forOp)
    Unrolls this for operation completely if the trip count is known to be constant.

LogicalResult mlir::affine::loopUnrollByFactor(AffineForOp forOp, uint64_t unrollFactor, function_ref<void(unsigned, Operation *, OpBuilder)> annotateFn = nullptr, bool cleanUpUnroll = false)
    Unrolls this for operation by the specified unroll factor.
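
A sketch of the effect (the memref name, shape, and "use" op are illustrative, not from the source): with an unroll factor of 4, the loop step is multiplied by 4 and the body is replicated, with cloned iterations addressed through shifted affine maps.

```mlir
// Before: a unit-step loop with a constant trip count of 8.
affine.for %i = 0 to 8 {
  %v = affine.load %A[%i] : memref<8xf32>
  "use"(%v) : (f32) -> ()
}

// After loopUnrollByFactor(forOp, /*unrollFactor=*/4), schematically:
affine.for %i = 0 to 8 step 4 {
  %v0 = affine.load %A[%i] : memref<8xf32>
  "use"(%v0) : (f32) -> ()
  %i1 = affine.apply affine_map<(d0) -> (d0 + 1)>(%i)
  %v1 = affine.load %A[%i1] : memref<8xf32>
  "use"(%v1) : (f32) -> ()
  // ... iterations %i + 2 and %i + 3 analogously ...
}
```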

LogicalResult mlir::affine::loopUnrollUpToFactor(AffineForOp forOp, uint64_t unrollFactor)
    Unrolls this loop by the specified unroll factor or its trip count, whichever is lower.

bool LLVM_ATTRIBUTE_UNUSED mlir::affine::isPerfectlyNested(ArrayRef<AffineForOp> loops)
    Returns true if loops is a perfectly nested loop nest, where the loops appear in it from outermost to innermost.

void mlir::affine::getPerfectlyNestedLoops(SmallVectorImpl<AffineForOp> &nestedLoops, AffineForOp root)
    Gets the perfectly nested sequence of loops starting at root (a nest is perfect when each loop body's first op is the next AffineForOp and its second op is the terminator).

LogicalResult mlir::affine::loopUnrollJamByFactor(AffineForOp forOp, uint64_t unrollJamFactor)
    Unrolls and jams this loop by the specified factor.
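
A sketch of unroll-and-jam on a 2-d nest (loop bounds and the "use" op are illustrative): unrolling the outer loop by 2 and jamming fuses the two resulting copies of the inner loop into one, so both outer iterations are processed per inner iteration.

```mlir
// Before:
affine.for %i = 0 to 8 {
  affine.for %j = 0 to 16 {
    "use"(%i, %j) : (index, index) -> ()
  }
}

// After loopUnrollJamByFactor(outerFor, /*unrollJamFactor=*/2), schematically:
affine.for %i = 0 to 8 step 2 {
  affine.for %j = 0 to 16 {
    "use"(%i, %j) : (index, index) -> ()
    %i1 = affine.apply affine_map<(d0) -> (d0 + 1)>(%i)
    "use"(%i1, %j) : (index, index) -> ()
  }
}
```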

LogicalResult mlir::affine::loopUnrollJamUpToFactor(AffineForOp forOp, uint64_t unrollJamFactor)
    Unrolls and jams this loop by the specified factor or by the trip count (if constant), whichever is lower.

LogicalResult mlir::affine::promoteIfSingleIteration(AffineForOp forOp)
    Promotes the loop body of an AffineForOp to its containing block if the loop is known to have a single iteration.
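
For illustration (a minimal sketch with made-up operands): a loop whose constant trip count is 1 is removed entirely, with the induction variable replaced by its lower bound value.

```mlir
// Before: trip count is exactly 1.
affine.for %i = 0 to 1 {
  %v = affine.load %A[%i] : memref<4xf32>
  "use"(%v) : (f32) -> ()
}

// After a successful promoteIfSingleIteration(forOp): the body is moved into
// the containing block and %i is replaced by the lower bound value, 0.
%c0 = arith.constant 0 : index
%v = affine.load %A[%c0] : memref<4xf32>
"use"(%v) : (f32) -> ()
```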

void mlir::affine::promoteSingleIterationLoops(func::FuncOp f)
    Promotes all single-iteration AffineForOps in the function, i.e., moves their bodies into the containing block.

LogicalResult mlir::affine::affineForOpBodySkew(AffineForOp forOp, ArrayRef<uint64_t> shifts, bool unrollPrologueEpilogue = false)
    Skews the operations in an affine.for's body with the specified operation-wise shifts.

void mlir::affine::getTileableBands(func::FuncOp f, std::vector<SmallVector<AffineForOp, 6>> *bands)
    Identifies valid and profitable bands of loops to tile.

LogicalResult mlir::affine::tilePerfectlyNested(MutableArrayRef<AffineForOp> input, ArrayRef<unsigned> tileSizes, SmallVectorImpl<AffineForOp> *tiledNest = nullptr)
    Tiles the specified band of perfectly nested loops, creating tile-space loops and intra-tile loops.
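
A sketch of tiling a 2-d band with tile sizes {32, 32} (the bounds and maps shown are illustrative): the band is rewritten into two inter-tile (tile-space) loops stepping by the tile size, and two intra-tile loops iterating within each tile.

```mlir
#lb = affine_map<(d0) -> (d0)>
#ub = affine_map<(d0) -> (d0 + 32)>

// Before:
affine.for %i = 0 to 256 {
  affine.for %j = 0 to 256 {
    "use"(%i, %j) : (index, index) -> ()
  }
}

// After tilePerfectlyNested(band, /*tileSizes=*/{32, 32}), schematically:
affine.for %ii = 0 to 256 step 32 {
  affine.for %jj = 0 to 256 step 32 {
    affine.for %i = #lb(%ii) to #ub(%ii) {
      affine.for %j = #lb(%jj) to #ub(%jj) {
        "use"(%i, %j) : (index, index) -> ()
      }
    }
  }
}
```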

LogicalResult mlir::affine::tilePerfectlyNestedParametric(MutableArrayRef<AffineForOp> input, ArrayRef<Value> tileSizes, SmallVectorImpl<AffineForOp> *tiledNest = nullptr)
    Tiles the specified band of perfectly nested loops, creating tile-space loops and intra-tile loops, using SSA values as tiling parameters.

void mlir::affine::interchangeLoops(AffineForOp forOpA, AffineForOp forOpB)
    Performs loop interchange on forOpA and forOpB.
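
A sketch of the effect (legal only when no dependence is violated; the access pattern is illustrative):

```mlir
// Before: forOpA is the %i loop, forOpB the %j loop.
affine.for %i = 0 to 64 {
  affine.for %j = 0 to 128 {
    %v = affine.load %A[%i, %j] : memref<64x128xf32>
    "use"(%v) : (f32) -> ()
  }
}

// After interchangeLoops(forOpA, forOpB): %j becomes the outer loop.
affine.for %j = 0 to 128 {
  affine.for %i = 0 to 64 {
    %v = affine.load %A[%i, %j] : memref<64x128xf32>
    "use"(%v) : (f32) -> ()
  }
}
```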

bool mlir::affine::isValidLoopInterchangePermutation(ArrayRef<AffineForOp> loops, ArrayRef<unsigned> loopPermMap)
    Checks whether the loop interchange permutation loopPermMap, applied to the perfectly nested sequence of loops in loops, would violate dependences (loop i in loops is mapped to location j = loopPermMap[i] in the interchange).

unsigned mlir::affine::permuteLoops(MutableArrayRef<AffineForOp> inputNest, ArrayRef<unsigned> permMap)
    Performs a loop permutation on the perfectly nested loop nest inputNest (whose loops appear from outer to inner) as specified by permMap: loop i in inputNest is mapped to location permMap[i], where positions 0, 1, ... run from the outermost loop inward.

AffineForOp mlir::affine::sinkSequentialLoops(AffineForOp forOp)

SmallVector<SmallVector<AffineForOp, 8>, 8> mlir::affine::tile(ArrayRef<AffineForOp> forOps, ArrayRef<uint64_t> sizes, ArrayRef<AffineForOp> targets)
    Performs tiling of imperfectly nested loops (with interchange) by strip-mining the forOps by sizes and sinking them, in their order of occurrence in forOps, under each of the targets.

SmallVector<AffineForOp, 8> mlir::affine::tile(ArrayRef<AffineForOp> forOps, ArrayRef<uint64_t> sizes, AffineForOp target)
    Performs tiling (with interchange) by strip-mining the forOps by sizes and sinking them, in their order of occurrence in forOps, under target.

LogicalResult mlir::affine::affineDataCopyGenerate(Block::iterator begin, Block::iterator end, const AffineCopyOptions &copyOptions, std::optional<Value> filterMemRef, DenseSet<Operation *> &copyNests)
    Performs explicit copying for the contiguous sequence of operations in the block iterator range [begin, end), where end cannot be past the block's terminator (additional operations are potentially inserted right before end).

LogicalResult mlir::affine::affineDataCopyGenerate(AffineForOp forOp, const AffineCopyOptions &copyOptions, std::optional<Value> filterMemRef, DenseSet<Operation *> &copyNests)
    A convenience version of affineDataCopyGenerate for all ops in the body of an AffineForOp.

LogicalResult mlir::affine::generateCopyForMemRegion(const MemRefRegion &memrefRegion, Operation *analyzedOp, const AffineCopyOptions &copyOptions, CopyGenerateResult &result)
    Similar to affineDataCopyGenerate, but works with a single memref region.

LogicalResult mlir::affine::coalesceLoops(MutableArrayRef<AffineForOp> loops)
    Replaces a perfect nest of "for" loops with a single linearized loop.
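
A sketch of coalescing a 2-d nest (bounds and the "use" op are illustrative): the two loops become a single loop over the linearized iteration space, and the original induction variables are recovered by delinearizing the new one.

```mlir
// Before: 4 x 8 = 32 iterations.
affine.for %i = 0 to 4 {
  affine.for %j = 0 to 8 {
    "use"(%i, %j) : (index, index) -> ()
  }
}

// After coalesceLoops({forI, forJ}), schematically:
affine.for %k = 0 to 32 {
  %i = affine.apply affine_map<(d0) -> (d0 floordiv 8)>(%k)
  %j = affine.apply affine_map<(d0) -> (d0 mod 8)>(%k)
  "use"(%i, %j) : (index, index) -> ()
}
```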

void mlir::affine::mapLoopToProcessorIds(scf::ForOp forOp, ArrayRef<Value> processorId, ArrayRef<Value> numProcessors)
    Maps forOp for execution on a parallel grid of virtual processor ids of size given by numProcessors.
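
Schematically (a sketch, with %pid and %nprocs standing for one processor-id/num-processors pair): the loop's lower bound is offset by the processor id scaled by the step, and the step is scaled by the number of processors, yielding a cyclic distribution of iterations across processors.

```mlir
// Before:
scf.for %i = %lb to %ub step %step {
  "use"(%i) : (index) -> ()
}

// After mapLoopToProcessorIds(forOp, {%pid}, {%nprocs}), schematically:
%offset = arith.muli %pid, %step : index
%newLb = arith.addi %lb, %offset : index
%newStep = arith.muli %nprocs, %step : index
scf.for %i = %newLb to %ub step %newStep {
  "use"(%i) : (index) -> ()
}
```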

void mlir::affine::gatherLoops(func::FuncOp func, std::vector<SmallVector<AffineForOp, 2>> &depthToLoops)
    Gathers all AffineForOps in the func.func, grouped by loop depth.

AffineForOp mlir::affine::createCanonicalizedAffineForOp(OpBuilder b, Location loc, ValueRange lbOperands, AffineMap lbMap, ValueRange ubOperands, AffineMap ubMap, int64_t step = 1)
    Creates an AffineForOp while ensuring that the lower and upper bounds are canonicalized, i.e., unused and duplicate operands are removed, any constant operands are propagated/folded in, and duplicate bound maps are dropped.

LogicalResult mlir::affine::separateFullTiles(MutableArrayRef<AffineForOp> nest, SmallVectorImpl<AffineForOp> *fullTileNest = nullptr)
    Separates full tiles from partial tiles for a perfect nest by generating a conditional guard that selects between the full-tile version and the partial-tile version using an AffineIfOp.
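
Schematically (the integer set #is_full_tile and its operands are placeholders, elided here): the nest is versioned under an affine.if so that the common case runs a copy specialized to full tiles, while boundary tiles fall through to the original code.

```mlir
// After separateFullTiles(nest), schematically:
affine.if #is_full_tile(%ii, %jj) {
  // clone of the nest specialized to full tiles (constant trip counts)
} else {
  // original nest, handling partial (boundary) tiles
}
```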

LogicalResult mlir::affine::coalescePerfectlyNestedAffineLoops(AffineForOp op)
    Walks an affine.for op to find a perfectly nested loop band to coalesce.