MLIR 23.0.0git
MemRefUtils.h
//===- MemRefUtils.h - MemRef transformation utilities ----------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This header file defines prototypes for various transformation utilities for
// the MemRefOps dialect. These are not passes by themselves but are used
// either by passes, optimization sequences, or in turn by other transformation
// utilities.
//
//===----------------------------------------------------------------------===//

#ifndef MLIR_DIALECT_MEMREF_UTILS_MEMREFUTILS_H
#define MLIR_DIALECT_MEMREF_UTILS_MEMREFUTILS_H
#include "mlir/Dialect/MemRef/IR/MemRef.h"

namespace mlir {

class MemRefType;

/// A value with a memref type.
using MemrefValue = TypedValue<BaseMemRefType>;

namespace memref {

/// Returns true if the memref type has a static shape and represents a
/// contiguous chunk of memory.
bool isStaticShapeAndContiguousRowMajor(MemRefType type);

/// Controls how the per-dimension contribution to `linearizedSize` is divided
/// by `dstBits / srcBits` when scaling down to the emulated type. The offset
/// and intra-data offset are unaffected; they always use floor division and
/// remainder, respectively.
/// - `Floor`: round each `stride * size / scaler` down. Suitable for indexing
///   computations where a partial trailing byte is not included.
/// - `Ceil`: round up, matching the result-shape size used by narrow-type
///   memref type conversion (see `getLinearizedShape`). Use this when the
///   caller needs the linearized size to cover all source elements, e.g. when
///   building the size attribute of a converted `memref.reinterpret_cast`.
enum class LinearizedDivKind { Floor, Ceil };

/// For a `memref` with `offset`, `sizes` and `strides`, returns the
/// offset, size, and potentially the size padded at the front to use for the
/// linearized `memref`.
/// - If the linearization is done for emulating load/stores of
///   element type with bitwidth `srcBits` using element type with
///   bitwidth `dstBits`, the linearized offset and size are
///   scaled down by `dstBits`/`srcBits`.
/// - If `indices` is provided, it represents the position in the
///   original `memref` being accessed. The method then returns the
///   index to use in the linearized `memref`. The linearized index
///   is also scaled down by `dstBits`/`srcBits`. If `indices` is not
///   provided, 0 is returned for the linearized index.
/// - If the size of the load/store is smaller than the linearized memref
///   load/store, the emulated memory region is larger than the actual memory
///   region needed. `intraDataOffset` returns the element offset of the
///   relevant data at the beginning.
/// - `sizeDivKind` selects floor vs. ceil rounding for the `linearizedSize`
///   contribution from each dimension (see `LinearizedDivKind`).
struct LinearizedMemRefInfo {
  OpFoldResult linearizedOffset;
  OpFoldResult linearizedSize;
  OpFoldResult intraDataOffset;
};
std::pair<LinearizedMemRefInfo, OpFoldResult> getLinearizedMemRefOffsetAndSize(
    OpBuilder &builder, Location loc, int srcBits, int dstBits,
    OpFoldResult offset, ArrayRef<OpFoldResult> sizes,
    ArrayRef<OpFoldResult> strides, ArrayRef<OpFoldResult> indices = {},
    LinearizedDivKind sizeDivKind = LinearizedDivKind::Floor);

/// For a `memref` with `offset` and `sizes`, returns the
/// offset and size to use for the linearized `memref`, assuming that
/// the strides are computed from a row-major ordering of the sizes.
/// - If the linearization is done for emulating load/stores of
///   element type with bitwidth `srcBits` using element type with
///   bitwidth `dstBits`, the linearized offset and size are
///   scaled down by `dstBits`/`srcBits`.
/// - `sizeDivKind` selects floor vs. ceil rounding for the `linearizedSize`
///   contribution from each dimension (see `LinearizedDivKind`).
LinearizedMemRefInfo getLinearizedMemRefOffsetAndSize(
    OpBuilder &builder, Location loc, int srcBits, int dstBits,
    OpFoldResult offset, ArrayRef<OpFoldResult> sizes,
    LinearizedDivKind sizeDivKind = LinearizedDivKind::Floor);

/// Track temporary allocations that are never read from. Such allocations,
/// together with their associated stores, can be removed.
void eraseDeadAllocAndStores(RewriterBase &rewriter, Operation *parentOp);

/// Given a set of sizes, return the suffix product.
///
/// When applied to slicing, this is the calculation needed to derive the
/// strides (i.e. the number of linear indices to skip along the (k-1) most
/// minor dimensions to get the next k-slice).
///
/// This is the basis to linearize an n-D offset confined to `[0 ... sizes]`.
///
/// Assuming `sizes` is `[s0, ..., sn]`, return the vector<Value>
/// `[s1 * ... * sn, s2 * ... * sn, ..., sn, 1]`.
///
/// It is the caller's responsibility to provide valid OpFoldResult type values
/// and construct valid IR in the end.
///
/// `sizes` elements are asserted to be non-negative.
///
/// Returns an empty vector if `sizes` is empty.
///
/// The function emits an IR block which computes the suffix product for the
/// provided sizes.
SmallVector<OpFoldResult>
computeSuffixProductIRBlock(Location loc, OpBuilder &builder,
                            ArrayRef<OpFoldResult> sizes);

inline SmallVector<OpFoldResult>
computeStridesIRBlock(Location loc, OpBuilder &builder,
                      ArrayRef<OpFoldResult> sizes) {
  return computeSuffixProductIRBlock(loc, builder, sizes);
}

/// Walk up the source chain until an operation that changes/defines the view of
/// memory is found (i.e. skip operations that alias the entire view).
MemrefValue skipFullyAliasingOperations(MemrefValue source);

/// Checks if two (memref) values are the same or statically known to alias
/// the same region of memory.
inline bool isSameViewOrTrivialAlias(MemrefValue a, MemrefValue b) {
  return skipFullyAliasingOperations(a) == skipFullyAliasingOperations(b);
}

/// Walk up the source chain until we find an operation that is not a view of
/// the source memref (i.e. implements ViewLikeOpInterface).
MemrefValue skipViewLikeOps(MemrefValue source);

/// Given the `indices` of a load/store operation where the memref is a result
/// of an expand_shape op, returns the indices w.r.t. the source memref of the
/// expand_shape op in `sourceIndices`. For example,
///
/// %0 = ... : memref<12x42xf32>
/// %1 = memref.expand_shape %0 [[0, 1], [2]]
///     : memref<12x42xf32> into memref<2x6x42xf32>
/// %2 = load %1[%i1, %i2, %i3] : memref<2x6x42xf32>
///
/// could be folded into
///
/// %2 = load %0[6 * i1 + i2, %i3] : memref<12x42xf32>
///
/// If `startsInbounds` is true, optimizations that rely on all indices being
/// non-negative and less than the corresponding memref dimension may be
/// performed.
void resolveSourceIndicesExpandShape(Location loc, PatternRewriter &rewriter,
                                     memref::ExpandShapeOp expandShapeOp,
                                     ValueRange indices,
                                     SmallVectorImpl<Value> &sourceIndices,
                                     bool startsInbounds);

/// Given the `indices` of a load/store operation where the memref is a result
/// of a collapse_shape op, returns the indices w.r.t. the source memref of
/// the collapse_shape op in `sourceIndices`. For example,
///
/// %0 = ... : memref<2x6x42xf32>
/// %1 = memref.collapse_shape %0 [[0, 1], [2]]
///     : memref<2x6x42xf32> into memref<12x42xf32>
/// %2 = load %1[%i1, %i2] : memref<12x42xf32>
///
/// could be folded into
///
/// %2 = load %0[%i1 / 6, %i1 % 6, %i2] : memref<2x6x42xf32>
///
/// If `startsInbounds` is true, optimizations that rely on all indices being
/// non-negative and less than the corresponding memref dimension may be
/// performed.
void resolveSourceIndicesCollapseShape(Location loc, PatternRewriter &rewriter,
                                       memref::CollapseShapeOp collapseShapeOp,
                                       ValueRange indices,
                                       SmallVectorImpl<Value> &sourceIndices,
                                       bool startsInbounds);

/// Given the `indices` of a load/store operation where the memref is a result
/// of a rank-reducing full subview op, returns the indices w.r.t. the source
/// memref of the memref.subview op. For example,
///
/// %alias = memref.subview %src[0, 0, 0][1, 2, 2][1, 1, 1]
///     : memref<1x2x2xf32> to memref<2x2xf32>
/// %val = memref.load %alias[%i, %j] : memref<2x2xf32>
///
/// could be folded into
///
/// %val = memref.load %src[0, %i, %j] : memref<1x2x2xf32>
LogicalResult resolveSourceIndicesRankReducingSubview(
    Location loc, OpBuilder &b, memref::SubViewOp subViewOp, ValueRange indices,
    SmallVectorImpl<Value> &sourceIndices);

} // namespace memref
} // namespace mlir

#endif // MLIR_DIALECT_MEMREF_UTILS_MEMREFUTILS_H