1//===- SuperVectorize.cpp - Vectorize Pass Impl ---------------------------===//
2//
3// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4// See https://llvm.org/LICENSE.txt for license information.
5// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6//
7//===----------------------------------------------------------------------===//
8//
9// This file implements vectorization of loops, operations and data types to
10// a target-independent, n-D super-vector abstraction.
11//
12//===----------------------------------------------------------------------===//
13
14#include "mlir/Dialect/Affine/Passes.h"
15
16#include "mlir/Analysis/SliceAnalysis.h"
17#include "mlir/Dialect/Affine/Analysis/AffineAnalysis.h"
18#include "mlir/Dialect/Affine/Analysis/LoopAnalysis.h"
19#include "mlir/Dialect/Affine/Analysis/NestedMatcher.h"
20#include "mlir/Dialect/Affine/IR/AffineOps.h"
21#include "mlir/Dialect/Affine/Utils.h"
22#include "mlir/Dialect/Arith/IR/Arith.h"
23#include "mlir/Dialect/Func/IR/FuncOps.h"
24#include "mlir/Dialect/Vector/IR/VectorOps.h"
25#include "mlir/Dialect/Vector/Utils/VectorUtils.h"
26#include "mlir/IR/IRMapping.h"
27#include "mlir/Pass/Pass.h"
28#include "mlir/Support/LLVM.h"
29#include "llvm/ADT/STLExtras.h"
30#include "llvm/Support/Debug.h"
31#include <optional>
32
33namespace mlir {
34namespace affine {
35#define GEN_PASS_DEF_AFFINEVECTORIZE
36#include "mlir/Dialect/Affine/Passes.h.inc"
37} // namespace affine
38} // namespace mlir
39
40using namespace mlir;
41using namespace affine;
42using namespace vector;
43
44///
45/// Implements a high-level vectorization strategy on a Function.
46/// The abstraction used is that of super-vectors, which provide a single,
47/// compact representation, in the vector types, of information that is
48/// expected to reduce the impact of the phase ordering problem.
49///
50/// Vector granularity:
51/// ===================
52/// This pass is designed to perform vectorization at a super-vector
53/// granularity. A super-vector is loosely defined as a vector type that is a
54/// multiple of a "good" vector size so the HW can efficiently implement a set
55/// of high-level primitives. Multiple is understood along any dimension; e.g.
56/// both vector<16xf32> and vector<2x8xf32> are valid super-vectors for a
57/// vector<8xf32> HW vector. Note that a "good vector size so the HW can
58/// efficiently implement a set of high-level primitives" is not necessarily an
59/// integer multiple of actual hardware registers. We leave details of this
60/// distinction unspecified for now.
61///
62/// Some may prefer the terminology a "tile of HW vectors". In this case, one
63/// should note that super-vectors implement an "always full tile" abstraction.
64/// They guarantee no partial-tile separation is necessary by relying on a
65/// high-level copy-reshape abstraction that we call vector.transfer. This
66/// copy-reshape operation is also responsible for performing layout
67/// transposition if necessary. In the general case this will require a scoped
68/// allocation in some notional local memory.
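///
/// As a rough sketch (not output of this pass; %pad denotes a padding value
/// for out-of-bounds elements), a layout-transposing transfer could look like:
/// ```mlir
/// %v = vector.transfer_read %A[%i, %j], %pad
/// {permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
/// : memref<?x?xf32>, vector<32x256xf32>
/// ```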
69///
70/// Whatever the mental model one prefers to use for this abstraction, the key
71/// point is that we burn into a single, compact, representation in the vector
72/// types, information that is expected to reduce the impact of the phase
73/// ordering problem. Indeed, a vector type conveys information that:
74/// 1. the associated loops have dependency semantics that do not prevent
75/// vectorization;
76/// 2. the associated loops have been sliced in chunks of static sizes that are
77/// compatible with vector sizes (i.e. similar to unroll-and-jam);
78/// 3. the inner loops, in the unroll-and-jam analogy of 2, are captured by
79/// the vector type and no vectorization-hampering transformations can be
80/// applied to them anymore;
82/// 4. the underlying memrefs are accessed in some notional contiguous way
83/// that allows loading into vectors with some amount of spatial locality;
84/// In other words, super-vectorization provides a level of separation of
85/// concern by way of opacity to subsequent passes. This has the effect of
86/// encapsulating and propagating vectorization constraints down the list of
87/// passes until we are ready to lower further.
88///
89/// For a particular target, a notion of minimal n-d vector size will be
90/// specified and vectorization targets a multiple of those. In the following
91/// paragraph, let "k ." represent "a multiple of", to be understood as a
92/// multiple in the same dimension (e.g. vector<16 x k . 128> summarizes
93/// vector<16 x 128>, vector<16 x 256>, vector<16 x 1024>, etc).
94///
95/// Some non-exhaustive notable super-vector sizes of interest include:
96/// - CPU: vector<k . HW_vector_size>,
97/// vector<k' . core_count x k . HW_vector_size>,
98/// vector<socket_count x k' . core_count x k . HW_vector_size>;
99/// - GPU: vector<k . warp_size>,
100/// vector<k . warp_size x float2>,
101/// vector<k . warp_size x float4>,
102/// vector<k . warp_size x 4 x 4 x 4> (for tensor_core sizes).
103///
104/// Loops and operations are emitted that operate on those super-vector shapes.
105/// Subsequent lowering passes will materialize to actual HW vector sizes. These
106/// passes are expected to be (gradually) more target-specific.
107///
108/// At a high level, a vectorized load in a loop will resemble:
109/// ```mlir
110/// affine.for %i = ? to ? step ? {
111/// %v_a = vector.transfer_read A[%i] : memref<?xf32>, vector<128xf32>
112/// }
113/// ```
114/// It is the responsibility of the implementation of vector.transfer_read to
115/// materialize vector registers from the original scalar memrefs. A later (more
116/// target-dependent) lowering pass will materialize to actual HW vector sizes.
117/// This lowering may occur at different times:
118/// 1. at the MLIR level into a combination of loops, unrolling, DmaStartOp +
119/// DmaWaitOp + vectorized operations for data transformations and shuffle;
120/// thus opening opportunities for unrolling and pipelining. This is an
121/// instance of library call "whiteboxing"; or
122/// 2. later in a target-specific lowering pass or hand-written library
123/// call, achieving full separation of concerns. This is an instance of a
124/// library call; or
125/// 3. a mix of both, e.g. based on a model.
126/// In the future, these operations will expose a contract to constrain the
127/// search on vectorization patterns and sizes.
128///
129/// Occurrence of super-vectorization in the compiler flow:
130/// =======================================================
131/// This is an active area of investigation. We start with 2 remarks to position
132/// super-vectorization in the context of existing ongoing work: LLVM VPLAN
133/// and LLVM SLP Vectorizer.
134///
135/// LLVM VPLAN:
136/// -----------
137/// The astute reader may have noticed that in the limit, super-vectorization
138/// can be applied at a similar time and with similar objectives as VPLAN.
139/// Consider, for instance, a traditional polyhedral compilation flow (e.g. the
140/// PPCG project uses ISL to provide dependence analysis, multi-level
141/// scheduling + tiling, lifting of footprints to fast memory, communication
142/// synthesis, mapping and register optimizations), with vectorization applied
143/// before unrolling. When vectorization is applied at this *late* level in a typical
144/// polyhedral flow, and is instantiated with actual hardware vector sizes,
145/// super-vectorization is expected to match (or subsume) the type of patterns
146/// that LLVM's VPLAN aims at targeting. The main difference here is that MLIR
147/// is higher level and our implementation should be significantly simpler. Also
148/// note that in this mode, recursive patterns are probably a bit of an overkill
149/// although it is reasonable to expect that mixing a bit of outer loop and
150/// inner loop vectorization + unrolling will provide interesting choices to
151/// MLIR.
152///
153/// LLVM SLP Vectorizer:
154/// --------------------
155/// Super-vectorization however is not meant to be usable in a similar fashion
156/// to the SLP vectorizer. The main difference lies in the information that
157/// both vectorizers use: super-vectorization examines contiguity of memory
158/// references along fastest varying dimensions and loops with recursive nested
159/// patterns capturing imperfectly-nested loop nests; the SLP vectorizer, on
160/// the other hand, performs flat pattern matching inside a single unrolled loop
161/// body and stitches together pieces of load and store operations into full
162/// 1-D vectors. We envision that the SLP vectorizer is a good way to capture
163/// innermost loop, control-flow dependent patterns that super-vectorization may
164/// not be able to capture easily. In other words, super-vectorization does not
165/// aim at replacing the SLP vectorizer and the two solutions are complementary.
166///
167/// Ongoing investigations:
168/// -----------------------
169/// We discuss the following *early* places where super-vectorization is
170/// applicable and touch on the expected benefits and risks. We list the
171/// opportunities in the context of the traditional polyhedral compiler flow
172/// described in PPCG. There are essentially 6 places in the MLIR pass pipeline
173/// we expect to experiment with super-vectorization:
174/// 1. Right after language lowering to MLIR: this is the earliest time where
175/// super-vectorization is expected to be applied. At this level, all the
176/// language/user/library-level annotations are available and can be fully
177/// exploited. Examples include loop-type annotations (such as parallel,
178/// reduction, scan, dependence distance vector, vectorizable) as well as
179/// memory access annotations (such as non-aliasing writes guaranteed,
180/// indirect accesses that are permutations by construction), or the fact
181/// that a particular operation is prescribed atomic by the user. At this
182/// level, anything that enriches what dependence analysis can do should be
183/// aggressively exploited. At this level we are close to having explicit
184/// vector types in the language, except we do not impose that burden on the
185/// programmer/library: we derive information from scalar code + annotations.
186/// 2. After dependence analysis and before polyhedral scheduling: the
187/// information that supports vectorization does not need to be supplied by a
188/// higher level of abstraction. Traditional dependence analysis is available
189/// in MLIR and will be used to drive vectorization and cost models.
190///
191/// Let's pause here and remark that applying super-vectorization as described
192/// in 1. and 2. presents clear opportunities and risks:
193/// - the opportunity is that vectorization is burned into the type system and
194/// is protected from the adverse effects of loop scheduling, tiling, loop
195/// interchange and all passes downstream. Provided that subsequent passes are
196/// able to operate on vector types, the vector shapes, associated loop
197/// iterator properties, alignment, and contiguity of fastest varying
198/// dimensions are preserved until we lower the super-vector types. We expect
199/// this to significantly rein in the adverse effects of phase ordering.
200/// - the risks are that a. all passes after super-vectorization have to work
201/// on elemental vector types (note that this is always true, wherever
202/// vectorization is applied) and b. that imposing vectorization constraints
203/// too early may be overall detrimental to loop fusion, tiling and other
204/// transformations because the dependence distances are coarsened when
205/// operating on elemental vector types. For this reason, the pattern
206/// profitability analysis should include a component that also captures the
207/// maximal amount of fusion available under a particular pattern. This is
208/// still at the stage of rough ideas but in this context, search is our
209/// friend as the Tensor Comprehensions and auto-TVM contributions
210/// demonstrated previously.
211/// The bottom line is that we do not yet have good answers for the above but
212/// we aim at making it easy to answer such questions.
213///
214/// Back to our listing, the last places where early super-vectorization makes
215/// sense are:
216/// 3. right after polyhedral-style scheduling: PLUTO-style algorithms are known
217/// to improve locality and parallelism and to be configurable (e.g. max-fuse,
218/// smart-fuse, etc.). They can also have adverse effects on contiguity
219/// properties that are required for vectorization but the vector.transfer
220/// copy-reshape-pad-transpose abstraction is expected to help recapture
221/// these properties.
222/// 4. right after polyhedral-style scheduling+tiling;
223/// 5. right after scheduling+tiling+rescheduling: points 4 and 5 represent
224/// probably the most promising places because applying tiling achieves a
225/// separation of concerns that allows rescheduling to worry less about
226/// locality and more about parallelism and distribution (e.g. min-fuse).
227///
228/// At these levels the risk-reward looks different: on one hand we probably
229/// lost a good deal of language/user/library-level annotation; on the other
230/// hand we gained parallelism and locality through scheduling and tiling.
231/// However we probably want to ensure tiling is compatible with the
232/// full-tile-only abstraction used in super-vectorization or suffer the
233/// consequences. It is too early to place bets on what will win but we expect
234/// super-vectorization to be the right abstraction to allow exploring at all
235/// these levels. And again, search is our friend.
236///
237/// Lastly, we mention it again here:
238/// 6. as an MLIR-based alternative to VPLAN.
239///
240/// Lowering, unrolling, pipelining:
241/// ================================
242/// TODO: point to the proper places.
243///
244/// Algorithm:
245/// ==========
246/// The algorithm proceeds in a few steps:
247/// 1. defining super-vectorization patterns and matching them on the tree of
248/// AffineForOp. A super-vectorization pattern is defined as a recursive
249/// data structure that matches and captures nested, imperfectly-nested
250/// loops that have a. conformable loop annotations attached (e.g. parallel,
251/// reduction, vectorizable, ...) as well as b. all contiguous load/store
252/// operations along a specified minor dimension (not necessarily the
253/// fastest varying);
254/// 2. analyzing those patterns for profitability (TODO: and
255/// interference);
256/// 3. then, for each pattern in order:
257/// a. applying iterative rewriting of the loops and all their nested
258/// operations in topological order. Rewriting is implemented by
259/// coarsening the loops and converting operations and operands to their
260/// vector forms. Processing operations in topological order is relatively
261/// simple due to the structured nature of the control-flow
262/// representation. This order ensures that all the operands of a given
263/// operation have been vectorized before the operation itself in a single
264/// traversal, except for operands defined outside of the loop nest. The
265/// algorithm can convert the following operations to their vector form:
266/// * Affine load and store operations are converted to opaque vector
267/// transfer read and write operations.
268/// * Scalar constant operations/operands are converted to vector
269/// constant operations (splat).
270/// * Uniform operands (only induction variables of loops not mapped to
271/// a vector dimension, or operands defined outside of the loop nest
272/// for now) are broadcasted to a vector.
273/// TODO: Support more uniform cases.
274/// * Affine for operations with 'iter_args' are vectorized by
275/// vectorizing their 'iter_args' operands and results.
276/// TODO: Support more complex loops with divergent lbs and/or ubs.
277/// * The remaining operations in the loop nest are vectorized by
278/// widening their scalar types to vector types.
279/// b. if everything under the root AffineForOp in the current pattern
280/// is vectorized properly, we commit that loop to the IR and remove the
281/// scalar loop. Otherwise, we discard the vectorized loop and keep the
282/// original scalar loop.
283/// c. vectorization is applied on the next pattern in the list. Because
284/// pattern interference avoidance is not yet implemented and we do not
285/// support further vectorizing an already vectorized load, we need to
286/// re-verify that the pattern is still vectorizable. This is expected to
287/// make cost models more difficult to write and is subject to improvement
288/// in the future.
289///
290/// Choice of loop transformation to support the algorithm:
291/// =======================================================
292/// The choice of loop transformation to apply for coarsening vectorized loops
293/// is still subject to exploratory tradeoffs. In particular, say we want to
294/// vectorize by a factor of 128; we want to transform the following input:
295/// ```mlir
296/// affine.for %i = %M to %N {
297/// %a = affine.load %A[%i] : memref<?xf32>
298/// }
299/// ```
300///
301/// Traditionally, one would vectorize late (after scheduling, tiling,
302/// memory promotion etc) say after stripmining (and potentially unrolling in
303/// the case of LLVM's SLP vectorizer):
304/// ```mlir
305/// affine.for %i = floor(%M, 128) to ceil(%N, 128) {
306/// affine.for %ii = max(%M, 128 * %i) to min(%N, 128*%i + 127) {
307/// %a = affine.load %A[%ii] : memref<?xf32>
308/// }
309/// }
310/// ```
311///
312/// Instead, we seek to vectorize early and freeze vector types before
313/// scheduling, so we want to generate a pattern that resembles:
314/// ```mlir
315/// affine.for %i = ? to ? step ? {
316/// %v_a = vector.transfer_read %A[%i] : memref<?xf32>, vector<128xf32>
317/// }
318/// ```
319///
320/// i. simply dividing the lower / upper bounds by 128 creates issues
321/// when representing expressions such as ii + 1 because now we only
322/// have access to original values that have been divided. Additional
323/// information is needed to specify accesses at below-128 granularity;
324/// ii. another alternative is to coarsen the loop step but this may have
325/// consequences on dependence analysis and fusability of loops: fusable
326/// loops probably need to have the same step (because we don't want to
327/// stripmine/unroll to enable fusion).
328/// As a consequence, we choose to represent the coarsening using the loop
329/// step for now and reevaluate in the future. Note that we can renormalize
330/// loop steps later if/when we have evidence that they are problematic.
331///
332/// For the simple strawman example above, vectorizing for a 1-D vector
333/// abstraction of size 128 returns code similar to:
334/// ```mlir
335/// affine.for %i = %M to %N step 128 {
336/// %v_a = vector.transfer_read %A[%i] : memref<?xf32>, vector<128xf32>
337/// }
338/// ```
339///
340/// Unsupported cases, extensions, and work in progress (help welcome :-) ):
341/// ========================================================================
342/// 1. lowering to concrete vector types for various HW;
343/// 2. reduction support for n-D vectorization and non-unit steps;
344/// 3. non-effecting padding during vector.transfer_read and filter during
345/// vector.transfer_write;
346/// 4. misalignment support for vector.transfer_read / vector.transfer_write
347/// (hopefully without read-modify-writes);
348/// 5. control-flow support;
349/// 6. cost-models, heuristics and search;
350/// 7. Op implementation, extensions and implication on memref views;
351/// 8. many TODOs left around.
352///
353/// Examples:
354/// =========
355/// Consider the following Function:
356/// ```mlir
357/// func @vector_add_2d(%M : index, %N : index) -> f32 {
358/// %A = memref.alloc(%M, %N) : memref<?x?xf32, 0>
359/// %B = memref.alloc(%M, %N) : memref<?x?xf32, 0>
360/// %C = memref.alloc(%M, %N) : memref<?x?xf32, 0>
361/// %f1 = arith.constant 1.0 : f32
362/// %f2 = arith.constant 2.0 : f32
363/// affine.for %i0 = 0 to %M {
364/// affine.for %i1 = 0 to %N {
365/// // non-scoped %f1
366/// affine.store %f1, %A[%i0, %i1] : memref<?x?xf32, 0>
367/// }
368/// }
369/// affine.for %i2 = 0 to %M {
370/// affine.for %i3 = 0 to %N {
371/// // non-scoped %f2
372/// affine.store %f2, %B[%i2, %i3] : memref<?x?xf32, 0>
373/// }
374/// }
375/// affine.for %i4 = 0 to %M {
376/// affine.for %i5 = 0 to %N {
377/// %a5 = affine.load %A[%i4, %i5] : memref<?x?xf32, 0>
378/// %b5 = affine.load %B[%i4, %i5] : memref<?x?xf32, 0>
379/// %s5 = arith.addf %a5, %b5 : f32
380/// // non-scoped %f1
381/// %s6 = arith.addf %s5, %f1 : f32
382/// // non-scoped %f2
383/// %s7 = arith.addf %s5, %f2 : f32
384/// // diamond dependency.
385/// %s8 = arith.addf %s7, %s6 : f32
386/// affine.store %s8, %C[%i4, %i5] : memref<?x?xf32, 0>
387/// }
388/// }
389/// %c7 = arith.constant 7 : index
390/// %c42 = arith.constant 42 : index
391/// %res = memref.load %C[%c7, %c42] : memref<?x?xf32, 0>
392/// return %res : f32
393/// }
394/// ```
395///
396/// The -affine-super-vectorize pass with the following arguments:
397/// ```
398/// -affine-super-vectorize="virtual-vector-size=256 test-fastest-varying=0"
399/// ```
400///
401/// produces this standard innermost-loop vectorized code:
402/// ```mlir
403/// func @vector_add_2d(%arg0 : index, %arg1 : index) -> f32 {
404/// %0 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
405/// %1 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
406/// %2 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
407/// %cst = arith.constant 1.0 : f32
408/// %cst_0 = arith.constant 2.0 : f32
409/// affine.for %i0 = 0 to %arg0 {
410/// affine.for %i1 = 0 to %arg1 step 256 {
411/// %cst_1 = arith.constant dense<1.0> : vector<256xf32>
413/// vector.transfer_write %cst_1, %0[%i0, %i1] :
414/// vector<256xf32>, memref<?x?xf32>
415/// }
416/// }
417/// affine.for %i2 = 0 to %arg0 {
418/// affine.for %i3 = 0 to %arg1 step 256 {
419/// %cst_2 = arith.constant dense<2.0> : vector<256xf32>
421/// vector.transfer_write %cst_2, %1[%i2, %i3] :
422/// vector<256xf32>, memref<?x?xf32>
423/// }
424/// }
425/// affine.for %i4 = 0 to %arg0 {
426/// affine.for %i5 = 0 to %arg1 step 256 {
427/// %3 = vector.transfer_read %0[%i4, %i5] :
428/// memref<?x?xf32>, vector<256xf32>
429/// %4 = vector.transfer_read %1[%i4, %i5] :
430/// memref<?x?xf32>, vector<256xf32>
431/// %5 = arith.addf %3, %4 : vector<256xf32>
432/// %cst_3 = arith.constant dense<1.0> : vector<256xf32>
434/// %6 = arith.addf %5, %cst_3 : vector<256xf32>
435/// %cst_4 = arith.constant dense<2.0> : vector<256xf32>
437/// %7 = arith.addf %5, %cst_4 : vector<256xf32>
438/// %8 = arith.addf %7, %6 : vector<256xf32>
439/// vector.transfer_write %8, %2[%i4, %i5] :
440/// vector<256xf32>, memref<?x?xf32>
441/// }
442/// }
443/// %c7 = arith.constant 7 : index
444/// %c42 = arith.constant 42 : index
445/// %9 = memref.load %2[%c7, %c42] : memref<?x?xf32>
446/// return %9 : f32
447/// }
448/// ```
449///
450/// The -affine-super-vectorize pass with the following arguments:
451/// ```
452/// -affine-super-vectorize="virtual-vector-size=32,256 \
453/// test-fastest-varying=1,0"
454/// ```
455///
456/// produces this more interesting mixed outer-innermost-loop vectorized code:
457/// ```mlir
458/// func @vector_add_2d(%arg0 : index, %arg1 : index) -> f32 {
459/// %0 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
460/// %1 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
461/// %2 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
462/// %cst = arith.constant 1.0 : f32
463/// %cst_0 = arith.constant 2.0 : f32
464/// affine.for %i0 = 0 to %arg0 step 32 {
465/// affine.for %i1 = 0 to %arg1 step 256 {
466/// %cst_1 = arith.constant dense<1.0> : vector<32x256xf32>
468/// vector.transfer_write %cst_1, %0[%i0, %i1] :
469/// vector<32x256xf32>, memref<?x?xf32>
470/// }
471/// }
472/// affine.for %i2 = 0 to %arg0 step 32 {
473/// affine.for %i3 = 0 to %arg1 step 256 {
474/// %cst_2 = arith.constant dense<2.0> : vector<32x256xf32>
476/// vector.transfer_write %cst_2, %1[%i2, %i3] :
477/// vector<32x256xf32>, memref<?x?xf32>
478/// }
479/// }
480/// affine.for %i4 = 0 to %arg0 step 32 {
481/// affine.for %i5 = 0 to %arg1 step 256 {
482/// %3 = vector.transfer_read %0[%i4, %i5] :
483/// memref<?x?xf32>, vector<32x256xf32>
484/// %4 = vector.transfer_read %1[%i4, %i5] :
485/// memref<?x?xf32>, vector<32x256xf32>
486/// %5 = arith.addf %3, %4 : vector<32x256xf32>
487/// %cst_3 = arith.constant dense<1.0> : vector<32x256xf32>
489/// %6 = arith.addf %5, %cst_3 : vector<32x256xf32>
490/// %cst_4 = arith.constant dense<2.0> : vector<32x256xf32>
492/// %7 = arith.addf %5, %cst_4 : vector<32x256xf32>
493/// %8 = arith.addf %7, %6 : vector<32x256xf32>
494/// vector.transfer_write %8, %2[%i4, %i5] :
495/// vector<32x256xf32>, memref<?x?xf32>
496/// }
497/// }
498/// %c7 = arith.constant 7 : index
499/// %c42 = arith.constant 42 : index
500/// %9 = memref.load %2[%c7, %c42] : memref<?x?xf32>
501/// return %9 : f32
502/// }
503/// ```
504///
505/// Of course, much more intricate n-D imperfectly-nested patterns can be
506/// vectorized too and specified in a fully declarative fashion.
507///
508/// Reduction:
509/// ==========
510/// Vectorizing reduction loops along the reduction dimension is supported if:
511/// - the reduction kind is supported,
512/// - the vectorization is 1-D, and
513/// - the step size of the loop is equal to one.
514///
515/// Compared to the non-vector-dimension case, two additional things are done
516/// during vectorization of such loops:
517/// - The resulting vector returned from the loop is reduced to a scalar using
518/// `vector.reduction`.
519/// - In some cases a mask is applied to the vector yielded at the end of the
520/// loop to prevent garbage values from being written to the accumulator.
521///
522/// Reduction vectorization is switched off by default. It can be enabled by
523/// passing a map from loops to reductions to utility functions, or by passing
524/// `vectorize-reductions=true` to the vectorization pass.
525///
526/// Consider the following example:
527/// ```mlir
528/// func @vecred(%in: memref<512xf32>) -> f32 {
529/// %cst = arith.constant 0.000000e+00 : f32
530/// %sum = affine.for %i = 0 to 500 iter_args(%part_sum = %cst) -> (f32) {
531/// %ld = affine.load %in[%i] : memref<512xf32>
532/// %cos = math.cos %ld : f32
533/// %add = arith.addf %part_sum, %cos : f32
534/// affine.yield %add : f32
535/// }
536/// return %sum : f32
537/// }
538/// ```
539///
540/// The -affine-super-vectorize pass with the following arguments:
541/// ```
542/// -affine-super-vectorize="virtual-vector-size=128 test-fastest-varying=0 \
543/// vectorize-reductions=true"
544/// ```
545/// produces the following output:
546/// ```mlir
547/// #map = affine_map<(d0) -> (-d0 + 500)>
548/// func @vecred(%arg0: memref<512xf32>) -> f32 {
549/// %cst = arith.constant 0.000000e+00 : f32
550/// %cst_0 = arith.constant dense<0.000000e+00> : vector<128xf32>
551/// %0 = affine.for %arg1 = 0 to 500 step 128 iter_args(%arg2 = %cst_0)
552/// -> (vector<128xf32>) {
553/// // %2 is the number of iterations left in the original loop.
554/// %2 = affine.apply #map(%arg1)
555/// %3 = vector.create_mask %2 : vector<128xi1>
556/// %cst_1 = arith.constant 0.000000e+00 : f32
557/// %4 = vector.transfer_read %arg0[%arg1], %cst_1 :
558/// memref<512xf32>, vector<128xf32>
559/// %5 = math.cos %4 : vector<128xf32>
560/// %6 = arith.addf %arg2, %5 : vector<128xf32>
561/// // We filter out the effect of the last 12 elements using the mask.
562/// %7 = arith.select %3, %6, %arg2 : vector<128xi1>, vector<128xf32>
563/// affine.yield %7 : vector<128xf32>
564/// }
565/// %1 = vector.reduction <add>, %0 : vector<128xf32> into f32
566/// return %1 : f32
567/// }
568/// ```
569///
570/// Note that because of loop misalignment we needed to apply a mask to prevent
571/// the last 12 elements from affecting the final result. The mask is full of ones
572/// in every iteration except for the last one, in which it has the form
573/// `11...100...0` with 116 ones and 12 zeros.
574
575#define DEBUG_TYPE "early-vect"
576
577using llvm::dbgs;
578
579/// Forward declaration.
580static FilterFunctionType
581isVectorizableLoopPtrFactory(const DenseSet<Operation *> &parallelLoops,
582 int fastestVaryingMemRefDimension);
583
584/// Creates a vectorization pattern from the command line arguments.
585/// Up to 3-D patterns are supported.
586/// If the command line argument requests a pattern of higher order, returns an
587/// empty pattern list which will conservatively result in no vectorization.
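/// For illustration (a sketch of the intent rather than a precise
/// specification), with vectorRank == 2 the returned pattern matches two-deep
/// nests of vectorizable affine.for ops whose load/store accesses are
/// invariant or contiguous along the requested fastest-varying memref
/// dimensions, e.g.:
/// affine.for %i = 0 to %M {
/// affine.for %j = 0 to %N {
/// %a = affine.load %A[%i, %j] : memref<?x?xf32>
/// }
/// }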
588static std::optional<NestedPattern>
589makePattern(const DenseSet<Operation *> &parallelLoops, int vectorRank,
590 ArrayRef<int64_t> fastestVaryingPattern) {
591 using affine::matcher::For;
592 int64_t d0 = fastestVaryingPattern.empty() ? -1 : fastestVaryingPattern[0];
593 int64_t d1 = fastestVaryingPattern.size() < 2 ? -1 : fastestVaryingPattern[1];
594 int64_t d2 = fastestVaryingPattern.size() < 3 ? -1 : fastestVaryingPattern[2];
595 switch (vectorRank) {
596 case 1:
597 return For(isVectorizableLoopPtrFactory(parallelLoops, d0));
598 case 2:
599 return For(isVectorizableLoopPtrFactory(parallelLoops, d0),
600 For(isVectorizableLoopPtrFactory(parallelLoops, d1)));
601 case 3:
602 return For(isVectorizableLoopPtrFactory(parallelLoops, d0),
603 For(isVectorizableLoopPtrFactory(parallelLoops, d1),
604 For(isVectorizableLoopPtrFactory(parallelLoops, d2))));
605 default: {
606 return std::nullopt;
607 }
608 }
609}
610
611static NestedPattern &vectorTransferPattern() {
612 static auto pattern = affine::matcher::Op(
613 llvm::IsaPred<vector::TransferReadOp, vector::TransferWriteOp>);
614 return pattern;
615}
616
617namespace {
618
619/// Base state for the vectorize pass.
620/// Command line arguments are preempted by non-empty pass arguments.
621struct Vectorize : public affine::impl::AffineVectorizeBase<Vectorize> {
622 using Base::Base;
623
624 void runOnOperation() override;
625};
626
627} // namespace
628
629static void vectorizeLoopIfProfitable(Operation *loop, unsigned depthInPattern,
630 unsigned patternDepth,
631 VectorizationStrategy *strategy) {
632 assert(patternDepth > depthInPattern &&
633 "patternDepth is greater than depthInPattern");
634 if (patternDepth - depthInPattern > strategy->vectorSizes.size()) {
635 // Don't vectorize this loop
636 return;
637 }
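  // Worked example (illustrative): with vectorSizes = {32, 256} and
  // patternDepth == 2, the outermost matched loop (depthInPattern == 0) maps
  // to vector dimension 2 - (2 - 0) == 0 and the innermost matched loop
  // (depthInPattern == 1) maps to vector dimension 2 - (2 - 1) == 1.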
638 strategy->loopToVectorDim[loop] =
639 strategy->vectorSizes.size() - (patternDepth - depthInPattern);
640}
641
642/// Implements a simple strawman strategy for vectorization.
643/// Given a matched pattern `matches` of depth `patternDepth`, this strategy
644/// greedily assigns the fastest varying dimension ** of the vector ** to the
645/// innermost loop in the pattern.
646/// When coupled with a pattern that looks for the fastest varying dimension in
647/// load/store MemRefs, this creates a generic vectorization strategy that works
648/// for any loop in a hierarchy (outermost, innermost or intermediate).
649///
650/// TODO: In the future we should additionally increase the power of the
651/// profitability analysis along 3 directions:
652/// 1. account for loop extents (both static and parametric + annotations);
653/// 2. account for data layout permutations;
654/// 3. account for impact of vectorization on maximal loop fusion.
655/// Then we can quantify the above to build a cost model and search over
656/// strategies.
657static LogicalResult analyzeProfitability(ArrayRef<NestedMatch> matches,
658 unsigned depthInPattern,
659 unsigned patternDepth,
660 VectorizationStrategy *strategy) {
661 for (auto m : matches) {
662 if (failed(analyzeProfitability(m.getMatchedChildren(), depthInPattern + 1,
663 patternDepth, strategy))) {
664 return failure();
665 }
666 vectorizeLoopIfProfitable(m.getMatchedOperation(), depthInPattern,
667 patternDepth, strategy);
668 }
669 return success();
670}
671
672///// end TODO: Hoist to a VectorizationStrategy.cpp when appropriate /////
673
674namespace {
675
676struct VectorizationState {
677
678 VectorizationState(MLIRContext *context) : builder(context) {}
679
680 /// Registers the vector replacement of a scalar operation and its result
681 /// values. Both operations must have the same number of results.
682 ///
683 /// This utility is used to register the replacement for the vast majority of
684 /// the vectorized operations.
685 ///
686 /// Example:
687 /// * 'replaced': %0 = arith.addf %1, %2 : f32
688 /// * 'replacement': %0 = arith.addf %1, %2 : vector<128xf32>
689 void registerOpVectorReplacement(Operation *replaced, Operation *replacement);
690
691 /// Registers the vector replacement of a scalar value. The replacement
692 /// operation should have a single result, which replaces the scalar value.
693 ///
694 /// This utility is used to register the vector replacement of block arguments
695 /// and operation results which are not directly vectorized (i.e., their
696 /// scalar version still exists after vectorization), like uniforms.
697 ///
698 /// Example:
699 /// * 'replaced': block argument or operation outside of the vectorized
700 /// loop.
701 /// * 'replacement': %0 = vector.broadcast %1 : f32 to vector<128xf32>
702 void registerValueVectorReplacement(Value replaced, Operation *replacement);
703
704 /// Registers the vector replacement of a block argument (e.g., iter_args).
705 ///
706 /// Example:
707 /// * 'replaced': 'iter_arg' block argument.
708 /// * 'replacement': vectorized 'iter_arg' block argument.
709 void registerBlockArgVectorReplacement(BlockArgument replaced,
710 BlockArgument replacement);
711
712 /// Registers the scalar replacement of a scalar value. 'replacement' must be
713 /// scalar.
714 ///
715 /// This utility is used to register the replacement of block arguments
716/// or affine.apply results that are within the loop to be vectorized and will
717 /// continue being scalar within the vector loop.
718 ///
719 /// Example:
720 /// * 'replaced': induction variable of a loop to be vectorized.
721 /// * 'replacement': new induction variable in the new vector loop.
722 void registerValueScalarReplacement(Value replaced, Value replacement);
723
724 /// Registers the scalar replacement of a scalar result returned from a
725 /// reduction loop. 'replacement' must be scalar.
726 ///
727 /// This utility is used to register the replacement for scalar results of
728 /// vectorized reduction loops with iter_args.
729 ///
730/// Example:
731 /// * 'replaced': %0 = affine.for %i = 0 to 512 iter_args(%x = ...) -> (f32)
732 /// * 'replacement': %1 = vector.reduction <add>, %0 : vector<4xf32> into
733 /// f32
734 void registerLoopResultScalarReplacement(Value replaced, Value replacement);
735
736 /// Returns in 'replacedVals' the scalar replacement for values in
737 /// 'inputVals'.
738 void getScalarValueReplacementsFor(ValueRange inputVals,
739 SmallVectorImpl<Value> &replacedVals);
740
741 /// Erases the scalar loop nest after its successful vectorization.
742 void finishVectorizationPattern(AffineForOp rootLoop);
743
744 // Used to build and insert all the new operations created. The insertion
745 // point is preserved and updated along the vectorization process.
746 OpBuilder builder;
747
748 // Maps input scalar operations to their vector counterparts.
749 DenseMap<Operation *, Operation *> opVectorReplacement;
750 // Maps input scalar values to their vector counterparts.
751 IRMapping valueVectorReplacement;
752 // Maps input scalar values to their new scalar counterparts in the vector
753 // loop nest.
754 IRMapping valueScalarReplacement;
755 // Maps results of reduction loops to their new scalar counterparts.
756 DenseMap<Value, Value> loopResultScalarReplacement;
757
758 // Maps the newly created vector loops to their vector dimension.
759 DenseMap<Operation *, unsigned> vecLoopToVecDim;
760
761 // Maps the new vectorized loops to the corresponding vector masks if one is
762 // required.
763 DenseMap<Operation *, Value> vecLoopToMask;
764
765 // The strategy drives which loop to vectorize by which amount.
766 const VectorizationStrategy *strategy = nullptr;
767
768private:
769 /// Internal implementation to map input scalar values to new vector or scalar
770 /// values.
771 void registerValueVectorReplacementImpl(Value replaced, Value replacement);
772};
773
774} // namespace
775
776/// Registers the vector replacement of a scalar operation and its result
777/// values. Both operations must have the same number of results.
778///
779/// This utility is used to register the replacement for the vast majority of
780/// the vectorized operations.
781///
782/// Example:
783/// * 'replaced': %0 = arith.addf %1, %2 : f32
784/// * 'replacement': %0 = arith.addf %1, %2 : vector<128xf32>
785void VectorizationState::registerOpVectorReplacement(Operation *replaced,
786 Operation *replacement) {
787 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ commit vectorized op:\n");
788 LLVM_DEBUG(dbgs() << *replaced << "\n");
789 LLVM_DEBUG(dbgs() << "into\n");
790 LLVM_DEBUG(dbgs() << *replacement << "\n");
791
792 assert(replaced->getNumResults() == replacement->getNumResults() &&
793 "Unexpected replaced and replacement results");
794 assert(opVectorReplacement.count(replaced) == 0 && "already registered");
795 opVectorReplacement[replaced] = replacement;
796
797 for (auto resultTuple :
798 llvm::zip(replaced->getResults(), replacement->getResults()))
799 registerValueVectorReplacementImpl(std::get<0>(resultTuple),
800 std::get<1>(resultTuple));
801}
802
803/// Registers the vector replacement of a scalar value. The replacement
804/// operation should have a single result, which replaces the scalar value.
805///
806/// This utility is used to register the vector replacement of block arguments
807/// and operation results which are not directly vectorized (i.e., their
808/// scalar version still exists after vectorization), like uniforms.
809///
810/// Example:
811/// * 'replaced': block argument or operation outside of the vectorized loop.
812/// * 'replacement': %0 = vector.broadcast %1 : f32 to vector<128xf32>
813void VectorizationState::registerValueVectorReplacement(
814 Value replaced, Operation *replacement) {
815 assert(replacement->getNumResults() == 1 &&
816 "Expected single-result replacement");
817 if (Operation *defOp = replaced.getDefiningOp())
818 registerOpVectorReplacement(defOp, replacement);
819 else
820 registerValueVectorReplacementImpl(replaced, replacement->getResult(0));
821}
822
823/// Registers the vector replacement of a block argument (e.g., iter_args).
824///
825/// Example:
826/// * 'replaced': 'iter_arg' block argument.
827/// * 'replacement': vectorized 'iter_arg' block argument.
828void VectorizationState::registerBlockArgVectorReplacement(
829 BlockArgument replaced, BlockArgument replacement) {
830 registerValueVectorReplacementImpl(replaced, replacement);
831}
832
833void VectorizationState::registerValueVectorReplacementImpl(Value replaced,
834 Value replacement) {
835 assert(!valueVectorReplacement.contains(replaced) &&
836 "Vector replacement already registered");
837 assert(isa<VectorType>(replacement.getType()) &&
838 "Expected vector type in vector replacement");
839 valueVectorReplacement.map(replaced, replacement);
840}
841
842/// Registers the scalar replacement of a scalar value. 'replacement' must be
843/// scalar.
844///
845/// This utility is used to register the replacement of block arguments
846/// or affine.apply results that are within the loop to be vectorized and will
847/// continue being scalar within the vector loop.
848///
849/// Example:
850/// * 'replaced': induction variable of a loop to be vectorized.
851/// * 'replacement': new induction variable in the new vector loop.
852void VectorizationState::registerValueScalarReplacement(Value replaced,
853 Value replacement) {
854 assert(!valueScalarReplacement.contains(replaced) &&
855 "Scalar value replacement already registered");
856 assert(!isa<VectorType>(replacement.getType()) &&
857 "Expected scalar type in scalar replacement");
858 valueScalarReplacement.map(replaced, replacement);
859}
860
861/// Registers the scalar replacement of a scalar result returned from a
862/// reduction loop. 'replacement' must be scalar.
863///
864/// This utility is used to register the replacement for scalar results of
865/// vectorized reduction loops with iter_args.
866///
867/// Example:
868/// * 'replaced': %0 = affine.for %i = 0 to 512 iter_args(%x = ...) -> (f32)
869/// * 'replacement': %1 = vector.reduction <add>, %0 : vector<4xf32> into f32
870void VectorizationState::registerLoopResultScalarReplacement(
871 Value replaced, Value replacement) {
872 assert(isa<AffineForOp>(replaced.getDefiningOp()));
873 assert(loopResultScalarReplacement.count(replaced) == 0 &&
874 "already registered");
875 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ will replace a result of the loop "
876 "with scalar: "
877 << replacement);
878 loopResultScalarReplacement[replaced] = replacement;
879}
880
881/// Returns in 'replacedVals' the scalar replacement for values in 'inputVals'.
882void VectorizationState::getScalarValueReplacementsFor(
883 ValueRange inputVals, SmallVectorImpl<Value> &replacedVals) {
884 for (Value inputVal : inputVals)
885 replacedVals.push_back(valueScalarReplacement.lookupOrDefault(inputVal));
886}
887
888/// Erases a loop nest, including all its nested operations.
889static void eraseLoopNest(AffineForOp forOp) {
890 LLVM_DEBUG(dbgs() << "[early-vect]+++++ erasing:\n" << forOp << "\n");
891 forOp.erase();
892}
893
894/// Erases the scalar loop nest after its successful vectorization.
895void VectorizationState::finishVectorizationPattern(AffineForOp rootLoop) {
896 LLVM_DEBUG(dbgs() << "\n[early-vect] Finalizing vectorization\n");
897 eraseLoopNest(rootLoop);
898}
899
900// Apply 'map' with 'mapOperands' returning resulting values in 'results'.
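// For example (illustrative), applying affine_map<(d0, d1) -> (d0 + 1, d1)>
// to operands (%i, %j) emits one single-result affine.apply per map result:
//   %idx0 = affine.apply affine_map<(d0, d1) -> (d0 + 1)>(%i, %j)
//   %idx1 = affine.apply affine_map<(d0, d1) -> (d1)>(%i, %j)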
901static void computeMemoryOpIndices(Operation *op, AffineMap map,
902 ValueRange mapOperands,
903 VectorizationState &state,
904 SmallVectorImpl<Value> &results) {
905 for (auto resultExpr : map.getResults()) {
906 auto singleResMap =
907 AffineMap::get(map.getNumDims(), map.getNumSymbols(), resultExpr);
908 auto afOp = AffineApplyOp::create(state.builder, op->getLoc(), singleResMap,
909 mapOperands);
910 results.push_back(afOp);
911 }
912}
913
914/// Returns a FilterFunctionType that can be used in NestedPattern to match a
915/// loop whose underlying load/store accesses are either invariant or all
916/// varying along the `fastestVaryingMemRefDimension`.
917static FilterFunctionType
918isVectorizableLoopPtrFactory(const DenseSet<Operation *> &parallelLoops,
919 int fastestVaryingMemRefDimension) {
920 return [&parallelLoops, fastestVaryingMemRefDimension](Operation &forOp) {
921 auto loop = cast<AffineForOp>(forOp);
922 if (!parallelLoops.contains(loop))
923 return false;
924 int memRefDim = -1;
925 auto vectorizableBody =
926 isVectorizableLoopBody(loop, &memRefDim, vectorTransferPattern());
927 if (!vectorizableBody)
928 return false;
929 return memRefDim == -1 || fastestVaryingMemRefDimension == -1 ||
930 memRefDim == fastestVaryingMemRefDimension;
931 };
932}
933
934/// Returns the vector type resulting from applying the provided vectorization
935/// strategy on the scalar type.
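/// For example (illustrative), with vector sizes {32, 256} and an f32 scalar
/// type, the result is vector<32x256xf32>.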
936static VectorType getVectorType(Type scalarTy,
937 const VectorizationStrategy *strategy) {
938 assert(!isa<VectorType>(scalarTy) && "Expected scalar type");
939 return VectorType::get(strategy->vectorSizes, scalarTy);
940}
942/// Tries to transform a scalar constant into a vector constant. Returns the
943/// vector constant if the scalar type is a valid vector element type. Returns
944/// nullptr, otherwise.
945static arith::ConstantOp vectorizeConstant(arith::ConstantOp constOp,
946 VectorizationState &state) {
947 Type scalarTy = constOp.getType();
948 if (!VectorType::isValidElementType(scalarTy))
949 return nullptr;
951 auto vecTy = getVectorType(scalarTy, state.strategy);
952 auto vecAttr = DenseElementsAttr::get(vecTy, constOp.getValue());
953
954 OpBuilder::InsertionGuard guard(state.builder);
955 Operation *parentOp = state.builder.getInsertionBlock()->getParentOp();
956 // Find the innermost vectorized ancestor loop to insert the vector constant.
957 while (parentOp && !state.vecLoopToVecDim.count(parentOp))
958 parentOp = parentOp->getParentOp();
959 assert(parentOp && state.vecLoopToVecDim.count(parentOp) &&
960 isa<AffineForOp>(parentOp) && "Expected a vectorized for op");
961 auto vecForOp = cast<AffineForOp>(parentOp);
962 state.builder.setInsertionPointToStart(vecForOp.getBody());
963 auto newConstOp =
964 arith::ConstantOp::create(state.builder, constOp.getLoc(), vecAttr);
965
966 // Register vector replacement for future uses in the scope.
967 state.registerOpVectorReplacement(constOp, newConstOp);
968 return newConstOp;
969}
970
971/// We have no need to vectorize affine.apply. However, we still need to
972/// generate it and replace the operands with values in valueScalarReplacement.
973static Operation *vectorizeAffineApplyOp(AffineApplyOp applyOp,
974 VectorizationState &state) {
975 SmallVector<Value, 8> updatedOperands;
976 for (Value operand : applyOp.getOperands()) {
977 if (state.valueVectorReplacement.contains(operand)) {
978 LLVM_DEBUG(
979 dbgs() << "\n[early-vect]+++++ affine.apply on vector operand\n");
980 return nullptr;
981 }
982 Value updatedOperand = state.valueScalarReplacement.lookupOrNull(operand);
983 if (!updatedOperand)
984 updatedOperand = operand;
985 updatedOperands.push_back(updatedOperand);
986 }
987
988 auto newApplyOp = AffineApplyOp::create(
989 state.builder, applyOp.getLoc(), applyOp.getAffineMap(), updatedOperands);
991 // Register the new affine.apply result.
992 state.registerValueScalarReplacement(applyOp.getResult(),
993 newApplyOp.getResult());
994 return newApplyOp;
995}
996
997/// Creates a constant vector filled with the neutral elements of the given
998/// reduction. The scalar type of vector elements will be taken from
999/// `oldOperand`.
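/// For example (illustrative), for an arith::AtomicRMWKind::addf reduction
/// over f32 with a 1-D strategy of size 128, this builds:
/// %init = arith.constant dense<0.000000e+00> : vector<128xf32>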
1000static arith::ConstantOp createInitialVector(arith::AtomicRMWKind reductionKind,
1001 Value oldOperand,
1002 VectorizationState &state) {
1003 Type scalarTy = oldOperand.getType();
1004 if (!VectorType::isValidElementType(scalarTy))
1005 return nullptr;
1006
1007 Attribute valueAttr = getIdentityValueAttr(
1008 reductionKind, scalarTy, state.builder, oldOperand.getLoc());
1009 auto vecTy = getVectorType(scalarTy, state.strategy);
1010 auto vecAttr = DenseElementsAttr::get(vecTy, valueAttr);
1011 auto newConstOp =
1012 arith::ConstantOp::create(state.builder, oldOperand.getLoc(), vecAttr);
1013
1014 return newConstOp;
1015}
1016
1017/// Creates a mask used to filter out garbage elements in the last iteration
1018/// of unaligned loops. If a mask is not required then `nullptr` is returned.
1019/// The mask will be a vector of booleans representing meaningful vector
1020/// elements in the current iteration. It is filled with ones for each iteration
1021/// except for the last one, where it has the form `11...100...0` with the
1022/// number of ones equal to the number of meaningful elements (i.e. the number
1023/// of iterations that would be left in the original loop).
1024static Value createMask(AffineForOp vecForOp, VectorizationState &state) {
1025 assert(state.strategy->vectorSizes.size() == 1 &&
1026 "Creating a mask non-1-D vectors is not supported.");
1027 assert(vecForOp.getStep() == state.strategy->vectorSizes[0] &&
1028 "Creating a mask for loops with non-unit original step size is not "
1029 "supported.");
1030
1031 // Check if we have already created the mask.
1032 if (Value mask = state.vecLoopToMask.lookup(vecForOp))
1033 return mask;
1034
1035 // If the loop has constant bounds and the original number of iterations is
1036 // divisible by the vector size then we don't need a mask.
1037 if (vecForOp.hasConstantBounds()) {
1038 int64_t originalTripCount =
1039 vecForOp.getConstantUpperBound() - vecForOp.getConstantLowerBound();
1040 if (originalTripCount % vecForOp.getStepAsInt() == 0)
1041 return nullptr;
1042 }
1043
1044 OpBuilder::InsertionGuard guard(state.builder);
1045 state.builder.setInsertionPointToStart(vecForOp.getBody());
1046
1047 // We generate the mask using the `vector.create_mask` operation which accepts
1048 // the number of meaningful elements (i.e. the length of the prefix of 1s).
1049 // To compute the number of meaningful elements we subtract the current value
1050 // of the iteration variable from the upper bound of the loop. Example:
1051 //
1052 // // 500 is the upper bound of the loop
1053 // #map = affine_map<(d0) -> (500 - d0)>
1054 // %elems_left = affine.apply #map(%iv)
1055 // %mask = vector.create_mask %elems_left : vector<128xi1>
1056
1057 Location loc = vecForOp.getLoc();
1058
1059 // First we get the upper bound of the loop using `affine.apply` or
1060 // `affine.min`.
1061 AffineMap ubMap = vecForOp.getUpperBoundMap();
1062 Value ub;
1063 if (ubMap.getNumResults() == 1)
1064 ub = AffineApplyOp::create(state.builder, loc, vecForOp.getUpperBoundMap(),
1065 vecForOp.getUpperBoundOperands());
1066 else
1067 ub = AffineMinOp::create(state.builder, loc, vecForOp.getUpperBoundMap(),
1068 vecForOp.getUpperBoundOperands());
1069 // Then we compute the number of (original) iterations left in the loop.
1070 AffineExpr subExpr =
1071 state.builder.getAffineDimExpr(0) - state.builder.getAffineDimExpr(1);
1072 Value itersLeft =
1073 makeComposedAffineApply(state.builder, loc, AffineMap::get(2, 0, subExpr),
1074 {ub, vecForOp.getInductionVar()});
1075 // If the affine maps were successfully composed then `ub` is unneeded.
1076 if (ub.use_empty())
1077 ub.getDefiningOp()->erase();
1078 // Finally we create the mask.
1079 Type maskTy = VectorType::get(state.strategy->vectorSizes,
1080 state.builder.getIntegerType(1));
1081 Value mask =
1082 vector::CreateMaskOp::create(state.builder, loc, maskTy, itersLeft);
1083
1084 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ creating a mask:\n"
1085 << itersLeft << "\n"
1086 << mask << "\n");
1087
1088 state.vecLoopToMask[vecForOp] = mask;
1089 return mask;
1090}
1091
1092/// Returns true if the provided value is vector uniform given the vectorization
1093/// strategy.
1094// TODO: For now, only values that are induction variables of loops not in
1095// `loopToVectorDim` or invariants to all the loops in the vectorization
1096// strategy are considered vector uniforms.
1097static bool isUniformDefinition(Value value,
1098 const VectorizationStrategy *strategy) {
1099 AffineForOp forOp = getForInductionVarOwner(value);
1100 if (forOp && strategy->loopToVectorDim.count(forOp) == 0)
1101 return true;
1102
1103 for (auto loopToDim : strategy->loopToVectorDim) {
1104 auto loop = cast<AffineForOp>(loopToDim.first);
1105 if (!loop.isDefinedOutsideOfLoop(value))
1106 return false;
1107 }
1108
1109 return value.getType().isIntOrIndexOrFloat();
1110}
1111
1112/// Generates a broadcast op for the provided uniform value using the
1113/// vectorization strategy in 'state'.
1114static Operation *vectorizeUniform(Value uniformVal,
1115 VectorizationState &state) {
1116 OpBuilder::InsertionGuard guard(state.builder);
1117 Value uniformScalarRepl =
1118 state.valueScalarReplacement.lookupOrDefault(uniformVal);
1119 state.builder.setInsertionPointAfterValue(uniformScalarRepl);
1120
1121 auto vectorTy = getVectorType(uniformVal.getType(), state.strategy);
1122 auto bcastOp = BroadcastOp::create(state.builder, uniformVal.getLoc(),
1123 vectorTy, uniformScalarRepl);
1124 state.registerValueVectorReplacement(uniformVal, bcastOp);
1125 return bcastOp;
1126}
1127
1128/// Tries to vectorize a given `operand` by applying the following logic:
1129/// 1. if the defining operation has been already vectorized, `operand` is
1130/// already in the proper vector form;
1131/// 2. if the `operand` is a constant, returns the vectorized form of the
1132/// constant;
1133/// 3. if the `operand` is uniform, returns a vector broadcast of the `op`;
1134/// 4. otherwise, the vectorization of `operand` is not supported.
1135/// Newly created vector operations are registered in `state` as replacement
1136/// for their scalar counterparts.
1137/// In particular this logic captures some of the use cases where definitions
1138/// that are not scoped under the current pattern are needed to vectorize.
1139/// One such example is top level function constants that need to be splatted.
1140///
1141/// Returns an operand that has been vectorized to match `state`'s strategy if
1142/// vectorization is possible with the above logic. Returns nullptr otherwise.
1143///
1144/// TODO: handle more complex cases.
1145static Value vectorizeOperand(Value operand, VectorizationState &state) {
1146 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ vectorize operand: " << operand);
1147 // If this value is already vectorized, we are done.
1148 if (Value vecRepl = state.valueVectorReplacement.lookupOrNull(operand)) {
1149 LLVM_DEBUG(dbgs() << " -> already vectorized: " << vecRepl);
1150 return vecRepl;
1151 }
1152
1153 // A vector operand that is not in the replacement map should never reach
1154 // this point. Reaching this point could mean that the code was already
1155 // vectorized and we shouldn't try to vectorize already vectorized code.
1156 assert(!isa<VectorType>(operand.getType()) &&
1157 "Vector op not found in replacement map");
1158
1159 // Vectorize constant.
1160 if (auto constOp = operand.getDefiningOp<arith::ConstantOp>()) {
1161 auto vecConstant = vectorizeConstant(constOp, state);
1162 LLVM_DEBUG(dbgs() << "-> constant: " << vecConstant);
1163 return vecConstant.getResult();
1164 }
1165
1166 // Vectorize uniform values.
1167 if (isUniformDefinition(operand, state.strategy)) {
1168 Operation *vecUniform = vectorizeUniform(operand, state);
1169 LLVM_DEBUG(dbgs() << "-> uniform: " << *vecUniform);
1170 return vecUniform->getResult(0);
1171 }
1172
1173 // Check for unsupported block argument scenarios. A supported block argument
1174 // should have been vectorized already.
1175 if (!operand.getDefiningOp())
1176 LLVM_DEBUG(dbgs() << "-> unsupported block argument\n");
1177 else
1178 // Generic unsupported case.
1179 LLVM_DEBUG(dbgs() << "-> non-vectorizable\n");
1180
1181 return nullptr;
1182}
1183
1184/// Returns true if any vectorized loop IV drives more than one index.
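/// For example (illustrative), with a strategy that maps %i to a vector
/// dimension, %i drives both indices of the access below, so this returns
/// true and vectorization of the enclosing memory operation is abandoned:
/// affine.for %i = 0 to %N {
/// %a = affine.load %A[%i, %i] : memref<?x?xf32>
/// }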
1185static bool
1186isIVMappedToMultipleIndices(ValueRange indices,
1187 const DenseMap<Operation *, unsigned> &loopToVectorDim) {
1188 for (auto &kvp : loopToVectorDim) {
1189 AffineForOp forOp = cast<AffineForOp>(kvp.first);
1190 // Find which indices are invariant w.r.t. this loop IV.
1191 llvm::DenseSet<Value> invariants =
1192 affine::getInvariantAccesses(forOp.getInductionVar(), indices);
1193 // Count how many vary (i.e. are not invariant).
1194 unsigned nonInvariant = 0;
1195 for (Value idx : indices) {
1196 if (invariants.count(idx))
1197 continue;
1198
1199 if (++nonInvariant > 1) {
1200 LLVM_DEBUG(dbgs() << "[early‑vect] Bail out: IV "
1201 << forOp.getInductionVar() << " drives "
1202 << nonInvariant << " indices\n");
1203 return true;
1204 }
1205 }
1206 }
1207 return false;
1208}
1209
1210/// Vectorizes an affine load with the vectorization strategy in 'state' by
1211/// generating a 'vector.transfer_read' op with the proper permutation map
1212/// inferred from the indices of the load. The new 'vector.transfer_read' is
1213/// registered as replacement of the scalar load. Returns the newly created
1214/// 'vector.transfer_read' if vectorization was successful. Returns nullptr,
1215/// otherwise.
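/// As a sketch (names and the exact printed form are illustrative),
/// vectorizing
/// %a = affine.load %A[%i + 1, %j] : memref<?x?xf32>
/// along %j with a 1-D strategy of size 128 yields roughly:
/// %idx0 = affine.apply affine_map<(d0, d1) -> (d0 + 1)>(%i, %j)
/// %idx1 = affine.apply affine_map<(d0, d1) -> (d1)>(%i, %j)
/// %v = vector.transfer_read %A[%idx0, %idx1], %pad :
/// memref<?x?xf32>, vector<128xf32>
/// where %pad is a padding value and the permutation map is inferred from the
/// indices and the vectorized loops.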
1216static Operation *vectorizeAffineLoad(AffineLoadOp loadOp,
1217 VectorizationState &state) {
1218 MemRefType memRefType = loadOp.getMemRefType();
1219 Type elementType = memRefType.getElementType();
1220 auto vectorType = VectorType::get(state.strategy->vectorSizes, elementType);
1221
1222 // Replace map operands with operands from the vector loop nest.
1223 SmallVector<Value, 8> mapOperands;
1224 state.getScalarValueReplacementsFor(loadOp.getMapOperands(), mapOperands);
1225
1226 // Compute indices for the transfer op. AffineApplyOp's may be generated.
1227 SmallVector<Value, 8> indices;
1228 indices.reserve(memRefType.getRank());
1229 if (loadOp.getAffineMap() !=
1230 state.builder.getMultiDimIdentityMap(memRefType.getRank())) {
1231 // Check the operand in loadOp affine map does not come from AffineApplyOp.
1232 for (auto op : mapOperands) {
1233 if (op.getDefiningOp<AffineApplyOp>())
1234 return nullptr;
1235 }
1236 computeMemoryOpIndices(loadOp, loadOp.getAffineMap(), mapOperands, state,
1237 indices);
1238 } else {
1239 indices.append(mapOperands.begin(), mapOperands.end());
1240 }
1241
1242 if (isIVMappedToMultipleIndices(indices, state.vecLoopToVecDim))
1243 return nullptr;
1244
1245 // Compute permutation map using the information of new vector loops.
1246 auto permutationMap = makePermutationMap(state.builder.getInsertionBlock(),
1247 indices, state.vecLoopToVecDim);
1248 if (!permutationMap) {
1249 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ can't compute permutationMap\n");
1250 return nullptr;
1251 }
1252 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ permutationMap: ");
1253 LLVM_DEBUG(permutationMap.print(dbgs()));
1254
1255 auto transfer = vector::TransferReadOp::create(
1256 state.builder, loadOp.getLoc(), vectorType, loadOp.getMemRef(), indices,
1257 /*padding=*/std::nullopt, permutationMap);
1258
1259 // Register replacement for future uses in the scope.
1260 state.registerOpVectorReplacement(loadOp, transfer);
1261 return transfer;
1262}
1263
1264/// Vectorizes an affine store with the vectorization strategy in 'state' by
1265/// generating a 'vector.transfer_write' op with the proper permutation map
1266 /// inferred from the indices of the store. The new 'vector.transfer_write' is
1267 /// registered as replacement of the scalar store. Returns the newly created
1268/// 'vector.transfer_write' if vectorization was successful. Returns nullptr,
1269/// otherwise.
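///
/// For example (an illustrative sketch mirroring the load case above):
///
///   affine.store %s, %A[%i, %j] : memref<?x?xf32>
///
/// becomes roughly:
///
///   vector.transfer_write %v, %A[%i, %j]
///       {permutation_map = affine_map<(d0, d1) -> (d1)>}
///       : vector<128xf32>, memref<?x?xf32>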
1270static Operation *vectorizeAffineStore(AffineStoreOp storeOp,
1271 VectorizationState &state) {
1272 MemRefType memRefType = storeOp.getMemRefType();
1273 Value vectorValue = vectorizeOperand(storeOp.getValueToStore(), state);
1274 if (!vectorValue)
1275 return nullptr;
1276
1277 // Replace map operands with operands from the vector loop nest.
1278 SmallVector<Value, 8> mapOperands;
1279 state.getScalarValueReplacementsFor(storeOp.getMapOperands(), mapOperands);
1280
1281 // Compute indices for the transfer op. AffineApplyOp's may be generated.
1282 SmallVector<Value, 8> indices;
1283 indices.reserve(memRefType.getRank());
1284 if (storeOp.getAffineMap() !=
1285 state.builder.getMultiDimIdentityMap(memRefType.getRank()))
1286 computeMemoryOpIndices(storeOp, storeOp.getAffineMap(), mapOperands, state,
1287 indices);
1288 else
1289 indices.append(mapOperands.begin(), mapOperands.end());
1290
1291 if (isIVMappedToMultipleIndices(indices, state.vecLoopToVecDim))
1292 return nullptr;
1293
1294 // Compute permutation map using the information of new vector loops.
1295 auto permutationMap = makePermutationMap(state.builder.getInsertionBlock(),
1296 indices, state.vecLoopToVecDim);
1297 if (!permutationMap)
1298 return nullptr;
1299 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ permutationMap: ");
1300 LLVM_DEBUG(permutationMap.print(dbgs()));
1301
1302 auto transfer = vector::TransferWriteOp::create(
1303 state.builder, storeOp.getLoc(), vectorValue, storeOp.getMemRef(),
1304 indices, permutationMap);
1305 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ vectorized store: " << transfer);
1306
1307 // Register replacement for future uses in the scope.
1308 state.registerOpVectorReplacement(storeOp, transfer);
1309 return transfer;
1310}
1311
1312/// Returns true if `value` is a constant equal to the neutral element of the
1313/// given vectorizable reduction.
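///
/// For example (illustrative), 0.0 is the neutral element of an f32 'addf'
/// reduction, so an initial value like the following would be recognized:
///
///   %init = arith.constant 0.0 : f32
///   %red = affine.for %i = 0 to 128 iter_args(%acc = %init) -> (f32) { ... }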
1314static bool isNeutralElementConst(arith::AtomicRMWKind reductionKind,
1315 Value value, VectorizationState &state) {
1316 Type scalarTy = value.getType();
1317 if (!VectorType::isValidElementType(scalarTy))
1318 return false;
1319 Attribute valueAttr = getIdentityValueAttr(reductionKind, scalarTy,
1320 state.builder, value.getLoc());
1321 if (auto constOp = value.getDefiningOp<arith::ConstantOp>())
1322 return constOp.getValue() == valueAttr;
1323 return false;
1324}
1325
1326/// Vectorizes a loop with the vectorization strategy in 'state'. A new loop is
1327/// created and registered as replacement for the scalar loop. The builder's
1328/// insertion point is set to the new loop's body so that subsequent vectorized
1329/// operations are inserted into the new loop. If the loop is a vector
1330/// dimension, the step of the newly created loop will reflect the vectorization
1331 /// factor used to vectorize that dimension.
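///
/// For example (illustrative), vectorizing the following loop by a factor of
/// 128 along its dimension:
///
///   affine.for %i = 0 to 1024 step 1 { ... }
///
/// produces a vector loop of the form:
///
///   affine.for %i = 0 to 1024 step 128 { ... }
///
/// whose body operates on vector<128x...> values.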
1332static Operation *vectorizeAffineForOp(AffineForOp forOp,
1333 VectorizationState &state) {
1334 const VectorizationStrategy &strategy = *state.strategy;
1335 auto loopToVecDimIt = strategy.loopToVectorDim.find(forOp);
1336 bool isLoopVecDim = loopToVecDimIt != strategy.loopToVectorDim.end();
1337
1338 // TODO: Vectorization of reduction loops is not supported for non-unit steps.
1339 if (isLoopVecDim && forOp.getNumIterOperands() > 0 && forOp.getStep() != 1) {
1340 LLVM_DEBUG(
1341 dbgs()
1342 << "\n[early-vect]+++++ unsupported step size for reduction loop: "
1343 << forOp.getStep() << "\n");
1344 return nullptr;
1345 }
1346
1347 // If we are vectorizing a vector dimension, compute a new step for the new
1348 // vectorized loop using the vectorization factor for the vector dimension.
1349 // Otherwise, propagate the step of the scalar loop.
1350 unsigned newStep;
1351 if (isLoopVecDim) {
1352 unsigned vectorDim = loopToVecDimIt->second;
1353 assert(vectorDim < strategy.vectorSizes.size() && "vector dim overflow");
1354 int64_t forOpVecFactor = strategy.vectorSizes[vectorDim];
1355 newStep = forOp.getStepAsInt() * forOpVecFactor;
1356 } else {
1357 newStep = forOp.getStepAsInt();
1358 }
1359
1360 // Get information about reduction kinds.
1361 ArrayRef<LoopReduction> reductions;
1362 if (isLoopVecDim && forOp.getNumIterOperands() > 0) {
1363 auto it = strategy.reductionLoops.find(forOp);
1364 assert(it != strategy.reductionLoops.end() &&
1365 "Reduction descriptors not found when vectorizing a reduction loop");
1366 reductions = it->second;
1367 assert(reductions.size() == forOp.getNumIterOperands() &&
1368 "The size of reductions array must match the number of iter_args");
1369 }
1370
1371 // Vectorize 'iter_args'.
1372 SmallVector<Value, 8> vecIterOperands;
1373 if (!isLoopVecDim) {
1374 for (auto operand : forOp.getInits())
1375 vecIterOperands.push_back(vectorizeOperand(operand, state));
1376 } else {
1377 // For reduction loops we need to pass a vector of neutral elements as an
1378 // initial value of the accumulator. We will add the original initial value
1379 // later.
1380 for (auto redAndOperand : llvm::zip(reductions, forOp.getInits())) {
1381 vecIterOperands.push_back(createInitialVector(
1382 std::get<0>(redAndOperand).kind, std::get<1>(redAndOperand), state));
1383 }
1384 }
1385
1386 auto vecForOp = AffineForOp::create(
1387 state.builder, forOp.getLoc(), forOp.getLowerBoundOperands(),
1388 forOp.getLowerBoundMap(), forOp.getUpperBoundOperands(),
1389 forOp.getUpperBoundMap(), newStep, vecIterOperands,
1390 /*bodyBuilder=*/[](OpBuilder &, Location, Value, ValueRange) {
1391 // Make sure we don't create a default terminator in the loop body as
1392 // the proper terminator will be added during vectorization.
1393 });
1394
1395 // Register loop-related replacements:
1396 // 1) The new vectorized loop is registered as vector replacement of the
1397 // scalar loop.
1398 // 2) The new iv of the vectorized loop is registered as scalar replacement
1399 // since a scalar copy of the iv will prevail in the vectorized loop.
1400 // TODO: A vector replacement will also be added in the future when
1401 // vectorization of linear ops is supported.
1402 // 3) The new 'iter_args' region arguments are registered as vector
1403 // replacements since they have been vectorized.
1404 // 4) If the loop performs a reduction along the vector dimension, a
1405 // `vector.reduction` or similar op is inserted for each resulting value
1406 // of the loop and its scalar value replaces the corresponding scalar
1407 // result of the loop.
1408 state.registerOpVectorReplacement(forOp, vecForOp);
1409 state.registerValueScalarReplacement(forOp.getInductionVar(),
1410 vecForOp.getInductionVar());
1411 for (auto iterTuple :
1412 llvm::zip(forOp.getRegionIterArgs(), vecForOp.getRegionIterArgs()))
1413 state.registerBlockArgVectorReplacement(std::get<0>(iterTuple),
1414 std::get<1>(iterTuple));
1415
1416 if (isLoopVecDim) {
1417 for (unsigned i = 0; i < vecForOp.getNumIterOperands(); ++i) {
1418 // First, we reduce the vector returned from the loop into a scalar.
1419 Value reducedRes =
1420 getVectorReductionOp(reductions[i].kind, state.builder,
1421 vecForOp.getLoc(), vecForOp.getResult(i));
1422 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ creating a vector reduction: "
1423 << reducedRes);
1424 // Then we combine it with the original (scalar) initial value unless it
1425 // is equal to the neutral element of the reduction.
1426 Value origInit = forOp.getOperand(forOp.getNumControlOperands() + i);
1427 Value finalRes = reducedRes;
1428 if (!isNeutralElementConst(reductions[i].kind, origInit, state))
1429 finalRes =
1430 arith::getReductionOp(reductions[i].kind, state.builder,
1431 reducedRes.getLoc(), reducedRes, origInit);
1432 state.registerLoopResultScalarReplacement(forOp.getResult(i), finalRes);
1433 }
1434 }
1435
1436 if (isLoopVecDim)
1437 state.vecLoopToVecDim[vecForOp] = loopToVecDimIt->second;
1438
1439 // Change insertion point so that upcoming vectorized instructions are
1440 // inserted into the vectorized loop's body.
1441 state.builder.setInsertionPointToStart(vecForOp.getBody());
1442
1443 // If this is a reduction loop then we may need to create a mask to filter out
1444 // garbage in the last iteration.
1445 if (isLoopVecDim && forOp.getNumIterOperands() > 0)
1446 createMask(vecForOp, state);
1447
1448 return vecForOp;
1449}
1450
1451 /// Vectorizes an arbitrary operation by plain widening. We apply generic type
1452/// widening of all its results and retrieve the vector counterparts for all its
1453/// operands.
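///
/// For example (illustrative, assuming a 1-D strategy of size 128), a scalar
/// op such as:
///
///   %c = arith.mulf %a, %b : f32
///
/// is widened into:
///
///   %vc = arith.mulf %va, %vb : vector<128xf32>
///
/// where %va and %vb are the vector replacements of %a and %b.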
1454static Operation *widenOp(Operation *op, VectorizationState &state) {
1455 SmallVector<Type, 8> vectorTypes;
1456 for (Value result : op->getResults())
1457 vectorTypes.push_back(
1458 VectorType::get(state.strategy->vectorSizes, result.getType()));
1459
1460 SmallVector<Value, 8> vectorOperands;
1461 for (Value operand : op->getOperands()) {
1462 Value vecOperand = vectorizeOperand(operand, state);
1463 if (!vecOperand) {
1464 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ an operand failed vectorize\n");
1465 return nullptr;
1466 }
1467 vectorOperands.push_back(vecOperand);
1468 }
1469
1470 // Create a clone of the op with the proper operands and return types.
1471 // TODO: The following assumes there is always an op with a fixed
1472 // name that works both in scalar mode and vector mode.
1473 // TODO: Is it worth considering an Operation.clone operation which
1474 // changes the type so we can promote an Operation with less boilerplate?
1475 Operation *vecOp =
1476 state.builder.create(op->getLoc(), op->getName().getIdentifier(),
1477 vectorOperands, vectorTypes, op->getAttrs());
1478 state.registerOpVectorReplacement(op, vecOp);
1479 return vecOp;
1480}
1481
1482/// Vectorizes a yield operation by widening its types. The builder's insertion
1483/// point is set after the vectorized parent op to continue vectorizing the
1484/// operations after the parent op. When vectorizing a reduction loop a mask may
1485/// be used to prevent adding garbage values to the accumulator.
1486static Operation *vectorizeAffineYieldOp(AffineYieldOp yieldOp,
1487 VectorizationState &state) {
1488 Operation *newYieldOp = widenOp(yieldOp, state);
1489 Operation *newParentOp = state.builder.getInsertionBlock()->getParentOp();
1490
1491 // If there is a mask for this loop then we must prevent garbage values from
1492 // being added to the accumulator by inserting `select` operations, for
1493 // example:
1494 //
1495 // %val_masked = select %mask, %val, %neutralCst : vector<128xi1>,
1496 // vector<128xf32>
1497 // %res = arith.addf %acc, %val_masked : vector<128xf32>
1498 // affine.yield %res : vector<128xf32>
1499 //
1500 if (Value mask = state.vecLoopToMask.lookup(newParentOp)) {
1501 state.builder.setInsertionPoint(newYieldOp);
1502 for (unsigned i = 0; i < newYieldOp->getNumOperands(); ++i) {
1503 SmallVector<Operation *> combinerOps;
1504 Value reducedVal = matchReduction(
1505 cast<AffineForOp>(newParentOp).getRegionIterArgs(), i, combinerOps);
1506 assert(reducedVal && "expect non-null value for parallel reduction loop");
1507 assert(combinerOps.size() == 1 && "expect only one combiner op");
1508 // IterOperands are neutral element vectors.
1509 Value neutralVal = cast<AffineForOp>(newParentOp).getInits()[i];
1510 state.builder.setInsertionPoint(combinerOps.back());
1511 Value maskedReducedVal = arith::SelectOp::create(
1512 state.builder, reducedVal.getLoc(), mask, reducedVal, neutralVal);
1513 LLVM_DEBUG(
1514 dbgs() << "\n[early-vect]+++++ masking an input to a binary op that "
1515 "produces a value for a yield Op: "
1516 << maskedReducedVal);
1517 combinerOps.back()->replaceUsesOfWith(reducedVal, maskedReducedVal);
1518 }
1519 }
1520
1521 state.builder.setInsertionPointAfter(newParentOp);
1522 return newYieldOp;
1523}
1524
1525/// Encodes Operation-specific behavior for vectorization. In general we
1526/// assume that all operands of an op must be vectorized but this is not
1527/// always true. In the future, it would be nice to have a trait that
1528/// describes how a particular operation vectorizes. For now we implement the
1529/// case distinction here. Returns a vectorized form of an operation or
1530/// nullptr if vectorization fails.
1531// TODO: consider adding a trait to Op to describe how it gets vectorized.
1532// Maybe some Ops are not vectorizable or require some tricky logic, we cannot
1533// do one-off logic here; ideally it would be TableGen'd.
1534 static Operation *vectorizeOneOperation(Operation *op,
1535 VectorizationState &state) {
1536 // Sanity checks.
1537 assert(!isa<vector::TransferReadOp>(op) &&
1538 "vector.transfer_read cannot be further vectorized");
1539 assert(!isa<vector::TransferWriteOp>(op) &&
1540 "vector.transfer_write cannot be further vectorized");
1541
1542 if (auto loadOp = dyn_cast<AffineLoadOp>(op))
1543 return vectorizeAffineLoad(loadOp, state);
1544 if (auto storeOp = dyn_cast<AffineStoreOp>(op))
1545 return vectorizeAffineStore(storeOp, state);
1546 if (auto forOp = dyn_cast<AffineForOp>(op))
1547 return vectorizeAffineForOp(forOp, state);
1548 if (auto yieldOp = dyn_cast<AffineYieldOp>(op))
1549 return vectorizeAffineYieldOp(yieldOp, state);
1550 if (auto constant = dyn_cast<arith::ConstantOp>(op))
1551 return vectorizeConstant(constant, state);
1552 if (auto applyOp = dyn_cast<AffineApplyOp>(op))
1553 return vectorizeAffineApplyOp(applyOp, state);
1554
1555 // Other ops with regions are not supported.
1556 if (op->getNumRegions() != 0)
1557 return nullptr;
1558
1559 return widenOp(op, state);
1560}
1561
1562/// Recursive implementation to convert all the nested loops in 'match' to a 2D
1563/// vector container that preserves the relative nesting level of each loop with
1564/// respect to the others in 'match'. 'currentLevel' is the nesting level that
1565/// will be assigned to the loop in the current 'match'.
1566static void
1567getMatchedAffineLoopsRec(NestedMatch match, unsigned currentLevel,
1568 std::vector<SmallVector<AffineForOp, 2>> &loops) {
1569 // Add a new empty level to the output if it doesn't exist already.
1570 assert(currentLevel <= loops.size() && "Unexpected currentLevel");
1571 if (currentLevel == loops.size())
1572 loops.emplace_back();
1573
1574 // Add current match and recursively visit its children.
1575 loops[currentLevel].push_back(cast<AffineForOp>(match.getMatchedOperation()));
1576 for (auto childMatch : match.getMatchedChildren()) {
1577 getMatchedAffineLoopsRec(childMatch, currentLevel + 1, loops);
1578 }
1579}
1580
1581/// Converts all the nested loops in 'match' to a 2D vector container that
1582/// preserves the relative nesting level of each loop with respect to the others
1583/// in 'match'. This means that every loop in 'loops[i]' will have a parent loop
1584/// in 'loops[i-1]'. A loop in 'loops[i]' may or may not have a child loop in
1585/// 'loops[i+1]'.
1586static void
1587 getMatchedAffineLoops(NestedMatch match,
1588 std::vector<SmallVector<AffineForOp, 2>> &loops) {
1589 getMatchedAffineLoopsRec(match, /*currLoopDepth=*/0, loops);
1590}
1591
1592/// Internal implementation to vectorize affine loops from a single loop nest
1593/// using an n-D vectorization strategy.
1594static LogicalResult
1595 vectorizeLoopNest(std::vector<SmallVector<AffineForOp, 2>> &loops,
1596 const VectorizationStrategy &strategy) {
1597 assert(loops[0].size() == 1 && "Expected single root loop");
1598 AffineForOp rootLoop = loops[0][0];
1599 VectorizationState state(rootLoop.getContext());
1600 state.builder.setInsertionPointAfter(rootLoop);
1601 state.strategy = &strategy;
1602
1603 // Since patterns are recursive, they can very well intersect.
1604 // Since we do not want a fully greedy strategy in general, we decouple
1605 // pattern matching from profitability analysis and from application.
1606 // As a consequence we must check that each root pattern is still
1607 // vectorizable. If a pattern is not vectorizable anymore, we just skip it.
1608 // TODO: implement a non-greedy profitability analysis that keeps only
1609 // non-intersecting patterns.
1610 if (!isVectorizableLoopBody(rootLoop, vectorTransferPattern())) {
1611 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ loop is not vectorizable");
1612 return failure();
1613 }
1614
1615 //////////////////////////////////////////////////////////////////////////////
1616 // Vectorize the scalar loop nest following a topological order. A new vector
1617 // loop nest with the vectorized operations is created along the process. If
1618 // vectorization succeeds, the scalar loop nest is erased. If vectorization
1619 // fails, the vector loop nest is erased and the scalar loop nest is not
1620 // modified.
1621 //////////////////////////////////////////////////////////////////////////////
1622
1623 auto opVecResult = rootLoop.walk<WalkOrder::PreOrder>([&](Operation *op) {
1624 LLVM_DEBUG(dbgs() << "[early-vect]+++++ Vectorizing: " << *op);
1625 Operation *vectorOp = vectorizeOneOperation(op, state);
1626 if (!vectorOp) {
1627 LLVM_DEBUG(
1628 dbgs() << "[early-vect]+++++ failed vectorizing the operation: "
1629 << *op << "\n");
1630 return WalkResult::interrupt();
1631 }
1632
1633 return WalkResult::advance();
1634 });
1635
1636 if (opVecResult.wasInterrupted()) {
1637 LLVM_DEBUG(dbgs() << "[early-vect]+++++ failed vectorization for: "
1638 << rootLoop << "\n");
1639 // Erase vector loop nest if it was created.
1640 auto vecRootLoopIt = state.opVectorReplacement.find(rootLoop);
1641 if (vecRootLoopIt != state.opVectorReplacement.end())
1642 eraseLoopNest(cast<AffineForOp>(vecRootLoopIt->second));
1643
1644 return failure();
1645 }
1646
1647 // Replace results of reduction loops with the scalar values computed using
1648 // `vector.reduce` or similar ops.
1649 for (auto resPair : state.loopResultScalarReplacement)
1650 resPair.first.replaceAllUsesWith(resPair.second);
1651
1652 assert(state.opVectorReplacement.count(rootLoop) == 1 &&
1653 "Expected vector replacement for loop nest");
1654 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ success vectorizing pattern");
1655 LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ vectorization result:\n"
1656 << *state.opVectorReplacement[rootLoop]);
1657
1658 // Finish this vectorization pattern.
1659 state.finishVectorizationPattern(rootLoop);
1660 return success();
1661}
1662
1663/// Extracts the matched loops and vectorizes them following a topological
1664/// order. A new vector loop nest will be created if vectorization succeeds. The
1665/// original loop nest won't be modified in any case.
1666static LogicalResult vectorizeRootMatch(NestedMatch m,
1667 const VectorizationStrategy &strategy) {
1668 std::vector<SmallVector<AffineForOp, 2>> loopsToVectorize;
1669 getMatchedAffineLoops(m, loopsToVectorize);
1670 return vectorizeLoopNest(loopsToVectorize, strategy);
1671}
1672
1673/// Traverses all the loop matches and classifies them into intersection
1674 /// buckets. Two matches intersect if either of them encloses the other. A
1675/// match intersects with a bucket if the match intersects with the root
1676/// (outermost) loop in that bucket.
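///
/// For example (illustrative), given matches rooted at loops %a, %b and %c,
/// where %a encloses %b and %c is disjoint from both, two buckets are formed:
/// {match(%a), match(%b)} with root %a, and {match(%c)} with root %c.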
1677 static void computeIntersectionBuckets(
1678 ArrayRef<NestedMatch> matches,
1679 std::vector<SmallVector<NestedMatch, 8>> &intersectionBuckets) {
1680 assert(intersectionBuckets.empty() && "Expected empty output");
1681 // Keeps track of the root (outermost) loop of each bucket.
1682 SmallVector<AffineForOp, 8> bucketRoots;
1683
1684 for (const NestedMatch &match : matches) {
1685 AffineForOp matchRoot = cast<AffineForOp>(match.getMatchedOperation());
1686 bool intersects = false;
1687 for (int i = 0, end = intersectionBuckets.size(); i < end; ++i) {
1688 AffineForOp bucketRoot = bucketRoots[i];
1689 // Add match to the bucket if the bucket root encloses the match root.
1690 if (bucketRoot->isAncestor(matchRoot)) {
1691 intersectionBuckets[i].push_back(match);
1692 intersects = true;
1693 break;
1694 }
1695 // Add match to the bucket if the match root encloses the bucket root. The
1696 // match root becomes the new bucket root.
1697 if (matchRoot->isAncestor(bucketRoot)) {
1698 bucketRoots[i] = matchRoot;
1699 intersectionBuckets[i].push_back(match);
1700 intersects = true;
1701 break;
1702 }
1703 }
1704
1705 // Match doesn't intersect with any existing bucket. Create a new bucket for
1706 // it.
1707 if (!intersects) {
1708 bucketRoots.push_back(matchRoot);
1709 intersectionBuckets.emplace_back();
1710 intersectionBuckets.back().push_back(match);
1711 }
1712 }
1713}
1714
1715/// Internal implementation to vectorize affine loops in 'loops' using the n-D
1716/// vectorization factors in 'vectorSizes'. By default, each vectorization
1717/// factor is applied inner-to-outer to the loops of each loop nest.
1718/// 'fastestVaryingPattern' can be optionally used to provide a different loop
1719/// vectorization order. `reductionLoops` can be provided to specify loops which
1720/// can be vectorized along the reduction dimension.
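///
/// For example (illustrative), vectorSizes = [4, 16] requests 4x16
/// super-vectors (e.g. vector<4x16xf32>): the 16-wide factor is applied to the
/// innermost matched loop of each nest and the 4-wide factor to its enclosing
/// loop.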
1721static void vectorizeLoops(Operation *parentOp, DenseSet<Operation *> &loops,
1722 ArrayRef<int64_t> vectorSizes,
1723 ArrayRef<int64_t> fastestVaryingPattern,
1724 const ReductionLoopMap &reductionLoops) {
1725 assert((reductionLoops.empty() || vectorSizes.size() == 1) &&
1726 "Vectorizing reductions is supported only for 1-D vectors");
1727
1728 // Compute 1-D, 2-D or 3-D loop pattern to be matched on the target loops.
1729 std::optional<NestedPattern> pattern =
1730 makePattern(loops, vectorSizes.size(), fastestVaryingPattern);
1731 if (!pattern) {
1732 LLVM_DEBUG(dbgs() << "\n[early-vect] pattern couldn't be computed\n");
1733 return;
1734 }
1735
1736 LLVM_DEBUG(dbgs() << "\n******************************************");
1737 LLVM_DEBUG(dbgs() << "\n******************************************");
1738 LLVM_DEBUG(dbgs() << "\n[early-vect] new pattern on parent op\n");
1739 LLVM_DEBUG(dbgs() << *parentOp << "\n");
1740
1741 unsigned patternDepth = pattern->getDepth();
1742
1743 // Compute all the pattern matches and classify them into buckets of
1744 // intersecting matches.
1745 SmallVector<NestedMatch, 8> allMatches;
1746 pattern->match(parentOp, &allMatches);
1747 std::vector<SmallVector<NestedMatch, 8>> intersectionBuckets;
1748 computeIntersectionBuckets(allMatches, intersectionBuckets);
1749
1750 // Iterate over all buckets and vectorize the matches eagerly. We can only
1751 // vectorize one match from each bucket since all the matches within a bucket
1752 // intersect.
1753 for (auto &intersectingMatches : intersectionBuckets) {
1754 for (NestedMatch &match : intersectingMatches) {
1755 VectorizationStrategy strategy;
1756 // TODO: depending on profitability, elect to reduce the vector size.
1757 strategy.vectorSizes.assign(vectorSizes.begin(), vectorSizes.end());
1758 strategy.reductionLoops = reductionLoops;
1759 if (failed(analyzeProfitability(match.getMatchedChildren(), 1,
1760 patternDepth, &strategy))) {
1761 continue;
1762 }
1763 vectorizeLoopIfProfitable(match.getMatchedOperation(), 0, patternDepth,
1764 &strategy);
1765 // Vectorize match. Skip the rest of intersecting matches in the bucket if
1766 // vectorization succeeded.
1767 // TODO: if pattern does not apply, report it; alter the cost/benefit.
1768 // TODO: some diagnostics if failure to vectorize occurs.
1769 if (succeeded(vectorizeRootMatch(match, strategy)))
1770 break;
1771 }
1772 }
1773
1774 LLVM_DEBUG(dbgs() << "\n");
1775}
1776
1777/// Applies vectorization to the current function by searching over a bunch of
1778/// predetermined patterns.
1779void Vectorize::runOnOperation() {
1780 func::FuncOp f = getOperation();
1781 if (!fastestVaryingPattern.empty() &&
1782 fastestVaryingPattern.size() != vectorSizes.size()) {
1783 f.emitRemark("Fastest varying pattern specified with different size than "
1784 "the vector size.");
1785 return signalPassFailure();
1786 }
1787
1788 if (vectorizeReductions && vectorSizes.size() != 1) {
1789 f.emitError("Vectorizing reductions is supported only for 1-D vectors.");
1790 return signalPassFailure();
1791 }
1792
1793 if (llvm::any_of(vectorSizes, [](int64_t size) { return size <= 0; })) {
1794 f.emitError("Vectorization factor must be greater than zero.");
1795 return signalPassFailure();
1796 }
1797
1798 DenseSet<Operation *> parallelLoops;
1799 ReductionLoopMap reductionLoops;
1800
1801 // If 'vectorize-reductions=true' is provided, we also populate the
1802 // `reductionLoops` map.
1803 if (vectorizeReductions) {
1804 f.walk([&parallelLoops, &reductionLoops](AffineForOp loop) {
1805 SmallVector<LoopReduction, 2> reductions;
1806 if (isLoopParallel(loop, &reductions)) {
1807 parallelLoops.insert(loop);
1808 // If it's not a reduction loop, adding it to the map is not necessary.
1809 if (!reductions.empty())
1810 reductionLoops[loop] = reductions;
1811 }
1812 });
1813 } else {
1814 f.walk([&parallelLoops](AffineForOp loop) {
1815 if (isLoopParallel(loop))
1816 parallelLoops.insert(loop);
1817 });
1818 }
1819
1820 // Thread-safe RAII local context, BumpPtrAllocator freed on exit.
1821 NestedPatternContext mlContext;
1822 vectorizeLoops(f, parallelLoops, vectorSizes, fastestVaryingPattern,
1823 reductionLoops);
1824}
1825
1826/// Verify that affine loops in 'loops' meet the nesting criteria expected by
1827/// SuperVectorizer:
1828/// * There must be at least one loop.
1829/// * There must be a single root loop (nesting level 0).
1830/// * Each loop at a given nesting level must be nested in a loop from a
1831/// previous nesting level.
1832static LogicalResult
1833 verifyLoopNesting(const std::vector<SmallVector<AffineForOp, 2>> &loops) {
1834 // Expected at least one loop.
1835 if (loops.empty())
1836 return failure();
1837
1838 // Expected only one root loop.
1839 if (loops[0].size() != 1)
1840 return failure();
1841
1842 // Traverse loops outer-to-inner to check some invariants.
1843 for (int i = 1, end = loops.size(); i < end; ++i) {
1844 for (AffineForOp loop : loops[i]) {
1845 // Check that each loop at this level is nested in one of the loops from
1846 // the previous level.
1847 if (none_of(loops[i - 1], [&](AffineForOp maybeParent) {
1848 return maybeParent->isProperAncestor(loop);
1849 }))
1850 return failure();
1851
1852 // Check that each loop at this level is not nested in another loop from
1853 // this level.
1854 for (AffineForOp sibling : loops[i]) {
1855 if (sibling->isProperAncestor(loop))
1856 return failure();
1857 }
1858 }
1859 }
1860
1861 return success();
1862}
1863
1864/// External utility to vectorize affine loops in 'loops' using the n-D
1865/// vectorization factors in 'vectorSizes'. By default, each vectorization
1866/// factor is applied inner-to-outer to the loops of each loop nest.
1867/// 'fastestVaryingPattern' can be optionally used to provide a different loop
1868/// vectorization order.
1869/// If `reductionLoops` is not empty, the given reduction loops may be
1870/// vectorized along the reduction dimension.
1871/// TODO: Vectorizing reductions is supported only for 1-D vectorization.
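///
/// A minimal usage sketch (hypothetical caller; 'funcOp' and the loop
/// collection are assumed to be built by the caller):
///
///   DenseSet<Operation *> loops;
///   funcOp.walk([&](AffineForOp forOp) {
///     if (isLoopParallel(forOp))
///       loops.insert(forOp);
///   });
///   vectorizeAffineLoops(funcOp, loops, /*vectorSizes=*/{128},
///                        /*fastestVaryingPattern=*/{},
///                        /*reductionLoops=*/ReductionLoopMap());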
1872void mlir::affine::vectorizeAffineLoops(
1873 Operation *parentOp, DenseSet<Operation *> &loops,
1874 ArrayRef<int64_t> vectorSizes, ArrayRef<int64_t> fastestVaryingPattern,
1875 const ReductionLoopMap &reductionLoops) {
1876 // Thread-safe RAII local context, BumpPtrAllocator freed on exit.
1877 NestedPatternContext mlContext;
1878 vectorizeLoops(parentOp, loops, vectorSizes, fastestVaryingPattern,
1879 reductionLoops);
1880}
1881
1882/// External utility to vectorize affine loops from a single loop nest using an
1883/// n-D vectorization strategy (see doc in VectorizationStrategy definition).
1884/// Loops are provided in a 2D vector container. The first dimension represents
1885/// the nesting level relative to the loops to be vectorized. The second
1886/// dimension contains the loops. This means that:
1887/// a) every loop in 'loops[i]' must have a parent loop in 'loops[i-1]',
1888/// b) a loop in 'loops[i]' may or may not have a child loop in 'loops[i+1]'.
1889///
1890/// For example, for the following loop nest:
1891///
1892/// func @vec2d(%in0: memref<64x128x512xf32>, %in1: memref<64x128x128xf32>,
1893/// %out0: memref<64x128x512xf32>,
1894/// %out1: memref<64x128x128xf32>) {
1895/// affine.for %i0 = 0 to 64 {
1896/// affine.for %i1 = 0 to 128 {
1897/// affine.for %i2 = 0 to 512 {
1898/// %ld = affine.load %in0[%i0, %i1, %i2] : memref<64x128x512xf32>
1899/// affine.store %ld, %out0[%i0, %i1, %i2] : memref<64x128x512xf32>
1900/// }
1901/// affine.for %i3 = 0 to 128 {
1902/// %ld = affine.load %in1[%i0, %i1, %i3] : memref<64x128x128xf32>
1903/// affine.store %ld, %out1[%i0, %i1, %i3] : memref<64x128x128xf32>
1904/// }
1905/// }
1906/// }
1907/// return
1908/// }
1909///
1910/// loops = {{%i0}, {%i2, %i3}}, to vectorize the outermost and the two
1911/// innermost loops;
1912/// loops = {{%i1}, {%i2, %i3}}, to vectorize the middle and the two innermost
1913/// loops;
1914/// loops = {{%i2}}, to vectorize only the first innermost loop;
1915/// loops = {{%i3}}, to vectorize only the second innermost loop;
1916/// loops = {{%i1}}, to vectorize only the middle loop.
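///
/// A minimal usage sketch (hypothetical caller; 'outerLoop' and 'innerLoop'
/// are assumed to be AffineForOp handles into a nest like the one above,
/// vectorizing the inner loop by 128):
///
///   VectorizationStrategy strategy;
///   strategy.vectorSizes.push_back(128);
///   strategy.loopToVectorDim[innerLoop] = 0;
///   std::vector<SmallVector<AffineForOp, 2>> loops = {{outerLoop},
///                                                     {innerLoop}};
///   if (failed(vectorizeAffineLoopNest(loops, strategy)))
///     llvm::errs() << "vectorization failed; scalar nest is unchanged\n";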
1917LogicalResult mlir::affine::vectorizeAffineLoopNest(
1918 std::vector<SmallVector<AffineForOp, 2>> &loops,
1919 const VectorizationStrategy &strategy) {
1920 // Thread-safe RAII local context, BumpPtrAllocator freed on exit.
1921 NestedPatternContext mlContext;
1922 if (failed(verifyLoopNesting(loops)))
1923 return failure();
1924 return vectorizeLoopNest(loops, strategy);
1925}