1 //===- SuperVectorize.cpp - Vectorize Pass Impl ---------------------------===//
2 //
3 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4 // See https://llvm.org/LICENSE.txt for license information.
5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6 //
7 //===----------------------------------------------------------------------===//
8 //
9 // This file implements vectorization of loops, operations and data types to
10 // a target-independent, n-D super-vector abstraction.
11 //
12 //===----------------------------------------------------------------------===//
13 
15 
26 #include "mlir/IR/IRMapping.h"
27 #include "mlir/Pass/Pass.h"
28 #include "mlir/Support/LLVM.h"
29 #include "llvm/ADT/STLExtras.h"
30 #include "llvm/Support/Debug.h"
31 #include <optional>
32 
33 namespace mlir {
34 namespace affine {
35 #define GEN_PASS_DEF_AFFINEVECTORIZE
36 #include "mlir/Dialect/Affine/Passes.h.inc"
37 } // namespace affine
38 } // namespace mlir
39 
40 using namespace mlir;
41 using namespace affine;
42 using namespace vector;
43 
44 ///
45 /// Implements a high-level vectorization strategy on a Function.
46 /// The abstraction used is that of super-vectors, which provide a single,
47 /// compact representation, in the vector types, of information that is
48 /// expected to reduce the impact of the phase ordering problem.
49 ///
50 /// Vector granularity:
51 /// ===================
52 /// This pass is designed to perform vectorization at a super-vector
53 /// granularity. A super-vector is loosely defined as a vector type that is a
54 /// multiple of a "good" vector size so the HW can efficiently implement a set
55 /// of high-level primitives. Multiple is understood along any dimension; e.g.
56 /// both vector<16xf32> and vector<2x8xf32> are valid super-vectors for a
57 /// vector<8xf32> HW vector. Note that a "good vector size so the HW can
58 /// efficiently implement a set of high-level primitives" is not necessarily an
59 /// integer multiple of actual hardware registers. We leave details of this
60 /// distinction unspecified for now.
61 ///
62 /// Some may prefer the terminology a "tile of HW vectors". In this case, one
63 /// should note that super-vectors implement an "always full tile" abstraction.
64 /// They guarantee no partial-tile separation is necessary by relying on a
65 /// high-level copy-reshape abstraction that we call vector.transfer. This
66 /// copy-reshape operation is also responsible for performing layout
67 /// transposition if necessary. In the general case this will require a scoped
68 /// allocation in some notional local memory.
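///
/// As a hedged sketch of what such a copy-reshape can look like in the IR (the
/// names %A, %i, %j and the padding value %pad are made up for illustration),
/// a layout-transposing read of a 2-D super-vector can be expressed with a
/// permutation map on vector.transfer_read:
/// ```mlir
/// %v = vector.transfer_read %A[%i, %j], %pad
///        {permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
///        : memref<?x?xf32>, vector<32x256xf32>
/// ```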
69 ///
70 /// Whatever the mental model one prefers to use for this abstraction, the key
71 /// point is that we burn into a single, compact, representation in the vector
72 /// types, information that is expected to reduce the impact of the phase
73 /// ordering problem. Indeed, a vector type conveys information that:
74 /// 1. the associated loops have dependency semantics that do not prevent
75 /// vectorization;
76 /// 2. the associated loops have been sliced in chunks of static sizes that are
77 /// compatible with vector sizes (i.e. similar to unroll-and-jam);
78 /// 3. the inner loops, in the unroll-and-jam analogy of 2, are captured by the
79 /// vector type and no vectorization-hampering transformations can be
80 /// applied to them anymore;
82 /// 4. the underlying memrefs are accessed in some notional contiguous way
83 /// that allows loading into vectors with some amount of spatial locality;
84 /// In other words, super-vectorization provides a level of separation of
85 /// concern by way of opacity to subsequent passes. This has the effect of
86 /// encapsulating and propagating vectorization constraints down the list of
87 /// passes until we are ready to lower further.
88 ///
89 /// For a particular target, a notion of minimal n-d vector size will be
90 /// specified and vectorization targets a multiple of those. In the following
91 /// paragraph, let "k ." represent "a multiple of", to be understood as a
92 /// multiple in the same dimension (e.g. vector<16 x k . 128> summarizes
93 /// vector<16 x 128>, vector<16 x 256>, vector<16 x 1024>, etc).
94 ///
95 /// Some non-exhaustive notable super-vector sizes of interest include:
96 /// - CPU: vector<k . HW_vector_size>,
97 /// vector<k' . core_count x k . HW_vector_size>,
98 /// vector<socket_count x k' . core_count x k . HW_vector_size>;
99 /// - GPU: vector<k . warp_size>,
100 /// vector<k . warp_size x float2>,
101 /// vector<k . warp_size x float4>,
102 /// vector<k . warp_size x 4 x 4 x 4> (for tensor_core sizes).
103 ///
104 /// Loops and operations are emitted that operate on those super-vector shapes.
105 /// Subsequent lowering passes will materialize to actual HW vector sizes. These
106 /// passes are expected to be (gradually) more target-specific.
107 ///
108 /// At a high level, a vectorized load in a loop will resemble:
109 /// ```mlir
110 /// affine.for %i = ? to ? step ? {
111 /// %v_a = vector.transfer_read A[%i] : memref<?xf32>, vector<128xf32>
112 /// }
113 /// ```
114 /// It is the responsibility of the implementation of vector.transfer_read to
115 /// materialize vector registers from the original scalar memrefs. A later (more
116 /// target-dependent) lowering pass will materialize to actual HW vector sizes.
117 /// This lowering may occur at different times:
118 /// 1. at the MLIR level into a combination of loops, unrolling, DmaStartOp +
119 /// DmaWaitOp + vectorized operations for data transformations and shuffle;
120 /// thus opening opportunities for unrolling and pipelining. This is an
121 /// instance of library call "whiteboxing"; or
122 /// 2. later in a target-specific lowering pass or hand-written library
123 /// call; achieving full separation of concerns. This is an instance of
124 /// library call; or
125 /// 3. a mix of both, e.g. based on a model.
126 /// In the future, these operations will expose a contract to constrain the
127 /// search on vectorization patterns and sizes.
128 ///
129 /// Occurrence of super-vectorization in the compiler flow:
130 /// =======================================================
131 /// This is an active area of investigation. We start with 2 remarks to position
132 /// super-vectorization in the context of existing ongoing work: LLVM VPLAN
133 /// and LLVM SLP Vectorizer.
134 ///
135 /// LLVM VPLAN:
136 /// -----------
137 /// The astute reader may have noticed that in the limit, super-vectorization
138 /// can be applied at a similar time and with similar objectives as VPLAN.
139 /// For instance, it could be applied late in a traditional polyhedral
140 /// compilation flow (e.g. the PPCG project uses ISL to provide dependence
141 /// analysis, multi-level scheduling + tiling, lifting of footprints to fast
142 /// memory, communication synthesis, mapping, and register optimizations) and
143 /// before unrolling. When vectorization is applied at this *late* level in a typical
144 /// polyhedral flow, and is instantiated with actual hardware vector sizes,
145 /// super-vectorization is expected to match (or subsume) the type of patterns
146 /// that LLVM's VPLAN aims at targeting. The main difference here is that MLIR
147 /// is higher level and our implementation should be significantly simpler. Also
148 /// note that in this mode, recursive patterns are probably a bit of overkill
149 /// although it is reasonable to expect that mixing a bit of outer loop and
150 /// inner loop vectorization + unrolling will provide interesting choices to
151 /// MLIR.
152 ///
153 /// LLVM SLP Vectorizer:
154 /// --------------------
155 /// Super-vectorization, however, is not meant to be used in a similar fashion
156 /// to the SLP vectorizer. The main difference lies in the information that
157 /// both vectorizers use: super-vectorization examines contiguity of memory
158 /// references along fastest varying dimensions and loops with recursive nested
159 /// patterns capturing imperfectly-nested loop nests; the SLP vectorizer, on
160 /// the other hand, performs flat pattern matching inside a single unrolled loop
161 /// body and stitches together pieces of load and store operations into full
162 /// 1-D vectors. We envision that the SLP vectorizer is a good way to capture
163 /// innermost loop, control-flow dependent patterns that super-vectorization may
164 /// not be able to capture easily. In other words, super-vectorization does not
165 /// aim at replacing the SLP vectorizer and the two solutions are complementary.
166 ///
167 /// Ongoing investigations:
168 /// -----------------------
169 /// We discuss the following *early* places where super-vectorization is
170 /// applicable and touch on the expected benefits and risks. We list the
171 /// opportunities in the context of the traditional polyhedral compiler flow
172 /// described in PPCG. There are essentially 6 places in the MLIR pass pipeline
173 /// we expect to experiment with super-vectorization:
174 /// 1. Right after language lowering to MLIR: this is the earliest time where
175 /// super-vectorization is expected to be applied. At this level, all the
176 /// language/user/library-level annotations are available and can be fully
177 /// exploited. Examples include loop-type annotations (such as parallel,
178 /// reduction, scan, dependence distance vector, vectorizable) as well as
179 /// memory access annotations (such as non-aliasing writes guaranteed,
180 /// indirect accesses that are permutations by construction), or the fact
181 /// that a particular operation is prescribed atomic by the user. At this
182 /// level, anything that enriches what dependence analysis can do should be
183 /// aggressively exploited. At this level we are close to having explicit
184 /// vector types in the language, except we do not impose that burden on the
185 /// programmer/library: we derive information from scalar code + annotations.
186 /// 2. After dependence analysis and before polyhedral scheduling: the
187 /// information that supports vectorization does not need to be supplied by a
188 /// higher level of abstraction. Traditional dependence analysis is available
189 /// in MLIR and will be used to drive vectorization and cost models.
190 ///
191 /// Let's pause here and remark that applying super-vectorization as described
192 /// in 1. and 2. presents clear opportunities and risks:
193 /// - the opportunity is that vectorization is burned in the type system and
194 /// is protected from the adverse effect of loop scheduling, tiling, loop
195 /// interchange and all passes downstream. Provided that subsequent passes are
196 /// able to operate on vector types, the vector shapes, associated loop
197 /// iterator properties, alignment, and contiguity of fastest varying
198 /// dimensions are preserved until we lower the super-vector types. We expect
199 /// this to significantly rein in the adverse effects of phase ordering.
200 /// - the risks are that a. all passes after super-vectorization have to work
201 /// on elemental vector types (note that this is always true, wherever
202 /// vectorization is applied) and b. that imposing vectorization constraints
203 /// too early may be overall detrimental to loop fusion, tiling and other
204 /// transformations because the dependence distances are coarsened when
205 /// operating on elemental vector types. For this reason, the pattern
206 /// profitability analysis should include a component that also captures the
207 /// maximal amount of fusion available under a particular pattern. This is
208 /// still at the stage of rough ideas but in this context, search is our
209 /// friend as the Tensor Comprehensions and auto-TVM contributions
210 /// demonstrated previously.
211 /// The bottom line is that we do not yet have good answers for the above but aim at
212 /// making it easy to answer such questions.
213 ///
214 /// Back to our listing, the last places where early super-vectorization makes
215 /// sense are:
216 /// 3. right after polyhedral-style scheduling: PLUTO-style algorithms are known
217 /// to improve locality and parallelism and to be configurable (e.g. max-fuse,
218 /// smart-fuse etc). They can also have adverse effects on contiguity
219 /// properties that are required for vectorization but the vector.transfer
220 /// copy-reshape-pad-transpose abstraction is expected to help recapture
221 /// these properties.
222 /// 4. right after polyhedral-style scheduling+tiling;
223 /// 5. right after scheduling+tiling+rescheduling: points 4 and 5 represent
224 /// probably the most promising places because applying tiling achieves a
225 /// separation of concerns that allows rescheduling to worry less about
226 /// locality and more about parallelism and distribution (e.g. min-fuse).
227 ///
228 /// At these levels the risk-reward looks different: on one hand we probably
229 /// lost a good deal of language/user/library-level annotation; on the other
230 /// hand we gained parallelism and locality through scheduling and tiling.
231 /// However we probably want to ensure tiling is compatible with the
232 /// full-tile-only abstraction used in super-vectorization or suffer the
233 /// consequences. It is too early to place bets on what will win but we expect
234 /// super-vectorization to be the right abstraction to allow exploring at all
235 /// these levels. And again, search is our friend.
236 ///
237 /// Lastly, we mention it again here:
238 /// 6. as a MLIR-based alternative to VPLAN.
239 ///
240 /// Lowering, unrolling, pipelining:
241 /// ================================
242 /// TODO: point to the proper places.
243 ///
244 /// Algorithm:
245 /// ==========
246 /// The algorithm proceeds in a few steps:
247 /// 1. defining super-vectorization patterns and matching them on the tree of
248 /// AffineForOp. A super-vectorization pattern is defined as a recursive
249 /// data structure that matches and captures nested, imperfectly-nested
250 /// loops that have a. conformable loop annotations attached (e.g. parallel,
251 /// reduction, vectorizable, ...) as well as b. all contiguous load/store
252 /// operations along a specified minor dimension (not necessarily the
253 /// fastest varying);
254 /// 2. analyzing those patterns for profitability (TODO: and
255 /// interference);
256 /// 3. then, for each pattern in order:
257 /// a. applying iterative rewriting of the loops and all their nested
258 /// operations in topological order. Rewriting is implemented by
259 /// coarsening the loops and converting operations and operands to their
260 /// vector forms. Processing operations in topological order is relatively
261 /// simple due to the structured nature of the control-flow
262 /// representation. This order ensures that all the operands of a given
263 /// operation have been vectorized before the operation itself in a single
264 /// traversal, except for operands defined outside of the loop nest. The
265 /// algorithm can convert the following operations to their vector form (a
/// schematic sketch is given after this algorithm outline):
266 /// * Affine load and store operations are converted to opaque vector
267 /// transfer read and write operations.
268 /// * Scalar constant operations/operands are converted to vector
269 /// constant operations (splat).
270 /// * Uniform operands (only induction variables of loops not mapped to
271 /// a vector dimension, or operands defined outside of the loop nest
272 /// for now) are broadcasted to a vector.
273 /// TODO: Support more uniform cases.
274 /// * Affine for operations with 'iter_args' are vectorized by
275 /// vectorizing their 'iter_args' operands and results.
276 /// TODO: Support more complex loops with divergent lbs and/or ubs.
277 /// * The remaining operations in the loop nest are vectorized by
278 /// widening their scalar types to vector types.
279 /// b. if everything under the root AffineForOp in the current pattern
280 /// is vectorized properly, we commit that loop to the IR and remove the
281 /// scalar loop. Otherwise, we discard the vectorized loop and keep the
282 /// original scalar loop.
283 /// c. vectorization is applied on the next pattern in the list. Because
284 /// pattern interference avoidance is not yet implemented and we do
285 /// not support further vectorizing an already vectorized load, we need to
286 /// re-verify that the pattern is still vectorizable. This is expected to
287 /// make cost models more difficult to write and is subject to improvement
288 /// in the future.
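///
/// The following is a schematic sketch of the conversions listed in step 3.a
/// (the names %s, %A, %B and the vector width 128 are chosen purely for
/// illustration; the transfer padding operand is omitted, as in the examples
/// above). Here %s is defined outside the loop and is therefore uniform:
/// ```mlir
/// // Scalar input.
/// affine.for %i = 0 to %N {
///   %a = affine.load %A[%i] : memref<?xf32>
///   %r = arith.mulf %a, %s : f32
///   affine.store %r, %B[%i] : memref<?xf32>
/// }
/// // After 1-D vectorization by 128 (sketch).
/// affine.for %i = 0 to %N step 128 {
///   %va = vector.transfer_read %A[%i] : memref<?xf32>, vector<128xf32>
///   %vs = vector.broadcast %s : f32 to vector<128xf32>
///   %vr = arith.mulf %va, %vs : vector<128xf32>
///   vector.transfer_write %vr, %B[%i] : vector<128xf32>, memref<?xf32>
/// }
/// ```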
289 ///
290 /// Choice of loop transformation to support the algorithm:
291 /// =======================================================
292 /// The choice of loop transformation to apply for coarsening vectorized loops
293 /// is still subject to exploratory tradeoffs. In particular, say we want to
294 /// vectorize by a factor of 128, we want to transform the following input:
295 /// ```mlir
296 /// affine.for %i = %M to %N {
297 /// %a = affine.load %A[%i] : memref<?xf32>
298 /// }
299 /// ```
300 ///
301 /// Traditionally, one would vectorize late (after scheduling, tiling,
302 /// memory promotion etc) say after stripmining (and potentially unrolling in
303 /// the case of LLVM's SLP vectorizer):
304 /// ```mlir
305 /// affine.for %i = floor(%M, 128) to ceil(%N, 128) {
306 /// affine.for %ii = max(%M, 128 * %i) to min(%N, 128*%i + 127) {
307 /// %a = affine.load %A[%ii] : memref<?xf32>
308 /// }
309 /// }
310 /// ```
311 ///
312 /// Instead, we seek to vectorize early and freeze vector types before
313 /// scheduling, so we want to generate a pattern that resembles:
314 /// ```mlir
315 /// affine.for %i = ? to ? step ? {
316 /// %v_a = vector.transfer_read %A[%i] : memref<?xf32>, vector<128xf32>
317 /// }
318 /// ```
319 ///
320 /// i. simply dividing the lower / upper bounds by 128 creates issues
321 /// when representing expressions such as ii + 1 because now we only
322 /// have access to original values that have been divided. Additional
323 /// information is needed to specify accesses at below-128 granularity;
324 /// ii. another alternative is to coarsen the loop step but this may have
325 /// consequences on dependence analysis and fusability of loops: fusable
326 /// loops probably need to have the same step (because we don't want to
327 /// stripmine/unroll to enable fusion).
328 /// As a consequence, we choose to represent the coarsening using the loop
329 /// step for now and reevaluate in the future. Note that we can renormalize
330 /// loop steps later if/when we have evidence that they are problematic.
331 ///
332 /// For the simple strawman example above, vectorizing for a 1-D vector
333 /// abstraction of size 128 returns code similar to:
334 /// ```mlir
335 /// affine.for %i = %M to %N step 128 {
336 /// %v_a = vector.transfer_read %A[%i] : memref<?xf32>, vector<128xf32>
337 /// }
338 /// ```
339 ///
340 /// Unsupported cases, extensions, and work in progress (help welcome :-) ):
341 /// ========================================================================
342 /// 1. lowering to concrete vector types for various HW;
343 /// 2. reduction support for n-D vectorization and non-unit steps;
344 /// 3. non-effecting padding during vector.transfer_read and filter during
345 /// vector.transfer_write;
346 /// 4. misalignment support for vector.transfer_read / vector.transfer_write
347 /// (hopefully without read-modify-writes);
348 /// 5. control-flow support;
349 /// 6. cost-models, heuristics and search;
350 /// 7. Op implementation, extensions and implication on memref views;
351 /// 8. many TODOs left around.
352 ///
353 /// Examples:
354 /// =========
355 /// Consider the following Function:
356 /// ```mlir
357 /// func @vector_add_2d(%M : index, %N : index) -> f32 {
358 /// %A = alloc (%M, %N) : memref<?x?xf32, 0>
359 /// %B = alloc (%M, %N) : memref<?x?xf32, 0>
360 /// %C = alloc (%M, %N) : memref<?x?xf32, 0>
361 /// %f1 = arith.constant 1.0 : f32
362 /// %f2 = arith.constant 2.0 : f32
363 /// affine.for %i0 = 0 to %M {
364 /// affine.for %i1 = 0 to %N {
365 /// // non-scoped %f1
366 /// affine.store %f1, %A[%i0, %i1] : memref<?x?xf32, 0>
367 /// }
368 /// }
369 /// affine.for %i2 = 0 to %M {
370 /// affine.for %i3 = 0 to %N {
371 /// // non-scoped %f2
372 /// affine.store %f2, %B[%i2, %i3] : memref<?x?xf32, 0>
373 /// }
374 /// }
375 /// affine.for %i4 = 0 to %M {
376 /// affine.for %i5 = 0 to %N {
377 /// %a5 = affine.load %A[%i4, %i5] : memref<?x?xf32, 0>
378 /// %b5 = affine.load %B[%i4, %i5] : memref<?x?xf32, 0>
379 /// %s5 = arith.addf %a5, %b5 : f32
380 /// // non-scoped %f1
381 /// %s6 = arith.addf %s5, %f1 : f32
382 /// // non-scoped %f2
383 /// %s7 = arith.addf %s5, %f2 : f32
384 /// // diamond dependency.
385 /// %s8 = arith.addf %s7, %s6 : f32
386 /// affine.store %s8, %C[%i4, %i5] : memref<?x?xf32, 0>
387 /// }
388 /// }
389 /// %c7 = arith.constant 7 : index
390 /// %c42 = arith.constant 42 : index
391 /// %res = load %C[%c7, %c42] : memref<?x?xf32, 0>
392 /// return %res : f32
393 /// }
394 /// ```
395 ///
396 /// The -affine-super-vectorize pass with the following arguments:
397 /// ```
398 /// -affine-super-vectorize="virtual-vector-size=256 test-fastest-varying=0"
399 /// ```
400 ///
401 /// produces this standard innermost-loop vectorized code:
402 /// ```mlir
403 /// func @vector_add_2d(%arg0 : index, %arg1 : index) -> f32 {
404 /// %0 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
405 /// %1 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
406 /// %2 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
407 /// %cst = arith.constant 1.0 : f32
408 /// %cst_0 = arith.constant 2.0 : f32
409 /// affine.for %i0 = 0 to %arg0 {
410 /// affine.for %i1 = 0 to %arg1 step 256 {
411 /// %cst_1 = arith.constant dense<1.0> :
412 /// vector<256xf32>
413 /// vector.transfer_write %cst_1, %0[%i0, %i1] :
414 /// vector<256xf32>, memref<?x?xf32>
415 /// }
416 /// }
417 /// affine.for %i2 = 0 to %arg0 {
418 /// affine.for %i3 = 0 to %arg1 step 256 {
419 /// %cst_2 = arith.constant dense<2.0> :
420 /// vector<256xf32>
421 /// vector.transfer_write %cst_2, %1[%i2, %i3] :
422 /// vector<256xf32>, memref<?x?xf32>
423 /// }
424 /// }
425 /// affine.for %i4 = 0 to %arg0 {
426 /// affine.for %i5 = 0 to %arg1 step 256 {
427 /// %3 = vector.transfer_read %0[%i4, %i5] :
428 /// memref<?x?xf32>, vector<256xf32>
429 /// %4 = vector.transfer_read %1[%i4, %i5] :
430 /// memref<?x?xf32>, vector<256xf32>
431 /// %5 = arith.addf %3, %4 : vector<256xf32>
432 /// %cst_3 = arith.constant dense<1.0> :
433 /// vector<256xf32>
434 /// %6 = arith.addf %5, %cst_3 : vector<256xf32>
435 /// %cst_4 = arith.constant dense<2.0> :
436 /// vector<256xf32>
437 /// %7 = arith.addf %5, %cst_4 : vector<256xf32>
438 /// %8 = arith.addf %7, %6 : vector<256xf32>
439 /// vector.transfer_write %8, %2[%i4, %i5] :
440 /// vector<256xf32>, memref<?x?xf32>
441 /// }
442 /// }
443 /// %c7 = arith.constant 7 : index
444 /// %c42 = arith.constant 42 : index
445 /// %9 = load %2[%c7, %c42] : memref<?x?xf32>
446 /// return %9 : f32
447 /// }
448 /// ```
449 ///
450 /// The -affine-super-vectorize pass with the following arguments:
451 /// ```
452 /// -affine-super-vectorize="virtual-vector-size=32,256 \
453 /// test-fastest-varying=1,0"
454 /// ```
455 ///
456 /// produces this more interesting mixed outer-innermost-loop vectorized code:
457 /// ```mlir
458 /// func @vector_add_2d(%arg0 : index, %arg1 : index) -> f32 {
459 /// %0 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
460 /// %1 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
461 /// %2 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
462 /// %cst = arith.constant 1.0 : f32
463 /// %cst_0 = arith.constant 2.0 : f32
464 /// affine.for %i0 = 0 to %arg0 step 32 {
465 /// affine.for %i1 = 0 to %arg1 step 256 {
466 /// %cst_1 = arith.constant dense<1.0> :
467 /// vector<32x256xf32>
468 /// vector.transfer_write %cst_1, %0[%i0, %i1] :
469 /// vector<32x256xf32>, memref<?x?xf32>
470 /// }
471 /// }
472 /// affine.for %i2 = 0 to %arg0 step 32 {
473 /// affine.for %i3 = 0 to %arg1 step 256 {
474 /// %cst_2 = arith.constant dense<2.0> :
475 /// vector<32x256xf32>
476 /// vector.transfer_write %cst_2, %1[%i2, %i3] :
477 /// vector<32x256xf32>, memref<?x?xf32>
478 /// }
479 /// }
480 /// affine.for %i4 = 0 to %arg0 step 32 {
481 /// affine.for %i5 = 0 to %arg1 step 256 {
482 /// %3 = vector.transfer_read %0[%i4, %i5] :
483 /// memref<?x?xf32>, vector<32x256xf32>
484 /// %4 = vector.transfer_read %1[%i4, %i5] :
485 /// memref<?x?xf32>, vector<32x256xf32>
486 /// %5 = arith.addf %3, %4 : vector<32x256xf32>
487 /// %cst_3 = arith.constant dense<1.0> :
488 /// vector<32x256xf32>
489 /// %6 = arith.addf %5, %cst_3 : vector<32x256xf32>
490 /// %cst_4 = arith.constant dense<2.0> :
491 /// vector<32x256xf32>
492 /// %7 = arith.addf %5, %cst_4 : vector<32x256xf32>
493 /// %8 = arith.addf %7, %6 : vector<32x256xf32>
494 /// vector.transfer_write %8, %2[%i4, %i5] :
495 /// vector<32x256xf32>, memref<?x?xf32>
496 /// }
497 /// }
498 /// %c7 = arith.constant 7 : index
499 /// %c42 = arith.constant 42 : index
500 /// %9 = load %2[%c7, %c42] : memref<?x?xf32>
501 /// return %9 : f32
502 /// }
503 /// ```
504 ///
505 /// Of course, much more intricate n-D imperfectly-nested patterns can be
506 /// vectorized too and specified in a fully declarative fashion.
507 ///
508 /// Reduction:
509 /// ==========
510 /// Vectorizing reduction loops along the reduction dimension is supported if:
511 /// - the reduction kind is supported,
512 /// - the vectorization is 1-D, and
513 /// - the step size of the loop is equal to one.
514 ///
515 /// Compared to the non-vector-dimension case, two additional things are done
516 /// during vectorization of such loops:
517 /// - The resulting vector returned from the loop is reduced to a scalar using
518 /// `vector.reduction`.
519 /// - In some cases a mask is applied to the vector yielded at the end of the
520 /// loop to prevent garbage values from being written to the accumulator.
521 ///
522 /// Reduction vectorization is switched off by default; it can be enabled by
523 /// passing a map from loops to reductions to utility functions, or by passing
524 /// `vectorize-reductions=true` to the vectorization pass.
525 ///
526 /// Consider the following example:
527 /// ```mlir
528 /// func @vecred(%in: memref<512xf32>) -> f32 {
529 /// %cst = arith.constant 0.000000e+00 : f32
530 /// %sum = affine.for %i = 0 to 500 iter_args(%part_sum = %cst) -> (f32) {
531 /// %ld = affine.load %in[%i] : memref<512xf32>
532 /// %cos = math.cos %ld : f32
533 /// %add = arith.addf %part_sum, %cos : f32
534 /// affine.yield %add : f32
535 /// }
536 /// return %sum : f32
537 /// }
538 /// ```
539 ///
540 /// The -affine-super-vectorize pass with the following arguments:
541 /// ```
542 /// -affine-super-vectorize="virtual-vector-size=128 test-fastest-varying=0 \
543 /// vectorize-reductions=true"
544 /// ```
545 /// produces the following output:
546 /// ```mlir
547 /// #map = affine_map<(d0) -> (-d0 + 500)>
548 /// func @vecred(%arg0: memref<512xf32>) -> f32 {
549 /// %cst = arith.constant 0.000000e+00 : f32
550 /// %cst_0 = arith.constant dense<0.000000e+00> : vector<128xf32>
551 /// %0 = affine.for %arg1 = 0 to 500 step 128 iter_args(%arg2 = %cst_0)
552 /// -> (vector<128xf32>) {
553 /// // %2 is the number of iterations left in the original loop.
554 /// %2 = affine.apply #map(%arg1)
555 /// %3 = vector.create_mask %2 : vector<128xi1>
556 /// %cst_1 = arith.constant 0.000000e+00 : f32
557 /// %4 = vector.transfer_read %arg0[%arg1], %cst_1 :
558 /// memref<512xf32>, vector<128xf32>
559 /// %5 = math.cos %4 : vector<128xf32>
560 /// %6 = arith.addf %arg2, %5 : vector<128xf32>
561 /// // We filter out the effect of the last 12 elements using the mask.
562 /// %7 = select %3, %6, %arg2 : vector<128xi1>, vector<128xf32>
563 /// affine.yield %7 : vector<128xf32>
564 /// }
565 /// %1 = vector.reduction <add>, %0 : vector<128xf32> into f32
566 /// return %1 : f32
567 /// }
568 /// ```
569 ///
570 /// Note that because of loop misalignment we needed to apply a mask to prevent
571 /// the last 12 elements from affecting the final result. The mask is full of
572 /// ones in every iteration except for the last one, in which it has the form
573 /// `11...100...0` with 116 ones and 12 zeros: since 500 = 3 * 128 + 116, only
/// 116 elements remain meaningful in the final iteration.
574 
575 #define DEBUG_TYPE "early-vect"
576 
577 using llvm::dbgs;
578 
579 /// Forward declaration.
580 static FilterFunctionType
581 isVectorizableLoopPtrFactory(const DenseSet<Operation *> &parallelLoops,
582  int fastestVaryingMemRefDimension);
583 
584 /// Creates a vectorization pattern from the command line arguments.
585 /// Up to 3-D patterns are supported.
586 /// If the command line argument requests a pattern of higher order, returns an
587 /// empty pattern list which will conservatively result in no vectorization.
588 static std::optional<NestedPattern>
589 makePattern(const DenseSet<Operation *> &parallelLoops, int vectorRank,
590  ArrayRef<int64_t> fastestVaryingPattern) {
591  using affine::matcher::For;
592  int64_t d0 = fastestVaryingPattern.empty() ? -1 : fastestVaryingPattern[0];
593  int64_t d1 = fastestVaryingPattern.size() < 2 ? -1 : fastestVaryingPattern[1];
594  int64_t d2 = fastestVaryingPattern.size() < 3 ? -1 : fastestVaryingPattern[2];
595  switch (vectorRank) {
596  case 1:
597  return For(isVectorizableLoopPtrFactory(parallelLoops, d0));
598  case 2:
599  return For(isVectorizableLoopPtrFactory(parallelLoops, d0),
600  For(isVectorizableLoopPtrFactory(parallelLoops, d1)));
601  case 3:
602  return For(isVectorizableLoopPtrFactory(parallelLoops, d0),
603  For(isVectorizableLoopPtrFactory(parallelLoops, d1),
604  For(isVectorizableLoopPtrFactory(parallelLoops, d2))));
605  default: {
606  return std::nullopt;
607  }
608  }
609 }
610 
611 static NestedPattern &vectorTransferPattern() {
612  static auto pattern = affine::matcher::Op([](Operation &op) {
613  return isa<vector::TransferReadOp, vector::TransferWriteOp>(op);
614  });
615  return pattern;
616 }
617 
618 namespace {
619 
620 /// Base state for the vectorize pass.
621 /// Command line arguments are preempted by non-empty pass arguments.
622 struct Vectorize : public affine::impl::AffineVectorizeBase<Vectorize> {
623  using Base::Base;
624 
625  void runOnOperation() override;
626 };
627 
628 } // namespace
629 
630 static void vectorizeLoopIfProfitable(Operation *loop, unsigned depthInPattern,
631  unsigned patternDepth,
632  VectorizationStrategy *strategy) {
633  assert(patternDepth > depthInPattern &&
634  "patternDepth must be greater than depthInPattern");
635  if (patternDepth - depthInPattern > strategy->vectorSizes.size()) {
636  // Don't vectorize this loop
637  return;
638  }
639  strategy->loopToVectorDim[loop] =
640  strategy->vectorSizes.size() - (patternDepth - depthInPattern);
641 }
642 
643 /// Implements a simple strawman strategy for vectorization.
644 /// Given a matched pattern `matches` of depth `patternDepth`, this strategy
645 /// greedily assigns the fastest varying dimension ** of the vector ** to the
646 /// innermost loop in the pattern.
647 /// When coupled with a pattern that looks for the fastest varying dimension in
648 /// load/store MemRefs, this creates a generic vectorization strategy that works
649 /// for any loop in a hierarchy (outermost, innermost or intermediate).
650 ///
651 /// TODO: In the future we should additionally increase the power of the
652 /// profitability analysis along 3 directions:
653 /// 1. account for loop extents (both static and parametric + annotations);
654 /// 2. account for data layout permutations;
655 /// 3. account for impact of vectorization on maximal loop fusion.
656 /// Then we can quantify the above to build a cost model and search over
657 /// strategies.
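///
/// As a worked example of the mapping computed by vectorizeLoopIfProfitable
/// above (sizes chosen purely for illustration): with vectorSizes = {32, 256}
/// (so vectorSizes.size() == 2) and patternDepth == 2, the outer matched loop
/// (depthInPattern == 0) is assigned vector dimension 2 - (2 - 0) == 0 and the
/// innermost matched loop (depthInPattern == 1) is assigned 2 - (2 - 1) == 1,
/// i.e. the fastest varying vector dimension.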
658 static LogicalResult analyzeProfitability(ArrayRef<NestedMatch> matches,
659  unsigned depthInPattern,
660  unsigned patternDepth,
661  VectorizationStrategy *strategy) {
662  for (auto m : matches) {
663  if (failed(analyzeProfitability(m.getMatchedChildren(), depthInPattern + 1,
664  patternDepth, strategy))) {
665  return failure();
666  }
667  vectorizeLoopIfProfitable(m.getMatchedOperation(), depthInPattern,
668  patternDepth, strategy);
669  }
670  return success();
671 }
672 
673 ///// end TODO: Hoist to a VectorizationStrategy.cpp when appropriate /////
674 
675 namespace {
676 
677 struct VectorizationState {
678 
679  VectorizationState(MLIRContext *context) : builder(context) {}
680 
681  /// Registers the vector replacement of a scalar operation and its result
682  /// values. Both operations must have the same number of results.
683  ///
684  /// This utility is used to register the replacement for the vast majority of
685  /// the vectorized operations.
686  ///
687  /// Example:
688  /// * 'replaced': %0 = arith.addf %1, %2 : f32
689  /// * 'replacement': %0 = arith.addf %1, %2 : vector<128xf32>
690  void registerOpVectorReplacement(Operation *replaced, Operation *replacement);
691 
692  /// Registers the vector replacement of a scalar value. The replacement
693  /// operation should have a single result, which replaces the scalar value.
694  ///
695  /// This utility is used to register the vector replacement of block arguments
696  /// and operation results which are not directly vectorized (i.e., their
697  /// scalar version still exists after vectorization), like uniforms.
698  ///
699  /// Example:
700  /// * 'replaced': block argument or operation outside of the vectorized
701  /// loop.
702  /// * 'replacement': %0 = vector.broadcast %1 : f32 to vector<128xf32>
703  void registerValueVectorReplacement(Value replaced, Operation *replacement);
704 
705  /// Registers the vector replacement of a block argument (e.g., iter_args).
706  ///
707  /// Example:
708  /// * 'replaced': 'iter_arg' block argument.
709  /// * 'replacement': vectorized 'iter_arg' block argument.
710  void registerBlockArgVectorReplacement(BlockArgument replaced,
711  BlockArgument replacement);
712 
713  /// Registers the scalar replacement of a scalar value. 'replacement' must be
714  /// scalar.
715  ///
716  /// This utility is used to register the replacement of block arguments
717 /// or affine.apply results that are within the loop to be vectorized and will
718  /// continue being scalar within the vector loop.
719  ///
720  /// Example:
721  /// * 'replaced': induction variable of a loop to be vectorized.
722  /// * 'replacement': new induction variable in the new vector loop.
723  void registerValueScalarReplacement(Value replaced, Value replacement);
724 
725  /// Registers the scalar replacement of a scalar result returned from a
726  /// reduction loop. 'replacement' must be scalar.
727  ///
728  /// This utility is used to register the replacement for scalar results of
729  /// vectorized reduction loops with iter_args.
730  ///
731 /// Example:
732  /// * 'replaced': %0 = affine.for %i = 0 to 512 iter_args(%x = ...) -> (f32)
733  /// * 'replacement': %1 = vector.reduction <add>, %0 : vector<4xf32> into
734  /// f32
735  void registerLoopResultScalarReplacement(Value replaced, Value replacement);
736 
737  /// Returns in 'replacedVals' the scalar replacement for values in
738  /// 'inputVals'.
739  void getScalarValueReplacementsFor(ValueRange inputVals,
740  SmallVectorImpl<Value> &replacedVals);
741 
742  /// Erases the scalar loop nest after its successful vectorization.
743  void finishVectorizationPattern(AffineForOp rootLoop);
744 
745  // Used to build and insert all the new operations created. The insertion
746  // point is preserved and updated along the vectorization process.
747  OpBuilder builder;
748 
749  // Maps input scalar operations to their vector counterparts.
750  DenseMap<Operation *, Operation *> opVectorReplacement;
751  // Maps input scalar values to their vector counterparts.
752  IRMapping valueVectorReplacement;
753  // Maps input scalar values to their new scalar counterparts in the vector
754  // loop nest.
755  IRMapping valueScalarReplacement;
756  // Maps results of reduction loops to their new scalar counterparts.
757  DenseMap<Value, Value> loopResultScalarReplacement;
758 
759  // Maps the newly created vector loops to their vector dimension.
760  DenseMap<Operation *, unsigned> vecLoopToVecDim;
761 
762  // Maps the new vectorized loops to the corresponding vector masks if it is
763  // required.
764  DenseMap<Operation *, Value> vecLoopToMask;
765 
766  // The strategy drives which loop to vectorize by which amount.
767  const VectorizationStrategy *strategy = nullptr;
768 
769 private:
770  /// Internal implementation to map input scalar values to new vector or scalar
771  /// values.
772  void registerValueVectorReplacementImpl(Value replaced, Value replacement);
773 };
774 
775 } // namespace
776 
777 /// Registers the vector replacement of a scalar operation and its result
778 /// values. Both operations must have the same number of results.
779 ///
780 /// This utility is used to register the replacement for the vast majority of
781 /// the vectorized operations.
782 ///
783 /// Example:
784 /// * 'replaced': %0 = arith.addf %1, %2 : f32
785 /// * 'replacement': %0 = arith.addf %1, %2 : vector<128xf32>
786 void VectorizationState::registerOpVectorReplacement(Operation *replaced,
787  Operation *replacement) {
788  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ commit vectorized op:\n");
789  LLVM_DEBUG(dbgs() << *replaced << "\n");
790  LLVM_DEBUG(dbgs() << "into\n");
791  LLVM_DEBUG(dbgs() << *replacement << "\n");
792 
793  assert(replaced->getNumResults() == replacement->getNumResults() &&
794  "Unexpected replaced and replacement results");
795  assert(opVectorReplacement.count(replaced) == 0 && "already registered");
796  opVectorReplacement[replaced] = replacement;
797 
798  for (auto resultTuple :
799  llvm::zip(replaced->getResults(), replacement->getResults()))
800  registerValueVectorReplacementImpl(std::get<0>(resultTuple),
801  std::get<1>(resultTuple));
802 }
803 
804 /// Registers the vector replacement of a scalar value. The replacement
805 /// operation should have a single result, which replaces the scalar value.
806 ///
807 /// This utility is used to register the vector replacement of block arguments
808 /// and operation results which are not directly vectorized (i.e., their
809 /// scalar version still exists after vectorization), like uniforms.
810 ///
811 /// Example:
812 /// * 'replaced': block argument or operation outside of the vectorized loop.
813 /// * 'replacement': %0 = vector.broadcast %1 : f32 to vector<128xf32>
814 void VectorizationState::registerValueVectorReplacement(
815  Value replaced, Operation *replacement) {
816  assert(replacement->getNumResults() == 1 &&
817  "Expected single-result replacement");
818  if (Operation *defOp = replaced.getDefiningOp())
819  registerOpVectorReplacement(defOp, replacement);
820  else
821  registerValueVectorReplacementImpl(replaced, replacement->getResult(0));
822 }
823 
824 /// Registers the vector replacement of a block argument (e.g., iter_args).
825 ///
826 /// Example:
827 /// * 'replaced': 'iter_arg' block argument.
828 /// * 'replacement': vectorized 'iter_arg' block argument.
829 void VectorizationState::registerBlockArgVectorReplacement(
830  BlockArgument replaced, BlockArgument replacement) {
831  registerValueVectorReplacementImpl(replaced, replacement);
832 }
833 
834 void VectorizationState::registerValueVectorReplacementImpl(Value replaced,
835  Value replacement) {
836  assert(!valueVectorReplacement.contains(replaced) &&
837  "Vector replacement already registered");
838  assert(isa<VectorType>(replacement.getType()) &&
839  "Expected vector type in vector replacement");
840  valueVectorReplacement.map(replaced, replacement);
841 }
842 
843 /// Registers the scalar replacement of a scalar value. 'replacement' must be
844 /// scalar.
845 ///
846 /// This utility is used to register the replacement of block arguments
847 /// or affine.apply results that are within the loop to be vectorized and will
848 /// continue being scalar within the vector loop.
849 ///
850 /// Example:
851 /// * 'replaced': induction variable of a loop to be vectorized.
852 /// * 'replacement': new induction variable in the new vector loop.
853 void VectorizationState::registerValueScalarReplacement(Value replaced,
854  Value replacement) {
855  assert(!valueScalarReplacement.contains(replaced) &&
856  "Scalar value replacement already registered");
857  assert(!isa<VectorType>(replacement.getType()) &&
858  "Expected scalar type in scalar replacement");
859  valueScalarReplacement.map(replaced, replacement);
860 }
861 
862 /// Registers the scalar replacement of a scalar result returned from a
863 /// reduction loop. 'replacement' must be scalar.
864 ///
865 /// This utility is used to register the replacement for scalar results of
866 /// vectorized reduction loops with iter_args.
867 ///
868 /// Example:
869 /// * 'replaced': %0 = affine.for %i = 0 to 512 iter_args(%x = ...) -> (f32)
870 /// * 'replacement': %1 = vector.reduction <add>, %0 : vector<4xf32> into f32
871 void VectorizationState::registerLoopResultScalarReplacement(
872  Value replaced, Value replacement) {
873  assert(isa<AffineForOp>(replaced.getDefiningOp()));
874  assert(loopResultScalarReplacement.count(replaced) == 0 &&
875  "already registered");
876  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ will replace a result of the loop "
877  "with scalar: "
878  << replacement);
879  loopResultScalarReplacement[replaced] = replacement;
880 }
881 
882 /// Returns in 'replacedVals' the scalar replacement for values in 'inputVals'.
883 void VectorizationState::getScalarValueReplacementsFor(
884  ValueRange inputVals, SmallVectorImpl<Value> &replacedVals) {
885  for (Value inputVal : inputVals)
886  replacedVals.push_back(valueScalarReplacement.lookupOrDefault(inputVal));
887 }
888 
889 /// Erases a loop nest, including all its nested operations.
890 static void eraseLoopNest(AffineForOp forOp) {
891  LLVM_DEBUG(dbgs() << "[early-vect]+++++ erasing:\n" << forOp << "\n");
892  forOp.erase();
893 }
894 
895 /// Erases the scalar loop nest after its successful vectorization.
896 void VectorizationState::finishVectorizationPattern(AffineForOp rootLoop) {
897  LLVM_DEBUG(dbgs() << "\n[early-vect] Finalizing vectorization\n");
898  eraseLoopNest(rootLoop);
899 }
900 
901 // Apply 'map' with 'mapOperands' returning resulting values in 'results'.
902 static void computeMemoryOpIndices(Operation *op, AffineMap map,
903  ValueRange mapOperands,
904  VectorizationState &state,
905  SmallVectorImpl<Value> &results) {
906  for (auto resultExpr : map.getResults()) {
907  auto singleResMap =
908  AffineMap::get(map.getNumDims(), map.getNumSymbols(), resultExpr);
909  auto afOp = state.builder.create<AffineApplyOp>(op->getLoc(), singleResMap,
910  mapOperands);
911  results.push_back(afOp);
912  }
913 }
914 
915 /// Returns a FilterFunctionType that can be used in NestedPattern to match a
916 /// loop whose underlying load/store accesses are either invariant or all
917 /// varying along the `fastestVaryingMemRefDimension`.
918 static FilterFunctionType
919 isVectorizableLoopPtrFactory(const DenseSet<Operation *> &parallelLoops,
920  int fastestVaryingMemRefDimension) {
921  return [&parallelLoops, fastestVaryingMemRefDimension](Operation &forOp) {
922  auto loop = cast<AffineForOp>(forOp);
923  if (!parallelLoops.contains(loop))
924  return false;
925  int memRefDim = -1;
926  auto vectorizableBody =
927  isVectorizableLoopBody(loop, &memRefDim, vectorTransferPattern());
928  if (!vectorizableBody)
929  return false;
930  return memRefDim == -1 || fastestVaryingMemRefDimension == -1 ||
931  memRefDim == fastestVaryingMemRefDimension;
932  };
933 }
934 
935 /// Returns the vector type resulting from applying the provided vectorization
936 /// strategy on the scalar type.
937 static VectorType getVectorType(Type scalarTy,
938  const VectorizationStrategy *strategy) {
939  assert(!isa<VectorType>(scalarTy) && "Expected scalar type");
940  return VectorType::get(strategy->vectorSizes, scalarTy);
941 }
942 
943 /// Tries to transform a scalar constant into a vector constant. Returns the
944 /// vector constant if the scalar type is a valid vector element type. Returns
945 /// nullptr otherwise.
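/// For example (a sketch, assuming a 1-D strategy of size 128):
///   %c = arith.constant 1.0 : f32
/// becomes
///   %vc = arith.constant dense<1.0> : vector<128xf32>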
946 static arith::ConstantOp vectorizeConstant(arith::ConstantOp constOp,
947  VectorizationState &state) {
948  Type scalarTy = constOp.getType();
949  if (!VectorType::isValidElementType(scalarTy))
950  return nullptr;
951 
952  auto vecTy = getVectorType(scalarTy, state.strategy);
953  auto vecAttr = DenseElementsAttr::get(vecTy, constOp.getValue());
954 
955  OpBuilder::InsertionGuard guard(state.builder);
956  Operation *parentOp = state.builder.getInsertionBlock()->getParentOp();
957  // Find the innermost vectorized ancestor loop to insert the vector constant.
958  while (parentOp && !state.vecLoopToVecDim.count(parentOp))
959  parentOp = parentOp->getParentOp();
960  assert(parentOp && state.vecLoopToVecDim.count(parentOp) &&
961  isa<AffineForOp>(parentOp) && "Expected a vectorized for op");
962  auto vecForOp = cast<AffineForOp>(parentOp);
963  state.builder.setInsertionPointToStart(vecForOp.getBody());
964  auto newConstOp =
965  state.builder.create<arith::ConstantOp>(constOp.getLoc(), vecAttr);
966 
967  // Register vector replacement for future uses in the scope.
968  state.registerOpVectorReplacement(constOp, newConstOp);
969  return newConstOp;
970 }
971 
972 /// We have no need to vectorize affine.apply. However, we still need to
973 /// generate it and replace the operands with values in valueScalarReplacement.
974 static Operation *vectorizeAffineApplyOp(AffineApplyOp applyOp,
975  VectorizationState &state) {
976  SmallVector<Value, 8> updatedOperands;
977  for (Value operand : applyOp.getOperands()) {
978  if (state.valueVectorReplacement.contains(operand)) {
979  LLVM_DEBUG(
980  dbgs() << "\n[early-vect]+++++ affine.apply on vector operand\n");
981  return nullptr;
982  } else {
983  Value updatedOperand = state.valueScalarReplacement.lookupOrNull(operand);
984  if (!updatedOperand)
985  updatedOperand = operand;
986  updatedOperands.push_back(updatedOperand);
987  }
988  }
989 
990  auto newApplyOp = state.builder.create<AffineApplyOp>(
991  applyOp.getLoc(), applyOp.getAffineMap(), updatedOperands);
992 
993  // Register the new affine.apply result.
994  state.registerValueScalarReplacement(applyOp.getResult(),
995  newApplyOp.getResult());
996  return newApplyOp;
997 }
998 
999 /// Creates a constant vector filled with the neutral elements of the given
1000 /// reduction. The scalar type of vector elements will be taken from
1001 /// `oldOperand`.
1002 static arith::ConstantOp createInitialVector(arith::AtomicRMWKind reductionKind,
1003  Value oldOperand,
1004  VectorizationState &state) {
1005  Type scalarTy = oldOperand.getType();
1006  if (!VectorType::isValidElementType(scalarTy))
1007  return nullptr;
1008 
1009  Attribute valueAttr = getIdentityValueAttr(
1010  reductionKind, scalarTy, state.builder, oldOperand.getLoc());
1011  auto vecTy = getVectorType(scalarTy, state.strategy);
1012  auto vecAttr = DenseElementsAttr::get(vecTy, valueAttr);
1013  auto newConstOp =
1014  state.builder.create<arith::ConstantOp>(oldOperand.getLoc(), vecAttr);
1015 
1016  return newConstOp;
1017 }
1018 
1019 /// Creates a mask used to filter out garbage elements in the last iteration
1020 /// of unaligned loops. If a mask is not required then `nullptr` is returned.
1021 /// The mask will be a vector of booleans representing meaningful vector
1022 /// elements in the current iteration. It is filled with ones for each iteration
1023 /// except for the last one, where it has the form `11...100...0` with the
1024 /// number of ones equal to the number of meaningful elements (i.e. the number
1025 /// of iterations that would be left in the original loop).
1026 static Value createMask(AffineForOp vecForOp, VectorizationState &state) {
1027  assert(state.strategy->vectorSizes.size() == 1 &&
1028  "Creating a mask for non-1-D vectors is not supported.");
1029  assert(vecForOp.getStepAsInt() == state.strategy->vectorSizes[0] &&
1030  "Creating a mask for loops with non-unit original step size is not "
1031  "supported.");
1032 
1033  // Check if we have already created the mask.
1034  if (Value mask = state.vecLoopToMask.lookup(vecForOp))
1035  return mask;
1036 
1037  // If the loop has constant bounds and the original number of iterations is
1038  // divisible by the vector size then we don't need a mask.
1039  if (vecForOp.hasConstantBounds()) {
1040  int64_t originalTripCount =
1041  vecForOp.getConstantUpperBound() - vecForOp.getConstantLowerBound();
1042  if (originalTripCount % vecForOp.getStepAsInt() == 0)
1043  return nullptr;
1044  }
1045 
1046  OpBuilder::InsertionGuard guard(state.builder);
1047  state.builder.setInsertionPointToStart(vecForOp.getBody());
1048 
1049  // We generate the mask using the `vector.create_mask` operation which accepts
1050  // the number of meaningful elements (i.e. the length of the prefix of 1s).
1051  // To compute the number of meaningful elements we subtract the current value
1052  // of the iteration variable from the upper bound of the loop. Example:
1053  //
1054  // // 500 is the upper bound of the loop
1055  // #map = affine_map<(d0) -> (500 - d0)>
1056  // %elems_left = affine.apply #map(%iv)
1057  // %mask = vector.create_mask %elems_left : vector<128xi1>
1058 
1059  Location loc = vecForOp.getLoc();
1060 
1061  // First we get the upper bound of the loop using `affine.apply` or
1062  // `affine.min`.
1063  AffineMap ubMap = vecForOp.getUpperBoundMap();
1064  Value ub;
1065  if (ubMap.getNumResults() == 1)
1066  ub = state.builder.create<AffineApplyOp>(loc, vecForOp.getUpperBoundMap(),
1067  vecForOp.getUpperBoundOperands());
1068  else
1069  ub = state.builder.create<AffineMinOp>(loc, vecForOp.getUpperBoundMap(),
1070  vecForOp.getUpperBoundOperands());
1071  // Then we compute the number of (original) iterations left in the loop.
1072  AffineExpr subExpr =
1073  state.builder.getAffineDimExpr(0) - state.builder.getAffineDimExpr(1);
1074  Value itersLeft =
1075  makeComposedAffineApply(state.builder, loc, AffineMap::get(2, 0, subExpr),
1076  {ub, vecForOp.getInductionVar()});
1077  // If the affine maps were successfully composed then `ub` is unneeded.
1078  if (ub.use_empty())
1079  ub.getDefiningOp()->erase();
1080  // Finally we create the mask.
1081  Type maskTy = VectorType::get(state.strategy->vectorSizes,
1082  state.builder.getIntegerType(1));
1083  Value mask =
1084  state.builder.create<vector::CreateMaskOp>(loc, maskTy, itersLeft);
1085 
1086  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ creating a mask:\n"
1087  << itersLeft << "\n"
1088  << mask << "\n");
1089 
1090  state.vecLoopToMask[vecForOp] = mask;
1091  return mask;
1092 }
1093 
1094 /// Returns true if the provided value is vector uniform given the vectorization
1095 /// strategy.
1096 // TODO: For now, only values that are induction variables of loops not in
1097 // `loopToVectorDim` or invariant to all the loops in the vectorization
1098 // strategy are considered vector uniforms.
1099 static bool isUniformDefinition(Value value,
1100  const VectorizationStrategy *strategy) {
1101  AffineForOp forOp = getForInductionVarOwner(value);
1102  if (forOp && strategy->loopToVectorDim.count(forOp) == 0)
1103  return true;
1104 
1105  for (auto loopToDim : strategy->loopToVectorDim) {
1106  auto loop = cast<AffineForOp>(loopToDim.first);
1107  if (!loop.isDefinedOutsideOfLoop(value))
1108  return false;
1109  }
1110  return true;
1111 }
1112 
1113 /// Generates a broadcast op for the provided uniform value using the
1114 /// vectorization strategy in 'state'.
1115 static Operation *vectorizeUniform(Value uniformVal,
1116  VectorizationState &state) {
1117  OpBuilder::InsertionGuard guard(state.builder);
1118  Value uniformScalarRepl =
1119  state.valueScalarReplacement.lookupOrDefault(uniformVal);
1120  state.builder.setInsertionPointAfterValue(uniformScalarRepl);
1121 
1122  auto vectorTy = getVectorType(uniformVal.getType(), state.strategy);
1123  auto bcastOp = state.builder.create<BroadcastOp>(uniformVal.getLoc(),
1124  vectorTy, uniformScalarRepl);
1125  state.registerValueVectorReplacement(uniformVal, bcastOp);
1126  return bcastOp;
1127 }
1128 
1129 /// Tries to vectorize a given `operand` by applying the following logic:
1130 /// 1. if the defining operation has been already vectorized, `operand` is
1131 /// already in the proper vector form;
1132 /// 2. if the `operand` is a constant, returns the vectorized form of the
1133 /// constant;
1134 /// 3. if the `operand` is uniform, returns a vector broadcast of the `operand`;
1135 /// 4. otherwise, the vectorization of `operand` is not supported.
1136 /// Newly created vector operations are registered in `state` as replacement
1137 /// for their scalar counterparts.
1138 /// In particular this logic captures some of the use cases where definitions
1139 /// that are not scoped under the current pattern are needed to vectorize.
1140 /// One such example is top level function constants that need to be splatted.
1141 ///
1142 /// Returns an operand that has been vectorized to match `state`'s strategy if
1143 /// vectorization is possible with the above logic. Returns nullptr otherwise.
1144 ///
1145 /// TODO: handle more complex cases.
1146 static Value vectorizeOperand(Value operand, VectorizationState &state) {
1147  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ vectorize operand: " << operand);
1148  // If this value is already vectorized, we are done.
1149  if (Value vecRepl = state.valueVectorReplacement.lookupOrNull(operand)) {
1150  LLVM_DEBUG(dbgs() << " -> already vectorized: " << vecRepl);
1151  return vecRepl;
1152  }
1153 
1154 // A vector operand that is not in the replacement map should never reach
1155  // this point. Reaching this point could mean that the code was already
1156  // vectorized and we shouldn't try to vectorize already vectorized code.
1157  assert(!isa<VectorType>(operand.getType()) &&
1158  "Vector op not found in replacement map");
1159 
1160  // Vectorize constant.
1161  if (auto constOp = operand.getDefiningOp<arith::ConstantOp>()) {
1162  auto vecConstant = vectorizeConstant(constOp, state);
1163  LLVM_DEBUG(dbgs() << "-> constant: " << vecConstant);
1164  return vecConstant.getResult();
1165  }
1166 
1167  // Vectorize uniform values.
1168  if (isUniformDefinition(operand, state.strategy)) {
1169  Operation *vecUniform = vectorizeUniform(operand, state);
1170  LLVM_DEBUG(dbgs() << "-> uniform: " << *vecUniform);
1171  return vecUniform->getResult(0);
1172  }
1173 
1174  // Check for unsupported block argument scenarios. A supported block argument
1175  // should have been vectorized already.
1176  if (!operand.getDefiningOp())
1177  LLVM_DEBUG(dbgs() << "-> unsupported block argument\n");
1178  else
1179  // Generic unsupported case.
1180  LLVM_DEBUG(dbgs() << "-> non-vectorizable\n");
1181 
1182  return nullptr;
1183 }
1184 
1185 /// Vectorizes an affine load with the vectorization strategy in 'state' by
1186 /// generating a 'vector.transfer_read' op with the proper permutation map
1187 /// inferred from the indices of the load. The new 'vector.transfer_read' is
1188 /// registered as replacement of the scalar load. Returns the newly created
1189 /// 'vector.transfer_read' if vectorization was successful. Returns nullptr,
1190 /// otherwise.
1191 static Operation *vectorizeAffineLoad(AffineLoadOp loadOp,
1192  VectorizationState &state) {
1193  MemRefType memRefType = loadOp.getMemRefType();
1194  Type elementType = memRefType.getElementType();
1195  auto vectorType = VectorType::get(state.strategy->vectorSizes, elementType);
1196 
1197  // Replace map operands with operands from the vector loop nest.
1198  SmallVector<Value, 8> mapOperands;
1199  state.getScalarValueReplacementsFor(loadOp.getMapOperands(), mapOperands);
1200 
1201  // Compute indices for the transfer op. AffineApplyOp's may be generated.
1202  SmallVector<Value, 8> indices;
1203  indices.reserve(memRefType.getRank());
1204  if (loadOp.getAffineMap() !=
1205  state.builder.getMultiDimIdentityMap(memRefType.getRank())) {
1206  // Check the operand in loadOp affine map does not come from AffineApplyOp.
1207  for (auto op : mapOperands) {
1208  if (op.getDefiningOp<AffineApplyOp>())
1209  return nullptr;
1210  }
1211  computeMemoryOpIndices(loadOp, loadOp.getAffineMap(), mapOperands, state,
1212  indices);
1213  } else {
1214  indices.append(mapOperands.begin(), mapOperands.end());
1215  }
1216 
1217  // Compute permutation map using the information of new vector loops.
1218  auto permutationMap = makePermutationMap(state.builder.getInsertionBlock(),
1219  indices, state.vecLoopToVecDim);
1220  if (!permutationMap) {
1221  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ can't compute permutationMap\n");
1222  return nullptr;
1223  }
1224  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ permutationMap: ");
1225  LLVM_DEBUG(permutationMap.print(dbgs()));
1226 
1227  auto transfer = state.builder.create<vector::TransferReadOp>(
1228  loadOp.getLoc(), vectorType, loadOp.getMemRef(), indices, permutationMap);
1229 
1230  // Register replacement for future uses in the scope.
1231  state.registerOpVectorReplacement(loadOp, transfer);
1232  return transfer;
1233 }
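
// Illustration (editorial sketch): with a 1-D strategy of vector size 128
// assigned to the innermost loop, a scalar load such as
//
//   %0 = affine.load %in[%i, %j] : memref<64x512xf32>
//
// is replaced by a transfer read of roughly this form (the padding value and
// the exact attribute printing may differ; the permutation map assumes the
// vectorized loop drives %j):
//
//   %cst = arith.constant 0.000000e+00 : f32
//   %v = vector.transfer_read %in[%i, %j], %cst
//          {permutation_map = affine_map<(d0, d1) -> (d1)>}
//          : memref<64x512xf32>, vector<128xf32>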
1234 
1235 /// Vectorizes an affine store with the vectorization strategy in 'state' by
1236 /// generating a 'vector.transfer_write' op with the proper permutation map
1237 /// inferred from the indices of the store. The new 'vector.transfer_write' is
1238 /// registered as replacement of the scalar store. Returns the newly created
1239 /// 'vector.transfer_write' if vectorization was successful. Returns nullptr
1240 /// otherwise.
1241 static Operation *vectorizeAffineStore(AffineStoreOp storeOp,
1242  VectorizationState &state) {
1243  MemRefType memRefType = storeOp.getMemRefType();
1244  Value vectorValue = vectorizeOperand(storeOp.getValueToStore(), state);
1245  if (!vectorValue)
1246  return nullptr;
1247 
1248  // Replace map operands with operands from the vector loop nest.
1249  SmallVector<Value, 8> mapOperands;
1250  state.getScalarValueReplacementsFor(storeOp.getMapOperands(), mapOperands);
1251 
1252  // Compute indices for the transfer op. AffineApplyOps may be generated.
1253  SmallVector<Value, 8> indices;
1254  indices.reserve(memRefType.getRank());
1255  if (storeOp.getAffineMap() !=
1256  state.builder.getMultiDimIdentityMap(memRefType.getRank()))
1257  computeMemoryOpIndices(storeOp, storeOp.getAffineMap(), mapOperands, state,
1258  indices);
1259  else
1260  indices.append(mapOperands.begin(), mapOperands.end());
1261 
1262  // Compute permutation map using the information of new vector loops.
1263  auto permutationMap = makePermutationMap(state.builder.getInsertionBlock(),
1264  indices, state.vecLoopToVecDim);
1265  if (!permutationMap)
1266  return nullptr;
1267  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ permutationMap: ");
1268  LLVM_DEBUG(permutationMap.print(dbgs()));
1269 
1270  auto transfer = state.builder.create<vector::TransferWriteOp>(
1271  storeOp.getLoc(), vectorValue, storeOp.getMemRef(), indices,
1272  permutationMap);
1273  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ vectorized store: " << transfer);
1274 
1275  // Register replacement for future uses in the scope.
1276  state.registerOpVectorReplacement(storeOp, transfer);
1277  return transfer;
1278 }
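
// Illustration (editorial sketch): under the same 1-D strategy, a scalar store
//
//   affine.store %val, %out[%i, %j] : memref<64x512xf32>
//
// becomes a transfer write of roughly this form, where %vec is the vector
// replacement of %val obtained from vectorizeOperand above:
//
//   vector.transfer_write %vec, %out[%i, %j]
//     {permutation_map = affine_map<(d0, d1) -> (d1)>}
//     : vector<128xf32>, memref<64x512xf32>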
1279 
1280 /// Returns true if `value` is a constant equal to the neutral element of the
1281 /// given vectorizable reduction.
1282 static bool isNeutralElementConst(arith::AtomicRMWKind reductionKind,
1283  Value value, VectorizationState &state) {
1284  Type scalarTy = value.getType();
1285  if (!VectorType::isValidElementType(scalarTy))
1286  return false;
1287  Attribute valueAttr = getIdentityValueAttr(reductionKind, scalarTy,
1288  state.builder, value.getLoc());
1289  if (auto constOp = dyn_cast_or_null<arith::ConstantOp>(value.getDefiningOp()))
1290  return constOp.getValue() == valueAttr;
1291  return false;
1292 }
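
// For example, the neutral element checked here is 0.0 for an 'addf' reduction
// and 1.0 for a 'mulf' reduction (as produced by getIdentityValueAttr), so an
// 'addf' reduction whose iter_arg is initialized with 'arith.constant 0.0 : f32'
// needs no extra combine with that initial value after the final vector
// reduction.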
1293 
1294 /// Vectorizes a loop with the vectorization strategy in 'state'. A new loop is
1295 /// created and registered as replacement for the scalar loop. The builder's
1296 /// insertion point is set to the new loop's body so that subsequent vectorized
1297 /// operations are inserted into the new loop. If the loop is a vector
1298 /// dimension, the step of the newly created loop will reflect the vectorization
1299 /// factor used to vectorize that dimension.
1300 static Operation *vectorizeAffineForOp(AffineForOp forOp,
1301  VectorizationState &state) {
1302  const VectorizationStrategy &strategy = *state.strategy;
1303  auto loopToVecDimIt = strategy.loopToVectorDim.find(forOp);
1304  bool isLoopVecDim = loopToVecDimIt != strategy.loopToVectorDim.end();
1305 
1306  // TODO: Vectorization of reduction loops is not supported for non-unit steps.
1307  if (isLoopVecDim && forOp.getNumIterOperands() > 0 && forOp.getStep() != 1) {
1308  LLVM_DEBUG(
1309  dbgs()
1310  << "\n[early-vect]+++++ unsupported step size for reduction loop: "
1311  << forOp.getStep() << "\n");
1312  return nullptr;
1313  }
1314 
1315  // If we are vectorizing a vector dimension, compute a new step for the new
1316  // vectorized loop using the vectorization factor for the vector dimension.
1317  // Otherwise, propagate the step of the scalar loop.
1318  unsigned newStep;
1319  if (isLoopVecDim) {
1320  unsigned vectorDim = loopToVecDimIt->second;
1321  assert(vectorDim < strategy.vectorSizes.size() && "vector dim overflow");
1322  int64_t forOpVecFactor = strategy.vectorSizes[vectorDim];
1323  newStep = forOp.getStepAsInt() * forOpVecFactor;
1324  } else {
1325  newStep = forOp.getStepAsInt();
1326  }
1327 
1328  // Get information about reduction kinds.
1329  ArrayRef<LoopReduction> reductions;
1330  if (isLoopVecDim && forOp.getNumIterOperands() > 0) {
1331  auto it = strategy.reductionLoops.find(forOp);
1332  assert(it != strategy.reductionLoops.end() &&
1333  "Reduction descriptors not found when vectorizing a reduction loop");
1334  reductions = it->second;
1335  assert(reductions.size() == forOp.getNumIterOperands() &&
1336  "The size of reductions array must match the number of iter_args");
1337  }
1338 
1339  // Vectorize 'iter_args'.
1340  SmallVector<Value, 8> vecIterOperands;
1341  if (!isLoopVecDim) {
1342  for (auto operand : forOp.getInits())
1343  vecIterOperands.push_back(vectorizeOperand(operand, state));
1344  } else {
1345  // For reduction loops we need to pass a vector of neutral elements as an
1346  // initial value of the accumulator. We will add the original initial value
1347  // later.
1348  for (auto redAndOperand : llvm::zip(reductions, forOp.getInits())) {
1349  vecIterOperands.push_back(createInitialVector(
1350  std::get<0>(redAndOperand).kind, std::get<1>(redAndOperand), state));
1351  }
1352  }
1353 
1354  auto vecForOp = state.builder.create<AffineForOp>(
1355  forOp.getLoc(), forOp.getLowerBoundOperands(), forOp.getLowerBoundMap(),
1356  forOp.getUpperBoundOperands(), forOp.getUpperBoundMap(), newStep,
1357  vecIterOperands,
1358  /*bodyBuilder=*/[](OpBuilder &, Location, Value, ValueRange) {
1359  // Make sure we don't create a default terminator in the loop body as
1360  // the proper terminator will be added during vectorization.
1361  });
1362 
1363  // Register loop-related replacements:
1364  // 1) The new vectorized loop is registered as vector replacement of the
1365  // scalar loop.
1366  // 2) The new iv of the vectorized loop is registered as scalar replacement
1367  // since a scalar copy of the iv will prevail in the vectorized loop.
1368  // TODO: A vector replacement will also be added in the future when
1369  // vectorization of linear ops is supported.
1370  // 3) The new 'iter_args' region arguments are registered as vector
1371  // replacements since they have been vectorized.
1372  // 4) If the loop performs a reduction along the vector dimension, a
1373  // `vector.reduction` or similar op is inserted for each resulting value
1374  // of the loop and its scalar value replaces the corresponding scalar
1375  // result of the loop.
1376  state.registerOpVectorReplacement(forOp, vecForOp);
1377  state.registerValueScalarReplacement(forOp.getInductionVar(),
1378  vecForOp.getInductionVar());
1379  for (auto iterTuple :
1380  llvm::zip(forOp.getRegionIterArgs(), vecForOp.getRegionIterArgs()))
1381  state.registerBlockArgVectorReplacement(std::get<0>(iterTuple),
1382  std::get<1>(iterTuple));
1383 
1384  if (isLoopVecDim) {
1385  for (unsigned i = 0; i < vecForOp.getNumIterOperands(); ++i) {
1386  // First, we reduce the vector returned from the loop into a scalar.
1387  Value reducedRes =
1388  getVectorReductionOp(reductions[i].kind, state.builder,
1389  vecForOp.getLoc(), vecForOp.getResult(i));
1390  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ creating a vector reduction: "
1391  << reducedRes);
1392  // Then we combine it with the original (scalar) initial value unless it
1393  // is equal to the neutral element of the reduction.
1394  Value origInit = forOp.getOperand(forOp.getNumControlOperands() + i);
1395  Value finalRes = reducedRes;
1396  if (!isNeutralElementConst(reductions[i].kind, origInit, state))
1397  finalRes =
1398  arith::getReductionOp(reductions[i].kind, state.builder,
1399  reducedRes.getLoc(), reducedRes, origInit);
1400  state.registerLoopResultScalarReplacement(forOp.getResult(i), finalRes);
1401  }
1402  }
1403 
1404  if (isLoopVecDim)
1405  state.vecLoopToVecDim[vecForOp] = loopToVecDimIt->second;
1406 
1407  // Change insertion point so that upcoming vectorized instructions are
1408  // inserted into the vectorized loop's body.
1409  state.builder.setInsertionPointToStart(vecForOp.getBody());
1410 
1411  // If this is a reduction loop then we may need to create a mask to filter out
1412  // garbage in the last iteration.
1413  if (isLoopVecDim && forOp.getNumIterOperands() > 0)
1414  createMask(vecForOp, state);
1415 
1416  return vecForOp;
1417 }
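
// Illustration (editorial sketch): with vector size 128 assigned to this loop,
// a scalar loop
//
//   affine.for %i = 0 to 1024 step 1 { ... }
//
// is rebuilt with the same bounds but a scaled step:
//
//   affine.for %i = 0 to 1024 step 128 { ... vectorized body ... }
//
// For a reduction loop with iter_args, the rebuilt loop instead carries vector
// accumulators seeded with neutral elements; each loop result is then reduced
// with a vector.reduction op and combined with the original initial value,
// unless that value is the reduction's neutral element (see
// isNeutralElementConst above).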
1418 
1419 /// Vectorizes an arbitrary operation by plain widening. We apply generic type
1420 /// widening of all its results and retrieve the vector counterparts for all its
1421 /// operands.
1422 static Operation *widenOp(Operation *op, VectorizationState &state) {
1423  SmallVector<Type, 8> vectorTypes;
1424  for (Value result : op->getResults())
1425  vectorTypes.push_back(
1426  VectorType::get(state.strategy->vectorSizes, result.getType()));
1427 
1428  SmallVector<Value, 8> vectorOperands;
1429  for (Value operand : op->getOperands()) {
1430  Value vecOperand = vectorizeOperand(operand, state);
1431  if (!vecOperand) {
1432  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ an operand failed vectorize\n");
1433  return nullptr;
1434  }
1435  vectorOperands.push_back(vecOperand);
1436  }
1437 
1438  // Create a clone of the op with the proper operands and return types.
1439  // TODO: The following assumes there is always an op with a fixed
1440  // name that works both in scalar mode and vector mode.
1441  // TODO: Is it worth considering an Operation.clone operation which
1442  // changes the type so we can promote an Operation with less boilerplate?
1443  Operation *vecOp =
1444  state.builder.create(op->getLoc(), op->getName().getIdentifier(),
1445  vectorOperands, vectorTypes, op->getAttrs());
1446  state.registerOpVectorReplacement(op, vecOp);
1447  return vecOp;
1448 }
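
// Illustration (editorial sketch): with a 1-D strategy of size 128, a scalar
//
//   %2 = arith.addf %0, %1 : f32
//
// is widened, reusing the same operation name and attributes, into
//
//   %2v = arith.addf %0v, %1v : vector<128xf32>
//
// where %0v and %1v are the vector replacements of the original operands.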
1449 
1450 /// Vectorizes a yield operation by widening its types. The builder's insertion
1451 /// point is set after the vectorized parent op to continue vectorizing the
1452 /// operations after the parent op. When vectorizing a reduction loop a mask may
1453 /// be used to prevent adding garbage values to the accumulator.
1454 static Operation *vectorizeAffineYieldOp(AffineYieldOp yieldOp,
1455  VectorizationState &state) {
1456  Operation *newYieldOp = widenOp(yieldOp, state);
1457  Operation *newParentOp = state.builder.getInsertionBlock()->getParentOp();
1458 
1459  // If there is a mask for this loop then we must prevent garbage values from
1460  // being added to the accumulator by inserting `select` operations, for
1461  // example:
1462  //
1463  // %val_masked = arith.select %mask, %val, %neutralCst
1464  //               : vector<128xi1>, vector<128xf32>
1465  // %res = arith.addf %acc, %val_masked : vector<128xf32>
1466  // affine.yield %res : vector<128xf32>
1467  //
1468  if (Value mask = state.vecLoopToMask.lookup(newParentOp)) {
1469  state.builder.setInsertionPoint(newYieldOp);
1470  for (unsigned i = 0; i < newYieldOp->getNumOperands(); ++i) {
1471  SmallVector<Operation *> combinerOps;
1472  Value reducedVal = matchReduction(
1473  cast<AffineForOp>(newParentOp).getRegionIterArgs(), i, combinerOps);
1474  assert(reducedVal && "expect non-null value for parallel reduction loop");
1475  assert(combinerOps.size() == 1 && "expect only one combiner op");
1476  // IterOperands are neutral element vectors.
1477  Value neutralVal = cast<AffineForOp>(newParentOp).getInits()[i];
1478  state.builder.setInsertionPoint(combinerOps.back());
1479  Value maskedReducedVal = state.builder.create<arith::SelectOp>(
1480  reducedVal.getLoc(), mask, reducedVal, neutralVal);
1481  LLVM_DEBUG(
1482  dbgs() << "\n[early-vect]+++++ masking an input to a binary op that "
1483  "produces a value for a yield op: "
1484  << maskedReducedVal);
1485  combinerOps.back()->replaceUsesOfWith(reducedVal, maskedReducedVal);
1486  }
1487  }
1488 
1489  state.builder.setInsertionPointAfter(newParentOp);
1490  return newYieldOp;
1491 }
1492 
1493 /// Encodes Operation-specific behavior for vectorization. In general we
1494 /// assume that all operands of an op must be vectorized but this is not
1495 /// always true. In the future, it would be nice to have a trait that
1496 /// describes how a particular operation vectorizes. For now we implement the
1497 /// case distinction here. Returns a vectorized form of an operation or
1498 /// nullptr if vectorization fails.
1499 // TODO: consider adding a trait to Op to describe how it gets vectorized.
1500 // Maybe some Ops are not vectorizable or require some tricky logic, we cannot
1501 // do one-off logic here; ideally it would be TableGen'd.
1502 static Operation *vectorizeOneOperation(Operation *op,
1503  VectorizationState &state) {
1504  // Sanity checks.
1505  assert(!isa<vector::TransferReadOp>(op) &&
1506  "vector.transfer_read cannot be further vectorized");
1507  assert(!isa<vector::TransferWriteOp>(op) &&
1508  "vector.transfer_write cannot be further vectorized");
1509 
1510  if (auto loadOp = dyn_cast<AffineLoadOp>(op))
1511  return vectorizeAffineLoad(loadOp, state);
1512  if (auto storeOp = dyn_cast<AffineStoreOp>(op))
1513  return vectorizeAffineStore(storeOp, state);
1514  if (auto forOp = dyn_cast<AffineForOp>(op))
1515  return vectorizeAffineForOp(forOp, state);
1516  if (auto yieldOp = dyn_cast<AffineYieldOp>(op))
1517  return vectorizeAffineYieldOp(yieldOp, state);
1518  if (auto constant = dyn_cast<arith::ConstantOp>(op))
1519  return vectorizeConstant(constant, state);
1520  if (auto applyOp = dyn_cast<AffineApplyOp>(op))
1521  return vectorizeAffineApplyOp(applyOp, state);
1522 
1523  // Other ops with regions are not supported.
1524  if (op->getNumRegions() != 0)
1525  return nullptr;
1526 
1527  return widenOp(op, state);
1528 }
1529 
1530 /// Recursive implementation to convert all the nested loops in 'match' to a 2D
1531 /// vector container that preserves the relative nesting level of each loop with
1532 /// respect to the others in 'match'. 'currentLevel' is the nesting level that
1533 /// will be assigned to the loop in the current 'match'.
1534 static void
1535 getMatchedAffineLoopsRec(NestedMatch match, unsigned currentLevel,
1536  std::vector<SmallVector<AffineForOp, 2>> &loops) {
1537  // Add a new empty level to the output if it doesn't exist already.
1538  assert(currentLevel <= loops.size() && "Unexpected currentLevel");
1539  if (currentLevel == loops.size())
1540  loops.emplace_back();
1541 
1542  // Add current match and recursively visit its children.
1543  loops[currentLevel].push_back(cast<AffineForOp>(match.getMatchedOperation()));
1544  for (auto childMatch : match.getMatchedChildren()) {
1545  getMatchedAffineLoopsRec(childMatch, currentLevel + 1, loops);
1546  }
1547 }
1548 
1549 /// Converts all the nested loops in 'match' to a 2D vector container that
1550 /// preserves the relative nesting level of each loop with respect to the others
1551 /// in 'match'. This means that every loop in 'loops[i]' will have a parent loop
1552 /// in 'loops[i-1]'. A loop in 'loops[i]' may or may not have a child loop in
1553 /// 'loops[i+1]'.
1554 static void
1555 getMatchedAffineLoops(NestedMatch match,
1556  std::vector<SmallVector<AffineForOp, 2>> &loops) {
1557  getMatchedAffineLoopsRec(match, /*currentLevel=*/0, loops);
1558 }
1559 
1560 /// Internal implementation to vectorize affine loops from a single loop nest
1561 /// using an n-D vectorization strategy.
1562 static LogicalResult
1563 vectorizeLoopNest(std::vector<SmallVector<AffineForOp, 2>> &loops,
1564  const VectorizationStrategy &strategy) {
1565  assert(loops[0].size() == 1 && "Expected single root loop");
1566  AffineForOp rootLoop = loops[0][0];
1567  VectorizationState state(rootLoop.getContext());
1568  state.builder.setInsertionPointAfter(rootLoop);
1569  state.strategy = &strategy;
1570 
1571  // Since patterns are recursive, they can very well intersect.
1572  // Since we do not want a fully greedy strategy in general, we decouple
1573  // pattern matching from profitability analysis and from application.
1574  // As a consequence we must check that each root pattern is still
1575  // vectorizable. If a pattern is not vectorizable anymore, we just skip it.
1576  // TODO: implement a non-greedy profitability analysis that keeps only
1577  // non-intersecting patterns.
1578  if (!isVectorizableLoopBody(rootLoop, vectorTransferPattern())) {
1579  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ loop is not vectorizable");
1580  return failure();
1581  }
1582 
1583  //////////////////////////////////////////////////////////////////////////////
1584  // Vectorize the scalar loop nest following a topological order. A new vector
1585  // loop nest with the vectorized operations is created along the process. If
1586  // vectorization succeeds, the scalar loop nest is erased. If vectorization
1587  // fails, the vector loop nest is erased and the scalar loop nest is not
1588  // modified.
1589  //////////////////////////////////////////////////////////////////////////////
1590 
1591  auto opVecResult = rootLoop.walk<WalkOrder::PreOrder>([&](Operation *op) {
1592  LLVM_DEBUG(dbgs() << "[early-vect]+++++ Vectorizing: " << *op);
1593  Operation *vectorOp = vectorizeOneOperation(op, state);
1594  if (!vectorOp) {
1595  LLVM_DEBUG(
1596  dbgs() << "[early-vect]+++++ failed vectorizing the operation: "
1597  << *op << "\n");
1598  return WalkResult::interrupt();
1599  }
1600 
1601  return WalkResult::advance();
1602  });
1603 
1604  if (opVecResult.wasInterrupted()) {
1605  LLVM_DEBUG(dbgs() << "[early-vect]+++++ failed vectorization for: "
1606  << rootLoop << "\n");
1607  // Erase vector loop nest if it was created.
1608  auto vecRootLoopIt = state.opVectorReplacement.find(rootLoop);
1609  if (vecRootLoopIt != state.opVectorReplacement.end())
1610  eraseLoopNest(cast<AffineForOp>(vecRootLoopIt->second));
1611 
1612  return failure();
1613  }
1614 
1615  // Replace results of reduction loops with the scalar values computed using
1616  // `vector.reduction` or similar ops.
1617  for (auto resPair : state.loopResultScalarReplacement)
1618  resPair.first.replaceAllUsesWith(resPair.second);
1619 
1620  assert(state.opVectorReplacement.count(rootLoop) == 1 &&
1621  "Expected vector replacement for loop nest");
1622  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ success vectorizing pattern");
1623  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ vectorization result:\n"
1624  << *state.opVectorReplacement[rootLoop]);
1625 
1626  // Finish this vectorization pattern.
1627  state.finishVectorizationPattern(rootLoop);
1628  return success();
1629 }
1630 
1631 /// Extracts the matched loops and vectorizes them following a topological
1632 /// order. A new vector loop nest is created if vectorization succeeds, and the
1633 /// scalar loop nest is then erased; on failure the original nest is left intact.
1634 static LogicalResult vectorizeRootMatch(NestedMatch m,
1635  const VectorizationStrategy &strategy) {
1636  std::vector<SmallVector<AffineForOp, 2>> loopsToVectorize;
1637  getMatchedAffineLoops(m, loopsToVectorize);
1638  return vectorizeLoopNest(loopsToVectorize, strategy);
1639 }
1640 
1641 /// Traverses all the loop matches and classifies them into intersection
1642 /// buckets. Two matches intersect if any of them encloses the other one. A
1643 /// match intersects with a bucket if the match intersects with the root
1644 /// (outermost) loop in that bucket.
1645 static void computeIntersectionBuckets(
1646  ArrayRef<NestedMatch> matches,
1647  std::vector<SmallVector<NestedMatch, 8>> &intersectionBuckets) {
1648  assert(intersectionBuckets.empty() && "Expected empty output");
1649  // Keeps track of the root (outermost) loop of each bucket.
1650  SmallVector<AffineForOp, 8> bucketRoots;
1651 
1652  for (const NestedMatch &match : matches) {
1653  AffineForOp matchRoot = cast<AffineForOp>(match.getMatchedOperation());
1654  bool intersects = false;
1655  for (int i = 0, end = intersectionBuckets.size(); i < end; ++i) {
1656  AffineForOp bucketRoot = bucketRoots[i];
1657  // Add match to the bucket if the bucket root encloses the match root.
1658  if (bucketRoot->isAncestor(matchRoot)) {
1659  intersectionBuckets[i].push_back(match);
1660  intersects = true;
1661  break;
1662  }
1663  // Add match to the bucket if the match root encloses the bucket root. The
1664  // match root becomes the new bucket root.
1665  if (matchRoot->isAncestor(bucketRoot)) {
1666  bucketRoots[i] = matchRoot;
1667  intersectionBuckets[i].push_back(match);
1668  intersects = true;
1669  break;
1670  }
1671  }
1672 
1673  // Match doesn't intersect with any existing bucket. Create a new bucket for
1674  // it.
1675  if (!intersects) {
1676  bucketRoots.push_back(matchRoot);
1677  intersectionBuckets.emplace_back();
1678  intersectionBuckets.back().push_back(match);
1679  }
1680  }
1681 }
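
// Illustration (editorial sketch): if matches are rooted at loops %a, %b and
// %c, where %a encloses %b and %c is disjoint from both, then the matches
// rooted at %a and %b land in one bucket (with %a as the bucket root) and the
// match rooted at %c starts a second bucket. Since all matches in a bucket
// intersect, at most one match per bucket is vectorized later on.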
1682 
1683 /// Internal implementation to vectorize affine loops in 'loops' using the n-D
1684 /// vectorization factors in 'vectorSizes'. By default, each vectorization
1685 /// factor is applied inner-to-outer to the loops of each loop nest.
1686 /// 'fastestVaryingPattern' can be optionally used to provide a different loop
1687 /// vectorization order. `reductionLoops` can be provided to specify loops which
1688 /// can be vectorized along the reduction dimension.
1689 static void vectorizeLoops(Operation *parentOp, DenseSet<Operation *> &loops,
1690  ArrayRef<int64_t> vectorSizes,
1691  ArrayRef<int64_t> fastestVaryingPattern,
1692  const ReductionLoopMap &reductionLoops) {
1693  assert((reductionLoops.empty() || vectorSizes.size() == 1) &&
1694  "Vectorizing reductions is supported only for 1-D vectors");
1695 
1696  // Compute 1-D, 2-D or 3-D loop pattern to be matched on the target loops.
1697  std::optional<NestedPattern> pattern =
1698  makePattern(loops, vectorSizes.size(), fastestVaryingPattern);
1699  if (!pattern) {
1700  LLVM_DEBUG(dbgs() << "\n[early-vect] pattern couldn't be computed\n");
1701  return;
1702  }
1703 
1704  LLVM_DEBUG(dbgs() << "\n******************************************");
1705  LLVM_DEBUG(dbgs() << "\n******************************************");
1706  LLVM_DEBUG(dbgs() << "\n[early-vect] new pattern on parent op\n");
1707  LLVM_DEBUG(dbgs() << *parentOp << "\n");
1708 
1709  unsigned patternDepth = pattern->getDepth();
1710 
1711  // Compute all the pattern matches and classify them into buckets of
1712  // intersecting matches.
1713  SmallVector<NestedMatch, 32> allMatches;
1714  pattern->match(parentOp, &allMatches);
1715  std::vector<SmallVector<NestedMatch, 8>> intersectionBuckets;
1716  computeIntersectionBuckets(allMatches, intersectionBuckets);
1717 
1718  // Iterate over all buckets and vectorize the matches eagerly. We can only
1719  // vectorize one match from each bucket since all the matches within a bucket
1720  // intersect.
1721  for (auto &intersectingMatches : intersectionBuckets) {
1722  for (NestedMatch &match : intersectingMatches) {
1723  VectorizationStrategy strategy;
1724  // TODO: depending on profitability, elect to reduce the vector size.
1725  strategy.vectorSizes.assign(vectorSizes.begin(), vectorSizes.end());
1726  strategy.reductionLoops = reductionLoops;
1727  if (failed(analyzeProfitability(match.getMatchedChildren(), 1,
1728  patternDepth, &strategy))) {
1729  continue;
1730  }
1731  vectorizeLoopIfProfitable(match.getMatchedOperation(), 0, patternDepth,
1732  &strategy);
1733  // Vectorize match. Skip the rest of intersecting matches in the bucket if
1734  // vectorization succeeded.
1735  // TODO: if pattern does not apply, report it; alter the cost/benefit.
1736  // TODO: some diagnostics if failure to vectorize occurs.
1737  if (succeeded(vectorizeRootMatch(match, strategy)))
1738  break;
1739  }
1740  }
1741 
1742  LLVM_DEBUG(dbgs() << "\n");
1743 }
1744 
1745 /// Applies vectorization to the current function by searching over a set of
1746 /// predetermined patterns.
1747 void Vectorize::runOnOperation() {
1748  func::FuncOp f = getOperation();
1749  if (!fastestVaryingPattern.empty() &&
1750  fastestVaryingPattern.size() != vectorSizes.size()) {
1751  f.emitRemark("Fastest varying pattern specified with different size than "
1752  "the vector size.");
1753  return signalPassFailure();
1754  }
1755 
1756  if (vectorizeReductions && vectorSizes.size() != 1) {
1757  f.emitError("Vectorizing reductions is supported only for 1-D vectors.");
1758  return signalPassFailure();
1759  }
1760 
1761  if (llvm::any_of(vectorSizes, [](int64_t size) { return size <= 0; })) {
1762  f.emitError("Vectorization factor must be greater than zero.");
1763  return signalPassFailure();
1764  }
1765 
1766  DenseSet<Operation *> parallelLoops;
1767  ReductionLoopMap reductionLoops;
1768 
1769  // If 'vectorize-reductions=true' is provided, we also populate the
1770  // `reductionLoops` map.
1771  if (vectorizeReductions) {
1772  f.walk([&parallelLoops, &reductionLoops](AffineForOp loop) {
1773  SmallVector<LoopReduction, 2> reductions;
1774  if (isLoopParallel(loop, &reductions)) {
1775  parallelLoops.insert(loop);
1776  // If it's not a reduction loop, adding it to the map is not necessary.
1777  if (!reductions.empty())
1778  reductionLoops[loop] = reductions;
1779  }
1780  });
1781  } else {
1782  f.walk([&parallelLoops](AffineForOp loop) {
1783  if (isLoopParallel(loop))
1784  parallelLoops.insert(loop);
1785  });
1786  }
1787 
1788  // Thread-safe RAII local context, BumpPtrAllocator freed on exit.
1789  NestedPatternContext mlContext;
1790  vectorizeLoops(f, parallelLoops, vectorSizes, fastestVaryingPattern,
1791  reductionLoops);
1792 }
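
// For reference (editorial note; option names as registered for the
// AffineVectorize pass), the pass is typically driven from mlir-opt as:
//
//   mlir-opt --affine-super-vectorize="virtual-vector-size=128" in.mlir
//   mlir-opt --affine-super-vectorize="virtual-vector-size=32,256 test-fastest-varying=1,0" in.mlir
//   mlir-opt --affine-super-vectorize="virtual-vector-size=128 vectorize-reductions=true" in.mlir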
1793 
1794 /// Verify that affine loops in 'loops' meet the nesting criteria expected by
1795 /// SuperVectorizer:
1796 /// * There must be at least one loop.
1797 /// * There must be a single root loop (nesting level 0).
1798 /// * Each loop at a given nesting level must be nested in a loop from a
1799 /// previous nesting level.
1800 static LogicalResult
1801 verifyLoopNesting(const std::vector<SmallVector<AffineForOp, 2>> &loops) {
1802  // Expected at least one loop.
1803  if (loops.empty())
1804  return failure();
1805 
1806  // Expected only one root loop.
1807  if (loops[0].size() != 1)
1808  return failure();
1809 
1810  // Traverse loops outer-to-inner to check some invariants.
1811  for (int i = 1, end = loops.size(); i < end; ++i) {
1812  for (AffineForOp loop : loops[i]) {
1813  // Check that each loop at this level is nested in one of the loops from
1814  // the previous level.
1815  if (none_of(loops[i - 1], [&](AffineForOp maybeParent) {
1816  return maybeParent->isProperAncestor(loop);
1817  }))
1818  return failure();
1819 
1820  // Check that each loop at this level is not nested in another loop from
1821  // this level.
1822  for (AffineForOp sibling : loops[i]) {
1823  if (sibling->isProperAncestor(loop))
1824  return failure();
1825  }
1826  }
1827  }
1828 
1829  return success();
1830 }
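
// Illustration (editorial sketch) using the vec2d example documented further
// below: loops = {{%i0}, {%i1}, {%i2, %i3}} satisfies these criteria, while
// loops = {{%i2}, {%i3}} fails (the level-1 loop %i3 is not nested in the
// level-0 loop %i2) and loops = {{%i0, %i1}} fails (more than one root loop).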
1831 
1832 
1833 /// External utility to vectorize affine loops in 'loops' using the n-D
1834 /// vectorization factors in 'vectorSizes'. By default, each vectorization
1835 /// factor is applied inner-to-outer to the loops of each loop nest.
1836 /// 'fastestVaryingPattern' can be optionally used to provide a different loop
1837 /// vectorization order.
1838 /// If `reductionLoops` is not empty, the given reduction loops may be
1839 /// vectorized along the reduction dimension.
1840 /// TODO: Vectorizing reductions is supported only for 1-D vectorization.
1841 void mlir::affine::vectorizeAffineLoops(
1842  Operation *parentOp, DenseSet<Operation *> &loops,
1843  ArrayRef<int64_t> vectorSizes, ArrayRef<int64_t> fastestVaryingPattern,
1844  const ReductionLoopMap &reductionLoops) {
1845  // Thread-safe RAII local context, BumpPtrAllocator freed on exit.
1846  NestedPatternContext mlContext;
1847  vectorizeLoops(parentOp, loops, vectorSizes, fastestVaryingPattern,
1848  reductionLoops);
1849 }
1850 
1851 /// External utility to vectorize affine loops from a single loop nest using an
1852 /// n-D vectorization strategy (see doc in VectorizationStrategy definition).
1853 /// Loops are provided in a 2D vector container. The first dimension represents
1854 /// the nesting level relative to the loops to be vectorized. The second
1855 /// dimension contains the loops. This means that:
1856 /// a) every loop in 'loops[i]' must have a parent loop in 'loops[i-1]',
1857 /// b) a loop in 'loops[i]' may or may not have a child loop in 'loops[i+1]'.
1858 ///
1859 /// For example, for the following loop nest:
1860 ///
1861 /// func @vec2d(%in0: memref<64x128x512xf32>, %in1: memref<64x128x128xf32>,
1862 ///             %out0: memref<64x128x512xf32>,
1863 ///             %out1: memref<64x128x128xf32>) {
1864 ///   affine.for %i0 = 0 to 64 {
1865 ///     affine.for %i1 = 0 to 128 {
1866 ///       affine.for %i2 = 0 to 512 {
1867 ///         %ld = affine.load %in0[%i0, %i1, %i2] : memref<64x128x512xf32>
1868 ///         affine.store %ld, %out0[%i0, %i1, %i2] : memref<64x128x512xf32>
1869 ///       }
1870 ///       affine.for %i3 = 0 to 128 {
1871 ///         %ld = affine.load %in1[%i0, %i1, %i3] : memref<64x128x128xf32>
1872 ///         affine.store %ld, %out1[%i0, %i1, %i3] : memref<64x128x128xf32>
1873 ///       }
1874 ///     }
1875 ///   }
1876 ///   return
1877 /// }
1878 ///
1879 /// loops = {{%i0}, {%i2, %i3}}, to vectorize the outermost and the two
1880 /// innermost loops;
1881 /// loops = {{%i1}, {%i2, %i3}}, to vectorize the middle and the two innermost
1882 /// loops;
1883 /// loops = {{%i2}}, to vectorize only the first innermost loop;
1884 /// loops = {{%i3}}, to vectorize only the second innermost loop;
1885 /// loops = {{%i1}}, to vectorize only the middle loop.
1886 LogicalResult mlir::affine::vectorizeAffineLoopNest(
1887  std::vector<SmallVector<AffineForOp, 2>> &loops,
1888  const VectorizationStrategy &strategy) {
1889  // Thread-safe RAII local context, BumpPtrAllocator freed on exit.
1890  NestedPatternContext mlContext;
1891  if (failed(verifyLoopNesting(loops)))
1892  return failure();
1893  return vectorizeLoopNest(loops, strategy);
1894 }
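
// A minimal usage sketch (editorial; assumes `outer` and `inner` are
// AffineForOp handles already collected by the caller, with `inner` nested
// inside `outer`):
//
//   std::vector<SmallVector<AffineForOp, 2>> loops;
//   loops.push_back({outer});              // Nesting level 0: the single root.
//   loops.push_back({inner});              // Nesting level 1.
//
//   VectorizationStrategy strategy;
//   strategy.vectorSizes.push_back(128);   // 1-D vectorization, factor 128.
//   strategy.loopToVectorDim[inner] = 0;   // Vectorize `inner` along vector dim 0.
//
//   if (failed(vectorizeAffineLoopNest(loops, strategy))) {
//     // The scalar nest is left untouched on failure; nothing to clean up.
//   }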