1 //===- SuperVectorize.cpp - Vectorize Pass Impl ---------------------------===//
2 //
3 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4 // See https://llvm.org/LICENSE.txt for license information.
5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6 //
7 //===----------------------------------------------------------------------===//
8 //
9 // This file implements vectorization of loops, operations and data types to
10 // a target-independent, n-D super-vector abstraction.
11 //
12 //===----------------------------------------------------------------------===//
13 
14 #include "PassDetail.h"
24 #include "mlir/Support/LLVM.h"
25 #include "llvm/ADT/STLExtras.h"
26 #include "llvm/Support/Debug.h"
27 
28 using namespace mlir;
29 using namespace vector;
30 
31 ///
32 /// Implements a high-level vectorization strategy on a Function.
33 /// The abstraction used is that of super-vectors, which encode, in a single,
34 /// compact representation within the vector types, information that is
35 /// expected to reduce the impact of the phase ordering problem.
36 ///
37 /// Vector granularity:
38 /// ===================
39 /// This pass is designed to perform vectorization at a super-vector
40 /// granularity. A super-vector is loosely defined as a vector type that is a
41 /// multiple of a "good" vector size so the HW can efficiently implement a set
42 /// of high-level primitives. Multiple is understood along any dimension; e.g.
43 /// both vector<16xf32> and vector<2x8xf32> are valid super-vectors for a
44 /// vector<8xf32> HW vector. Note that a "good vector size so the HW can
45 /// efficiently implement a set of high-level primitives" is not necessarily an
46 /// integer multiple of actual hardware registers. We leave details of this
47 /// distinction unspecified for now.
48 ///
49 /// Some may prefer the terminology of a "tile of HW vectors". In this case, one
50 /// should note that super-vectors implement an "always full tile" abstraction.
51 /// They guarantee no partial-tile separation is necessary by relying on a
52 /// high-level copy-reshape abstraction that we call vector.transfer. This
53 /// copy-reshape operation is also responsible for performing layout
54 /// transposition if necessary. In the general case this will require a scoped
55 /// allocation in some notional local memory.
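///
/// As an illustration of the "always full tile" abstraction (shapes and names
/// below are assumptions chosen for the example, not output produced by this
/// pass verbatim), a read that would otherwise need a partial tile can be
/// expressed as a single vector.transfer_read whose padding value fills the
/// out-of-bounds lanes:
/// ```mlir
///   %pad = arith.constant 0.0 : f32
///   %v = vector.transfer_read %A[%i, %j], %pad
///          : memref<?x?xf32>, vector<32x256xf32>
/// ```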
56 ///
57 /// Whatever the mental model one prefers to use for this abstraction, the key
58 /// point is that we burn, into a single compact representation in the vector
59 /// types, information that is expected to reduce the impact of the phase
60 /// ordering problem. Indeed, a vector type conveys information that:
61 /// 1. the associated loops have dependency semantics that do not prevent
62 /// vectorization;
63 /// 2. the associated loops have been sliced in chunks of static sizes that are
64 /// compatible with vector sizes (i.e. similar to unroll-and-jam);
65 /// 3. the inner loops, in the unroll-and-jam analogy of 2, are captured by
66 /// the vector type and no vectorization-hampering transformations can be
67 /// applied to them anymore;
69 /// 4. the underlying memrefs are accessed in some notional contiguous way
70 /// that allows loading into vectors with some amount of spatial locality;
71 /// In other words, super-vectorization provides a level of separation of
72 /// concerns by way of opacity to subsequent passes. This has the effect of
73 /// encapsulating and propagating vectorization constraints down the list of
74 /// passes until we are ready to lower further.
75 ///
76 /// For a particular target, a notion of minimal n-d vector size will be
77 /// specified and vectorization targets a multiple of those. In the following
78 /// paragraph, let "k ." represent "a multiple of", to be understood as a
79 /// multiple in the same dimension (e.g. vector<16 x k . 128> summarizes
80 /// vector<16 x 128>, vector<16 x 256>, vector<16 x 1024>, etc).
81 ///
82 /// Some non-exhaustive notable super-vector sizes of interest include:
83 /// - CPU: vector<k . HW_vector_size>,
84 /// vector<k' . core_count x k . HW_vector_size>,
85 /// vector<socket_count x k' . core_count x k . HW_vector_size>;
86 /// - GPU: vector<k . warp_size>,
87 /// vector<k . warp_size x float2>,
88 /// vector<k . warp_size x float4>,
89 /// vector<k . warp_size x 4 x 4 x 4> (for tensor_core sizes).
90 ///
91 /// Loops and operations are emitted that operate on those super-vector shapes.
92 /// Subsequent lowering passes will materialize to actual HW vector sizes. These
93 /// passes are expected to be (gradually) more target-specific.
94 ///
95 /// At a high level, a vectorized load in a loop will resemble:
96 /// ```mlir
97 /// affine.for %i = ? to ? step ? {
98 /// %v_a = vector.transfer_read A[%i] : memref<?xf32>, vector<128xf32>
99 /// }
100 /// ```
101 /// It is the responsibility of the implementation of vector.transfer_read to
102 /// materialize vector registers from the original scalar memrefs. A later (more
103 /// target-dependent) lowering pass will materialize to actual HW vector sizes.
104 /// This lowering may occur at different times:
105 /// 1. at the MLIR level into a combination of loops, unrolling, DmaStartOp +
106 /// DmaWaitOp + vectorized operations for data transformations and shuffle;
107 /// thus opening opportunities for unrolling and pipelining. This is an
108 /// instance of library call "whiteboxing"; or
109 /// 2. later in a target-specific lowering pass or hand-written library
110 /// call, achieving full separation of concerns. This is an instance of
111 /// a library call; or
112 /// 3. a mix of both, e.g. based on a model.
113 /// In the future, these operations will expose a contract to constrain the
114 /// search on vectorization patterns and sizes.
115 ///
116 /// Occurrence of super-vectorization in the compiler flow:
117 /// =======================================================
118 /// This is an active area of investigation. We start with 2 remarks to position
119 /// super-vectorization in the context of existing ongoing work: LLVM VPLAN
120 /// and LLVM SLP Vectorizer.
121 ///
122 /// LLVM VPLAN:
123 /// -----------
124 /// The astute reader may have noticed that in the limit, super-vectorization
125 /// can be applied at a similar time and with similar objectives to those of VPLAN.
126 /// For instance, it could be applied at the tail of a traditional polyhedral
127 /// compilation flow (e.g. PPCG uses ISL to provide dependence analysis,
128 /// multi-level scheduling + tiling, lifting of footprints to fast memory,
129 /// communication synthesis, mapping, and register optimizations), just before
130 /// unrolling. When vectorization is applied at this *late* level in a typical
131 /// polyhedral flow, and is instantiated with actual hardware vector sizes,
132 /// super-vectorization is expected to match (or subsume) the type of patterns
133 /// that LLVM's VPLAN aims at targeting. The main difference here is that MLIR
134 /// is higher level and our implementation should be significantly simpler. Also
135 /// note that in this mode, recursive patterns are probably a bit of an overkill
136 /// although it is reasonable to expect that mixing a bit of outer loop and
137 /// inner loop vectorization + unrolling will provide interesting choices to
138 /// MLIR.
139 ///
140 /// LLVM SLP Vectorizer:
141 /// --------------------
142 /// Super-vectorization however is not meant to be usable in a similar fashion
143 /// to the SLP vectorizer. The main difference lies in the information that
144 /// both vectorizers use: super-vectorization examines contiguity of memory
145 /// references along fastest varying dimensions and loops with recursive nested
146 /// patterns capturing imperfectly-nested loop nests; the SLP vectorizer, on
147 /// the other hand, performs flat pattern matching inside a single unrolled loop
148 /// body and stitches together pieces of load and store operations into full
149 /// 1-D vectors. We envision that the SLP vectorizer is a good way to capture
150 /// innermost loop, control-flow dependent patterns that super-vectorization may
151 /// not be able to capture easily. In other words, super-vectorization does not
152 /// aim at replacing the SLP vectorizer and the two solutions are complementary.
153 ///
154 /// Ongoing investigations:
155 /// -----------------------
156 /// We discuss the following *early* places where super-vectorization is
157 /// applicable and touch on the expected benefits and risks. We list the
158 /// opportunities in the context of the traditional polyhedral compiler flow
159 /// described in PPCG. There are essentially 6 places in the MLIR pass pipeline
160 /// we expect to experiment with super-vectorization:
161 /// 1. Right after language lowering to MLIR: this is the earliest time where
162 /// super-vectorization is expected to be applied. At this level, all the
163 /// language/user/library-level annotations are available and can be fully
164 /// exploited. Examples include loop-type annotations (such as parallel,
165 /// reduction, scan, dependence distance vector, vectorizable) as well as
166 /// memory access annotations (such as non-aliasing writes guaranteed,
167 /// indirect accesses that are permutations by construction), or the fact
168 /// that a particular operation is prescribed atomic by the user. At this
169 /// level, anything that enriches what dependence analysis can do should be
170 /// aggressively exploited. At this level we are close to having explicit
171 /// vector types in the language, except we do not impose that burden on the
172 /// programmer/library: we derive information from scalar code + annotations.
173 /// 2. After dependence analysis and before polyhedral scheduling: the
174 /// information that supports vectorization does not need to be supplied by a
175 /// higher level of abstraction. Traditional dependence analysis is available
176 /// in MLIR and will be used to drive vectorization and cost models.
177 ///
178 /// Let's pause here and remark that applying super-vectorization as described
179 /// in 1. and 2. presents clear opportunities and risks:
180 /// - the opportunity is that vectorization is burned in the type system and
181 /// is protected from the adverse effect of loop scheduling, tiling, loop
182 /// interchange and all passes downstream. Provided that subsequent passes are
183 /// able to operate on vector types, the vector shapes, associated loop
184 /// iterator properties, alignment, and contiguity of fastest varying
185 /// dimensions are preserved until we lower the super-vector types. We expect
186 /// this to significantly rein in the adverse effects of phase ordering.
187 /// - the risks are that a. all passes after super-vectorization have to work
188 /// on elemental vector types (note that this is always true, wherever
189 /// vectorization is applied) and b. that imposing vectorization constraints
190 /// too early may be overall detrimental to loop fusion, tiling and other
191 /// transformations because the dependence distances are coarsened when
192 /// operating on elemental vector types. For this reason, the pattern
193 /// profitability analysis should include a component that also captures the
194 /// maximal amount of fusion available under a particular pattern. This is
195 /// still at the stage of rough ideas but in this context, search is our
196 /// friend as the Tensor Comprehensions and auto-TVM contributions
197 /// demonstrated previously.
198 /// Bottom-line is we do not yet have good answers for the above but aim at
199 /// making it easy to answer such questions.
200 ///
201 /// Back to our listing, the last places where early super-vectorization makes
202 /// sense are:
203 /// 3. right after polyhedral-style scheduling: PLUTO-style algorithms are known
204 /// to improve locality, parallelism and be configurable (e.g. max-fuse,
205 /// smart-fuse etc). They can also have adverse effects on contiguity
206 /// properties that are required for vectorization but the vector.transfer
207 /// copy-reshape-pad-transpose abstraction is expected to help recapture
208 /// these properties.
209 /// 4. right after polyhedral-style scheduling+tiling;
210 /// 5. right after scheduling+tiling+rescheduling: points 4 and 5 represent
211 /// probably the most promising places because applying tiling achieves a
212 /// separation of concerns that allows rescheduling to worry less about
213 /// locality and more about parallelism and distribution (e.g. min-fuse).
214 ///
215 /// At these levels the risk-reward looks different: on one hand we probably
216 /// lost a good deal of language/user/library-level annotation; on the other
217 /// hand we gained parallelism and locality through scheduling and tiling.
218 /// However we probably want to ensure tiling is compatible with the
219 /// full-tile-only abstraction used in super-vectorization or suffer the
220 /// consequences. It is too early to place bets on what will win but we expect
221 /// super-vectorization to be the right abstraction to allow exploring at all
222 /// these levels. And again, search is our friend.
223 ///
224 /// Lastly, we mention it again here:
225 /// 6. as an MLIR-based alternative to VPLAN.
226 ///
227 /// Lowering, unrolling, pipelining:
228 /// ================================
229 /// TODO: point to the proper places.
230 ///
231 /// Algorithm:
232 /// ==========
233 /// The algorithm proceeds in a few steps:
234 /// 1. defining super-vectorization patterns and matching them on the tree of
235 /// AffineForOp. A super-vectorization pattern is defined as a recursive
236 /// data structure that matches and captures nested, imperfectly-nested
237 /// loops that have a. conformable loop annotations attached (e.g. parallel,
238 /// reduction, vectorizable, ...) as well as b. all contiguous load/store
239 /// operations along a specified minor dimension (not necessarily the
240 /// fastest varying);
241 /// 2. analyzing those patterns for profitability (TODO: and
242 /// interference);
243 /// 3. then, for each pattern in order:
244 /// a. applying iterative rewriting of the loops and all their nested
245 /// operations in topological order. Rewriting is implemented by
246 /// coarsening the loops and converting operations and operands to their
247 /// vector forms. Processing operations in topological order is relatively
248 /// simple due to the structured nature of the control-flow
249 /// representation. This order ensures that all the operands of a given
250 /// operation have been vectorized before the operation itself in a single
251 /// traversal, except for operands defined outside of the loop nest. The
252 /// algorithm can convert the following operations to their vector form:
253 /// * Affine load and store operations are converted to opaque vector
254 /// transfer read and write operations.
255 /// * Scalar constant operations/operands are converted to vector
256 /// constant operations (splat).
257 /// * Uniform operands (only induction variables of loops not mapped to
258 /// a vector dimension, or operands defined outside of the loop nest
259 /// for now) are broadcasted to a vector.
260 /// TODO: Support more uniform cases.
261 /// * Affine for operations with 'iter_args' are vectorized by
262 /// vectorizing their 'iter_args' operands and results.
263 /// TODO: Support more complex loops with divergent lbs and/or ubs.
264 /// * The remaining operations in the loop nest are vectorized by
265 /// widening their scalar types to vector types.
266 /// b. if everything under the root AffineForOp in the current pattern
267 /// is vectorized properly, we commit that loop to the IR and remove the
268 /// scalar loop. Otherwise, we discard the vectorized loop and keep the
269 /// original scalar loop.
270 /// c. vectorization is applied on the next pattern in the list. Because
271 /// pattern interference avoidance is not yet implemented and we do not
272 /// support further vectorizing an already vectorized load, we need to
273 /// re-verify that the pattern is still vectorizable. This is expected to
274 /// make cost models more difficult to write and is subject to improvement
275 /// in the future.
276 ///
277 /// Choice of loop transformation to support the algorithm:
278 /// =======================================================
279 /// The choice of loop transformation to apply for coarsening vectorized loops
280 /// is still subject to exploratory tradeoffs. In particular, say we want to
281 /// vectorize by a factor of 128; we want to transform the following input:
282 /// ```mlir
283 /// affine.for %i = %M to %N {
284 /// %a = affine.load %A[%i] : memref<?xf32>
285 /// }
286 /// ```
287 ///
288 /// Traditionally, one would vectorize late (after scheduling, tiling,
289 /// memory promotion, etc.), say after stripmining (and potentially unrolling in
290 /// the case of LLVM's SLP vectorizer):
291 /// ```mlir
292 /// affine.for %i = floor(%M, 128) to ceil(%N, 128) {
293 /// affine.for %ii = max(%M, 128 * %i) to min(%N, 128*%i + 127) {
294 /// %a = affine.load %A[%ii] : memref<?xf32>
295 /// }
296 /// }
297 /// ```
298 ///
299 /// Instead, we seek to vectorize early and freeze vector types before
300 /// scheduling, so we want to generate a pattern that resembles:
301 /// ```mlir
302 /// affine.for %i = ? to ? step ? {
303 /// %v_a = vector.transfer_read %A[%i] : memref<?xf32>, vector<128xf32>
304 /// }
305 /// ```
306 ///
/// Two alternatives were considered for coarsening the loop by the vector size:
307 /// i. simply dividing the lower / upper bounds by 128 creates issues
308 /// when representing expressions such as ii + 1 because now we only
309 /// have access to original values that have been divided. Additional
310 /// information is needed to specify accesses at below-128 granularity;
311 /// ii. another alternative is to coarsen the loop step but this may have
312 /// consequences on dependence analysis and fusability of loops: fusable
313 /// loops probably need to have the same step (because we don't want to
314 /// stripmine/unroll to enable fusion).
315 /// As a consequence, we choose to represent the coarsening using the loop
316 /// step for now and reevaluate in the future. Note that we can renormalize
317 /// loop steps later if/when we have evidence that they are problematic.
318 ///
319 /// For the simple strawman example above, vectorizing for a 1-D vector
320 /// abstraction of size 128 returns code similar to:
321 /// ```mlir
322 /// affine.for %i = %M to %N step 128 {
323 /// %v_a = vector.transfer_read %A[%i] : memref<?xf32>, vector<128xf32>
324 /// }
325 /// ```
326 ///
327 /// Unsupported cases, extensions, and work in progress (help welcome :-) ):
328 /// ========================================================================
329 /// 1. lowering to concrete vector types for various HW;
330 /// 2. reduction support for n-D vectorization and non-unit steps;
331 /// 3. non-effecting padding during vector.transfer_read and filter during
332 /// vector.transfer_write;
333 /// 4. misalignment support for vector.transfer_read / vector.transfer_write
334 /// (hopefully without read-modify-writes);
335 /// 5. control-flow support;
336 /// 6. cost-models, heuristics and search;
337 /// 7. Op implementation, extensions and implication on memref views;
338 /// 8. many TODOs left around.
339 ///
340 /// Examples:
341 /// =========
342 /// Consider the following Function:
343 /// ```mlir
344 /// func @vector_add_2d(%M : index, %N : index) -> f32 {
345 /// %A = memref.alloc (%M, %N) : memref<?x?xf32, 0>
346 /// %B = memref.alloc (%M, %N) : memref<?x?xf32, 0>
347 /// %C = memref.alloc (%M, %N) : memref<?x?xf32, 0>
348 /// %f1 = arith.constant 1.0 : f32
349 /// %f2 = arith.constant 2.0 : f32
350 /// affine.for %i0 = 0 to %M {
351 /// affine.for %i1 = 0 to %N {
352 /// // non-scoped %f1
353 /// affine.store %f1, %A[%i0, %i1] : memref<?x?xf32, 0>
354 /// }
355 /// }
356 /// affine.for %i2 = 0 to %M {
357 /// affine.for %i3 = 0 to %N {
358 /// // non-scoped %f2
359 /// affine.store %f2, %B[%i2, %i3] : memref<?x?xf32, 0>
360 /// }
361 /// }
362 /// affine.for %i4 = 0 to %M {
363 /// affine.for %i5 = 0 to %N {
364 /// %a5 = affine.load %A[%i4, %i5] : memref<?x?xf32, 0>
365 /// %b5 = affine.load %B[%i4, %i5] : memref<?x?xf32, 0>
366 /// %s5 = arith.addf %a5, %b5 : f32
367 /// // non-scoped %f1
368 /// %s6 = arith.addf %s5, %f1 : f32
369 /// // non-scoped %f2
370 /// %s7 = arith.addf %s5, %f2 : f32
371 /// // diamond dependency.
372 /// %s8 = arith.addf %s7, %s6 : f32
373 /// affine.store %s8, %C[%i4, %i5] : memref<?x?xf32, 0>
374 /// }
375 /// }
376 /// %c7 = arith.constant 7 : index
377 /// %c42 = arith.constant 42 : index
378 /// %res = memref.load %C[%c7, %c42] : memref<?x?xf32, 0>
379 /// return %res : f32
380 /// }
381 /// ```
382 ///
383 /// The -affine-vectorize pass with the following arguments:
384 /// ```
385 /// -affine-vectorize="virtual-vector-size=256 test-fastest-varying=0"
386 /// ```
387 ///
388 /// produces this standard innermost-loop vectorized code:
389 /// ```mlir
390 /// func @vector_add_2d(%arg0 : index, %arg1 : index) -> f32 {
391 /// %0 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
392 /// %1 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
393 /// %2 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
394 /// %cst = arith.constant 1.0 : f32
395 /// %cst_0 = arith.constant 2.0 : f32
396 /// affine.for %i0 = 0 to %arg0 {
397 /// affine.for %i1 = 0 to %arg1 step 256 {
398 /// %cst_1 = arith.constant dense<1.0> : vector<256xf32>
400 /// vector.transfer_write %cst_1, %0[%i0, %i1] :
401 /// vector<256xf32>, memref<?x?xf32>
402 /// }
403 /// }
404 /// affine.for %i2 = 0 to %arg0 {
405 /// affine.for %i3 = 0 to %arg1 step 256 {
406 /// %cst_2 = arith.constant dense<2.0> : vector<256xf32>
408 /// vector.transfer_write %cst_2, %1[%i2, %i3] :
409 /// vector<256xf32>, memref<?x?xf32>
410 /// }
411 /// }
412 /// affine.for %i4 = 0 to %arg0 {
413 /// affine.for %i5 = 0 to %arg1 step 256 {
414 /// %3 = vector.transfer_read %0[%i4, %i5] :
415 /// memref<?x?xf32>, vector<256xf32>
416 /// %4 = vector.transfer_read %1[%i4, %i5] :
417 /// memref<?x?xf32>, vector<256xf32>
418 /// %5 = arith.addf %3, %4 : vector<256xf32>
419 /// %cst_3 = arith.constant dense<1.0> : vector<256xf32>
421 /// %6 = arith.addf %5, %cst_3 : vector<256xf32>
422 /// %cst_4 = arith.constant dense<2.0> : vector<256xf32>
424 /// %7 = arith.addf %5, %cst_4 : vector<256xf32>
425 /// %8 = arith.addf %7, %6 : vector<256xf32>
426 /// vector.transfer_write %8, %2[%i4, %i5] :
427 /// vector<256xf32>, memref<?x?xf32>
428 /// }
429 /// }
430 /// %c7 = arith.constant 7 : index
431 /// %c42 = arith.constant 42 : index
432 /// %9 = memref.load %2[%c7, %c42] : memref<?x?xf32>
433 /// return %9 : f32
434 /// }
435 /// ```
436 ///
437 /// The -affine-vectorize pass with the following arguments:
438 /// ```
439 /// -affine-vectorize="virtual-vector-size=32,256 test-fastest-varying=1,0"
440 /// ```
441 ///
442 /// produces this more interesting mixed outer-innermost-loop vectorized code:
443 /// ```mlir
444 /// func @vector_add_2d(%arg0 : index, %arg1 : index) -> f32 {
445 /// %0 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
446 /// %1 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
447 /// %2 = memref.alloc(%arg0, %arg1) : memref<?x?xf32>
448 /// %cst = arith.constant 1.0 : f32
449 /// %cst_0 = arith.constant 2.0 : f32
450 /// affine.for %i0 = 0 to %arg0 step 32 {
451 /// affine.for %i1 = 0 to %arg1 step 256 {
452 /// %cst_1 = arith.constant dense<1.0> : vector<32x256xf32>
454 /// vector.transfer_write %cst_1, %0[%i0, %i1] :
455 /// vector<32x256xf32>, memref<?x?xf32>
456 /// }
457 /// }
458 /// affine.for %i2 = 0 to %arg0 step 32 {
459 /// affine.for %i3 = 0 to %arg1 step 256 {
460 /// %cst_2 = arith.constant dense<2.0> : vector<32x256xf32>
462 /// vector.transfer_write %cst_2, %1[%i2, %i3] :
463 /// vector<32x256xf32>, memref<?x?xf32>
464 /// }
465 /// }
466 /// affine.for %i4 = 0 to %arg0 step 32 {
467 /// affine.for %i5 = 0 to %arg1 step 256 {
468 /// %3 = vector.transfer_read %0[%i4, %i5] :
469 /// memref<?x?xf32>, vector<32x256xf32>
470 /// %4 = vector.transfer_read %1[%i4, %i5] :
471 /// memref<?x?xf32>, vector<32x256xf32>
472 /// %5 = arith.addf %3, %4 : vector<32x256xf32>
473 /// %cst_3 = arith.constant dense<1.0> : vector<32x256xf32>
475 /// %6 = arith.addf %5, %cst_3 : vector<32x256xf32>
476 /// %cst_4 = arith.constant dense<2.0> : vector<32x256xf32>
478 /// %7 = arith.addf %5, %cst_4 : vector<32x256xf32>
479 /// %8 = arith.addf %7, %6 : vector<32x256xf32>
480 /// vector.transfer_write %8, %2[%i4, %i5] :
481 /// vector<32x256xf32>, memref<?x?xf32>
482 /// }
483 /// }
484 /// %c7 = arith.constant 7 : index
485 /// %c42 = arith.constant 42 : index
486 /// %9 = memref.load %2[%c7, %c42] : memref<?x?xf32>
487 /// return %9 : f32
488 /// }
489 /// ```
490 ///
491 /// Of course, much more intricate n-D imperfectly-nested patterns can be
492 /// vectorized too and specified in a fully declarative fashion.
493 ///
494 /// Reduction:
495 /// ==========
496 /// Vectorizing reduction loops along the reduction dimension is supported if:
497 /// - the reduction kind is supported,
498 /// - the vectorization is 1-D, and
499 /// - the step size of the loop is equal to one.
500 ///
501 /// Compared to the non-vector-dimension case, two additional things are done
502 /// during vectorization of such loops:
503 /// - The resulting vector returned from the loop is reduced to a scalar using
504 /// `vector.reduction`.
505 /// - In some cases a mask is applied to the vector yielded at the end of the
506 /// loop to prevent garbage values from being written to the accumulator.
507 ///
508 /// Reduction vectorization is switched off by default; it can be enabled by
509 /// passing a map from loops to reductions to utility functions, or by passing
510 /// `vectorize-reductions=true` to the vectorization pass.
511 ///
512 /// Consider the following example:
513 /// ```mlir
514 /// func @vecred(%in: memref<512xf32>) -> f32 {
515 /// %cst = arith.constant 0.000000e+00 : f32
516 /// %sum = affine.for %i = 0 to 500 iter_args(%part_sum = %cst) -> (f32) {
517 /// %ld = affine.load %in[%i] : memref<512xf32>
518 /// %cos = math.cos %ld : f32
519 /// %add = arith.addf %part_sum, %cos : f32
520 /// affine.yield %add : f32
521 /// }
522 /// return %sum : f32
523 /// }
524 /// ```
525 ///
526 /// The -affine-vectorize pass with the following arguments:
527 /// ```
528 /// -affine-vectorize="virtual-vector-size=128 test-fastest-varying=0 \
529 /// vectorize-reductions=true"
530 /// ```
531 /// produces the following output:
532 /// ```mlir
533 /// #map = affine_map<(d0) -> (-d0 + 500)>
534 /// func @vecred(%arg0: memref<512xf32>) -> f32 {
535 /// %cst = arith.constant 0.000000e+00 : f32
536 /// %cst_0 = arith.constant dense<0.000000e+00> : vector<128xf32>
537 /// %0 = affine.for %arg1 = 0 to 500 step 128 iter_args(%arg2 = %cst_0)
538 /// -> (vector<128xf32>) {
539 /// // %2 is the number of iterations left in the original loop.
540 /// %2 = affine.apply #map(%arg1)
541 /// %3 = vector.create_mask %2 : vector<128xi1>
542 /// %cst_1 = arith.constant 0.000000e+00 : f32
543 /// %4 = vector.transfer_read %arg0[%arg1], %cst_1 :
544 /// memref<512xf32>, vector<128xf32>
545 /// %5 = math.cos %4 : vector<128xf32>
546 /// %6 = arith.addf %arg2, %5 : vector<128xf32>
547 /// // We filter out the effect of the last 12 elements using the mask.
548 /// %7 = select %3, %6, %arg2 : vector<128xi1>, vector<128xf32>
549 /// affine.yield %7 : vector<128xf32>
550 /// }
551 /// %1 = vector.reduction "add", %0 : vector<128xf32> into f32
552 /// return %1 : f32
553 /// }
554 /// ```
555 ///
556 /// Note that because of loop misalignment we needed to apply a mask to prevent
557 /// the last 12 elements from affecting the final result. The mask is full of ones
558 /// in every iteration except for the last one, in which it has the form
559 /// `11...100...0` with 116 ones and 12 zeros.
560 
561 #define DEBUG_TYPE "early-vect"
562 
563 using llvm::dbgs;
564 
565 /// Forward declaration.
566 static FilterFunctionType
isVectorizableLoopPtrFactory(const DenseSet<Operation *> &parallelLoops,
568  int fastestVaryingMemRefDimension);
569 
570 /// Creates a vectorization pattern from the command line arguments.
571 /// Up to 3-D patterns are supported.
572 /// If the command line argument requests a pattern of higher order, returns
573 /// llvm::None, which will conservatively result in no vectorization.
static Optional<NestedPattern>
575 makePattern(const DenseSet<Operation *> &parallelLoops, int vectorRank,
576  ArrayRef<int64_t> fastestVaryingPattern) {
577  using matcher::For;
578  int64_t d0 = fastestVaryingPattern.empty() ? -1 : fastestVaryingPattern[0];
579  int64_t d1 = fastestVaryingPattern.size() < 2 ? -1 : fastestVaryingPattern[1];
580  int64_t d2 = fastestVaryingPattern.size() < 3 ? -1 : fastestVaryingPattern[2];
581  switch (vectorRank) {
582  case 1:
583  return For(isVectorizableLoopPtrFactory(parallelLoops, d0));
584  case 2:
585  return For(isVectorizableLoopPtrFactory(parallelLoops, d0),
586  For(isVectorizableLoopPtrFactory(parallelLoops, d1)));
587  case 3:
588  return For(isVectorizableLoopPtrFactory(parallelLoops, d0),
589  For(isVectorizableLoopPtrFactory(parallelLoops, d1),
590  For(isVectorizableLoopPtrFactory(parallelLoops, d2))));
591  default: {
592  return llvm::None;
593  }
594  }
595 }
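
// Illustrative usage sketch (not part of this file's logic): a pattern
// returned by makePattern is matched on an enclosing operation and the
// resulting NestedMatch roots become vectorization candidates. Variable names
// here are assumptions; the real driver lives in the pass's runOnOperation().
//
//   llvm::Optional<NestedPattern> pattern =
//       makePattern(parallelLoops, /*vectorRank=*/2,
//                   /*fastestVaryingPattern=*/{1, 0});
//   if (pattern) {
//     SmallVector<NestedMatch, 8> matches;
//     pattern->match(parentOp, &matches);
//     // Each match root is a candidate loop nest for vectorization.
//   }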
596 
static NestedPattern &vectorTransferPattern() {
598  static auto pattern = matcher::Op([](Operation &op) {
599  return isa<vector::TransferReadOp, vector::TransferWriteOp>(op);
600  });
601  return pattern;
602 }
603 
604 namespace {
605 
606 /// Base state for the vectorize pass.
607 /// Command line arguments are preempted by non-empty pass arguments.
608 struct Vectorize : public AffineVectorizeBase<Vectorize> {
609  Vectorize() = default;
610  Vectorize(ArrayRef<int64_t> virtualVectorSize);
611  void runOnOperation() override;
612 };
613 
614 } // namespace
615 
616 Vectorize::Vectorize(ArrayRef<int64_t> virtualVectorSize) {
617  vectorSizes = virtualVectorSize;
618 }
619 
620 static void vectorizeLoopIfProfitable(Operation *loop, unsigned depthInPattern,
621  unsigned patternDepth,
622  VectorizationStrategy *strategy) {
623  assert(patternDepth > depthInPattern &&
624  "patternDepth must be greater than depthInPattern");
625  if (patternDepth - depthInPattern > strategy->vectorSizes.size()) {
626  // Don't vectorize this loop
627  return;
628  }
629  strategy->loopToVectorDim[loop] =
630  strategy->vectorSizes.size() - (patternDepth - depthInPattern);
631 }
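
// Worked example (values are assumptions, for exposition only): with a matched
// pattern of depth 3 and a 2-D strategy (strategy->vectorSizes.size() == 2):
//   - the outermost loop (depthInPattern == 0): 3 - 0 > 2, so it stays scalar;
//   - the middle loop (depthInPattern == 1): maps to vector dim 2 - (3 - 1) = 0;
//   - the innermost loop (depthInPattern == 2): maps to vector dim
//     2 - (3 - 2) = 1, i.e. the fastest varying vector dimension.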
632 
633 /// Implements a simple strawman strategy for vectorization.
634 /// Given a matched pattern `matches` of depth `patternDepth`, this strategy
635 /// greedily assigns the fastest varying dimension ** of the vector ** to the
636 /// innermost loop in the pattern.
637 /// When coupled with a pattern that looks for the fastest varying dimension in
638 /// load/store MemRefs, this creates a generic vectorization strategy that works
639 /// for any loop in a hierarchy (outermost, innermost or intermediate).
640 ///
641 /// TODO: In the future we should additionally increase the power of the
642 /// profitability analysis along 3 directions:
643 /// 1. account for loop extents (both static and parametric + annotations);
644 /// 2. account for data layout permutations;
645 /// 3. account for impact of vectorization on maximal loop fusion.
646 /// Then we can quantify the above to build a cost model and search over
647 /// strategies.
static LogicalResult analyzeProfitability(ArrayRef<NestedMatch> matches,
649  unsigned depthInPattern,
650  unsigned patternDepth,
651  VectorizationStrategy *strategy) {
652  for (auto m : matches) {
653  if (failed(analyzeProfitability(m.getMatchedChildren(), depthInPattern + 1,
654  patternDepth, strategy))) {
655  return failure();
656  }
657  vectorizeLoopIfProfitable(m.getMatchedOperation(), depthInPattern,
658  patternDepth, strategy);
659  }
660  return success();
661 }
662 
663 ///// end TODO: Hoist to a VectorizationStrategy.cpp when appropriate /////
664 
665 namespace {
666 
667 struct VectorizationState {
668 
669  VectorizationState(MLIRContext *context) : builder(context) {}
670 
671  /// Registers the vector replacement of a scalar operation and its result
672  /// values. Both operations must have the same number of results.
673  ///
674  /// This utility is used to register the replacement for the vast majority of
675  /// the vectorized operations.
676  ///
677  /// Example:
678  /// * 'replaced': %0 = arith.addf %1, %2 : f32
679  /// * 'replacement': %0 = arith.addf %1, %2 : vector<128xf32>
680  void registerOpVectorReplacement(Operation *replaced, Operation *replacement);
681 
682  /// Registers the vector replacement of a scalar value. The replacement
683  /// operation should have a single result, which replaces the scalar value.
684  ///
685  /// This utility is used to register the vector replacement of block arguments
686  /// and operation results which are not directly vectorized (i.e., their
687  /// scalar version still exists after vectorization), like uniforms.
688  ///
689  /// Example:
690  /// * 'replaced': block argument or operation outside of the vectorized
691  /// loop.
692  /// * 'replacement': %0 = vector.broadcast %1 : f32 to vector<128xf32>
693  void registerValueVectorReplacement(Value replaced, Operation *replacement);
694 
695  /// Registers the vector replacement of a block argument (e.g., iter_args).
696  ///
697  /// Example:
698  /// * 'replaced': 'iter_arg' block argument.
699  /// * 'replacement': vectorized 'iter_arg' block argument.
700  void registerBlockArgVectorReplacement(BlockArgument replaced,
701  BlockArgument replacement);
702 
703  /// Registers the scalar replacement of a scalar value. 'replacement' must be
704  /// scalar. Both values must be block arguments. Operation results should be
705  /// replaced using the 'registerOp*' utilities.
706  ///
707  /// This utility is used to register the replacement of block arguments
708  /// that are within the loop to be vectorized and will continue being scalar
709  /// within the vector loop.
710  ///
711  /// Example:
712  /// * 'replaced': induction variable of a loop to be vectorized.
713  /// * 'replacement': new induction variable in the new vector loop.
714  void registerValueScalarReplacement(BlockArgument replaced,
715  BlockArgument replacement);
716 
717  /// Registers the scalar replacement of a scalar result returned from a
718  /// reduction loop. 'replacement' must be scalar.
719  ///
720  /// This utility is used to register the replacement for scalar results of
721  /// vectorized reduction loops with iter_args.
722  ///
723  /// Example:
724  /// * 'replaced': %0 = affine.for %i = 0 to 512 iter_args(%x = ...) -> (f32)
725  /// * 'replacement': %1 = vector.reduction "add" %0 : vector<4xf32> into f32
726  void registerLoopResultScalarReplacement(Value replaced, Value replacement);
727 
728  /// Returns in 'replacedVals' the scalar replacement for values in
729  /// 'inputVals'.
730  void getScalarValueReplacementsFor(ValueRange inputVals,
731  SmallVectorImpl<Value> &replacedVals);
732 
733  /// Erases the scalar loop nest after its successful vectorization.
734  void finishVectorizationPattern(AffineForOp rootLoop);
735 
736  // Used to build and insert all the new operations created. The insertion
737  // point is preserved and updated along the vectorization process.
738  OpBuilder builder;
739 
740  // Maps input scalar operations to their vector counterparts.
741  DenseMap<Operation *, Operation *> opVectorReplacement;
742  // Maps input scalar values to their vector counterparts.
743  BlockAndValueMapping valueVectorReplacement;
744  // Maps input scalar values to their new scalar counterparts in the vector
745  // loop nest.
746  BlockAndValueMapping valueScalarReplacement;
747  // Maps results of reduction loops to their new scalar counterparts.
748  DenseMap<Value, Value> loopResultScalarReplacement;
749 
750  // Maps the newly created vector loops to their vector dimension.
751  DenseMap<Operation *, unsigned> vecLoopToVecDim;
752 
753  // Maps the new vectorized loops to the corresponding vector masks if it is
754  // required.
755  DenseMap<Operation *, Value> vecLoopToMask;
756 
757  // The strategy drives which loop to vectorize by which amount.
758  const VectorizationStrategy *strategy = nullptr;
759 
760 private:
761  /// Internal implementation to map input scalar values to new vector or scalar
762  /// values.
763  void registerValueVectorReplacementImpl(Value replaced, Value replacement);
764  void registerValueScalarReplacementImpl(Value replaced, Value replacement);
765 };
766 
767 } // namespace
768 
769 /// Registers the vector replacement of a scalar operation and its result
770 /// values. Both operations must have the same number of results.
771 ///
772 /// This utility is used to register the replacement for the vast majority of
773 /// the vectorized operations.
774 ///
775 /// Example:
776 /// * 'replaced': %0 = arith.addf %1, %2 : f32
777 /// * 'replacement': %0 = arith.addf %1, %2 : vector<128xf32>
778 void VectorizationState::registerOpVectorReplacement(Operation *replaced,
779  Operation *replacement) {
780  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ commit vectorized op:\n");
781  LLVM_DEBUG(dbgs() << *replaced << "\n");
782  LLVM_DEBUG(dbgs() << "into\n");
783  LLVM_DEBUG(dbgs() << *replacement << "\n");
784 
785  assert(replaced->getNumResults() == replacement->getNumResults() &&
786  "Unexpected replaced and replacement results");
787  assert(opVectorReplacement.count(replaced) == 0 && "already registered");
788  opVectorReplacement[replaced] = replacement;
789 
790  for (auto resultTuple :
791  llvm::zip(replaced->getResults(), replacement->getResults()))
792  registerValueVectorReplacementImpl(std::get<0>(resultTuple),
793  std::get<1>(resultTuple));
794 }
795 
796 /// Registers the vector replacement of a scalar value. The replacement
797 /// operation should have a single result, which replaces the scalar value.
798 ///
799 /// This utility is used to register the vector replacement of block arguments
800 /// and operation results which are not directly vectorized (i.e., their
801 /// scalar version still exists after vectorization), like uniforms.
802 ///
803 /// Example:
804 /// * 'replaced': block argument or operation outside of the vectorized loop.
805 /// * 'replacement': %0 = vector.broadcast %1 : f32 to vector<128xf32>
806 void VectorizationState::registerValueVectorReplacement(
807  Value replaced, Operation *replacement) {
808  assert(replacement->getNumResults() == 1 &&
809  "Expected single-result replacement");
810  if (Operation *defOp = replaced.getDefiningOp())
811  registerOpVectorReplacement(defOp, replacement);
812  else
813  registerValueVectorReplacementImpl(replaced, replacement->getResult(0));
814 }
815 
816 /// Registers the vector replacement of a block argument (e.g., iter_args).
817 ///
818 /// Example:
819 /// * 'replaced': 'iter_arg' block argument.
820 /// * 'replacement': vectorized 'iter_arg' block argument.
821 void VectorizationState::registerBlockArgVectorReplacement(
822  BlockArgument replaced, BlockArgument replacement) {
823  registerValueVectorReplacementImpl(replaced, replacement);
824 }
825 
826 void VectorizationState::registerValueVectorReplacementImpl(Value replaced,
827  Value replacement) {
828  assert(!valueVectorReplacement.contains(replaced) &&
829  "Vector replacement already registered");
830  assert(replacement.getType().isa<VectorType>() &&
831  "Expected vector type in vector replacement");
832  valueVectorReplacement.map(replaced, replacement);
833 }
834 
835 /// Registers the scalar replacement of a scalar value. 'replacement' must be
836 /// scalar. Both values must be block arguments. Operation results should be
837 /// replaced using the 'registerOp*' utilities.
838 ///
839 /// This utility is used to register the replacement of block arguments
840 /// that are within the loop to be vectorized and will continue being scalar
841 /// within the vector loop.
842 ///
843 /// Example:
844 /// * 'replaced': induction variable of a loop to be vectorized.
845 /// * 'replacement': new induction variable in the new vector loop.
846 void VectorizationState::registerValueScalarReplacement(
847  BlockArgument replaced, BlockArgument replacement) {
848  registerValueScalarReplacementImpl(replaced, replacement);
849 }
850 
851 /// Registers the scalar replacement of a scalar result returned from a
852 /// reduction loop. 'replacement' must be scalar.
853 ///
854 /// This utility is used to register the replacement for scalar results of
855 /// vectorized reduction loops with iter_args.
856 ///
857 /// Example:
858 /// * 'replaced': %0 = affine.for %i = 0 to 512 iter_args(%x = ...) -> (f32)
859 /// * 'replacement': %1 = vector.reduction "add" %0 : vector<4xf32> into f32
860 void VectorizationState::registerLoopResultScalarReplacement(
861  Value replaced, Value replacement) {
862  assert(isa<AffineForOp>(replaced.getDefiningOp()));
863  assert(loopResultScalarReplacement.count(replaced) == 0 &&
864  "already registered");
865  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ will replace a result of the loop "
866  "with scalar: "
867  << replacement);
868  loopResultScalarReplacement[replaced] = replacement;
869 }
870 
871 void VectorizationState::registerValueScalarReplacementImpl(Value replaced,
872  Value replacement) {
873  assert(!valueScalarReplacement.contains(replaced) &&
874  "Scalar value replacement already registered");
875  assert(!replacement.getType().isa<VectorType>() &&
876  "Expected scalar type in scalar replacement");
877  valueScalarReplacement.map(replaced, replacement);
878 }
879 
880 /// Returns in 'replacedVals' the scalar replacement for values in 'inputVals'.
881 void VectorizationState::getScalarValueReplacementsFor(
882  ValueRange inputVals, SmallVectorImpl<Value> &replacedVals) {
883  for (Value inputVal : inputVals)
884  replacedVals.push_back(valueScalarReplacement.lookupOrDefault(inputVal));
885 }
886 
887 /// Erases a loop nest, including all its nested operations.
888 static void eraseLoopNest(AffineForOp forOp) {
889  LLVM_DEBUG(dbgs() << "[early-vect]+++++ erasing:\n" << forOp << "\n");
890  forOp.erase();
891 }
892 
893 /// Erases the scalar loop nest after its successful vectorization.
894 void VectorizationState::finishVectorizationPattern(AffineForOp rootLoop) {
895  LLVM_DEBUG(dbgs() << "\n[early-vect] Finalizing vectorization\n");
896  eraseLoopNest(rootLoop);
897 }
898 
899 // Apply 'map' with 'mapOperands' returning resulting values in 'results'.
static void computeMemoryOpIndices(Operation *op, AffineMap map,
901  ValueRange mapOperands,
902  VectorizationState &state,
903  SmallVectorImpl<Value> &results) {
904  for (auto resultExpr : map.getResults()) {
905  auto singleResMap =
906  AffineMap::get(map.getNumDims(), map.getNumSymbols(), resultExpr);
907  auto afOp = state.builder.create<AffineApplyOp>(op->getLoc(), singleResMap,
908  mapOperands);
909  results.push_back(afOp);
910  }
911 }
912 
913 /// Returns a FilterFunctionType that can be used in NestedPattern to match a
914 /// loop whose underlying load/store accesses are either invariant or all
915 /// varying along the `fastestVaryingMemRefDimension`.
916 static FilterFunctionType
isVectorizableLoopPtrFactory(const DenseSet<Operation *> &parallelLoops,
918  int fastestVaryingMemRefDimension) {
919  return [&parallelLoops, fastestVaryingMemRefDimension](Operation &forOp) {
920  auto loop = cast<AffineForOp>(forOp);
921  auto parallelIt = parallelLoops.find(loop);
922  if (parallelIt == parallelLoops.end())
923  return false;
924  int memRefDim = -1;
925  auto vectorizableBody =
926  isVectorizableLoopBody(loop, &memRefDim, vectorTransferPattern());
927  if (!vectorizableBody)
928  return false;
929  return memRefDim == -1 || fastestVaryingMemRefDimension == -1 ||
930  memRefDim == fastestVaryingMemRefDimension;
931  };
932 }
933 
934 /// Returns the vector type resulting from applying the provided vectorization
935 /// strategy on the scalar type.
936 static VectorType getVectorType(Type scalarTy,
937  const VectorizationStrategy *strategy) {
938  assert(!scalarTy.isa<VectorType>() && "Expected scalar type");
939  return VectorType::get(strategy->vectorSizes, scalarTy);
940 }
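
// Example (assumed strategy, for illustration): with vectorSizes = {32, 256},
// getVectorType(f32) returns 'vector<32x256xf32>'.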
941 
942 /// Tries to transform a scalar constant into a vector constant. Returns the
943 /// vector constant if the scalar type is a valid vector element type. Returns
944 /// nullptr otherwise.
945 static arith::ConstantOp vectorizeConstant(arith::ConstantOp constOp,
946  VectorizationState &state) {
947  Type scalarTy = constOp.getType();
948  if (!VectorType::isValidElementType(scalarTy))
949  return nullptr;
950 
951  auto vecTy = getVectorType(scalarTy, state.strategy);
952  auto vecAttr = DenseElementsAttr::get(vecTy, constOp.getValue());
953 
954  OpBuilder::InsertionGuard guard(state.builder);
955  Operation *parentOp = state.builder.getInsertionBlock()->getParentOp();
956  // Find the innermost vectorized ancestor loop to insert the vector constant.
957  while (parentOp && !state.vecLoopToVecDim.count(parentOp))
958  parentOp = parentOp->getParentOp();
959  assert(parentOp && state.vecLoopToVecDim.count(parentOp) &&
960  isa<AffineForOp>(parentOp) && "Expected a vectorized for op");
961  auto vecForOp = cast<AffineForOp>(parentOp);
962  state.builder.setInsertionPointToStart(vecForOp.getBody());
963  auto newConstOp =
964  state.builder.create<arith::ConstantOp>(constOp.getLoc(), vecAttr);
965 
966  // Register vector replacement for future uses in the scope.
967  state.registerOpVectorReplacement(constOp, newConstOp);
968  return newConstOp;
969 }
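
// Illustrative rewrite performed by vectorizeConstant (assuming a 1-D strategy
// of size 128; the exact printed form of the constant may differ): a scalar
//   %cst = arith.constant 1.0 : f32
// used inside the vectorized loop nest becomes the splat constant
//   %vcst = arith.constant dense<1.0> : vector<128xf32>
// inserted at the start of the body of the innermost vectorized ancestor loop.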
970 
971 /// Creates a constant vector filled with the neutral elements of the given
972 /// reduction. The scalar type of vector elements will be taken from
973 /// `oldOperand`.
974 static arith::ConstantOp createInitialVector(arith::AtomicRMWKind reductionKind,
975  Value oldOperand,
976  VectorizationState &state) {
977  Type scalarTy = oldOperand.getType();
978  if (!VectorType::isValidElementType(scalarTy))
979  return nullptr;
980 
981  Attribute valueAttr = getIdentityValueAttr(
982  reductionKind, scalarTy, state.builder, oldOperand.getLoc());
983  auto vecTy = getVectorType(scalarTy, state.strategy);
984  auto vecAttr = DenseElementsAttr::get(vecTy, valueAttr);
985  auto newConstOp =
986  state.builder.create<arith::ConstantOp>(oldOperand.getLoc(), vecAttr);
987 
988  return newConstOp;
989 }
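
// Example (illustrative, assuming a 128-wide 1-D strategy): for an 'addf'
// reduction the identity value is 0.0, so the initial vector is
//   %init = arith.constant dense<0.000000e+00> : vector<128xf32>
// which seeds the vectorized iter_args accumulator.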
990 
991 /// Creates a mask used to filter out garbage elements in the last iteration
992 /// of unaligned loops. If a mask is not required then `nullptr` is returned.
993 /// The mask will be a vector of booleans representing meaningful vector
994 /// elements in the current iteration. It is filled with ones for each iteration
995 /// except for the last one, where it has the form `11...100...0` with the
996 /// number of ones equal to the number of meaningful elements (i.e. the number
997 /// of iterations that would be left in the original loop).
998 static Value createMask(AffineForOp vecForOp, VectorizationState &state) {
999  assert(state.strategy->vectorSizes.size() == 1 &&
1000  "Creating a mask for non-1-D vectors is not supported.");
1001  assert(vecForOp.getStep() == state.strategy->vectorSizes[0] &&
1002  "Creating a mask for loops with non-unit original step size is not "
1003  "supported.");
1004 
1005  // Check if we have already created the mask.
1006  if (Value mask = state.vecLoopToMask.lookup(vecForOp))
1007  return mask;
1008 
1009  // If the loop has constant bounds and the original number of iterations is
1010  // divisible by the vector size then we don't need a mask.
1011  if (vecForOp.hasConstantBounds()) {
1012  int64_t originalTripCount =
1013  vecForOp.getConstantUpperBound() - vecForOp.getConstantLowerBound();
1014  if (originalTripCount % vecForOp.getStep() == 0)
1015  return nullptr;
1016  }
1017 
1018  OpBuilder::InsertionGuard guard(state.builder);
1019  state.builder.setInsertionPointToStart(vecForOp.getBody());
1020 
1021  // We generate the mask using the `vector.create_mask` operation which accepts
1022  // the number of meaningful elements (i.e. the length of the prefix of 1s).
1023  // To compute the number of meaningful elements we subtract the current value
1024  // of the iteration variable from the upper bound of the loop. Example:
1025  //
1026  // // 500 is the upper bound of the loop
1027  // #map = affine_map<(d0) -> (500 - d0)>
1028  // %elems_left = affine.apply #map(%iv)
1029  // %mask = vector.create_mask %elems_left : vector<128xi1>
1030 
1031  Location loc = vecForOp.getLoc();
1032 
1033  // First we get the upper bound of the loop using `affine.apply` or
1034  // `affine.min`.
1035  AffineMap ubMap = vecForOp.getUpperBoundMap();
1036  Value ub;
1037  if (ubMap.getNumResults() == 1)
1038  ub = state.builder.create<AffineApplyOp>(loc, vecForOp.getUpperBoundMap(),
1039  vecForOp.getUpperBoundOperands());
1040  else
1041  ub = state.builder.create<AffineMinOp>(loc, vecForOp.getUpperBoundMap(),
1042  vecForOp.getUpperBoundOperands());
1043  // Then we compute the number of (original) iterations left in the loop.
1044  AffineExpr subExpr =
1045  state.builder.getAffineDimExpr(0) - state.builder.getAffineDimExpr(1);
1046  Value itersLeft =
1047  makeComposedAffineApply(state.builder, loc, AffineMap::get(2, 0, subExpr),
1048  {ub, vecForOp.getInductionVar()});
1049  // If the affine maps were successfully composed then `ub` is unneeded.
1050  if (ub.use_empty())
1051  ub.getDefiningOp()->erase();
1052  // Finally we create the mask.
1053  Type maskTy = VectorType::get(state.strategy->vectorSizes,
1054  state.builder.getIntegerType(1));
1055  Value mask =
1056  state.builder.create<vector::CreateMaskOp>(loc, maskTy, itersLeft);
1057 
1058  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ creating a mask:\n"
1059  << itersLeft << "\n"
1060  << mask << "\n");
1061 
1062  state.vecLoopToMask[vecForOp] = mask;
1063  return mask;
1064 }
1065 
1066 /// Returns true if the provided value is vector uniform given the vectorization
1067 /// strategy.
1068 // TODO: For now, only values that are induction variables of loops not in
1069 // `loopToVectorDim` or invariants to all the loops in the vectorization
1070 // strategy are considered vector uniforms.
static bool isUniformDefinition(Value value,
1072  const VectorizationStrategy *strategy) {
1073  AffineForOp forOp = getForInductionVarOwner(value);
1074  if (forOp && strategy->loopToVectorDim.count(forOp) == 0)
1075  return true;
1076 
1077  for (auto loopToDim : strategy->loopToVectorDim) {
1078  auto loop = cast<AffineForOp>(loopToDim.first);
1079  if (!loop.isDefinedOutsideOfLoop(value))
1080  return false;
1081  }
1082  return true;
1083 }
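
// Example (assumed strategy, for illustration): in the nest below, only %j is
// mapped to a vector dimension. Then %i (induction variable of a loop not in
// loopToVectorDim) and %x (defined above the nest) are uniform definitions and
// will be broadcast, while %j is not.
//
//   %x = ... : f32
//   affine.for %i = 0 to %M {
//     affine.for %j = 0 to %N {   // vectorized dimension
//       ... uses of %i, %j, %x ...
//     }
//   }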
1084 
1085 /// Generates a broadcast op for the provided uniform value using the
1086 /// vectorization strategy in 'state'.
1087 static Operation *vectorizeUniform(Value uniformVal,
1088  VectorizationState &state) {
1089  OpBuilder::InsertionGuard guard(state.builder);
1090  Value uniformScalarRepl =
1091  state.valueScalarReplacement.lookupOrDefault(uniformVal);
1092  state.builder.setInsertionPointAfterValue(uniformScalarRepl);
1093 
1094  auto vectorTy = getVectorType(uniformVal.getType(), state.strategy);
1095  auto bcastOp = state.builder.create<BroadcastOp>(uniformVal.getLoc(),
1096  vectorTy, uniformScalarRepl);
1097  state.registerValueVectorReplacement(uniformVal, bcastOp);
1098  return bcastOp;
1099 }
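
// Illustrative output (assuming a 1-D strategy of size 128): a uniform f32
// value %x is replaced in vectorized uses by
//   %vx = vector.broadcast %x : f32 to vector<128xf32>
// inserted right after the scalar replacement of %x so that it dominates all
// its vectorized uses.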
1100 
1101 /// Tries to vectorize a given `operand` by applying the following logic:
1102 /// 1. if the defining operation has been already vectorized, `operand` is
1103 /// already in the proper vector form;
1104 /// 2. if the `operand` is a constant, returns the vectorized form of the
1105 /// constant;
1106 /// 3. if the `operand` is uniform, returns a vector broadcast of the `op`;
1107 /// 4. otherwise, the vectorization of `operand` is not supported.
1108 /// Newly created vector operations are registered in `state` as replacement
1109 /// for their scalar counterparts.
1110 /// In particular this logic captures some of the use cases where definitions
1111 /// that are not scoped under the current pattern are needed to vectorize.
1112 /// One such example is top level function constants that need to be splatted.
1113 ///
1114 /// Returns an operand that has been vectorized to match `state`'s strategy if
1115 /// vectorization is possible with the above logic. Returns nullptr otherwise.
1116 ///
1117 /// TODO: handle more complex cases.
1118 static Value vectorizeOperand(Value operand, VectorizationState &state) {
1119  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ vectorize operand: " << operand);
1120  // If this value is already vectorized, we are done.
1121  if (Value vecRepl = state.valueVectorReplacement.lookupOrNull(operand)) {
1122  LLVM_DEBUG(dbgs() << " -> already vectorized: " << vecRepl);
1123  return vecRepl;
1124  }
1125 
1126  // A vector operand that is not in the replacement map should never reach
1127  // this point. Reaching this point could mean that the code was already
1128  // vectorized and we shouldn't try to vectorize already vectorized code.
1129  assert(!operand.getType().isa<VectorType>() &&
1130  "Vector op not found in replacement map");
1131 
1132  // Vectorize constant.
1133  if (auto constOp = operand.getDefiningOp<arith::ConstantOp>()) {
1134  auto vecConstant = vectorizeConstant(constOp, state);
1135  LLVM_DEBUG(dbgs() << "-> constant: " << vecConstant);
1136  return vecConstant.getResult();
1137  }
1138 
1139  // Vectorize uniform values.
1140  if (isUniformDefinition(operand, state.strategy)) {
1141  Operation *vecUniform = vectorizeUniform(operand, state);
1142  LLVM_DEBUG(dbgs() << "-> uniform: " << *vecUniform);
1143  return vecUniform->getResult(0);
1144  }
1145 
1146  // Check for unsupported block argument scenarios. A supported block argument
1147  // should have been vectorized already.
1148  if (!operand.getDefiningOp())
1149  LLVM_DEBUG(dbgs() << "-> unsupported block argument\n");
1150  else
1151  // Generic unsupported case.
1152  LLVM_DEBUG(dbgs() << "-> non-vectorizable\n");
1153 
1154  return nullptr;
1155 }
1156 
1157 /// Vectorizes an affine load with the vectorization strategy in 'state' by
1158 /// generating a 'vector.transfer_read' op with the proper permutation map
1159 /// inferred from the indices of the load. The new 'vector.transfer_read' is
1160 /// registered as replacement of the scalar load. Returns the newly created
1161 /// 'vector.transfer_read' if vectorization was successful. Returns nullptr,
1162 /// otherwise.
1163 static Operation *vectorizeAffineLoad(AffineLoadOp loadOp,
1164  VectorizationState &state) {
1165  MemRefType memRefType = loadOp.getMemRefType();
1166  Type elementType = memRefType.getElementType();
1167  auto vectorType = VectorType::get(state.strategy->vectorSizes, elementType);
1168 
1169  // Replace map operands with operands from the vector loop nest.
1170  SmallVector<Value, 8> mapOperands;
1171  state.getScalarValueReplacementsFor(loadOp.getMapOperands(), mapOperands);
1172 
1173  // Compute indices for the transfer op. AffineApplyOp's may be generated.
1174  SmallVector<Value, 8> indices;
1175  indices.reserve(memRefType.getRank());
1176  if (loadOp.getAffineMap() !=
1177  state.builder.getMultiDimIdentityMap(memRefType.getRank()))
1178  computeMemoryOpIndices(loadOp, loadOp.getAffineMap(), mapOperands, state,
1179  indices);
1180  else
1181  indices.append(mapOperands.begin(), mapOperands.end());
1182 
1183  // Compute permutation map using the information of new vector loops.
1184  auto permutationMap = makePermutationMap(state.builder.getInsertionBlock(),
1185  indices, state.vecLoopToVecDim);
1186  if (!permutationMap) {
1187  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ can't compute permutationMap\n");
1188  return nullptr;
1189  }
1190  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ permutationMap: ");
1191  LLVM_DEBUG(permutationMap.print(dbgs()));
1192 
1193  auto transfer = state.builder.create<vector::TransferReadOp>(
1194  loadOp.getLoc(), vectorType, loadOp.getMemRef(), indices, permutationMap);
1195 
1196  // Register replacement for future uses in the scope.
1197  state.registerOpVectorReplacement(loadOp, transfer);
1198  return transfer;
1199 }
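
// Illustrative rewrite (assuming the innermost loop %j is vectorized by 128;
// the exact printed form of the transfer op may differ):
//   %a = affine.load %A[%i, %j] : memref<?x?xf32>
// becomes
//   %va = vector.transfer_read %A[%i, %j]
//           {permutation_map = affine_map<(d0, d1) -> (d1)>}
//           : memref<?x?xf32>, vector<128xf32>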
1200 
1201 /// Vectorizes an affine store with the vectorization strategy in 'state' by
1202 /// generating a 'vector.transfer_write' op with the proper permutation map
1203 /// inferred from the indices of the store. The new 'vector.transfer_write' is
1204 /// registered as replacement of the scalar store. Returns the newly created
1205 /// 'vector.transfer_write' if vectorization was successful. Returns nullptr,
1206 /// otherwise.
1207 static Operation *vectorizeAffineStore(AffineStoreOp storeOp,
1208  VectorizationState &state) {
1209  MemRefType memRefType = storeOp.getMemRefType();
1210  Value vectorValue = vectorizeOperand(storeOp.getValueToStore(), state);
1211  if (!vectorValue)
1212  return nullptr;
1213 
1214  // Replace map operands with operands from the vector loop nest.
1215  SmallVector<Value, 8> mapOperands;
1216  state.getScalarValueReplacementsFor(storeOp.getMapOperands(), mapOperands);
1217 
1218  // Compute indices for the transfer op. AffineApplyOp's may be generated.
1219  SmallVector<Value, 8> indices;
1220  indices.reserve(memRefType.getRank());
1221  if (storeOp.getAffineMap() !=
1222  state.builder.getMultiDimIdentityMap(memRefType.getRank()))
1223  computeMemoryOpIndices(storeOp, storeOp.getAffineMap(), mapOperands, state,
1224  indices);
1225  else
1226  indices.append(mapOperands.begin(), mapOperands.end());
1227 
1228  // Compute permutation map using the information of new vector loops.
1229  auto permutationMap = makePermutationMap(state.builder.getInsertionBlock(),
1230  indices, state.vecLoopToVecDim);
1231  if (!permutationMap)
1232  return nullptr;
1233  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ permutationMap: ");
1234  LLVM_DEBUG(permutationMap.print(dbgs()));
1235 
1236  auto transfer = state.builder.create<vector::TransferWriteOp>(
1237  storeOp.getLoc(), vectorValue, storeOp.getMemRef(), indices,
1238  permutationMap);
1239  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ vectorized store: " << transfer);
1240 
1241  // Register replacement for future uses in the scope.
1242  state.registerOpVectorReplacement(storeOp, transfer);
1243  return transfer;
1244 }
1245 
1246 /// Returns true if `value` is a constant equal to the neutral element of the
1247 /// given vectorizable reduction.
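/// For example, the neutral element is 0.0 for an 'addf' reduction and 1.0 for
/// a 'mulf' reduction, so a hypothetical initial value defined by
/// 'arith.constant 0.000000e+00 : f32' would be considered neutral for 'addf'.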
1248 static bool isNeutralElementConst(arith::AtomicRMWKind reductionKind,
1249  Value value, VectorizationState &state) {
1250  Type scalarTy = value.getType();
1251  if (!VectorType::isValidElementType(scalarTy))
1252  return false;
1253  Attribute valueAttr = getIdentityValueAttr(reductionKind, scalarTy,
1254  state.builder, value.getLoc());
1255  if (auto constOp = dyn_cast_or_null<arith::ConstantOp>(value.getDefiningOp()))
1256  return constOp.getValue() == valueAttr;
1257  return false;
1258 }
1259 
1260 /// Vectorizes a loop with the vectorization strategy in 'state'. A new loop is
1261 /// created and registered as replacement for the scalar loop. The builder's
1262 /// insertion point is set to the new loop's body so that subsequent vectorized
1263 /// operations are inserted into the new loop. If the loop is mapped to a vector
1264 /// dimension, the step of the newly created loop will reflect the vectorization
1265 /// factor used to vectorize that dimension.
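/// For illustration only (hypothetical bounds and factor), vectorizing the
/// dimension of a scalar loop such as:
///
///   affine.for %i = 0 to 1024 { ... }
///
/// with a vectorization factor of 128 would roughly produce:
///
///   affine.for %i = 0 to 1024 step 128 { ... }
///
/// whose body operates on 128-wide vectors (e.g. vector<128xf32>) instead of
/// scalars.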
1266 static Operation *vectorizeAffineForOp(AffineForOp forOp,
1267  VectorizationState &state) {
1268  const VectorizationStrategy &strategy = *state.strategy;
1269  auto loopToVecDimIt = strategy.loopToVectorDim.find(forOp);
1270  bool isLoopVecDim = loopToVecDimIt != strategy.loopToVectorDim.end();
1271 
1272  // TODO: Vectorization of reduction loops is not supported for non-unit steps.
1273  if (isLoopVecDim && forOp.getNumIterOperands() > 0 && forOp.getStep() != 1) {
1274  LLVM_DEBUG(
1275  dbgs()
1276  << "\n[early-vect]+++++ unsupported step size for reduction loop: "
1277  << forOp.getStep() << "\n");
1278  return nullptr;
1279  }
1280 
1281  // If we are vectorizing a vector dimension, compute a new step for the new
1282  // vectorized loop using the vectorization factor for the vector dimension.
1283  // Otherwise, propagate the step of the scalar loop.
1284  unsigned newStep;
1285  if (isLoopVecDim) {
1286  unsigned vectorDim = loopToVecDimIt->second;
1287  assert(vectorDim < strategy.vectorSizes.size() && "vector dim overflow");
1288  int64_t forOpVecFactor = strategy.vectorSizes[vectorDim];
1289  newStep = forOp.getStep() * forOpVecFactor;
1290  } else {
1291  newStep = forOp.getStep();
1292  }
1293 
1294  // Get information about reduction kinds.
1295  ArrayRef<LoopReduction> reductions;
1296  if (isLoopVecDim && forOp.getNumIterOperands() > 0) {
1297  auto it = strategy.reductionLoops.find(forOp);
1298  assert(it != strategy.reductionLoops.end() &&
1299  "Reduction descriptors not found when vectorizing a reduction loop");
1300  reductions = it->second;
1301  assert(reductions.size() == forOp.getNumIterOperands() &&
1302  "The size of reductions array must match the number of iter_args");
1303  }
1304 
1305  // Vectorize 'iter_args'.
1306  SmallVector<Value, 8> vecIterOperands;
1307  if (!isLoopVecDim) {
1308  for (auto operand : forOp.getIterOperands())
1309  vecIterOperands.push_back(vectorizeOperand(operand, state));
1310  } else {
1311  // For reduction loops we need to pass a vector of neutral elements as an
1312  // initial value of the accumulator. We will add the original initial value
1313  // later.
1314  for (auto redAndOperand : llvm::zip(reductions, forOp.getIterOperands())) {
1315  vecIterOperands.push_back(createInitialVector(
1316  std::get<0>(redAndOperand).kind, std::get<1>(redAndOperand), state));
1317  }
1318  }
1319 
1320  auto vecForOp = state.builder.create<AffineForOp>(
1321  forOp.getLoc(), forOp.getLowerBoundOperands(), forOp.getLowerBoundMap(),
1322  forOp.getUpperBoundOperands(), forOp.getUpperBoundMap(), newStep,
1323  vecIterOperands,
1324  /*bodyBuilder=*/[](OpBuilder &, Location, Value, ValueRange) {
1325  // Make sure we don't create a default terminator in the loop body as
1326  // the proper terminator will be added during vectorization.
1327  });
1328 
1329  // Register loop-related replacements:
1330  // 1) The new vectorized loop is registered as vector replacement of the
1331  // scalar loop.
1332  // 2) The new iv of the vectorized loop is registered as scalar replacement
1333  // since a scalar copy of the iv will prevail in the vectorized loop.
1334  // TODO: A vector replacement will also be added in the future when
1335  // vectorization of linear ops is supported.
1336  // 3) The new 'iter_args' region arguments are registered as vector
1337  // replacements since they have been vectorized.
1338  // 4) If the loop performs a reduction along the vector dimension, a
1339  // `vector.reduction` or similar op is inserted for each resulting value
1340  // of the loop and its scalar value replaces the corresponding scalar
1341  // result of the loop.
1342  state.registerOpVectorReplacement(forOp, vecForOp);
1343  state.registerValueScalarReplacement(forOp.getInductionVar(),
1344  vecForOp.getInductionVar());
1345  for (auto iterTuple :
1346  llvm::zip(forOp.getRegionIterArgs(), vecForOp.getRegionIterArgs()))
1347  state.registerBlockArgVectorReplacement(std::get<0>(iterTuple),
1348  std::get<1>(iterTuple));
1349 
1350  if (isLoopVecDim) {
1351  for (unsigned i = 0; i < vecForOp.getNumIterOperands(); ++i) {
1352  // First, we reduce the vector returned from the loop into a scalar.
1353  Value reducedRes =
1354  getVectorReductionOp(reductions[i].kind, state.builder,
1355  vecForOp.getLoc(), vecForOp.getResult(i));
1356  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ creating a vector reduction: "
1357  << reducedRes);
1358  // Then we combine it with the original (scalar) initial value unless it
1359  // is equal to the neutral element of the reduction.
1360  Value origInit = forOp.getOperand(forOp.getNumControlOperands() + i);
1361  Value finalRes = reducedRes;
1362  if (!isNeutralElementConst(reductions[i].kind, origInit, state))
1363  finalRes =
1364  arith::getReductionOp(reductions[i].kind, state.builder,
1365  reducedRes.getLoc(), reducedRes, origInit);
1366  state.registerLoopResultScalarReplacement(forOp.getResult(i), finalRes);
1367  }
1368  }
1369 
1370  if (isLoopVecDim)
1371  state.vecLoopToVecDim[vecForOp] = loopToVecDimIt->second;
1372 
1373  // Change insertion point so that upcoming vectorized instructions are
1374  // inserted into the vectorized loop's body.
1375  state.builder.setInsertionPointToStart(vecForOp.getBody());
1376 
1377  // If this is a reduction loop then we may need to create a mask to filter out
1378  // garbage in the last iteration.
1379  if (isLoopVecDim && forOp.getNumIterOperands() > 0)
1380  createMask(vecForOp, state);
1381 
1382  return vecForOp;
1383 }
1384 
1385 /// Vectorizes an arbitrary operation by plain widening. We apply generic type
1386 /// widening of all its results and retrieve the vector counterparts for all its
1387 /// operands.
1388 static Operation *widenOp(Operation *op, VectorizationState &state) {
1389  SmallVector<Type, 8> vectorTypes;
1390  for (Value result : op->getResults())
1391  vectorTypes.push_back(
1392  VectorType::get(state.strategy->vectorSizes, result.getType()));
1393 
1394  SmallVector<Value, 8> vectorOperands;
1395  for (Value operand : op->getOperands()) {
1396  Value vecOperand = vectorizeOperand(operand, state);
1397  if (!vecOperand) {
1398  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ an operand failed vectorize\n");
1399  return nullptr;
1400  }
1401  vectorOperands.push_back(vecOperand);
1402  }
1403 
1404  // Create a clone of the op with the proper operands and return types.
1405  // TODO: The following assumes there is always an op with a fixed
1406  // name that works both in scalar mode and vector mode.
1407  // TODO: Is it worth considering an Operation.clone operation which
1408  // changes the type so we can promote an Operation with less boilerplate?
1409  OperationState vecOpState(op->getLoc(), op->getName(), vectorOperands,
1410  vectorTypes, op->getAttrs(), /*successors=*/{},
1411  /*regions=*/{});
1412  Operation *vecOp = state.builder.createOperation(vecOpState);
1413  state.registerOpVectorReplacement(op, vecOp);
1414  return vecOp;
1415 }
1416 
1417 /// Vectorizes a yield operation by widening its types. The builder's insertion
1418 /// point is set after the vectorized parent op to continue vectorizing the
1419 /// operations after the parent op. When vectorizing a reduction loop, a mask may
1420 /// be used to prevent adding garbage values to the accumulator.
1421 static Operation *vectorizeAffineYieldOp(AffineYieldOp yieldOp,
1422  VectorizationState &state) {
1423  Operation *newYieldOp = widenOp(yieldOp, state);
1424  Operation *newParentOp = state.builder.getInsertionBlock()->getParentOp();
1425 
1426  // If there is a mask for this loop then we must prevent garbage values from
1427  // being added to the accumulator by inserting `select` operations, for
1428  // example:
1429  //
1430  // %res = arith.addf %acc, %val : vector<128xf32>
1431  // %res_masked = select %mask, %res, %acc : vector<128xi1>, vector<128xf32>
1432  // affine.yield %res_masked : vector<128xf32>
1433  //
1434  if (Value mask = state.vecLoopToMask.lookup(newParentOp)) {
1435  state.builder.setInsertionPoint(newYieldOp);
1436  for (unsigned i = 0; i < newYieldOp->getNumOperands(); ++i) {
1437  Value result = newYieldOp->getOperand(i);
1438  Value iterArg = cast<AffineForOp>(newParentOp).getRegionIterArgs()[i];
1439  Value maskedResult = state.builder.create<SelectOp>(result.getLoc(), mask,
1440  result, iterArg);
1441  LLVM_DEBUG(
1442  dbgs() << "\n[early-vect]+++++ masking a yielded vector value: "
1443  << maskedResult);
1444  newYieldOp->setOperand(i, maskedResult);
1445  }
1446  }
1447 
1448  state.builder.setInsertionPointAfter(newParentOp);
1449  return newYieldOp;
1450 }
1451 
1452 /// Encodes Operation-specific behavior for vectorization. In general we
1453 /// assume that all operands of an op must be vectorized but this is not
1454 /// always true. In the future, it would be nice to have a trait that
1455 /// describes how a particular operation vectorizes. For now we implement the
1456 /// case distinction here. Returns a vectorized form of an operation or
1457 /// nullptr if vectorization fails.
1458 // TODO: consider adding a trait to Op to describe how it gets vectorized.
1459 // Maybe some Ops are not vectorizable or require some tricky logic, we cannot
1460 // do one-off logic here; ideally it would be TableGen'd.
1461 static Operation *vectorizeOneOperation(Operation *op,
1462  VectorizationState &state) {
1463  // Sanity checks.
1464  assert(!isa<vector::TransferReadOp>(op) &&
1465  "vector.transfer_read cannot be further vectorized");
1466  assert(!isa<vector::TransferWriteOp>(op) &&
1467  "vector.transfer_write cannot be further vectorized");
1468 
1469  if (auto loadOp = dyn_cast<AffineLoadOp>(op))
1470  return vectorizeAffineLoad(loadOp, state);
1471  if (auto storeOp = dyn_cast<AffineStoreOp>(op))
1472  return vectorizeAffineStore(storeOp, state);
1473  if (auto forOp = dyn_cast<AffineForOp>(op))
1474  return vectorizeAffineForOp(forOp, state);
1475  if (auto yieldOp = dyn_cast<AffineYieldOp>(op))
1476  return vectorizeAffineYieldOp(yieldOp, state);
1477  if (auto constant = dyn_cast<arith::ConstantOp>(op))
1478  return vectorizeConstant(constant, state);
1479 
1480  // Other ops with regions are not supported.
1481  if (op->getNumRegions() != 0)
1482  return nullptr;
1483 
1484  return widenOp(op, state);
1485 }
1486 
1487 /// Recursive implementation to convert all the nested loops in 'match' to a 2D
1488 /// vector container that preserves the relative nesting level of each loop with
1489 /// respect to the others in 'match'. 'currentLevel' is the nesting level that
1490 /// will be assigned to the loop in the current 'match'.
1491 static void
1492 getMatchedAffineLoopsRec(NestedMatch match, unsigned currentLevel,
1493  std::vector<SmallVector<AffineForOp, 2>> &loops) {
1494  // Add a new empty level to the output if it doesn't exist already.
1495  assert(currentLevel <= loops.size() && "Unexpected currentLevel");
1496  if (currentLevel == loops.size())
1497  loops.emplace_back();
1498 
1499  // Add current match and recursively visit its children.
1500  loops[currentLevel].push_back(cast<AffineForOp>(match.getMatchedOperation()));
1501  for (auto childMatch : match.getMatchedChildren()) {
1502  getMatchedAffineLoopsRec(childMatch, currentLevel + 1, loops);
1503  }
1504 }
1505 
1506 /// Converts all the nested loops in 'match' to a 2D vector container that
1507 /// preserves the relative nesting level of each loop with respect to the others
1508 /// in 'match'. This means that every loop in 'loops[i]' will have a parent loop
1509 /// in 'loops[i-1]'. A loop in 'loops[i]' may or may not have a child loop in
1510 /// 'loops[i+1]'.
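/// For illustration only (hypothetical loops), a match rooted at %i0 that
/// contains two sibling loops %i1 and %i2 at the next nesting level would be
/// converted to: loops[0] = {%i0}, loops[1] = {%i1, %i2}.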
1511 static void
1512 getMatchedAffineLoops(NestedMatch match,
1513  std::vector<SmallVector<AffineForOp, 2>> &loops) {
1514  getMatchedAffineLoopsRec(match, /*currentLevel=*/0, loops);
1515 }
1516 
1517 /// Internal implementation to vectorize affine loops from a single loop nest
1518 /// using an n-D vectorization strategy.
1519 static LogicalResult
1520 vectorizeLoopNest(std::vector<SmallVector<AffineForOp, 2>> &loops,
1521  const VectorizationStrategy &strategy) {
1522  assert(loops[0].size() == 1 && "Expected single root loop");
1523  AffineForOp rootLoop = loops[0][0];
1524  VectorizationState state(rootLoop.getContext());
1525  state.builder.setInsertionPointAfter(rootLoop);
1526  state.strategy = &strategy;
1527 
1528  // Since patterns are recursive, they can very well intersect.
1529  // Since we do not want a fully greedy strategy in general, we decouple
1530  // pattern matching from profitability analysis and from application.
1531  // As a consequence we must check that each root pattern is still
1532  // vectorizable. If a pattern is not vectorizable anymore, we just skip it.
1533  // TODO: implement a non-greedy profitability analysis that keeps only
1534  // non-intersecting patterns.
1535  if (!isVectorizableLoopBody(rootLoop, vectorTransferPattern())) {
1536  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ loop is not vectorizable");
1537  return failure();
1538  }
1539 
1540  //////////////////////////////////////////////////////////////////////////////
1541  // Vectorize the scalar loop nest following a topological order. A new vector
1542  // loop nest with the vectorized operations is created along the process. If
1543  // vectorization succeeds, the scalar loop nest is erased. If vectorization
1544  // fails, the vector loop nest is erased and the scalar loop nest is not
1545  // modified.
1546  //////////////////////////////////////////////////////////////////////////////
1547 
1548  auto opVecResult = rootLoop.walk<WalkOrder::PreOrder>([&](Operation *op) {
1549  LLVM_DEBUG(dbgs() << "[early-vect]+++++ Vectorizing: " << *op);
1550  Operation *vectorOp = vectorizeOneOperation(op, state);
1551  if (!vectorOp) {
1552  LLVM_DEBUG(
1553  dbgs() << "[early-vect]+++++ failed vectorizing the operation: "
1554  << *op << "\n");
1555  return WalkResult::interrupt();
1556  }
1557 
1558  return WalkResult::advance();
1559  });
1560 
1561  if (opVecResult.wasInterrupted()) {
1562  LLVM_DEBUG(dbgs() << "[early-vect]+++++ failed vectorization for: "
1563  << rootLoop << "\n");
1564  // Erase vector loop nest if it was created.
1565  auto vecRootLoopIt = state.opVectorReplacement.find(rootLoop);
1566  if (vecRootLoopIt != state.opVectorReplacement.end())
1567  eraseLoopNest(cast<AffineForOp>(vecRootLoopIt->second));
1568 
1569  return failure();
1570  }
1571 
1572  // Replace results of reduction loops with the scalar values computed using
1573  // `vector.reduction` or similar ops.
1574  for (auto resPair : state.loopResultScalarReplacement)
1575  resPair.first.replaceAllUsesWith(resPair.second);
1576 
1577  assert(state.opVectorReplacement.count(rootLoop) == 1 &&
1578  "Expected vector replacement for loop nest");
1579  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ success vectorizing pattern");
1580  LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ vectorization result:\n"
1581  << *state.opVectorReplacement[rootLoop]);
1582 
1583  // Finish this vectorization pattern.
1584  state.finishVectorizationPattern(rootLoop);
1585  return success();
1586 }
1587 
1588 /// Extracts the matched loops and vectorizes them following a topological
1589 /// order. A new vector loop nest will be created if vectorization succeeds. The
1590 /// original loop nest won't be modified in any case.
1591 static LogicalResult vectorizeRootMatch(NestedMatch m,
1592  const VectorizationStrategy &strategy) {
1593  std::vector<SmallVector<AffineForOp, 2>> loopsToVectorize;
1594  getMatchedAffineLoops(m, loopsToVectorize);
1595  return vectorizeLoopNest(loopsToVectorize, strategy);
1596 }
1597 
1598 /// Traverses all the loop matches and classifies them into intersection
1599 /// buckets. Two matches intersect if either of them encloses the other. A
1600 /// match intersects with a bucket if the match intersects with the root
1601 /// (outermost) loop in that bucket.
1602 static void computeIntersectionBuckets(
1603  ArrayRef<NestedMatch> matches,
1604  std::vector<SmallVector<NestedMatch, 8>> &intersectionBuckets) {
1605  assert(intersectionBuckets.empty() && "Expected empty output");
1606  // Keeps track of the root (outermost) loop of each bucket.
1607  SmallVector<AffineForOp, 8> bucketRoots;
1608 
1609  for (const NestedMatch &match : matches) {
1610  AffineForOp matchRoot = cast<AffineForOp>(match.getMatchedOperation());
1611  bool intersects = false;
1612  for (int i = 0, end = intersectionBuckets.size(); i < end; ++i) {
1613  AffineForOp bucketRoot = bucketRoots[i];
1614  // Add match to the bucket if the bucket root encloses the match root.
1615  if (bucketRoot->isAncestor(matchRoot)) {
1616  intersectionBuckets[i].push_back(match);
1617  intersects = true;
1618  break;
1619  }
1620  // Add match to the bucket if the match root encloses the bucket root. The
1621  // match root becomes the new bucket root.
1622  if (matchRoot->isAncestor(bucketRoot)) {
1623  bucketRoots[i] = matchRoot;
1624  intersectionBuckets[i].push_back(match);
1625  intersects = true;
1626  break;
1627  }
1628  }
1629 
1630  // Match doesn't intersect with any existing bucket. Create a new bucket for
1631  // it.
1632  if (!intersects) {
1633  bucketRoots.push_back(matchRoot);
1634  intersectionBuckets.emplace_back();
1635  intersectionBuckets.back().push_back(match);
1636  }
1637  }
1638 }
1639 
1640 /// Internal implementation to vectorize affine loops in 'loops' using the n-D
1641 /// vectorization factors in 'vectorSizes'. By default, each vectorization
1642 /// factor is applied inner-to-outer to the loops of each loop nest.
1643 /// 'fastestVaryingPattern' can be optionally used to provide a different loop
1644 /// vectorization order. `reductionLoops` can be provided to specify loops which
1645 /// can be vectorized along the reduction dimension.
1646 static void vectorizeLoops(Operation *parentOp, DenseSet<Operation *> &loops,
1647  ArrayRef<int64_t> vectorSizes,
1648  ArrayRef<int64_t> fastestVaryingPattern,
1649  const ReductionLoopMap &reductionLoops) {
1650  assert((reductionLoops.empty() || vectorSizes.size() == 1) &&
1651  "Vectorizing reductions is supported only for 1-D vectors");
1652 
1653  // Compute 1-D, 2-D or 3-D loop pattern to be matched on the target loops.
1654  Optional<NestedPattern> pattern =
1655  makePattern(loops, vectorSizes.size(), fastestVaryingPattern);
1656  if (!pattern.hasValue()) {
1657  LLVM_DEBUG(dbgs() << "\n[early-vect] pattern couldn't be computed\n");
1658  return;
1659  }
1660 
1661  LLVM_DEBUG(dbgs() << "\n******************************************");
1662  LLVM_DEBUG(dbgs() << "\n******************************************");
1663  LLVM_DEBUG(dbgs() << "\n[early-vect] new pattern on parent op\n");
1664  LLVM_DEBUG(dbgs() << *parentOp << "\n");
1665 
1666  unsigned patternDepth = pattern->getDepth();
1667 
1668  // Compute all the pattern matches and classify them into buckets of
1669  // intersecting matches.
1670  SmallVector<NestedMatch, 32> allMatches;
1671  pattern->match(parentOp, &allMatches);
1672  std::vector<SmallVector<NestedMatch, 8>> intersectionBuckets;
1673  computeIntersectionBuckets(allMatches, intersectionBuckets);
1674 
1675  // Iterate over all buckets and vectorize the matches eagerly. We can only
1676  // vectorize one match from each bucket since all the matches within a bucket
1677  // intersect.
1678  for (auto &intersectingMatches : intersectionBuckets) {
1679  for (NestedMatch &match : intersectingMatches) {
1680  VectorizationStrategy strategy;
1681  // TODO: depending on profitability, elect to reduce the vector size.
1682  strategy.vectorSizes.assign(vectorSizes.begin(), vectorSizes.end());
1683  strategy.reductionLoops = reductionLoops;
1684  if (failed(analyzeProfitability(match.getMatchedChildren(), 1,
1685  patternDepth, &strategy))) {
1686  continue;
1687  }
1688  vectorizeLoopIfProfitable(match.getMatchedOperation(), 0, patternDepth,
1689  &strategy);
1690  // Vectorize match. Skip the rest of intersecting matches in the bucket if
1691  // vectorization succeeded.
1692  // TODO: if pattern does not apply, report it; alter the cost/benefit.
1693  // TODO: some diagnostics if failure to vectorize occurs.
1694  if (succeeded(vectorizeRootMatch(match, strategy)))
1695  break;
1696  }
1697  }
1698 
1699  LLVM_DEBUG(dbgs() << "\n");
1700 }
1701 
1702 std::unique_ptr<OperationPass<FuncOp>>
1703 createSuperVectorizePass(ArrayRef<int64_t> virtualVectorSize) {
1704  return std::make_unique<Vectorize>(virtualVectorSize);
1705 }
1706 std::unique_ptr<OperationPass<FuncOp>> createSuperVectorizePass() {
1707  return std::make_unique<Vectorize>();
1708 }
1709 
1710 /// Applies vectorization to the current function by searching over a bunch of
1711 /// predetermined patterns.
1712 void Vectorize::runOnOperation() {
1713  FuncOp f = getOperation();
1714  if (!fastestVaryingPattern.empty() &&
1715  fastestVaryingPattern.size() != vectorSizes.size()) {
1716  f.emitRemark("Fastest varying pattern specified with different size than "
1717  "the vector size.");
1718  return signalPassFailure();
1719  }
1720 
1721  if (vectorizeReductions && vectorSizes.size() != 1) {
1722  f.emitError("Vectorizing reductions is supported only for 1-D vectors.");
1723  return signalPassFailure();
1724  }
1725 
1726  DenseSet<Operation *> parallelLoops;
1727  ReductionLoopMap reductionLoops;
1728 
1729  // If 'vectorize-reduction=true' is provided, we also populate the
1730  // `reductionLoops` map.
1731  if (vectorizeReductions) {
1732  f.walk([&parallelLoops, &reductionLoops](AffineForOp loop) {
1733  SmallVector<LoopReduction, 2> reductions;
1734  if (isLoopParallel(loop, &reductions)) {
1735  parallelLoops.insert(loop);
1736  // If it's not a reduction loop, adding it to the map is not necessary.
1737  if (!reductions.empty())
1738  reductionLoops[loop] = reductions;
1739  }
1740  });
1741  } else {
1742  f.walk([&parallelLoops](AffineForOp loop) {
1743  if (isLoopParallel(loop))
1744  parallelLoops.insert(loop);
1745  });
1746  }
1747 
1748  // Thread-safe RAII local context, BumpPtrAllocator freed on exit.
1749  NestedPatternContext mlContext;
1750  vectorizeLoops(f, parallelLoops, vectorSizes, fastestVaryingPattern,
1751  reductionLoops);
1752 }
1753 
1754 /// Verify that affine loops in 'loops' meet the nesting criteria expected by
1755 /// SuperVectorizer:
1756 /// * There must be at least one loop.
1757 /// * There must be a single root loop (nesting level 0).
1758 /// * Each loop at a given nesting level must be nested in a loop from a
1759 /// previous nesting level.
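/// For illustration only (hypothetical loops), loops = {{%i0}, {%i1, %i2}}
/// passes the check when %i1 and %i2 are both nested inside %i0 and neither
/// encloses the other, whereas loops = {{%i0, %i1}} fails because it has two
/// root loops.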
1760 static LogicalResult
1761 verifyLoopNesting(const std::vector<SmallVector<AffineForOp, 2>> &loops) {
1762  // Expected at least one loop.
1763  if (loops.empty())
1764  return failure();
1765 
1766  // Expected only one root loop.
1767  if (loops[0].size() != 1)
1768  return failure();
1769 
1770  // Traverse loops outer-to-inner to check some invariants.
1771  for (int i = 1, end = loops.size(); i < end; ++i) {
1772  for (AffineForOp loop : loops[i]) {
1773  // Check that each loop at this level is nested in one of the loops from
1774  // the previous level.
1775  if (none_of(loops[i - 1], [&](AffineForOp maybeParent) {
1776  return maybeParent->isProperAncestor(loop);
1777  }))
1778  return failure();
1779 
1780  // Check that each loop at this level is not nested in another loop from
1781  // this level.
1782  for (AffineForOp sibling : loops[i]) {
1783  if (sibling->isProperAncestor(loop))
1784  return failure();
1785  }
1786  }
1787  }
1788 
1789  return success();
1790 }
1791 
1792 namespace mlir {
1793 
1794 /// External utility to vectorize affine loops in 'loops' using the n-D
1795 /// vectorization factors in 'vectorSizes'. By default, each vectorization
1796 /// factor is applied inner-to-outer to the loops of each loop nest.
1797 /// 'fastestVaryingPattern' can be optionally used to provide a different loop
1798 /// vectorization order.
1799 /// If `reductionLoops` is not empty, the given reduction loops may be
1800 /// vectorized along the reduction dimension.
1801 /// TODO: Vectorizing reductions is supported only for 1-D vectorization.
1802 void vectorizeAffineLoops(Operation *parentOp, DenseSet<Operation *> &loops,
1803  ArrayRef<int64_t> vectorSizes,
1804  ArrayRef<int64_t> fastestVaryingPattern,
1805  const ReductionLoopMap &reductionLoops) {
1806  // Thread-safe RAII local context, BumpPtrAllocator freed on exit.
1807  NestedPatternContext mlContext;
1808  vectorizeLoops(parentOp, loops, vectorSizes, fastestVaryingPattern,
1809  reductionLoops);
1810 }
1811 
1812 /// External utility to vectorize affine loops from a single loop nest using an
1813 /// n-D vectorization strategy (see doc in VectorizationStrategy definition).
1814 /// Loops are provided in a 2D vector container. The first dimension represents
1815 /// the nesting level relative to the loops to be vectorized. The second
1816 /// dimension contains the loops. This means that:
1817 /// a) every loop in 'loops[i]' must have a parent loop in 'loops[i-1]',
1818 /// b) a loop in 'loops[i]' may or may not have a child loop in 'loops[i+1]'.
1819 ///
1820 /// For example, for the following loop nest:
1821 ///
1822 /// func @vec2d(%in0: memref<64x128x512xf32>, %in1: memref<64x128x128xf32>,
1823 /// %out0: memref<64x128x512xf32>,
1824 /// %out1: memref<64x128x128xf32>) {
1825 /// affine.for %i0 = 0 to 64 {
1826 /// affine.for %i1 = 0 to 128 {
1827 /// affine.for %i2 = 0 to 512 {
1828 /// %ld = affine.load %in0[%i0, %i1, %i2] : memref<64x128x512xf32>
1829 /// affine.store %ld, %out0[%i0, %i1, %i2] : memref<64x128x512xf32>
1830 /// }
1831 /// affine.for %i3 = 0 to 128 {
1832 /// %ld = affine.load %in1[%i0, %i1, %i3] : memref<64x128x128xf32>
1833 /// affine.store %ld, %out1[%i0, %i1, %i3] : memref<64x128x128xf32>
1834 /// }
1835 /// }
1836 /// }
1837 /// return
1838 /// }
1839 ///
1840 /// loops = {{%i0}, {%i2, %i3}}, to vectorize the outermost and the two
1841 /// innermost loops;
1842 /// loops = {{%i1}, {%i2, %i3}}, to vectorize the middle and the two innermost
1843 /// loops;
1844 /// loops = {{%i2}}, to vectorize only the first innermost loop;
1845 /// loops = {{%i3}}, to vectorize only the second innermost loop;
1846 /// loops = {{%i1}}, to vectorize only the middle loop.
1847 LogicalResult vectorizeAffineLoopNest(
1848  std::vector<SmallVector<AffineForOp, 2>> &loops,
1849  const VectorizationStrategy &strategy) {
1850  // Thread-safe RAII local context, BumpPtrAllocator freed on exit.
1851  NestedPatternContext mlContext;
1852  if (failed(verifyLoopNesting(loops)))
1853  return failure();
1854  return vectorizeLoopNest(loops, strategy);
1855 }
1856 
1857 std::unique_ptr<OperationPass<FuncOp>>
1858 createSuperVectorizePass(ArrayRef<int64_t> virtualVectorSize) {
1859  return std::make_unique<Vectorize>(virtualVectorSize);
1860 }
1861 std::unique_ptr<OperationPass<FuncOp>> createSuperVectorizePass() {
1862  return std::make_unique<Vectorize>();
1863 }
1864 
1865 } // namespace mlir