mlir.dialects._scf_ops_gen¶
Attributes¶
- _ods_ir

Classes¶
- _Dialect
- ConditionOp: This operation accepts the continuation (i.e., inverse of exit) condition of the scf.while construct.
- ExecuteRegionOp: The scf.execute_region operation is used to allow multiple blocks within SCF and other operations which can hold only one block.
- ForOp: The scf.for operation represents a loop taking 3 SSA values as operands: the lower bound, upper bound and step.
- ForallOp: scf.forall is a target-independent multi-dimensional parallel region application operation.
- IfOp: The scf.if operation represents an if-then-else construct for conditionally executing two regions of code.
- InParallelOp: The scf.forall.in_parallel op is a designated terminator for the scf.forall operation.
- IndexSwitchOp: The scf.index_switch is a control-flow operation that branches to one of the given regions based on the values of the argument and the cases.
- ParallelOp: The scf.parallel operation represents a loop nest taking 4 groups of SSA values as operands: lower bounds, upper bounds, steps and initial values.
- ReduceOp: The scf.reduce operation is the terminator for scf.parallel operations.
- ReduceReturnOp: The scf.reduce.return operation is a special terminator operation for the block inside scf.reduce regions.
- WhileOp: This operation represents a generic “while”/“do-while” loop that keeps iterating as long as a condition is satisfied.
- YieldOp: The scf.yield operation yields an SSA value from the SCF dialect op region and terminates the region.

Functions¶
- condition
- execute_region
- for_
- forall
- forall_in_parallel
- index_switch
- parallel
- reduce_return
- while_
Module Contents¶
- mlir.dialects._scf_ops_gen._ods_ir¶
- class mlir.dialects._scf_ops_gen._Dialect(descriptor: object)¶
Bases: _ods_ir

- DIALECT_NAMESPACE = 'scf'¶
- class mlir.dialects._scf_ops_gen.ConditionOp(condition, args, *, loc=None, ip=None)¶
Bases: _ods_ir

This operation accepts the continuation (i.e., inverse of exit) condition of the scf.while construct. If its first argument is true, the “after” region of scf.while is executed, with the remaining arguments forwarded to the entry block of the region. Otherwise, the loop terminates.

- OPERATION_NAME = 'scf.condition'¶
- _ODS_REGIONS = (0, True)¶
- condition() _ods_ir¶
- args() _ods_ir¶
- mlir.dialects._scf_ops_gen.condition(condition, args, *, loc=None, ip=None) ConditionOp¶
- class mlir.dialects._scf_ops_gen.ExecuteRegionOp(result, *, no_inline=None, loc=None, ip=None)¶
Bases: _ods_ir

The scf.execute_region operation is used to allow multiple blocks within SCF and other operations which can hold only one block. The scf.execute_region operation executes the region it holds exactly once and cannot have any operands. As such, its region has no arguments. All SSA values that dominate the op can be accessed inside the op. The op’s region can have multiple blocks and the blocks can have multiple distinct terminators. Values returned from this op’s region define the op’s results. The optional ‘no_inline’ flag can be set to request that the ExecuteRegionOp be preserved as much as possible and not be inlined into the parent block until an explicit lowering step.

Example:
scf.for %i = 0 to 128 step %c1 {
  %y = scf.execute_region -> i32 {
    %x = load %A[%i] : memref<128xi32>
    scf.yield %x : i32
  }
}

// the same as above but with no_inline attribute
scf.for %i = 0 to 128 step %c1 {
  %y = scf.execute_region -> i32 no_inline {
    %x = load %A[%i] : memref<128xi32>
    scf.yield %x : i32
  }
}

affine.for %i = 0 to 100 {
  "foo"() : () -> ()
  %v = scf.execute_region -> i64 {
    cf.cond_br %cond, ^bb1, ^bb2

  ^bb1:
    %c1 = arith.constant 1 : i64
    cf.br ^bb3(%c1 : i64)

  ^bb2:
    %c2 = arith.constant 2 : i64
    cf.br ^bb3(%c2 : i64)

  ^bb3(%x : i64):
    scf.yield %x : i64
  }
  "bar"(%v) : (i64) -> ()
}
- OPERATION_NAME = 'scf.execute_region'¶
- _ODS_REGIONS = (1, True)¶
- no_inline() bool¶
- region() _ods_ir¶
- mlir.dialects._scf_ops_gen.execute_region(result, *, no_inline=None, loc=None, ip=None) _ods_ir | _ods_ir | ExecuteRegionOp¶
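For orientation, a minimal construction sketch in Python. This is a hedged example, assuming the generated classes are reached through the usual mlir.dialects.scf re-export and that the upstream arith.ConstantOp helper accepts plain Python ints; neither is documented on this page.

from mlir.ir import Context, Location, Module, InsertionPoint, IntegerType
from mlir.dialects import arith, scf

with Context(), Location.unknown():
    module = Module.create()
    i32 = IntegerType.get_signless(32)
    with InsertionPoint(module.body):
        # One i32 result; the builder creates an empty region, so the
        # single block must be appended explicitly.
        er = scf.ExecuteRegionOp([i32])
        block = er.region.blocks.append()
        with InsertionPoint(block):
            c = arith.ConstantOp(i32, 42)
            scf.YieldOp([c])  # yields the op's single result
    print(module)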
- class mlir.dialects._scf_ops_gen.ForOp(results_, lowerBound, upperBound, step, initArgs, *, unsignedCmp=None, loc=None, ip=None)¶
Bases: _ods_ir

The scf.for operation represents a loop taking 3 SSA values as operands that represent the lower bound, upper bound and step, respectively. The operation defines an SSA value for its induction variable. It has one region capturing the loop body. The induction variable is represented as an argument of this region. This SSA value is a signless integer or index. The step is a value of the same type but is required to be positive; the lower and upper bounds may also be negative or zero. The lower and upper bounds specify a half-open range: an iteration is executed iff the induction variable value is less than the upper bound and greater than or equal to the lower bound.

By default, the integer comparison is signed. If the unsignedCmp unit attribute is specified, the integer comparison is unsigned.

The body region must contain exactly one block that terminates with scf.yield. Calling ForOp::build will create such a region and insert the terminator implicitly if none is defined; so will the parser, even when the terminator is absent from the custom format. For example:

// Index case.
scf.for %iv = %lb to %ub step %step {
  ... // body
}
...
// Unsigned integer case.
scf.for unsigned %iv_32 = %lb_32 to %ub_32 step %step_32 : i32 {
  ... // body
}
scf.for can also operate on loop-carried variables and returns the final values after loop termination. The initial values of the variables are passed as additional SSA operands to the scf.for following the 3 loop control SSA values mentioned above (lower bound, upper bound and step). The operation region has an argument for the induction variable, followed by one argument for each loop-carried variable, representing the value of the variable at the current iteration.

The region must terminate with a scf.yield that passes the current values of all loop-carried variables to the next iteration, or to the scf.for result if at the last iteration. The static type of a loop-carried variable may not change between iterations; its runtime type is allowed to change. Note that when loop-carried variables are present, calling ForOp::build will not insert the terminator implicitly; the caller must insert scf.yield in that case.

scf.for results hold the final values after the last iteration. For example, to sum-reduce a memref:

func.func @reduce(%buffer: memref<1024xf32>, %lb: index,
                  %ub: index, %step: index) -> (f32) {
  // Initial sum set to 0.
  %sum_0 = arith.constant 0.0 : f32
  // iter_args binds initial values to the loop's region arguments.
  %sum = scf.for %iv = %lb to %ub step %step
      iter_args(%sum_iter = %sum_0) -> (f32) {
    %t = load %buffer[%iv] : memref<1024xf32>
    %sum_next = arith.addf %sum_iter, %t : f32
    // Yield current iteration sum to next iteration %sum_iter or to %sum
    // if final iteration.
    scf.yield %sum_next : f32
  }
  return %sum : f32
}
If the scf.for defines any values, a yield must be explicitly present. The number and types of the scf.for results must match the initial values in the iter_args binding and the yield operands.

Another example with a nested scf.if (see scf.if for details) to perform conditional reduction:

func.func @conditional_reduce(%buffer: memref<1024xf32>, %lb: index,
                              %ub: index, %step: index) -> (f32) {
  %sum_0 = arith.constant 0.0 : f32
  %c0 = arith.constant 0.0 : f32
  %sum = scf.for %iv = %lb to %ub step %step
      iter_args(%sum_iter = %sum_0) -> (f32) {
    %t = load %buffer[%iv] : memref<1024xf32>
    %cond = arith.cmpf "ugt", %t, %c0 : f32
    %sum_next = scf.if %cond -> (f32) {
      %new_sum = arith.addf %sum_iter, %t : f32
      scf.yield %new_sum : f32
    } else {
      scf.yield %sum_iter : f32
    }
    scf.yield %sum_next : f32
  }
  return %sum : f32
}
- OPERATION_NAME = 'scf.for'¶
- _ODS_REGIONS = (1, True)¶
- lowerBound() _ods_ir¶
- upperBound() _ods_ir¶
- step() _ods_ir¶
- initArgs() _ods_ir¶
- unsignedCmp() bool¶
- results_() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._scf_ops_gen.for_(results_, lower_bound, upper_bound, step, init_args, *, unsigned_cmp=None, loc=None, ip=None) _ods_ir | _ods_ir | ForOp¶
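A Python counterpart of the sum-reduce example above, as a hedged sketch: it assumes the upstream scf extension's ForOp conveniences (the iter_args keyword and the body/inner_iter_args accessors), which this generated page does not document, and a constant stands in for the memref load.

from mlir.ir import Context, Location, Module, InsertionPoint, IndexType, F32Type
from mlir.dialects import arith, scf

with Context(), Location.unknown():
    module = Module.create()
    index, f32 = IndexType.get(), F32Type.get()
    with InsertionPoint(module.body):
        lb = arith.ConstantOp(index, 0)
        ub = arith.ConstantOp(index, 1024)
        step = arith.ConstantOp(index, 1)
        sum_0 = arith.ConstantOp(f32, 0.0)
        # One loop-carried variable, initialized to sum_0.
        loop = scf.ForOp(lb, ub, step, iter_args=[sum_0])
        with InsertionPoint(loop.body):
            elem = arith.ConstantOp(f32, 1.0)  # stand-in for a loaded element
            # inner_iter_args[0] is the running sum inside the body.
            sum_next = arith.AddFOp(loop.inner_iter_args[0], elem)
            scf.YieldOp([sum_next])
        # loop.results[0] holds the final sum after the last iteration.
    print(module)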
- class mlir.dialects._scf_ops_gen.ForallOp(results_, dynamicLowerBound, dynamicUpperBound, dynamicStep, staticLowerBound, staticUpperBound, staticStep, outputs, *, mapping=None, loc=None, ip=None)¶
Bases: _ods_ir

scf.forall is a target-independent multi-dimensional parallel region application operation. It has exactly one block that represents the parallel body and it takes index operands that specify lower bounds, upper bounds and steps.

The op also takes a variadic number of tensor operands (shared_outs). The future buffers corresponding to these tensors are shared among all threads. Shared tensors should be accessed via their corresponding block arguments. If multiple threads write to a shared buffer in a racy fashion, these writes will execute in some unspecified order. Tensors that are not shared can be used inside the body (i.e., the op is not isolated from above); however, if a use of such a tensor bufferizes to a memory write, the tensor is privatized, i.e., a thread-local copy of the tensor is used. This ensures that memory side effects of a thread are not visible to other threads (or in the parent body), apart from explicitly shared tensors.

The name “thread” conveys the fact that the parallel execution is mapped (i.e. distributed) to a set of virtual threads of execution, one function application per thread. Further lowerings are responsible for specifying how this is materialized on concrete hardware resources.
An optional mapping is an attribute array that specifies processing units with their dimension, how it remaps 1-1 to a set of concrete processing element resources (e.g. a CUDA grid dimension or a level of concrete nested async parallelism). It is expressed via any attribute that implements the device mapping interface. It is the responsibility of the lowering mechanism to interpret the mapping attributes in the context of the concrete target the op is lowered to, or to ignore it when the specification is ill-formed or unsupported for a particular target.

The only allowed terminator is scf.forall.in_parallel. scf.forall returns one value per shared_out operand. The actions of the scf.forall.in_parallel terminator specify how to combine the partial results of all parallel invocations into a full value, in some unspecified order. The “destination” of each such op must be a shared_out block argument of the scf.forall op.

The actions involved in constructing the return values are further described by tensor.parallel_insert_slice. scf.forall acts as an implicit synchronization point.

When the parallel function body has side effects, their order is unspecified across threads.

scf.forall can be printed in two different ways depending on whether the loop is normalized or not. The loop is ‘normalized’ when all lower bounds are equal to zero and steps are equal to one. In that case, lowerBound and step operands will be omitted during printing.

Normalized loop example:
//
// Sequential context.
//
%matmul_and_pointwise:2 = scf.forall (%thread_id_1, %thread_id_2) in
    (%num_threads_1, %numthread_id_2) shared_outs(%o1 = %C, %o2 = %pointwise)
    -> (tensor<?x?xT>, tensor<?xT>) {
  //
  // Parallel context, each thread with id = (%thread_id_1, %thread_id_2)
  // runs its version of the code.
  //
  %sA = tensor.extract_slice %A[f((%thread_id_1, %thread_id_2))]:
    tensor<?x?xT> to tensor<?x?xT>
  %sB = tensor.extract_slice %B[g((%thread_id_1, %thread_id_2))]:
    tensor<?x?xT> to tensor<?x?xT>
  %sC = tensor.extract_slice %o1[h((%thread_id_1, %thread_id_2))]:
    tensor<?x?xT> to tensor<?x?xT>
  %sD = linalg.matmul
    ins(%sA, %sB : tensor<?x?xT>, tensor<?x?xT>)
    outs(%sC : tensor<?x?xT>)

  %spointwise = subtensor %o2[i((%thread_id_1, %thread_id_2))]:
    tensor<?xT> to tensor<?xT>
  %sE = linalg.add ins(%spointwise : tensor<?xT>) outs(%sD : tensor<?xT>)

  scf.forall.in_parallel {
    tensor.parallel_insert_slice %sD into %o1[h((%thread_id_1, %thread_id_2))]:
      tensor<?x?xT> into tensor<?x?xT>

    tensor.parallel_insert_slice %spointwise into %o2[i((%thread_id_1, %thread_id_2))]:
      tensor<?xT> into tensor<?xT>
  }
}
// Implicit synchronization point.
// Sequential context.
//
Loop with loop bounds example:
//
// Sequential context.
//
%pointwise = scf.forall (%i, %j) = (0, 0) to (%dim1, %dim2)
    step (%tileSize1, %tileSize2) shared_outs(%o1 = %out)
    -> (tensor<?x?xT>, tensor<?xT>) {
  //
  // Parallel context.
  //
  %sA = tensor.extract_slice %A[%i, %j][%tileSize1, %tileSize2][1, 1]
    : tensor<?x?xT> to tensor<?x?xT>
  %sB = tensor.extract_slice %B[%i, %j][%tileSize1, %tileSize2][1, 1]
    : tensor<?x?xT> to tensor<?x?xT>
  %sC = tensor.extract_slice %o[%i, %j][%tileSize1, %tileSize2][1, 1]
    : tensor<?x?xT> to tensor<?x?xT>

  %add = linalg.map {"arith.addf"}
    ins(%sA, %sB : tensor<?x?xT>, tensor<?x?xT>)
    outs(%sC : tensor<?x?xT>)

  scf.forall.in_parallel {
    tensor.parallel_insert_slice %add into
      %o[%i, %j][%tileSize1, %tileSize2][1, 1]
      : tensor<?x?xT> into tensor<?x?xT>
  }
}
// Implicit synchronization point.
// Sequential context.
//
Example with mapping attribute:
//
// Sequential context. Here `mapping` is expressed as GPU thread mapping
// attributes
//
%matmul_and_pointwise:2 = scf.forall (%thread_id_1, %thread_id_2) in
    (%num_threads_1, %numthread_id_2) shared_outs(...)
    -> (tensor<?x?xT>, tensor<?xT>) {
  //
  // Parallel context, each thread with id = **(%thread_id_2, %thread_id_1)**
  // runs its version of the code.
  //
  scf.forall.in_parallel {
    ...
  }
} { mapping = [#gpu.thread<y>, #gpu.thread<x>] }
// Implicit synchronization point.
// Sequential context.
//
Example with privatized tensors:
%t0 = ...
%t1 = ...
%r = scf.forall ... shared_outs(%o = t0) -> tensor<?xf32> {
  // %t0 and %t1 are privatized. %t0 is definitely copied for each thread
  // because the scf.forall op's %t0 use bufferizes to a memory
  // write. In the absence of other conflicts, %t1 is copied only if there
  // are uses of %t1 in the body that bufferize to a memory read and to a
  // memory write.
  "some_use"(%t0)
  "some_use"(%t1)
}
- OPERATION_NAME = 'scf.forall'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- dynamicLowerBound() _ods_ir¶
- dynamicUpperBound() _ods_ir¶
- dynamicStep() _ods_ir¶
- outputs() _ods_ir¶
- staticLowerBound() _ods_ir¶
- staticUpperBound() _ods_ir¶
- staticStep() _ods_ir¶
- mapping() _ods_ir | None¶
- results_() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._scf_ops_gen.forall(results_, dynamic_lower_bound, dynamic_upper_bound, dynamic_step, static_lower_bound, static_upper_bound, static_step, outputs, *, mapping=None, loc=None, ip=None) _ods_ir | _ods_ir | ForallOp¶
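Constructing a forall from Python goes through the generated builder directly: static bounds travel as i64 array attributes (plain Python lists below, assuming the registered attribute builders accept them, which is not documented here) and dynamic bounds as value lists. A rough sketch with fully static bounds, no shared outputs, and the scf.forall.in_parallel terminator built by hand:

from mlir.ir import Context, Location, Module, InsertionPoint, IndexType
from mlir.dialects import scf

with Context(), Location.unknown():
    module = Module.create()
    with InsertionPoint(module.body):
        # results, dynamic lb/ub/step (all static here), static lb/ub/step, outputs.
        op = scf.ForallOp([], [], [], [], [0], [8], [1], [])
        body = op.region.blocks.append(IndexType.get())  # one induction variable
        with InsertionPoint(body):
            # ... parallel body using body.arguments[0] ...
            term = scf.InParallelOp()
            term.region.blocks.append()  # empty block: nothing to insert in parallel
    print(module)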
- class mlir.dialects._scf_ops_gen.IfOp(results_, condition, *, loc=None, ip=None)¶
Bases: _ods_ir

The scf.if operation represents an if-then-else construct for conditionally executing two regions of code. The operand to an if operation is a boolean value. For example:

scf.if %b {
  ...
} else {
  ...
}
scf.if may also produce results. Which values are returned depends on which execution path is taken.

Example:
%x, %y = scf.if %b -> (f32, f32) {
  %x_true = ...
  %y_true = ...
  scf.yield %x_true, %y_true : f32, f32
} else {
  %x_false = ...
  %y_false = ...
  scf.yield %x_false, %y_false : f32, f32
}
The “then” region has exactly 1 block. The “else” region may have 0 or 1 block. In case the scf.if produces results, the “else” region must also have exactly 1 block.

The blocks are always terminated with scf.yield. If scf.if defines no values, the scf.yield can be left out, and will be inserted implicitly. Otherwise, it must be explicit.

Example:
scf.if %b {
  ...
}
The types of the yielded values must match the result types of the scf.if.

- OPERATION_NAME = 'scf.if'¶
- _ODS_REGIONS = (2, True)¶
- condition() _ods_ir¶
- results_() _ods_ir¶
- thenRegion() _ods_ir¶
- elseRegion() _ods_ir¶
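A small construction sketch in Python, assuming the upstream scf extension's IfOp conveniences (the hasElse keyword and the then_block/else_block accessors, which are not part of this generated page). Both branches must yield because the op produces a result:

from mlir.ir import Context, Location, Module, InsertionPoint, F32Type, IntegerType
from mlir.dialects import arith, scf

with Context(), Location.unknown():
    module = Module.create()
    f32 = F32Type.get()
    i1 = IntegerType.get_signless(1)
    with InsertionPoint(module.body):
        cond = arith.ConstantOp(i1, 1)  # placeholder condition
        # One f32 result, so an else block is required.
        if_op = scf.IfOp(cond, [f32], hasElse=True)
        with InsertionPoint(if_op.then_block):
            scf.YieldOp([arith.ConstantOp(f32, 1.0)])
        with InsertionPoint(if_op.else_block):
            scf.YieldOp([arith.ConstantOp(f32, 2.0)])
        # if_op.results[0] is the selected f32 value.
    print(module)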
- class mlir.dialects._scf_ops_gen.InParallelOp(*, loc=None, ip=None)¶
Bases: _ods_ir

The scf.forall.in_parallel is a designated terminator for the scf.forall operation.

It has a single region with a single block that contains a flat list of ops. Each such op participates in the aggregate formation of a single result of the enclosing scf.forall. The result number corresponds to the position of the op in the terminator.

- OPERATION_NAME = 'scf.forall.in_parallel'¶
- _ODS_REGIONS = (1, True)¶
- region() _ods_ir¶
- mlir.dialects._scf_ops_gen.forall_in_parallel(*, loc=None, ip=None) InParallelOp¶
- class mlir.dialects._scf_ops_gen.IndexSwitchOp(results_, arg, cases, num_caseRegions, *, loc=None, ip=None)¶
Bases: _ods_ir

The scf.index_switch is a control-flow operation that branches to one of the given regions based on the values of the argument and the cases. The argument is always of type index.

The operation always has a “default” region and any number of case regions denoted by integer constants. Control flow transfers to the case region whose constant value equals the value of the argument. If the argument does not equal any of the case values, control flow transfers to the “default” region.
Example:
%0 = scf.index_switch %arg0 : index -> i32
case 2 {
  %1 = arith.constant 10 : i32
  scf.yield %1 : i32
}
case 5 {
  %2 = arith.constant 20 : i32
  scf.yield %2 : i32
}
default {
  %3 = arith.constant 30 : i32
  scf.yield %3 : i32
}
- OPERATION_NAME = 'scf.index_switch'¶
- _ODS_REGIONS = (1, False)¶
- arg() _ods_ir¶
- cases() _ods_ir¶
- results_() _ods_ir¶
- defaultRegion() _ods_ir¶
- caseRegions() _ods_ir¶
- mlir.dialects._scf_ops_gen.index_switch(results_, arg, cases, num_case_regions, *, loc=None, ip=None) _ods_ir | _ods_ir | IndexSwitchOp¶
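A construction sketch in Python. It assumes the generated builder accepts a plain list of ints for the cases attribute (a dense i64 array) and that regions[0] is the default region followed by the case regions, consistent with the defaultRegion/caseRegions accessors above; neither detail is documented here.

from mlir.ir import Context, Location, Module, InsertionPoint, IndexType, IntegerType
from mlir.dialects import arith, scf

with Context(), Location.unknown():
    module = Module.create()
    i32 = IntegerType.get_signless(32)
    with InsertionPoint(module.body):
        arg = arith.ConstantOp(IndexType.get(), 2)
        # Two case regions (for values 2 and 5) plus the mandatory default.
        switch = scf.IndexSwitchOp([i32], arg, [2, 5], 2)
        # Region 0 is the default; regions 1 and 2 are case 2 and case 5.
        for i, value in enumerate([30, 10, 20]):
            block = switch.regions[i].blocks.append()
            with InsertionPoint(block):
                scf.YieldOp([arith.ConstantOp(i32, value)])
    print(module)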
- class mlir.dialects._scf_ops_gen.ParallelOp(results_, lowerBound, upperBound, step, initVals, *, loc=None, ip=None)¶
Bases: _ods_ir

The scf.parallel operation represents a loop nest taking 4 groups of SSA values as operands that represent the lower bounds, upper bounds, steps and initial values, respectively. The operation defines a variadic number of SSA values for its induction variables. It has one region capturing the loop body. The induction variables are represented as arguments of this region. These SSA values always have type index, which is the size of the machine word. The steps are values of type index, required to be positive. The lower and upper bounds specify a half-open range: the range includes the lower bound but does not include the upper bound. The initial values have the same types as the results of scf.parallel. If there are no results, the keyword init can be omitted.

Semantically we require that the iteration space can be iterated in any order, and the loop body can be executed in parallel. If there are data races, the behavior is undefined.
The parallel loop operation supports reduction of values produced by individual iterations into a single result. This is modeled using the scf.reduce terminator operation (see scf.reduce for details). The i-th result of an scf.parallel operation is associated with the i-th initial value operand, the i-th operand of the scf.reduce operation (the value to be reduced) and the i-th region of the scf.reduce operation (the reduction function). Consequently, we require that the number of results of an scf.parallel op matches the number of initial values and the number of reductions in the scf.reduce terminator.

The body region must contain exactly one block that terminates with a scf.reduce operation. If an scf.parallel op has no reductions, the terminator has no operands and no regions. The scf.parallel parser will automatically insert the terminator for ops that have no reductions if it is absent.

Example:
%init = arith.constant 0.0 : f32
%r:2 = scf.parallel (%iv) = (%lb) to (%ub) step (%step) init (%init, %init)
    -> f32, f32 {
  %elem_to_reduce1 = load %buffer1[%iv] : memref<100xf32>
  %elem_to_reduce2 = load %buffer2[%iv] : memref<100xf32>
  scf.reduce(%elem_to_reduce1, %elem_to_reduce2 : f32, f32) {
    ^bb0(%lhs : f32, %rhs: f32):
      %res = arith.addf %lhs, %rhs : f32
      scf.reduce.return %res : f32
  }, {
    ^bb0(%lhs : f32, %rhs: f32):
      %res = arith.mulf %lhs, %rhs : f32
      scf.reduce.return %res : f32
  }
}
- OPERATION_NAME = 'scf.parallel'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- lowerBound() _ods_ir¶
- upperBound() _ods_ir¶
- step() _ods_ir¶
- initVals() _ods_ir¶
- results_() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._scf_ops_gen.parallel(results_, lower_bound, upper_bound, step, init_vals, *, loc=None, ip=None) _ods_ir | _ods_ir | ParallelOp¶
- class mlir.dialects._scf_ops_gen.ReduceOp(operands_, num_reductions, *, loc=None, ip=None)¶
Bases: _ods_ir

The scf.reduce operation is the terminator for scf.parallel operations. It can model an arbitrary number of reductions. It has one region per reduction. Each region has one block with two arguments which have the same type as the corresponding operand of scf.reduce. The operands of the op are the values that should be reduced; one value per reduction.

The i-th reduction (i.e., the i-th region and the i-th operand) corresponds to the i-th initial value and the i-th result of the enclosing scf.parallel op.

The scf.reduce operation contains regions whose entry blocks expect two arguments of the same type as the corresponding operand. As the iteration order of the enclosing parallel loop, and hence the reduction order, is unspecified, the results of the reductions may be non-deterministic unless the reductions are associative and commutative.

The result of a reduction region (the scf.reduce.return operand) must have the same type as the corresponding scf.reduce operand and the corresponding scf.parallel initial value.

Example:
%operand = arith.constant 1.0 : f32
scf.reduce(%operand : f32) {
  ^bb0(%lhs : f32, %rhs: f32):
    %res = arith.addf %lhs, %rhs : f32
    scf.reduce.return %res : f32
}
- OPERATION_NAME = 'scf.reduce'¶
- _ODS_REGIONS = (0, False)¶
- operands_() _ods_ir¶
- reductions() _ods_ir¶
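The Python shape of the parallel-reduce pattern, sketched with the generated builders. A constant stands in for the loaded element, and the single reduction region is reached through regions[0], which should line up with the reductions accessor above; both points are assumptions rather than documented behavior.

from mlir.ir import Context, Location, Module, InsertionPoint, IndexType, F32Type
from mlir.dialects import arith, scf

with Context(), Location.unknown():
    module = Module.create()
    index, f32 = IndexType.get(), F32Type.get()
    with InsertionPoint(module.body):
        lb = arith.ConstantOp(index, 0)
        ub = arith.ConstantOp(index, 100)
        step = arith.ConstantOp(index, 1)
        init = arith.ConstantOp(f32, 0.0)
        # One f32 result reduced from one initial value.
        par = scf.ParallelOp([f32], [lb], [ub], [step], [init])
        body = par.region.blocks.append(index)  # one induction variable
        with InsertionPoint(body):
            elem = arith.ConstantOp(f32, 1.0)  # stand-in for a loaded element
            reduce = scf.ReduceOp([elem], 1)   # one reduction region
            red = reduce.regions[0].blocks.append(f32, f32)
            with InsertionPoint(red):
                lhs, rhs = red.arguments
                scf.ReduceReturnOp(arith.AddFOp(lhs, rhs))
    print(module)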
- class mlir.dialects._scf_ops_gen.ReduceReturnOp(result, *, loc=None, ip=None)¶
Bases: _ods_ir

The scf.reduce.return operation is a special terminator operation for the block inside scf.reduce regions. It terminates the region. It should have the same operand type as the corresponding operand of the enclosing scf.reduce op.

Example:
scf.reduce.return %res : f32
- OPERATION_NAME = 'scf.reduce.return'¶
- _ODS_REGIONS = (0, True)¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._scf_ops_gen.reduce_return(result, *, loc=None, ip=None) ReduceReturnOp¶
- class mlir.dialects._scf_ops_gen.WhileOp(results_, inits, *, loc=None, ip=None)¶
Bases: _ods_ir

This operation represents a generic “while”/“do-while” loop that keeps iterating as long as a condition is satisfied. There is no restriction on the complexity of the condition. It consists of two regions (each with a single block): the “before” region and the “after” region. The names of the regions indicate whether they execute before or after the condition check. Therefore, if the main loop payload is located in the “before” region, the operation is a “do-while” loop. Otherwise, it is a “while” loop.
The “before” region terminates with a special operation, scf.condition, that accepts as its first operand an i1 value indicating whether to proceed to the “after” region (value is true) or not. The two regions communicate by means of region arguments. Initially, the “before” region accepts as arguments the operands of the scf.while operation and uses them to evaluate the condition. It forwards the trailing, non-condition operands of the scf.condition terminator either to the “after” region if the control flow is transferred there, or to the results of the scf.while operation otherwise. The “after” region takes as arguments the values produced by the “before” region and uses scf.yield to supply new arguments for the “before” region, into which it transfers the control flow unconditionally.

A simple “while” loop can be represented as follows.
%res = scf.while (%arg1 = %init1) : (f32) -> f32 {
  // "Before" region.
  // In a "while" loop, this region computes the condition.
  %condition = call @evaluate_condition(%arg1) : (f32) -> i1

  // Forward the argument (as result or "after" region argument).
  scf.condition(%condition) %arg1 : f32

} do {
^bb0(%arg2: f32):
  // "After" region.
  // In a "while" loop, this region is the loop body.
  %next = call @payload(%arg2) : (f32) -> f32

  // Forward the new value to the "before" region.
  // The operand types must match the types of the `scf.while` operands.
  scf.yield %next : f32
}
A simple “do-while” loop can be represented by reducing the “after” block to a simple forwarder.
%res = scf.while (%arg1 = %init1) : (f32) -> f32 {
  // "Before" region.
  // In a "do-while" loop, this region contains the loop body.
  %next = call @payload(%arg1) : (f32) -> f32

  // And also evaluates the condition.
  %condition = call @evaluate_condition(%arg1) : (f32) -> i1

  // Loop through the "after" region.
  scf.condition(%condition) %next : f32

} do {
^bb0(%arg2: f32):
  // "After" region.
  // Forwards the values back to "before" region unmodified.
  scf.yield %arg2 : f32
}
Note that the types of region arguments need not match each other. The op expects the operand types to match the argument types of the “before” region; the result types to match the trailing operand types of the terminator of the “before” region, and the argument types of the “after” region. The following scheme can be used to share the results of some operations executed in the “before” region with the “after” region, avoiding the need to recompute them.
%res = scf.while (%arg1 = %init1) : (f32) -> i64 {
  // One can perform some computations, e.g., necessary to evaluate the
  // condition, in the "before" region and forward their results to the
  // "after" region.
  %shared = call @shared_compute(%arg1) : (f32) -> i64

  // Evaluate the condition.
  %condition = call @evaluate_condition(%arg1, %shared) : (f32, i64) -> i1

  // Forward the result of the shared computation to the "after" region.
  // The types must match the arguments of the "after" region as well as
  // those of the `scf.while` results.
  scf.condition(%condition) %shared : i64

} do {
^bb0(%arg2: i64):
  // Use the partial result to compute the rest of the payload in the
  // "after" region.
  %res = call @payload(%arg2) : (i64) -> f32

  // Forward the new value to the "before" region.
  // The operand types must match the types of the `scf.while` operands.
  scf.yield %res : f32
}
The custom syntax for this operation is as follows.
op ::= `scf.while` assignments `:` function-type region `do` region
       `attributes` attribute-dict
initializer ::= /* empty */ | `(` assignment-list `)`
assignment-list ::= assignment | assignment `,` assignment-list
assignment ::= ssa-value `=` ssa-value

- OPERATION_NAME = 'scf.while'¶
- _ODS_REGIONS = (2, True)¶
- inits() _ods_ir¶
- results_() _ods_ir¶
- before() _ods_ir¶
- after() _ods_ir¶
- mlir.dialects._scf_ops_gen.while_(results_, inits, *, loc=None, ip=None) _ods_ir | _ods_ir | WhileOp¶
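A structural sketch of while construction in Python using the generated builders. The before/after region accessors are documented above; the constant-false condition is a placeholder for a real comparison, so the loop exits immediately.

from mlir.ir import Context, Location, Module, InsertionPoint, F32Type, IntegerType
from mlir.dialects import arith, scf

with Context(), Location.unknown():
    module = Module.create()
    f32 = F32Type.get()
    i1 = IntegerType.get_signless(1)
    with InsertionPoint(module.body):
        init = arith.ConstantOp(f32, 0.0)
        # One f32 value carried through both regions.
        w = scf.WhileOp([f32], [init])
        before = w.before.blocks.append(f32)
        after = w.after.blocks.append(f32)
        with InsertionPoint(before):
            # Placeholder condition (always false: exit at once).
            cond = arith.ConstantOp(i1, 0)
            scf.ConditionOp(cond, [before.arguments[0]])
        with InsertionPoint(after):
            bumped = arith.AddFOp(after.arguments[0], after.arguments[0])
            scf.YieldOp([bumped])
        # w.results[0] is the carried value once the condition fails.
    print(module)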
- class mlir.dialects._scf_ops_gen.YieldOp(results_, *, loc=None, ip=None)¶
Bases: _ods_ir

The scf.yield operation yields an SSA value from the SCF dialect op region and terminates the region. The semantics of how the values are yielded is defined by the parent operation. If scf.yield has any operands, the operands must match the parent operation’s results. If the parent operation defines no values, then the scf.yield may be left out in the custom syntax and the builders will insert one implicitly. Otherwise, it has to be present in the syntax to indicate which values are yielded.

- OPERATION_NAME = 'scf.yield'¶
- _ODS_REGIONS = (0, True)¶
- results_() _ods_ir¶