# 'shape' Dialect

Types and operations for the shape dialect. This dialect contains operations for shape inference.

Note: Unless explicitly stated otherwise, all functions that return a shape and take shapes as input return the invalid shape if one of their operands is an invalid shape. This avoids flagging multiple errors for one verification failure. The dialect itself does not specify how errors should be combined (there are multiple options, from always choosing the first operand to concatenating the errors).

## Type constraint definition ¶

### shape ¶

`shape.shape` represents either an unranked shape, a ranked shape with possibly unknown dimensions, or an invalid shape. The rank is of type `shape.size` and, if the rank is known, the extent is a 1D tensor of type `shape.size`.

Shape is printed:

- `[*]` if it is an unranked shape
- `[?, 2]` if a rank 2 tensor with one unknown dimension
- `[3, 4]` is a rank 2 static tensor
- `[]` is a scalar
- `[1]` is a rank 1 tensor with 1 element
- `[invalid]` for an invalid shape

### size ¶

`shape.size` represents a non-negative integer with support for being unknown and invalid.

Operations on `shape.size` types are specialized to handle unknown/dynamic values. So, for example, `<unknown> + x == <unknown>` for all non-error `x : !shape.size` (e.g., an unknown value does not become known due to addition).

### value shape ¶

`shape.value_shape` represents the value produced by an operation (this corresponds to `Value` in the compiler) and a shape. Conceptually this is a tuple of a value (potentially unknown) and `shape.shape`. The value and shape can either or both be unknown. If both the `value` and `shape` are known, then the shape of `value` is conformant with `shape`. That is, the shape of the value conforms to the shape of the ValueShape, so that if we have a ValueShape `(value, shape)` then `join(shape_of(value), shape)` would be error free; in particular, if both are statically known, then they are equal.

### witness ¶

A witness is a structural device in the compiler to maintain ordering of code relying on information obtained from passing assertions. Witnesses do not represent any physical data.

“cstr_” operations will return witnesses and be lowered into assertion logic when not resolvable at compile time.

“assuming_” operations will take witnesses as input and represent only information to the compiler, so they do not exist in executing code. Code that is dependent on “assuming_” operations can assume all cstr operations transitively before are honored as true.

These abstractions are intended to allow the compiler more freedom with assertions by merely showing the assertion through dataflow at this time rather than as a side-effecting operation that acts as a barrier. This can be viewed similarly to a compiler representation of promises from asynchronous, possibly crashing assertions. Reliant code will not be reordered to before the assertions; non-reliant code can be reordered freely; and there are no guarantees on the final ordering of the assertions or their related code.
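To make the pattern concrete, here is a hedged sketch of how a witness might flow from a `cstr_` op into an `assuming_` region (the value names and exact assembly syntax are illustrative, not taken from this document):

```
// Constraint: %a and %b must be broadcastable; yields a witness.
%w = shape.cstr_broadcastable %a, %b : !shape.shape, !shape.shape
// Code relying on the constraint is anchored to the witness.
%r = shape.assuming %w -> (!shape.shape) {
  %s = shape.broadcast %a, %b : !shape.shape, !shape.shape -> !shape.shape
  shape.assuming_yield %s : !shape.shape
}
```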

## Operation definition ¶

### `shape.fn_lib_terminator` (::mlir::shape::ShapeFunctionLibraryTerminatorOp) ¶

A pseudo op that marks the end of a shape function library

Syntax:

```
operation ::= `shape.fn_lib_terminator` attr-dict
```

`shape_fn_lib_terminator` is a special pseudo terminator operation for the shape function library. It has no semantic meaning beyond keeping the body well-formed.

### `shape.add` (::mlir::shape::AddOp) ¶

Addition of sizes and indices

Syntax:

```
operation ::= `shape.add` $lhs `,` $rhs attr-dict `:` type($lhs) `,` type($rhs) `->` type($result)
```

Adds two sizes or indices. If either operand is an error it will be propagated to the result. The operands can be of type `size` or `index`. If at least one of the operands can hold an error, i.e. if it is of type `size`, the result must be of type `size`. If error propagation is not possible because both operands are of type `index`, then the result may be of type `size` or `index`.
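As a sketch of the typing rules above (value names are illustrative):

```
// Both operands of type index; result may be index or size.
%0 = shape.add %i, %j : index, index -> index
// One operand of type size may hold an error, so the result must be size.
%1 = shape.add %s, %i : !shape.size, index -> !shape.size
```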

#### Operands: ¶

Operand | Description |
---|---|
`lhs` | size or index |
`rhs` | size or index |

#### Results: ¶

Result | Description |
---|---|
`result` | size or index |

### `shape.any` (::mlir::shape::AnyOp) ¶

Return any combination of the input shapes

Syntax:

```
operation ::= `shape.any` $inputs attr-dict `:` type($inputs) `->` type($result)
```

This operation takes multiple input shapes or extent tensors and returns some combination of their dimensions. This can be best seen with examples below.

The result is undefined, but still side-effect free, in cases where the inputs have differing ranks or differ in extents of shared dimensions.

Example:

```
%s0 = shape.any [2,?], [?,3] // [2,3]
%s1 = shape.any [?,?], [1,2] // [1,2]
```

#### Operands: ¶

Operand | Description |
---|---|
`inputs` | shape or extent tensor |

#### Results: ¶

Result | Description |
---|---|
`result` | shape or extent tensor |

### `shape.assuming_all` (::mlir::shape::AssumingAllOp) ¶

Return a logical AND of all witnesses

Syntax:

```
operation ::= `shape.assuming_all` $inputs attr-dict
```

Used to simplify constraints as any single failing precondition is enough to prevent execution.

“assuming” operations represent an execution order restriction to the compiler, information for dependent code to rely on (by assuming), and nothing else. They should not exist after a program is fully lowered and ready to execute.

Example:

```
%w0 = shape.cstr_broadcastable [2,2], [3,1,2] // Passing
%w1 = shape.cstr_broadcastable [2,2], [3,2] // Failure
%w2 = shape.cstr_eq [1,2], [1,2], [1,2] // Passing
%wf = shape.assuming_all %w0, %w1 // Failure
%wt = shape.assuming_all %w0, %w2 // Passing
```

#### Operands: ¶

Operand | Description |
---|---|
`inputs` | witness |

#### Results: ¶

Result | Description |
---|---|
`result` | witness |

### `shape.assuming` (::mlir::shape::AssumingOp) ¶

Execute the region

Executes the region assuming all witnesses are true.

“assuming” operations represent an execution order restriction to the compiler, information for dependent code to rely on (by assuming), and nothing else. They should not exist after a program is fully lowered and ready to execute.
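A minimal sketch of an assuming region (the result type, payload op, and value names are illustrative; the exact assembly format is defined by the op):

```
%result = shape.assuming %w -> (tensor<2xf32>) {
  // Payload code that is only valid once the witness holds.
  %sum = arith.addf %a, %b : tensor<2xf32>
  shape.assuming_yield %sum : tensor<2xf32>
}
```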

#### Operands: ¶

Operand | Description |
---|---|
`witness` | witness |

#### Results: ¶

Result | Description |
---|---|
`results` | any type |

### `shape.assuming_yield` (::mlir::shape::AssumingYieldOp) ¶

Yield operation

Syntax:

```
operation ::= `shape.assuming_yield` attr-dict ($operands^ `:` type($operands))?
```

This yield operation represents a return operation within the `shape.assuming` operation region. The operation takes a variable number of operands and produces no results. The operand number and types must match the number and types of the parent `shape.assuming` results.

#### Operands: ¶

Operand | Description |
---|---|
`operands` | any type |

### `shape.broadcast` (::mlir::shape::BroadcastOp) ¶

Returns the broadcasted output shape of two or more inputs

Syntax:

```
operation ::= `shape.broadcast` $shapes attr-dict `:` type($shapes) `->` type($result)
```

Returns the broadcasted shape for input shapes or extent tensors. The rest of this description is simplified for the 2 input case but can be extended to more inputs. Both operands can be of type `shape.shape` or `tensor<?xindex>`. The result is of type `shape.shape` and, if both operands are tensors, may be of type `tensor<?xindex>`.

If the two operand shapes are of different rank the smaller one is padded with 1’s from the left. The resulting broadcasted shape is then defined as

```
result[i] = lhs[i] if lhs[i] == rhs[i]
          = lhs[i] if rhs[i] == 1
          = rhs[i] if lhs[i] == 1
```

In case the resulting shape is undefined, i.e. if corresponding extents are different from each other but none is 1, the result is an error shape. Likewise error values are propagated if any of the operands holds an error value. If the result type is an extent tensor (and can therefore not hold the error value) the behavior may be undefined. The optional string attribute can be used to describe the error case.
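In the same informal notation used for `shape.any` above, the rule plays out as:

```
%s0 = shape.broadcast [2,1,3], [4,3] // [2,4,3]
%s1 = shape.broadcast [?,2], [1,2]   // [?,2]
```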

#### Attributes: ¶

Attribute | MLIR Type | Description |
---|---|---|
`error` | ::mlir::StringAttr | string attribute |

#### Operands: ¶

Operand | Description |
---|---|
`shapes` | shape or extent tensor |

#### Results: ¶

Result | Description |
---|---|
`result` | shape or extent tensor |

### `shape.concat` (::mlir::shape::ConcatOp) ¶

Concatenates two shapes

Syntax:

```
operation ::= `shape.concat` $lhs `,` $rhs attr-dict
```

Creates a shape whose dimensions consist of first the dimensions from `lhs` followed by the dimensions of `rhs`.

Example:

```
concat([2,3], [4,5]) -> [2,3,4,5]
concat([], []) -> []
concat([], [4,5,6]) -> [4,5,6]
```

#### Operands: ¶

Operand | Description |
---|---|
`lhs` | shape |
`rhs` | shape |

#### Results: ¶

Result | Description |
---|---|
`result` | shape |

### `shape.const_shape` (::mlir::shape::ConstShapeOp) ¶

Creates a constant shape or extent tensor

Creates a constant shape or extent tensor. The individual extents are given as the `shape` attribute. The number of these values equals the shape's rank.

```
%0 = shape.const_shape [] : !shape.shape
%1 = shape.const_shape [1, 2, 3] : !shape.shape
%2 = shape.const_shape [4, 5, 6] : tensor<?xindex>
```

#### Attributes: ¶

Attribute | MLIR Type | Description |
---|---|---|
`shape` | ::mlir::DenseIntElementsAttr | index elements attribute |

#### Results: ¶

Result | Description |
---|---|
`result` | shape or extent tensor |

### `shape.const_size` (::mlir::shape::ConstSizeOp) ¶

Creates a constant of type `shape.size`

Syntax:

```
operation ::= `shape.const_size` $value attr-dict
```

Creates a `shape.size` type representing the constant size given by `value`.

```
%x = shape.const_size 10
```

#### Attributes: ¶

Attribute | MLIR Type | Description |
---|---|---|
`value` | ::mlir::IntegerAttr | index attribute |

#### Results: ¶

Result | Description |
---|---|
`result` | size |

### `shape.const_witness` (::mlir::shape::ConstWitnessOp) ¶

An operation that returns a statically known witness value

Syntax:

```
operation ::= `shape.const_witness` $passing attr-dict
```

This operation represents a statically known witness result. This can often be used to canonicalize/fold constraint and assuming code that will always pass.

```
%0 = shape.const_shape [1,2,3]
%1 = shape.const_shape [1,2,3]
%w0 = shape.cstr_eq(%0, %1) // Can be folded to "const_witness true"
%w1 = shape.const_witness true
%w2 = shape.assuming_all(%w0, %w1) // Can be folded to "const_witness true"
```

#### Attributes: ¶

Attribute | MLIR Type | Description |
---|---|---|
`passing` | ::mlir::BoolAttr | bool attribute |

#### Results: ¶

Result | Description |
---|---|
`result` | witness |

### `shape.cstr_broadcastable` (::mlir::shape::CstrBroadcastableOp) ¶

Determines if 2+ shapes can be successfully broadcasted

Syntax:

```
operation ::= `shape.cstr_broadcastable` $shapes attr-dict `:` type($shapes)
```

Given input shapes or extent tensors, return a witness specifying whether they are broadcastable. Broadcastability follows the same logic as documented for `shape.broadcast`.

“cstr” operations represent runtime assertions.

Example:

```
%w0 = shape.cstr_broadcastable [2,2], [3,1,2] // Passing
%w1 = shape.cstr_broadcastable [2,2], [3,2] // Failure
```

#### Operands: ¶

Operand | Description |
---|---|
`shapes` | shape or extent tensor |

#### Results: ¶

Result | Description |
---|---|
`result` | witness |

### `shape.cstr_eq` (::mlir::shape::CstrEqOp) ¶

Determines if all input shapes are equal

Syntax:

```
operation ::= `shape.cstr_eq` $inputs attr-dict
```

Given 1 or more input shapes, determine if all shapes are the exact same.

“cstr” operations represent runtime assertions.

Example:

```
%w0 = shape.cstr_eq [1,2], [1,2], [1,2] // Passing
%w1 = shape.cstr_eq [2,2], [1,2] // Failure
```

#### Operands: ¶

Operand | Description |
---|---|
`inputs` | shape |

#### Results: ¶

Result | Description |
---|---|
`result` | witness |

### `shape.cstr_require` (::mlir::shape::CstrRequireOp) ¶

Represents a runtime assertion that an i1 is `true`

Syntax:

```
operation ::= `shape.cstr_require` $pred `,` $msg attr-dict
```

Represents a runtime assertion that an i1 is true. It returns a `!shape.witness` to order this assertion.

For simplicity, prefer using other cstr_* ops if they are available for a given constraint.

Example:

```
%bool = ...
%w0 = shape.cstr_require %bool, "msg" // Passing if `%bool` is true.
```

Since this op can be used to express many different possible assertions (depending on whatever computation calculated `pred`), the `msg` should clarify the nature of the assertion for users.

#### Attributes: ¶

Attribute | MLIR Type | Description |
---|---|---|
`msg` | ::mlir::StringAttr | string attribute |

#### Operands: ¶

Operand | Description |
---|---|
`pred` | 1-bit signless integer |

#### Results: ¶

Result | Description |
---|---|
`result` | witness |

### `shape.debug_print` (::mlir::shape::DebugPrintOp) ¶

Prints the input shape or size

Prints the input dim or shape and passes through the input.

Note: This is intended for testing and debugging only.

#### Operands: ¶

Operand | Description |
---|---|
`input` | shape or size |

#### Results: ¶

Result | Description |
---|---|
`output` | shape or size |

### `shape.div` (::mlir::shape::DivOp) ¶

Division of sizes and indices

Syntax:

```
operation ::= `shape.div` $lhs `,` $rhs attr-dict `:` type($lhs) `,` type($rhs) `->` type($result)
```

Divides two sizes or indices. If either operand is an error it will be propagated to the result. The operands can be of type `size` or `index`. If at least one of the operands can hold an error, i.e. if it is of type `size`, the result must be of type `size`. If error propagation is not possible because both operands are of type `index`, then the result may be of type `size` or `index`. If both operands and the result are of type `index`, their runtime values could be negative. The result is rounded toward negative infinity, i.e. floor(lhs / rhs), such that

```
div(lhs, rhs) * rhs + mod(lhs, rhs) = lhs
```

always holds. If any of the values is of type `size`, the behavior for negative values is undefined.
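For example, with `index` operands (informal notation, consistent with the identity above):

```
div( 7, 2) =  3   //  3 * 2 + mod( 7, 2) =  6 + 1 =  7
div(-7, 2) = -4   // -4 * 2 + mod(-7, 2) = -8 + 1 = -7
```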

#### Operands: ¶

Operand | Description |
---|---|
`lhs` | size or index |
`rhs` | size or index |

#### Results: ¶

Result | Description |
---|---|
`result` | size or index |

### `shape.from_extent_tensor` (::mlir::shape::FromExtentTensorOp) ¶

Creates a shape from a tensor of extents

Syntax:

```
operation ::= `shape.from_extent_tensor` $input attr-dict `:` type($input)
```

Creates a shape from a 1D integral tensor of extents. The rank of the resulting shape equals the number of elements in the tensor, and the extents match the values of the elements.
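A minimal usage sketch (the operand name is illustrative); a 3-element extent tensor produces a rank-3 shape:

```
%shape = shape.from_extent_tensor %extents : tensor<3xindex>
```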

#### Operands: ¶

Operand | Description |
---|---|
`input` | 1D tensor of index values |

#### Results: ¶

Result | Description |
---|---|
`result` | shape |

### `shape.from_extents` (::mlir::shape::FromExtentsOp) ¶

Creates a shape from extents

Syntax:

```
operation ::= `shape.from_extents` $extents attr-dict `:` type($extents)
```

Creates a shape from multiple SSA values representing the extents of the shape.

```
// Rank 2 shape.
%s0 = shape.from_extents %a, %b
// Rank 0 shape.
%s1 = shape.from_extents
```

#### Operands: ¶

Operand | Description |
---|---|
`extents` | size or index |

#### Results: ¶

Result | Description |
---|---|
`shape` | shape |

### `shape.function_library` (::mlir::shape::FunctionLibraryOp) ¶

Represents shape functions and corresponding ops

Represents a list of shape functions and the ops whose shape transfer functions they represent.

Example:

```
shape.function_library {
  func @same_result_shape(%arg: !shape.value_shape) -> !shape.shape {
    %0 = shape.shape_of %arg : !shape.value_shape -> !shape.shape
    return %0 : !shape.shape
  }
} mapping {
  std.atan = @same_result_shape
}
```

#### Attributes: ¶

Attribute | MLIR Type | Description |
---|---|---|
`mapping` | ::mlir::DictionaryAttr | dictionary of named attribute values |

### `shape.get_extent` (::mlir::shape::GetExtentOp) ¶

Gets the specified extent from a shape or extent tensor

Syntax:

```
operation ::= `shape.get_extent` $shape `,` $dim attr-dict `:` type($shape) `,` type($dim) `->` type($extent)
```

Gets the extent indexed by `dim` from the `shape` operand. If the shape is an error then it returns an invalid size.
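Following the syntax above, a sketch with illustrative names:

```
// Returns the extent at dimension %dim, e.g. get_extent([4,5,6], 1) -> 5.
%extent = shape.get_extent %shape, %dim : !shape.shape, !shape.size -> !shape.size
```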

#### Operands: ¶

Operand | Description |
---|---|
`shape` | shape or extent tensor |
`dim` | size or index |

#### Results: ¶

Result | Description |
---|---|
`extent` | size or index |

### `shape.index_to_size` (::mlir::shape::IndexToSizeOp) ¶

Converts a standard index to a shape size

Syntax:

```
operation ::= `shape.index_to_size` $arg attr-dict
```

Converts a standard index to a `shape.size`. This operation and its inverse, `size_to_index`, facilitate index conversion between the standard and the shape dialect.

The behavior is undefined for negative indices.
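A sketch of usage, assuming `%i` is an SSA value of type `index`:

```
%size = shape.index_to_size %i
```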

#### Operands: ¶

Operand | Description |
---|---|
`arg` | index |

#### Results: ¶

Result | Description |
---|---|
`result` | size |

### `shape.is_broadcastable` (::mlir::shape::IsBroadcastableOp) ¶

Determines if 2+ shapes can be successfully broadcasted

Syntax:

```
operation ::= `shape.is_broadcastable` $shapes attr-dict `:` type($shapes)
```

Given multiple input shapes or extent tensors, return a predicate specifying whether they are broadcastable. Broadcastability follows the same logic as documented for `shape.broadcast`.

Concretely, `shape.is_broadcastable` returning true implies that `shape.broadcast` will not give an error, and `shape.cstr_broadcastable` will not result in an assertion failure. Similarly, false implies an error or assertion failure.

Example:

```
%true = shape.is_broadcastable [2,2], [3,1,2]
%false = shape.is_broadcastable [2,2], [3,2]
```

#### Operands: ¶

Operand | Description |
---|---|
`shapes` | shape or extent tensor |

#### Results: ¶

Result | Description |
---|---|
`result` | 1-bit signless integer |

### `shape.join` (::mlir::shape::JoinOp) ¶

Returns the least general shape.shape of its operands

An operation that computes the least general shape of input operands. This effectively asserts that corresponding static dimensions are equal. The behavior is to match each element of the `shape.shape` and propagate the most restrictive information, returning an invalid shape if there are contradictory requirements. E.g., using pseudo code

```
shape.join([*], [*]) -> [*]
shape.join([*], [1, ?]) -> [1, ?]
shape.join([1, 2], [1, ?]) -> [1, 2]
shape.join([*], [1, 2]) -> [1, 2]
shape.join([], []) -> []
shape.join([], [*]) -> []
shape.join([], [?, ?]) -> [invalid]
shape.join([1, ?], [2, ?, ?]) -> [invalid]
```

`shape.join` also allows specifying an optional error string that may be used to return an error to the user upon mismatch of dimensions.

```
%c = shape.join %a, %b, error="<reason>" : !shape.shape
```

#### Attributes: ¶

Attribute | MLIR Type | Description |
---|---|---|
`error` | ::mlir::StringAttr | string attribute |

#### Operands: ¶

Operand | Description |
---|---|
`arg0` | shape or size |
`arg1` | shape or size |

#### Results: ¶

Result | Description |
---|---|
`result` | shape or size |

### `shape.mul` (::mlir::shape::MulOp) ¶

Multiplication of sizes and indices

Syntax:

```
operation ::= `shape.mul` $lhs `,` $rhs attr-dict `:` type($lhs) `,` type($rhs) `->` type($result)
```

Multiplies two sizes or indices. If either operand is an error it will be propagated to the result. The operands can be of type `size` or `index`. If at least one of the operands can hold an error, i.e. if it is of type `size`, the result must be of type `size`. If error propagation is not possible because both operands are of type `index`, then the result may be of type `size` or `index`.
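As a sketch of the typing rules (value names are illustrative):

```
// index * index may produce an index ...
%0 = shape.mul %i, %j : index, index -> index
// ... but any size operand forces a size result so errors can propagate.
%1 = shape.mul %s, %i : !shape.size, index -> !shape.size
```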

#### Operands: ¶

Operand | Description |
---|---|
`lhs` | size or index |
`rhs` | size or index |

#### Results: ¶

Result | Description |
---|---|
`result` | size or index |

### `shape.num_elements` (::mlir::shape::NumElementsOp) ¶

Returns the number of elements for a given shape

Syntax:

```
operation ::= `shape.num_elements` $shape attr-dict `:` type($shape) `->` type($result)
```

Returns the number of elements for a given shape, which is the product of its extents. If the argument is of type `shape` then the result will be of type `size` and potential errors will be propagated. Otherwise, if the argument is an extent tensor `tensor<?xindex>` then the result will be of type `index`.
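In informal notation, the product-of-extents behavior:

```
num_elements([2,3,4]) -> 24
num_elements([]) -> 1 // a scalar shape has exactly one element
```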

#### Operands: ¶

Operand | Description |
---|---|
`shape` | shape or extent tensor |

#### Results: ¶

Result | Description |
---|---|
`result` | size or index |

### `shape.rank` (::mlir::shape::RankOp) ¶

Gets the rank of a shape

Syntax:

```
operation ::= `shape.rank` $shape attr-dict `:` type($shape) `->` type($rank)
```

Returns the rank of the shape or extent tensor, i.e. the number of extents.
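A sketch with illustrative names; e.g. the rank of `[4,5,6]` is 3:

```
%rank = shape.rank %shape : !shape.shape -> !shape.size
```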

#### Operands: ¶

Operand | Description |
---|---|
`shape` | shape or extent tensor |

#### Results: ¶

Result | Description |
---|---|
`rank` | size or index |

### `shape.reduce` (::mlir::shape::ReduceOp) ¶

Returns an expression reduced over a shape or extent tensor

An operation that takes as input a shape or extent tensor, and a number of initial values. This operation has a region that is applied repeatedly for every extent of the input. Starting with the initial values, the individual extents are then aggregated as defined by the associated region.

Conceptually this op performs the following reduction:

```
res[] = init;
for (int i = 0; i < shape.rank(); i++) {
  res = reduce(i, shape[i], res[0], ..., res[n]);
}
```

Where `reduce` represents the attached region and the result of the reduce op is the last computed output of the reduce region. As an example, the number of elements can be computed as follows:

```
func @reduce(%shape : !shape.shape, %init : !shape.size) -> !shape.size {
  %num_elements = shape.reduce(%shape, %init) -> !shape.size {
    ^bb0(%index: index, %dim: !shape.size, %acc: !shape.size):
      %updated_acc = "shape.mul"(%acc, %dim) :
          (!shape.size, !shape.size) -> !shape.size
      shape.yield %updated_acc : !shape.size
  }
  return %num_elements : !shape.size
}
```

#### Operands: ¶

Operand | Description |
---|---|
`shape` | shape or extent tensor |
`initVals` | any type |

#### Results: ¶

Result | Description |
---|---|
`result` | any type |

### `shape.shape_eq` (::mlir::shape::ShapeEqOp) ¶

Returns whether the input shapes or extent tensors are equal

Syntax:

```
operation ::= `shape.shape_eq` $lhs `,` $rhs attr-dict `:` type($lhs) `,` type($rhs)
```

Takes two shape or extent tensor operands and determines whether they are equal. When extent tensors are compared to shapes they are regarded as their equivalent non-error shapes. Error shapes can be tested for equality like any other shape value, meaning that the error value is equal to itself.
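In informal notation:

```
%true = shape.shape_eq [1,2], [1,2]
%false = shape.shape_eq [1,2], [1,2,3]
```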

#### Operands: ¶

Operand | Description |
---|---|
`lhs` | shape or extent tensor |
`rhs` | shape or extent tensor |

#### Results: ¶

Result | Description |
---|---|
`result` | 1-bit signless integer |

### `shape.shape_of` (::mlir::shape::ShapeOfOp) ¶

Returns shape of a value or shaped type operand

Syntax:

```
operation ::= `shape.shape_of` $arg attr-dict `:` type($arg) `->` type($result)
```

The operation takes a value or a shaped operand as an argument and it returns a shape or extent tensor.
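A sketch of both operand kinds (names and types are illustrative):

```
// From a shaped value; a ranked operand permits an extent-tensor result.
%s0 = shape.shape_of %t : tensor<?x2xf32> -> tensor<2xindex>
// From a !shape.value_shape.
%s1 = shape.shape_of %vs : !shape.value_shape -> !shape.shape
```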

#### Operands: ¶

Operand | Description |
---|---|
`arg` | shaped of any type values or value shape |

#### Results: ¶

Result | Description |
---|---|
`result` | shape or extent tensor |

### `shape.size_to_index` (::mlir::shape::SizeToIndexOp) ¶

Casts between index types of the shape and standard dialect

Syntax:

```
operation ::= `shape.size_to_index` $arg attr-dict `:` type($arg)
```

Converts a `shape.size` to a standard index. This operation and its inverse, `index_to_size`, facilitate index conversion between the standard and the shape dialect. The behavior is undefined for unknown and invalid arguments.
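A sketch of usage, assuming `%size` is a known, valid `!shape.size`:

```
%i = shape.size_to_index %size : !shape.size
```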

#### Operands: ¶

Operand | Description |
---|---|
`arg` | size or index |

#### Results: ¶

Result | Description |
---|---|
`result` | index |

### `shape.split_at` (::mlir::shape::SplitAtOp) ¶

Splits a shape at a given index

Splits a shape at a given dimension `index`, returning two shapes. If `index` is negative, it is treated as indexing from the back of the shape. This negative-handling behavior is important when handling unranked shapes, where the positive index is not necessarily knowable due to a dynamic number of leading dimensions.

Examples:

- split_at([4,5,6], index=0) -> [], [4,5,6]
- split_at([4,5,6], index=1) -> [4], [5,6]
- split_at([4,5,6], index=2) -> [4,5], [6]
- split_at([4,5,6], index=3) -> [4,5,6], []
- split_at([4,5,6], index=4) -> error
- split_at([4,5,6], index=-1) -> [4,5], [6]
- split_at([4,5,6], index=-2) -> [4], [5,6]
- split_at([4,5,6], index=-3) -> [], [4,5,6]
- split_at([4,5,6], index=-4) -> error

Requires:

- `index` is in the range [-rank(operand), rank(operand)]

#### Operands: ¶

Operand | Description |
---|---|
`operand` | shape or extent tensor |
`index` | size or index |

#### Results: ¶

Result | Description |
---|---|
`head` | shape |
`tail` | shape |

### `shape.to_extent_tensor` (::mlir::shape::ToExtentTensorOp) ¶

Creates a dimension tensor from a shape

Syntax:

```
operation ::= `shape.to_extent_tensor` $input attr-dict `:` type($input) `->` type($result)
```

Converts a shape to a 1D integral tensor of extents. The number of elements in the tensor equals the rank of the shape, and the elements equal the extents of the shape.

If the shape represents an error, this op’s behavior is undefined.
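A sketch with illustrative names, assuming the rank is statically known to be 3:

```
%extents = shape.to_extent_tensor %shape : !shape.shape -> tensor<3xindex>
```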

#### Operands: ¶

Operand | Description |
---|---|
`input` | shape or extent tensor |

#### Results: ¶

Result | Description |
---|---|
`result` | tensor of index values |

### `shape.with_shape` (::mlir::shape::WithOp) ¶

Returns ValueShape with given shape

Syntax:

```
operation ::= `shape.with_shape` operands attr-dict `:` type($operand) `,` type($shape)
```

Returns a ValueShape with the shape updated to match the shape operand. That is, a new ValueShape tuple is created with value equal to `operand`'s value and shape equal to `shape`. If the ValueShape and the given `shape` are non-conformant, then the returned ValueShape will represent an error of this mismatch. Similarly, if either input is in an error state, then an error is propagated.

Usage: `%0 = shape.with_shape %1, %2 : tensor<…>, !shape.shape`

This is used, for example, where one combines shape function calculations and/or calls one shape function from another. E.g.,

```
func @shape_foobah(%a: !shape.value_shape,
                   %b: !shape.value_shape,
                   %c: !shape.value_shape) -> !shape.shape {
  %0 = call @shape_foo(%a, %b) :
      (!shape.value_shape, !shape.value_shape) -> !shape.shape
  %1 = shape.with_shape %b, %0 : !shape.value_shape, !shape.shape
  %2 = call @shape_bah(%c, %1) :
      (!shape.value_shape, !shape.value_shape) -> !shape.shape
  return %2 : !shape.shape
}
```

This op need not be a refinement of the shape. In non-error cases the input ValueShape's value and shape are conformant and so too for the output, but the result may be less specified than `operand`'s shape as `shape` is merely used to construct the new ValueShape. If join behavior is desired then a join op should be used.

#### Operands: ¶

Operand | Description |
---|---|
`operand` | shaped of any type values or value shape |
`shape` | shape |

#### Results: ¶

Result | Description |
---|---|
`result` | value shape |

### `shape.yield` (::mlir::shape::YieldOp) ¶

Returns the value to parent op

Syntax:

```
operation ::= `shape.yield` attr-dict ($operands^ `:` type($operands))?
```

#### Operands: ¶

Operand | Description |
---|---|
`operands` | any type |