mlir.dialects._shape_ops_gen

Attributes

_ods_ir

Classes

_Dialect

AddOp

Adds two sizes or indices. If either operand is an error it will be

AnyOp

This operation takes multiple input shapes or extent tensors and returns

AssumingAllOp

Used to simplify constraints as any single failing precondition is enough

AssumingOp

Executes the region assuming all witnesses are true.

AssumingYieldOp

This yield operation represents a return operation within the

BroadcastOp

Returns the broadcasted shape for input shapes or extent tensors. The rest

ConcatOp

Creates a shape whose dimensions consist of first the dimensions from lhs

ConstShapeOp

Creates a constant shape or extent tensor. The individual extents are given

ConstSizeOp

Creates a shape.size type representing the constant size given by value.

ConstWitnessOp

This operation represents a statically known witness result. This can be

CstrBroadcastableOp

Given input shapes or extent tensors, return a witness specifying if they

CstrEqOp

Given 1 or more input shapes, determine if all shapes are the exact same.

CstrRequireOp

Represents a runtime assertion that an i1 is true. It returns a

DebugPrintOp

Prints the input dim or shape and passes through input.

DimOp

Gets the extent indexed by dim from the shape of the value operand. If

DivOp

Divides two sizes or indices. If either operand is an error it will be

FromExtentTensorOp

Creates a shape from a 1D integral tensor of extents. The rank of the

FromExtentsOp

Creates a shape from multiple SSA values representing the extents of

FuncOp

An operation with a name containing a single SSACFG region which

FunctionLibraryOp

Represents a list of shape functions and the ops whose shape transfer

GetExtentOp

Gets the extent indexed by dim from the shape operand. If the shape is

IndexToSizeOp

Converts a standard index to a shape.size. This operation and its

IsBroadcastableOp

Given multiple input shapes or extent tensors, return a predicate

MaxOp

Computes the elementwise maximum of two sizes or shapes with equal ranks.

MeetOp

An operation that computes the least general shape or dim of input operands.

MinOp

Computes the elementwise minimum of two sizes or shapes with equal ranks.

MulOp

Multiplies two sizes or indices. If either operand is an error it will be

NumElementsOp

Returns the number of elements for a given shape which is the product of

RankOp

Returns the rank of the shape or extent tensor, i.e. the number of extents.

ReduceOp

An operation that takes as input a shape or extent tensor, and a number of

ReturnOp

The shape.return operation represents a return operation within a

ShapeEqOp

Takes one or more shape or extent tensor operands and determines whether

ShapeOfOp

The operation takes a value or a shaped operand as an argument and it

SizeToIndexOp

Converts a shape.size to a standard index. This operation and its

SplitAtOp

Splits a shape at a given dimension index, returning two shapes. If

ToExtentTensorOp

Converts a shape to a 1D integral tensor of extents. The number of elements

ValueAsShapeOp

The operations takes a ValueShape and returns a Shape corresponding to the

ValueOfOp

The operation takes !shape.value_shape, a.k.a. (value, shape) tuple as an

WithOp

Returns ValueShape with the shape updated to match the shape operand. That

YieldOp

Functions

add(→ _ods_ir)

any(→ _ods_ir)

assuming_all(→ _ods_ir)

assuming(→ Union[_ods_ir, _ods_ir, AssumingOp])

assuming_yield(→ AssumingYieldOp)

broadcast(→ _ods_ir)

concat(→ _ods_ir)

const_shape(→ _ods_ir)

const_size(→ _ods_ir)

const_witness(→ _ods_ir)

cstr_broadcastable(→ _ods_ir)

cstr_eq(→ _ods_ir)

cstr_require(→ _ods_ir)

debug_print(→ _ods_ir)

dim(→ _ods_ir)

div(→ _ods_ir)

from_extent_tensor(→ _ods_ir)

from_extents(→ _ods_ir)

func(→ FuncOp)

function_library(→ FunctionLibraryOp)

get_extent(→ _ods_ir)

index_to_size(→ _ods_ir)

is_broadcastable(→ _ods_ir)

max(→ _ods_ir)

meet(→ _ods_ir)

min(→ _ods_ir)

mul(→ _ods_ir)

num_elements(→ _ods_ir)

rank(→ _ods_ir)

reduce(→ Union[_ods_ir, _ods_ir, ReduceOp])

return_(→ ReturnOp)

shape_eq(→ _ods_ir)

shape_of(→ _ods_ir)

size_to_index(→ _ods_ir)

split_at(→ _ods_ir)

to_extent_tensor(→ _ods_ir)

value_as_shape(→ _ods_ir)

value_of(→ _ods_ir)

with_shape(→ _ods_ir)

yield_(→ YieldOp)

Module Contents

mlir.dialects._shape_ops_gen._ods_ir
class mlir.dialects._shape_ops_gen._Dialect(descriptor: object)

Bases: _ods_ir

DIALECT_NAMESPACE = 'shape'
class mlir.dialects._shape_ops_gen.AddOp(lhs, rhs, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Adds two sizes or indices. If either operand is an error it will be propagated to the result. The operands can be of type size or index. If at least one of the operands can hold an error, i.e. if it is of type size, the result must be of type size. If error propagation is not possible because both operands are of type index then the result may be of type size or index.

OPERATION_NAME = 'shape.add'
_ODS_REGIONS = (0, True)
lhs() _ods_ir
rhs() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.add(lhs, rhs, *, results=None, loc=None, ip=None) _ods_ir
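
As a minimal usage sketch of the const_size and add builders, assuming the public wrapper module mlir.dialects.shape re-exports these generated builders, that attribute parameters accept mlir.ir attribute objects, and that result types are inferred when the results keyword is omitted:

from mlir.ir import Context, InsertionPoint, IndexType, IntegerAttr, Location, Module
from mlir.dialects import shape

ctx = Context()
ctx.load_all_available_dialects()  # ensure the shape dialect is loaded
with ctx, Location.unknown():
    module = Module.create()
    with InsertionPoint(module.body):
        # Two constant !shape.size values; the `value` attribute is passed as
        # an index-typed IntegerAttr.
        a = shape.const_size(IntegerAttr.get(IndexType.get(), 10))
        b = shape.const_size(IntegerAttr.get(IndexType.get(), 2))
        # Adding two sizes yields a size; an error operand would propagate.
        total = shape.add(a, b)
    print(module)
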
class mlir.dialects._shape_ops_gen.AnyOp(result, inputs, *, loc=None, ip=None)

Bases: _ods_ir

This operation takes multiple input shapes or extent tensors and returns some combination of their dimensions. This can be best seen with examples below.

The result is undefined, but still side-effect free, in cases where the inputs have differing ranks or differ in extents of shared dimensions.

Example:

%s0 = shape.any [2,?], [?,3] // [2,3]
%s1 = shape.any [?,?], [1,2] // [1,2]
OPERATION_NAME = 'shape.any'
_ODS_REGIONS = (0, True)
inputs() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.any(result, inputs, *, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.AssumingAllOp(inputs, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Used to simplify constraints as any single failing precondition is enough to prevent execution.

“assuming” operations represent an execution order restriction to the compiler, information for dependent code to rely on (by assuming), and nothing else. They should not exist after a program is fully lowered and ready to execute.

Example:

%w0 = shape.cstr_broadcastable [2,2], [3,1,2] // Passing
%w1 = shape.cstr_broadcastable [2,2], [3,2] // Failure
%w2 = shape.cstr_eq [1,2], [1,2], [1,2] // Passing
%wf = shape.assuming_all %w0, %w1 // Failure
%wt = shape.assuming_all %w0, %w2 // Passing
OPERATION_NAME = 'shape.assuming_all'
_ODS_REGIONS = (0, True)
inputs() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.assuming_all(inputs, *, results=None, loc=None, ip=None) _ods_ir
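
A sketch of the witness workflow, under the same assumptions as the add example above (wrapper module mlir.dialects.shape, attribute objects for attribute parameters, inferred result types). Real code would usually obtain witnesses from cstr_broadcastable or cstr_eq rather than from const_witness:

from mlir.ir import BoolAttr, Context, InsertionPoint, Location, Module
from mlir.dialects import shape

ctx = Context()
ctx.load_all_available_dialects()
with ctx, Location.unknown():
    module = Module.create()
    with InsertionPoint(module.body):
        # Two statically known witnesses.
        w0 = shape.const_witness(BoolAttr.get(True))
        w1 = shape.const_witness(BoolAttr.get(True))
        # Conjunction of witnesses: fails at runtime if any input fails.
        w_all = shape.assuming_all([w0, w1])
    print(module)
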
class mlir.dialects._shape_ops_gen.AssumingOp(results_, witness, *, loc=None, ip=None)

Bases: _ods_ir

Executes the region assuming all witnesses are true.

“assuming” operations represent an execution order restriction to the compiler, information for dependent code to rely on (by assuming), and nothing else. They should not exist after a program is fully lowered and ready to execute.

OPERATION_NAME = 'shape.assuming'
_ODS_REGIONS = (1, True)
witness() _ods_ir
results_() _ods_ir
doRegion() _ods_ir
mlir.dialects._shape_ops_gen.assuming(results_, witness, *, loc=None, ip=None) _ods_ir | _ods_ir | AssumingOp
class mlir.dialects._shape_ops_gen.AssumingYieldOp(operands_, *, loc=None, ip=None)

Bases: _ods_ir

This yield operation represents a return operation within the shape.assuming operation region. The operation takes a variable number of operands and produces no results. The operand number and types must match the number and types of the parent shape.assuming results.

OPERATION_NAME = 'shape.assuming_yield'
_ODS_REGIONS = (0, True)
operands_() _ods_ir
mlir.dialects._shape_ops_gen.assuming_yield(operands_, *, loc=None, ip=None) AssumingYieldOp
class mlir.dialects._shape_ops_gen.BroadcastOp(result, shapes, *, error=None, loc=None, ip=None)

Bases: _ods_ir

Returns the broadcasted shape for input shapes or extent tensors. The rest of this description is simplified for the 2 input case but can be extended to more inputs. Both operands can be of type shape.shape or tensor<?xindex>. The result is of type shape.shape and, if both operands are tensors, may be of type tensor<?xindex>.

If the two operand shapes are of different rank the smaller one is padded with 1’s from the left. The resulting broadcasted shape is then defined as

result[i] = lhs[i] if lhs[i] == rhs[i]
          = lhs[i] if rhs[i] == 1
          = rhs[i] if lhs[i] == 1.

In case the resulting shape is undefined, i.e. if corresponding extents are different from each other but none is 1, the result is an error shape. Likewise error values are propagated if any of the operands holds an error value. If the result type is an extent tensor (and can therefore not hold the error value) the behavior may be undefined. The optional string attribute can be used to describe the error case.

OPERATION_NAME = 'shape.broadcast'
_ODS_REGIONS = (0, True)
shapes() _ods_ir
error() _ods_ir | None
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.broadcast(result, shapes, *, error=None, loc=None, ip=None) _ods_ir
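
The broadcasting rule above can be mirrored in plain Python to make the error case concrete. This is a semantic sketch only (static extents, no MLIR API); None models the error shape:

def broadcast_shapes(lhs, rhs):
    # Pad the shorter shape with 1's from the left.
    rank = max(len(lhs), len(rhs))
    lhs = [1] * (rank - len(lhs)) + list(lhs)
    rhs = [1] * (rank - len(rhs)) + list(rhs)
    result = []
    for l, r in zip(lhs, rhs):
        if l == r or r == 1:
            result.append(l)
        elif l == 1:
            result.append(r)
        else:
            return None  # contradictory extents: error shape
    return result

assert broadcast_shapes([2, 2], [3, 1, 2]) == [3, 2, 2]
assert broadcast_shapes([2, 2], [3, 2]) is None
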
class mlir.dialects._shape_ops_gen.ConcatOp(result, lhs, rhs, *, loc=None, ip=None)

Bases: _ods_ir

Creates a shape whose dimensions consist of first the dimensions from lhs followed by the dimensions of rhs.

Example:

concat([2,3], [4,5]) -> [2,3,4,5]
concat([], []) -> []
concat([], [4,5,6]) -> [4,5,6]

OPERATION_NAME = 'shape.concat'
_ODS_REGIONS = (0, True)
lhs() _ods_ir
rhs() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.concat(result, lhs, rhs, *, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.ConstShapeOp(shape, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Creates a constant shape or extent tensor. The individual extents are given as the shape attribute. The number of these values equals the shape’s rank.

%0 = shape.const_shape [] : !shape.shape
%1 = shape.const_shape [1, 2, 3] : !shape.shape
%2 = shape.const_shape [4, 5, 6] : tensor<3xindex>
OPERATION_NAME = 'shape.const_shape'
_ODS_REGIONS = (0, True)
shape() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.const_shape(shape, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.ConstSizeOp(value, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Creates a shape.size type representing the constant size given by value.

%x = shape.const_size 10
OPERATION_NAME = 'shape.const_size'
_ODS_REGIONS = (0, True)
value() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.const_size(value, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.ConstWitnessOp(passing, *, results=None, loc=None, ip=None)

Bases: _ods_ir

This operation represents a statically known witness result. This can often be used to canonicalize/fold constraint and assuming code that will always pass.

%0 = shape.const_shape [1,2,3]
%1 = shape.const_shape [1,2,3]
%w0 = shape.cstr_eq(%0, %1) // Can be folded to "const_witness true"
%w1 = shape.const_witness true
%w2 = shape.assuming_all(%w0, %w1) // Can be folded to "const_witness true"
OPERATION_NAME = 'shape.const_witness'
_ODS_REGIONS = (0, True)
passing() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.const_witness(passing, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.CstrBroadcastableOp(shapes, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Given input shapes or extent tensors, return a witness specifying if they are broadcastable. Broadcastability follows the same logic as documented for shape.broadcast.

“cstr” operations represent runtime assertions.

Example:

%w0 = shape.cstr_broadcastable [2,2], [3,1,2] // Passing
%w1 = shape.cstr_broadcastable [2,2], [3,2] // Failure
OPERATION_NAME = 'shape.cstr_broadcastable'
_ODS_REGIONS = (0, True)
shapes() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.cstr_broadcastable(shapes, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.CstrEqOp(shapes, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Given 1 or more input shapes, determine if all shapes are the exact same.

“cstr” operations represent runtime assertions.

Example:

%w0 = shape.cstr_eq [1,2], [1,2], [1,2] // Passing
%w1 = shape.cstr_eq [2,2], [1,2] // Failure
OPERATION_NAME = 'shape.cstr_eq'
_ODS_REGIONS = (0, True)
shapes() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.cstr_eq(shapes, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.CstrRequireOp(pred, msg, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Represents a runtime assertion that an i1 is true. It returns a !shape.witness to order this assertion.

For simplicity, prefer using other cstr_* ops if they are available for a given constraint.

Example:

%bool = ...
%w0 = shape.cstr_require %bool, "msg" // Passing if `%bool` is true.

Since this op can be used to express many different possible assertions (depending on whatever computation calculated pred), the msg should clarify the nature of the assertion for users.

OPERATION_NAME = 'shape.cstr_require'
_ODS_REGIONS = (0, True)
pred() _ods_ir
msg() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.cstr_require(pred, msg, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.DebugPrintOp(output, input, *, loc=None, ip=None)

Bases: _ods_ir

Prints the input dim or shape and passes through input.

Note: This is intended for testing and debugging only.

OPERATION_NAME = 'shape.debug_print'
_ODS_REGIONS = (0, True)
input() _ods_ir
output() _ods_ir
mlir.dialects._shape_ops_gen.debug_print(output, input, *, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.DimOp(value, index, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Gets the extent indexed by dim from the shape of the value operand. If the index is an error or out of bounds, it returns an invalid size if the return type carries error information; otherwise the behavior is undefined.

This is a convenience op that performs the equivalent of getting the extent of a shape (e.g., dim(x, i) == get_extent(shape_of(x), i)).

OPERATION_NAME = 'shape.dim'
_ODS_REGIONS = (0, True)
value() _ods_ir
index() _ods_ir
extent() _ods_ir
mlir.dialects._shape_ops_gen.dim(value, index, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.DivOp(lhs, rhs, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Divides two sizes or indices. If either operand is an error it will be propagated to the result. The operands can be of type size or index. If at least one of the operands can hold an error, i.e. if it is of type size, the result must be of type size. If error propagation is not possible because both operands are of type index then the result may be of type size or index. If both operands and result are of type index, their runtime values could be negative. The result is rounded toward negative infinity, i.e. floor(lhs / rhs), such that

div(lhs, rhs) * rhs + mod(lhs, rhs) = lhs

always holds. If any of the values is of type size, the behavior for negative values is undefined.

OPERATION_NAME = 'shape.div'
_ODS_REGIONS = (0, True)
lhs() _ods_ir
rhs() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.div(lhs, rhs, *, results=None, loc=None, ip=None) _ods_ir
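
A plain-Python illustration of the documented rounding rule: Python's // and % already round toward negative infinity, so the stated identity holds for mixed signs as well:

for lhs, rhs in [(7, 2), (-7, 2), (7, -2), (-7, -2)]:
    q, r = lhs // rhs, lhs % rhs  # floor division and the matching remainder
    assert q * rhs + r == lhs
    print(f"div({lhs}, {rhs}) = {q}, mod = {r}")
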
class mlir.dialects._shape_ops_gen.FromExtentTensorOp(input, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Creates a shape from a 1D integral tensor of extents. The rank of the resulting shape equals the number of elements in the tensor, and the extents match the values of the elements.

OPERATION_NAME = 'shape.from_extent_tensor'
_ODS_REGIONS = (0, True)
input() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.from_extent_tensor(input, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.FromExtentsOp(extents, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Creates a shape from multiple SSA values representing the extents of the shape.

// Rank 2 shape.
%s0 = shape.from_extents %a, %b
// Rank 0 shape.
%s1 = shape.from_extents
OPERATION_NAME = 'shape.from_extents'
_ODS_REGIONS = (0, True)
extents() _ods_ir
shape() _ods_ir
mlir.dialects._shape_ops_gen.from_extents(extents, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.FuncOp(sym_name, function_type, *, arg_attrs=None, res_attrs=None, sym_visibility=None, loc=None, ip=None)

Bases: _ods_ir

An operation with a name containing a single SSACFG region which represents a shape transfer function or a helper function for shape transfer functions.

OPERATION_NAME = 'shape.func'
_ODS_REGIONS = (1, True)
sym_name() _ods_ir
function_type() _ods_ir
arg_attrs() _ods_ir | None
res_attrs() _ods_ir | None
sym_visibility() _ods_ir | None
body() _ods_ir
mlir.dialects._shape_ops_gen.func(sym_name, function_type, *, arg_attrs=None, res_attrs=None, sym_visibility=None, loc=None, ip=None) FuncOp
class mlir.dialects._shape_ops_gen.FunctionLibraryOp(sym_name, mapping, *, sym_visibility=None, loc=None, ip=None)

Bases: _ods_ir

Represents a list of shape functions and the ops whose shape transfer functions they represent.

Example:

shape.function_library {
  func @same_result_shape(%arg: !shape.value_shape) -> !shape.shape {
    %0 = shape_of %arg : !shape.value_shape -> !shape.shape
    return %0 : !shape.shape
  }
} mapping {
  std.atan = @same_result_shape
}
OPERATION_NAME = 'shape.function_library'
_ODS_REGIONS = (1, True)
sym_name() _ods_ir
sym_visibility() _ods_ir | None
mapping() _ods_ir
body() _ods_ir
mlir.dialects._shape_ops_gen.function_library(sym_name, mapping, *, sym_visibility=None, loc=None, ip=None) FunctionLibraryOp
class mlir.dialects._shape_ops_gen.GetExtentOp(shape, dim, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Gets the extent indexed by dim from the shape operand. If the shape is an error then it returns an invalid size.

OPERATION_NAME = 'shape.get_extent'
_ODS_REGIONS = (0, True)
shape() _ods_ir
dim() _ods_ir
extent() _ods_ir
mlir.dialects._shape_ops_gen.get_extent(shape, dim, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.IndexToSizeOp(arg, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Converts a standard index to a shape.size. This operation and its inverse, size_to_index, facilitate index conversion between the standard and the shape dialect.

The behavior is undefined for negative indices.

OPERATION_NAME = 'shape.index_to_size'
_ODS_REGIONS = (0, True)
arg() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.index_to_size(arg, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.IsBroadcastableOp(shapes, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Given multiple input shapes or extent tensors, return a predicate specifying if they are broadcastable. Broadcastability follows the same logic as documented for shape.broadcast.

Concretely, shape.is_broadcastable returning true implies that shape.broadcast will not give an error, and shape.cstr_broadcastable will not result in an assertion failure. Similarly, false implies an error or assertion failure.

Example:

%true = shape.is_broadcastable [2,2], [3,1,2]
%false = shape.is_broadcastable [2,2], [3,2]
OPERATION_NAME = 'shape.is_broadcastable'
_ODS_REGIONS = (0, True)
shapes() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.is_broadcastable(shapes, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.MaxOp(lhs, rhs, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Computes the elementwise maximum of two sizes or shapes with equal ranks. If either operand is an error, then an error will be propagated to the result. If the input types mismatch or the ranks do not match, then the result is an error.

OPERATION_NAME = 'shape.max'
_ODS_REGIONS = (0, True)
lhs() _ods_ir
rhs() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.max(lhs, rhs, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.MeetOp(arg0, arg1, *, error=None, results=None, loc=None, ip=None)

Bases: _ods_ir

An operation that computes the least general shape or dim of input operands. This effectively asserts that corresponding static dimensions are equal. The behavior is to match each element of the shape/size and propagate the most restrictive information, returning an invalid shape if there are contradictory requirements. E.g., using pseudo code

shape.meet([*], [*]) -> [*]
shape.meet([*], [1, ?]) -> [1, ?]
shape.meet([1, 2], [1, ?]) -> [1, 2]
shape.meet([*], [1, 2]) -> [1, 2]
shape.meet([], []) -> []
shape.meet([], [*]) -> []
shape.meet([], [?, ?]) -> [invalid]
shape.meet([1, ?], [2, ?, ?]) -> [invalid]

shape.meet also allows specifying an optional error string that may be used to return an error to the user upon mismatch of dimensions.

%c = shape.meet %a, %b, error="<reason>" : !shape.shape, !shape.shape -> !shape.shape
OPERATION_NAME = 'shape.meet'
_ODS_REGIONS = (0, True)
arg0() _ods_ir
arg1() _ods_ir
error() _ods_ir | None
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.meet(arg0, arg1, *, error=None, results=None, loc=None, ip=None) _ods_ir
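
A plain-Python sketch of the per-dimension meet described above, for ranked shapes only (the unranked [*] case from the pseudo code is omitted). None stands for an unknown extent (?), and an exception models the invalid shape:

def meet_dim(a, b):
    if a is None:
        return b
    if b is None or a == b:
        return a
    raise ValueError("contradictory static extents")

def meet(lhs, rhs):
    if len(lhs) != len(rhs):
        raise ValueError("rank mismatch")
    return [meet_dim(a, b) for a, b in zip(lhs, rhs)]

assert meet([1, None], [1, 2]) == [1, 2]   # shape.meet([1, ?], [1, 2]) -> [1, 2]
assert meet([None, None], [1, 2]) == [1, 2]
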
class mlir.dialects._shape_ops_gen.MinOp(lhs, rhs, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Computes the elementwise minimum of two sizes or shapes with equal ranks. If either operand is an error, then an error will be propagated to the result. If the input types mismatch or the ranks do not match, then the result is an error.

OPERATION_NAME = 'shape.min'
_ODS_REGIONS = (0, True)
lhs() _ods_ir
rhs() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.min(lhs, rhs, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.MulOp(lhs, rhs, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Multiplies two sizes or indices. If either operand is an error it will be propagated to the result. The operands can be of type size or index. If at least one of the operands can hold an error, i.e. if it is of type size, the result must be of type size. If error propagation is not possible because both operands are of type index then the result may be of type size or index.

OPERATION_NAME = 'shape.mul'
_ODS_REGIONS = (0, True)
lhs() _ods_ir
rhs() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.mul(lhs, rhs, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.NumElementsOp(shape, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Returns the number of elements for a given shape, which is the product of its extents. If the argument is of type shape then the result will be of type size and potential errors will be propagated. Otherwise, if the argument is an extent tensor tensor<?xindex> then the result will be of type index.

OPERATION_NAME = 'shape.num_elements'
_ODS_REGIONS = (0, True)
shape() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.num_elements(shape, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.RankOp(shape, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Returns the rank of the shape or extent tensor, i.e. the number of extents.

OPERATION_NAME = 'shape.rank'
_ODS_REGIONS = (0, True)
shape() _ods_ir
rank() _ods_ir
mlir.dialects._shape_ops_gen.rank(shape, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.ReduceOp(result, shape, initVals, *, loc=None, ip=None)

Bases: _ods_ir

An operation that takes as input a shape or extent tensor, and a number of initial values. This operation has a region that is applied repeatedly for every extent of the input. Starting with the initial values, the individual extents are then aggregated as defined by the associated region.

Conceptually this op performs the following reduction:

res[] = init;
for (int i = 0; i < shape.rank(); i++) {
  res = reduce(i, shape[i], res[0], ..., res[n]);
}

Where reduce represents the region attached and the result of the reduce op is the last computed output of the reduce region. As an example, the number of elements can be computed as follows:

func.func @reduce(%shape : !shape.shape, %init : !shape.size) ->
    !shape.size {
  %num_elements = shape.reduce(%shape, %init) -> !shape.size  {
    ^bb0(%index: index, %dim: !shape.size, %acc: !shape.size):
      %updated_acc = "shape.mul"(%acc, %dim) :
        (!shape.size, !shape.size) -> !shape.size
      shape.yield %updated_acc : !shape.size
  }
  return %num_elements : !shape.size
}
OPERATION_NAME = 'shape.reduce'
_ODS_REGIONS = (1, True)
shape() _ods_ir
initVals() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

region() _ods_ir
mlir.dialects._shape_ops_gen.reduce(result, shape, init_vals, *, loc=None, ip=None) _ods_ir | _ods_ir | ReduceOp
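
A plain-Python rendering of the conceptual loop above, specialized to a single accumulator (the op itself allows several). It computes the number of elements of a shape by folding its extents through the region body:

def reduce_shape(extents, init, body):
    acc = init
    for i, dim in enumerate(extents):
        acc = body(i, dim, acc)
    return acc

# Matches the num_elements example: multiply all extents together.
assert reduce_shape([4, 5, 6], 1, lambda i, dim, acc: acc * dim) == 120
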
class mlir.dialects._shape_ops_gen.ReturnOp(operands_, *, loc=None, ip=None)

Bases: _ods_ir

The shape.return operation represents a return operation within a function. The operation takes a variable number of operands and produces no results.

OPERATION_NAME = 'shape.return'
_ODS_REGIONS = (0, True)
operands_() _ods_ir
mlir.dialects._shape_ops_gen.return_(operands_, *, loc=None, ip=None) ReturnOp
class mlir.dialects._shape_ops_gen.ShapeEqOp(shapes, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Takes one or more shape or extent tensor operands and determines whether they are equal. When extent tensors are compared to shapes they are regarded as their equivalent non-error shapes. Error shapes can be tested for equality like any other shape value, meaning that the error value is equal to itself.

OPERATION_NAME = 'shape.shape_eq'
_ODS_REGIONS = (0, True)
shapes() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.shape_eq(shapes, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.ShapeOfOp(arg, *, results=None, loc=None, ip=None)

Bases: _ods_ir

The operation takes a value or a shaped operand as an argument and it returns a shape or extent tensor.

OPERATION_NAME = 'shape.shape_of'
_ODS_REGIONS = (0, True)
arg() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.shape_of(arg, *, results=None, loc=None, ip=None) _ods_ir
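
A usage sketch under the same assumptions as the earlier builder examples (wrapper module mlir.dialects.shape, inferred result types); func.FuncOp.from_py_func builds the enclosing function. If the result type cannot be inferred for a given operand, it could instead be supplied via the results keyword:

from mlir.ir import Context, F32Type, InsertionPoint, Location, Module, RankedTensorType
from mlir.dialects import func, shape

ctx = Context()
ctx.load_all_available_dialects()
with ctx, Location.unknown():
    module = Module.create()
    with InsertionPoint(module.body):
        # A function that returns the extent tensor of its ranked tensor argument.
        @func.FuncOp.from_py_func(RankedTensorType.get([2, 3], F32Type.get()))
        def tensor_shape(t):
            return shape.shape_of(t)
    print(module)
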
class mlir.dialects._shape_ops_gen.SizeToIndexOp(arg, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Converts a shape.size to a standard index. This operation and its inverse, index_to_size, facilitate index conversion between the standard and the shape dialect. The behavior is undefined for unknown and invalid arguments.

OPERATION_NAME = 'shape.size_to_index'
_ODS_REGIONS = (0, True)
arg() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.size_to_index(arg, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.SplitAtOp(head, tail, operand, index, *, loc=None, ip=None)

Bases: _ods_ir

Splits a shape at a given dimension index, returning two shapes. If index is negative, it is treated as indexing from the back of the shape. This negative-handling behavior is important when handling unranked shapes, where the positive index is not necessarily knowable due to a dynamic number of leading dimensions. If the result is in extent tensor form, out-of-bounds indices result in undefined behavior.

Examples:

  • split_at([4,5,6], index=0) -> [], [4,5,6]

  • split_at([4,5,6], index=1) -> [4], [5,6]

  • split_at([4,5,6], index=2) -> [4,5], [6]

  • split_at([4,5,6], index=3) -> [4,5,6], []

  • split_at([4,5,6], index=4) -> error

  • split_at([4,5,6], index=-1) -> [4,5], [6]

  • split_at([4,5,6], index=-2) -> [4], [5,6]

  • split_at([4,5,6], index=-3) -> [], [4,5,6]

  • split_at([4,5,6], index=-4) -> error

Requires:

  • index is in the range [-rank(operand),rank(operand)]

OPERATION_NAME = 'shape.split_at'
_ODS_REGIONS = (0, True)
operand() _ods_ir
index() _ods_ir
head() _ods_ir
tail() _ods_ir
mlir.dialects._shape_ops_gen.split_at(head, tail, operand, index, *, loc=None, ip=None) _ods_ir
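
A plain-Python sketch of the splitting rules listed above, including the negative-index handling and the out-of-range error case:

def split_at(extents, index):
    rank = len(extents)
    if index < -rank or index > rank:
        raise ValueError("index out of range")  # models the op's error case
    if index < 0:
        index += rank
    return extents[:index], extents[index:]

assert split_at([4, 5, 6], 1) == ([4], [5, 6])
assert split_at([4, 5, 6], -1) == ([4, 5], [6])
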
class mlir.dialects._shape_ops_gen.ToExtentTensorOp(result, input, *, loc=None, ip=None)

Bases: _ods_ir

Converts a shape to a 1D integral tensor of extents. The number of elements in the tensor equals the rank of the shape, and the elements equal the extents of the shape.

If the shape represents an error, this op’s behavior is undefined.

OPERATION_NAME = 'shape.to_extent_tensor'
_ODS_REGIONS = (0, True)
input() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.to_extent_tensor(result, input, *, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.ValueAsShapeOp(result, arg, *, loc=None, ip=None)

Bases: _ods_ir

The operation takes a ValueShape and returns a Shape corresponding to the value. If the input value cannot be a shape (e.g., it is not a 1D tensor of integral values representing sizes) then this propagates the error shape. E.g.,

// The following
%0 = arith.constant dense<[1,2]> : tensor<2xi32>
%shape = shape.value_as_shape %0 : tensor<2xi32> -> !shape.shape
// is equivalent to
%shape' = shape.const_shape [1, 2] : !shape.shape

This operation is the complement of shape_of with respect to ValueShape values.

OPERATION_NAME = 'shape.value_as_shape'
_ODS_REGIONS = (0, True)
arg() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.value_as_shape(result, arg, *, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.ValueOfOp(result, arg, *, loc=None, ip=None)

Bases: _ods_ir

The operation takes !shape.value_shape, a.k.a. (value, shape) tuple as an argument, and returns its value. The behavior is undefined for unknown and invalid arguments.

OPERATION_NAME = 'shape.value_of'
_ODS_REGIONS = (0, True)
arg() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.value_of(result, arg, *, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.WithOp(operand, shape, *, results=None, loc=None, ip=None)

Bases: _ods_ir

Returns a ValueShape with the shape updated to match the shape operand. That is, a new ValueShape tuple is created with value equal to the operand's value and shape equal to shape. If the ValueShape and the given shape are non-conformant, then the returned ValueShape will represent an error of this mismatch. Similarly, if either input is in an error state, an error is propagated.

Usage: %0 = shape.with_shape %1, %2 : tensor<…>, !shape.shape

This is used, for example, where one combines shape function calculations and/or calls one shape function from another. E.g.,

func.func @shape_foobah(%a: !shape.value_shape,
                   %b: !shape.value_shape,
                   %c: !shape.value_shape) -> !shape.shape {
  %0 = call @shape_foo(%a, %b) :
    (!shape.value_shape, !shape.value_shape) -> !shape.shape
  %1 = shape.with_shape %b, %0 : !shape.value_shape, !shape.shape
  %2 = call @shape_bah(%c, %1) :
    (!shape.value_shape, !shape.value_shape) -> !shape.shape
  return %2 : !shape.shape
}

This op need not be a refinement of the shape. In non-error cases the input ValueShape's value and shape are conformant, and so too are the output's, but the result may be less specified than the operand's shape, as shape is merely used to construct the new ValueShape. If join behavior is desired then a join op should be used.

OPERATION_NAME = 'shape.with_shape'
_ODS_REGIONS = (0, True)
operand() _ods_ir
shape() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._shape_ops_gen.with_shape(operand, shape, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._shape_ops_gen.YieldOp(operands_, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'shape.yield'
_ODS_REGIONS = (0, True)
operands_() _ods_ir
mlir.dialects._shape_ops_gen.yield_(operands_, *, loc=None, ip=None) YieldOp