mlir.dialects._linalg_ops_gen

Attributes

Classes

_Dialect

AbsOp

No numeric casting is performed on the input operand.

AddOp

The shapes and element types must be identical. The appropriate casts,

BatchMatmulOp

Numeric casting is performed on the operands to the inner multiply, promoting

BatchMatvecOp

Numeric casting is performed on the operands to the inner multiply, promoting

BatchMmt4DOp

Besides the outermost batch dimension having the same semantics as

BatchReduceMatmulOp

Numeric casting is performed on the operands to the inner multiply,

BatchVecmatOp

Numeric casting is performed on the operands to the inner multiply, promoting

BroadcastOp

Broadcast the input into the given shape by adding dimensions.

CeilOp

No numeric casting is performed on the input operand.

ContractOp

The semantics of contracting inputs A and B on top of C to produce

Conv1DNcwFcwOp

Layout:

Conv1DNwcWcfOp

Numeric casting is performed on the operands to the inner multiply, promoting

Conv1DOp

Numeric casting is performed on the operands to the inner multiply, promoting

Conv2DNchwFchwOp

Layout:

Conv2DNchwFchwQOp

Layout:

Conv2DNgchwFgchwOp

Layout:

Conv2DNgchwGfchwOp

Layout:

Conv2DNgchwGfchwQOp

Layout:

Conv2DNhwcFhwcOp

Layout:

Conv2DNhwcFhwcQOp

Layout:

Conv2DNhwcHwcfOp

Layout:

Conv2DNhwcHwcfQOp

Layout:

Conv2DNhwgcGfhwcOp

Layout:

Conv2DNhwgcGfhwcQOp

Layout:

Conv2DOp

Numeric casting is performed on the operands to the inner multiply, promoting

Conv3DNcdhwFcdhwOp

Numeric casting is performed on the operands to the inner multiply, promoting

Conv3DNdhwcDhwcfOp

Numeric casting is performed on the operands to the inner multiply, promoting

Conv3DNdhwcDhwcfQOp

Numeric casting is performed on the operands to the inner multiply, promoting

Conv3DOp

Numeric casting is performed on the operands to the inner multiply, promoting

CopyOp

Numeric casting is performed on the input operand, promoting it to the same

DepthwiseConv1DNcwCwOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv1DNwcWcOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv1DNwcWcmOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv2DNchwChwOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv2DNhwcHwcOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv2DNhwcHwcQOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv2DNhwcHwcmOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv2DNhwcHwcmQOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv3DNcdhwCdhwOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv3DNdhwcDhwcOp

Numeric casting is performed on the operands to the inner multiply, promoting

DepthwiseConv3DNdhwcDhwcmOp

Numeric casting is performed on the operands to the inner multiply, promoting

DivOp

The shapes and element types must be identical. The appropriate casts,

DivUnsignedOp

The shapes and element types must be identical. The appropriate casts,

DotOp

Numeric casting is performed on the operands to the inner multiply, promoting

ElementwiseOp

The attribute kind describes the arithmetic operation to perform. The

ErfOp

No numeric casting is performed on the input operand.

ExpOp

No numeric casting is performed on the input operand.

FillOp

Works for arbitrary ranked output tensors since the operation performs scalar

FillRng2DOp

The operation generates pseudo-random numbers using a linear congruential

FloorOp

No numeric casting is performed on the input operand.

GenericOp

Generic Linalg op form where the key properties of the computation are

IndexOp

The linalg.index operation returns the iteration index of the immediately

PackOp

The "pack" operation converts a source tensor of rank n into a result

SoftmaxOp

linalg.softmax computes a numerically stable version of softmax.

UnPackOp

The "unpack" operation converts a source tensor of rank n with a tiled and

WinogradFilterTransformOp

Winograd Conv2D algorithm will convert linalg Conv2D operator into batched

WinogradInputTransformOp

Winograd Conv2D algorithm will convert linalg Conv2D operator into batched

WinogradOutputTransformOp

Winograd Conv2D algorithm will convert linalg Conv2D operator into batched

YieldOp

linalg.yield is a special terminator operation for blocks inside regions

LogOp

No numeric casting is performed on the input operand.

MapOp

Models elementwise operations on tensors in terms of arithmetic operations

MatmulOp

Numeric casting is performed on the operands to the inner multiply,

MatvecOp

Numeric casting is performed on the operands to the inner multiply, promoting

MaxOp

The shapes and element types must be identical. The appropriate casts,

MinOp

The shapes and element types must be identical. The appropriate casts,

Mmt4DOp

Differences from linalg.matmul:

MulOp

The shapes and element types must be identical. The appropriate casts,

NegFOp

No numeric casting is performed on the input operand.

PoolingNchwMaxOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNchwSumOp

Layout:

PoolingNcwMaxOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNcwSumOp

Layout:

PoolingNdhwcMaxOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNdhwcMinOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNdhwcSumOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNhwcMaxOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNhwcMaxUnsignedOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNhwcMinOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNhwcMinUnsignedOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNhwcSumOp

Layout:

PoolingNwcMaxOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNwcMaxUnsignedOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNwcMinOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNwcMinUnsignedOp

Numeric casting is performed on the input operand, promoting it to the same

PoolingNwcSumOp

Layout:

PowFOp

Only applies to floating point values.

QuantizedBatchMatmulOp

Numeric casting is performed on the operands to the inner multiply, promoting

QuantizedMatmulOp

Numeric casting is performed on the operands to the inner multiply, promoting

ReciprocalOp

No numeric casting is performed on the input operand.

ReduceOp

Executes combiner on the dimensions of inputs and returns the

RoundOp

No numeric casting is performed on the input operand.

RsqrtOp

No numeric casting is performed on the input operand.

SelectOp

The shapes and element types must be identical. The appropriate casts,

SqrtOp

No numeric casting is performed on the input operand.

SquareOp

No numeric casting is performed on the input operand.

SubOp

The shapes and element types must be identical. The appropriate casts,

TanhOp

No numeric casting is performed on the input operand.

TransposeOp

Permutes the dimensions of input according to the given permutation.

VecmatOp

Numeric casting is performed on the operands to the inner multiply, promoting

Functions

abs(→ Union[_ods_ir, _ods_ir, AbsOp])

add(→ Union[_ods_ir, _ods_ir, AddOp])

batch_matmul(→ Union[_ods_ir, _ods_ir, BatchMatmulOp])

batch_matvec(→ Union[_ods_ir, _ods_ir, BatchMatvecOp])

batch_mmt4d(→ Union[_ods_ir, _ods_ir, BatchMmt4DOp])

batch_reduce_matmul(→ Union[_ods_ir, _ods_ir, ...)

batch_vecmat(→ Union[_ods_ir, _ods_ir, BatchVecmatOp])

broadcast(→ Union[_ods_ir, _ods_ir, BroadcastOp])

ceil(→ Union[_ods_ir, _ods_ir, CeilOp])

contract(→ Union[_ods_ir, _ods_ir, ContractOp])

conv_1d_ncw_fcw(→ Union[_ods_ir, _ods_ir, Conv1DNcwFcwOp])

conv_1d_nwc_wcf(→ Union[_ods_ir, _ods_ir, Conv1DNwcWcfOp])

conv_1d(→ Union[_ods_ir, _ods_ir, Conv1DOp])

conv_2d_nchw_fchw(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_nchw_fchw_q(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_ngchw_fgchw(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_ngchw_gfchw(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_ngchw_gfchw_q(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_nhwc_fhwc(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_nhwc_fhwc_q(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_nhwc_hwcf(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_nhwc_hwcf_q(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_nhwgc_gfhwc(→ Union[_ods_ir, _ods_ir, ...)

conv_2d_nhwgc_gfhwc_q(→ Union[_ods_ir, _ods_ir, ...)

conv_2d(→ Union[_ods_ir, _ods_ir, Conv2DOp])

conv_3d_ncdhw_fcdhw(→ Union[_ods_ir, _ods_ir, ...)

conv_3d_ndhwc_dhwcf(→ Union[_ods_ir, _ods_ir, ...)

conv_3d_ndhwc_dhwcf_q(→ Union[_ods_ir, _ods_ir, ...)

conv_3d(→ Union[_ods_ir, _ods_ir, Conv3DOp])

copy(→ Union[_ods_ir, _ods_ir, CopyOp])

depthwise_conv_1d_ncw_cw(→ Union[_ods_ir, _ods_ir, ...)

depthwise_conv_1d_nwc_wc(→ Union[_ods_ir, _ods_ir, ...)

depthwise_conv_1d_nwc_wcm(→ Union[_ods_ir, _ods_ir, ...)

depthwise_conv_2d_nchw_chw(→ Union[_ods_ir, _ods_ir, ...)

depthwise_conv_2d_nhwc_hwc(→ Union[_ods_ir, _ods_ir, ...)

depthwise_conv_2d_nhwc_hwc_q(→ Union[_ods_ir, _ods_ir, ...)

depthwise_conv_2d_nhwc_hwcm(→ Union[_ods_ir, _ods_ir, ...)

depthwise_conv_2d_nhwc_hwcm_q(→ Union[_ods_ir, ...)

depthwise_conv_3d_ncdhw_cdhw(→ Union[_ods_ir, _ods_ir, ...)

depthwise_conv_3d_ndhwc_dhwc(→ Union[_ods_ir, _ods_ir, ...)

depthwise_conv_3d_ndhwc_dhwcm(→ Union[_ods_ir, ...)

div(→ Union[_ods_ir, _ods_ir, DivOp])

div_unsigned(→ Union[_ods_ir, _ods_ir, DivUnsignedOp])

dot(→ Union[_ods_ir, _ods_ir, DotOp])

elementwise(→ Union[_ods_ir, _ods_ir, ElementwiseOp])

erf(→ Union[_ods_ir, _ods_ir, ErfOp])

exp(→ Union[_ods_ir, _ods_ir, ExpOp])

fill(→ Union[_ods_ir, _ods_ir, FillOp])

fill_rng_2d(→ Union[_ods_ir, _ods_ir, FillRng2DOp])

floor(→ Union[_ods_ir, _ods_ir, FloorOp])

generic(→ Union[_ods_ir, _ods_ir, GenericOp])

index(→ _ods_ir)

pack(→ _ods_ir)

softmax(→ Union[_ods_ir, _ods_ir, SoftmaxOp])

unpack(→ _ods_ir)

winograd_filter_transform(→ _ods_ir)

winograd_input_transform(→ _ods_ir)

winograd_output_transform(→ _ods_ir)

yield_(→ YieldOp)

log(→ Union[_ods_ir, _ods_ir, LogOp])

map(→ Union[_ods_ir, _ods_ir, MapOp])

matmul(→ Union[_ods_ir, _ods_ir, MatmulOp])

matvec(→ Union[_ods_ir, _ods_ir, MatvecOp])

max(→ Union[_ods_ir, _ods_ir, MaxOp])

min(→ Union[_ods_ir, _ods_ir, MinOp])

mmt4d(→ Union[_ods_ir, _ods_ir, Mmt4DOp])

mul(→ Union[_ods_ir, _ods_ir, MulOp])

negf(→ Union[_ods_ir, _ods_ir, NegFOp])

pooling_nchw_max(→ Union[_ods_ir, _ods_ir, ...)

pooling_nchw_sum(→ Union[_ods_ir, _ods_ir, ...)

pooling_ncw_max(→ Union[_ods_ir, _ods_ir, PoolingNcwMaxOp])

pooling_ncw_sum(→ Union[_ods_ir, _ods_ir, PoolingNcwSumOp])

pooling_ndhwc_max(→ Union[_ods_ir, _ods_ir, ...)

pooling_ndhwc_min(→ Union[_ods_ir, _ods_ir, ...)

pooling_ndhwc_sum(→ Union[_ods_ir, _ods_ir, ...)

pooling_nhwc_max(→ Union[_ods_ir, _ods_ir, ...)

pooling_nhwc_max_unsigned(→ Union[_ods_ir, _ods_ir, ...)

pooling_nhwc_min(→ Union[_ods_ir, _ods_ir, ...)

pooling_nhwc_min_unsigned(→ Union[_ods_ir, _ods_ir, ...)

pooling_nhwc_sum(→ Union[_ods_ir, _ods_ir, ...)

pooling_nwc_max(→ Union[_ods_ir, _ods_ir, PoolingNwcMaxOp])

pooling_nwc_max_unsigned(→ Union[_ods_ir, _ods_ir, ...)

pooling_nwc_min(→ Union[_ods_ir, _ods_ir, PoolingNwcMinOp])

pooling_nwc_min_unsigned(→ Union[_ods_ir, _ods_ir, ...)

pooling_nwc_sum(→ Union[_ods_ir, _ods_ir, PoolingNwcSumOp])

powf(→ Union[_ods_ir, _ods_ir, PowFOp])

quantized_batch_matmul(→ Union[_ods_ir, _ods_ir, ...)

quantized_matmul(→ Union[_ods_ir, _ods_ir, ...)

reciprocal(→ Union[_ods_ir, _ods_ir, ReciprocalOp])

reduce(→ Union[_ods_ir, _ods_ir, ReduceOp])

round(→ Union[_ods_ir, _ods_ir, RoundOp])

rsqrt(→ Union[_ods_ir, _ods_ir, RsqrtOp])

select(→ Union[_ods_ir, _ods_ir, SelectOp])

sqrt(→ Union[_ods_ir, _ods_ir, SqrtOp])

square(→ Union[_ods_ir, _ods_ir, SquareOp])

sub(→ Union[_ods_ir, _ods_ir, SubOp])

tanh(→ Union[_ods_ir, _ods_ir, TanhOp])

transpose(→ Union[_ods_ir, _ods_ir, TransposeOp])

vecmat(→ Union[_ods_ir, _ods_ir, VecmatOp])

Module Contents

mlir.dialects._linalg_ops_gen._ods_ir
class mlir.dialects._linalg_ops_gen._Dialect(descriptor: object)

Bases: _ods_ir

DIALECT_NAMESPACE = 'linalg'
class mlir.dialects._linalg_ops_gen.AbsOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.abs'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.abs(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | AbsOp
class mlir.dialects._linalg_ops_gen.AddOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done before calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.add sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.add'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.add(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | AddOp
class mlir.dialects._linalg_ops_gen.BatchMatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Broadcast and Transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so must include maps for all arguments if specified.

Example Transpose:

linalg.batch_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>, // transpose
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%arg0, %arg1 : memref<2x5x3xf32>, memref<2x5x7xf32>)
    outs(%arg2: memref<2x3x7xf32>)

Example Broadcast:

linalg.batch_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (k)>,           // broadcast
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%arg0, %arg1 : memref<5xf32>, memref<2x5x7xf32>)
    outs(%arg2: memref<2x3x7xf32>)

Example Broadcast and Transpose:

linalg.batch_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (m, k)>,        // broadcast
                     affine_map<(batch, m, n, k) -> (batch, n, k)>, // transpose
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%arg0, %arg1 : memref<3x5xf32>, memref<2x7x5xf32>)
    outs(%arg2: memref<2x3x7xf32>)
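For the default case (no explicit indexing_maps, i.e. identity maps), the computation can be sketched as a plain-Python reference model. This is illustrative only and is not part of the generated bindings; `batch_matmul_ref` and the nested-list tensor representation are assumptions made for the sketch:

```python
def batch_matmul_ref(A, B, C):
    """Reference semantics of linalg.batch_matmul with identity maps:
    C[b, m, n] += sum over k of A[b, m, k] * B[b, k, n].
    Tensors are nested Python lists; C is the accumulator/output."""
    batch, M, K = len(A), len(A[0]), len(A[0][0])
    N = len(B[0][0])
    for b in range(batch):
        for m in range(M):
            for n in range(N):
                acc = C[b][m][n]
                for k in range(K):
                    acc += A[b][m][k] * B[b][k][n]
                C[b][m][n] = acc
    return C
```

Note that the output participates as an accumulator, matching the outs-operand semantics of the op.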
OPERATION_NAME = 'linalg.batch_matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir | None
cast() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.batch_matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | BatchMatmulOp
class mlir.dialects._linalg_ops_gen.BatchMatvecOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.batch_matvec'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.batch_matvec(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | BatchMatvecOp
class mlir.dialects._linalg_ops_gen.BatchMmt4DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Besides the outermost batch dimension having the same semantics as linalg.batch_matmul, the differences from linalg.batch_matmul in the non-batch dimensions are the same as the differences between linalg.mmt4d and linalg.matmul. See the description of linalg.mmt4d.

OPERATION_NAME = 'linalg.batch_mmt4d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.batch_mmt4d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | BatchMmt4DOp
class mlir.dialects._linalg_ops_gen.BatchReduceMatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Broadcast and Transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so must include maps for all arguments if specified.

Example Transpose:

linalg.batch_reduce_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>, // transpose
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<2x5x3xf32>,memref<2x5x7xf32>)
    outs(%arg2: memref<3x7xf32>)

Example Broadcast:

linalg.batch_reduce_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (k)>,         // broadcast
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<5xf32>, memref<2x5x7xf32>)
    outs(%arg2: memref<3x7xf32>)

Example Broadcast and Transpose:

linalg.batch_reduce_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (m, k)>,        // broadcast
                     affine_map<(batch, m, n, k) -> (batch, n, k)>, // transpose
                     affine_map<(batch, m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<3x5xf32>, memref<2x7x5xf32>)
    outs(%arg2: memref<3x7xf32>)
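Unlike linalg.batch_matmul, the batch dimension here is itself reduced into the 2-D output, as the (m, n)-shaped output maps above show. A plain-Python reference model (illustrative only, not part of the generated bindings; the function name and nested-list representation are assumptions) for the default identity-map case:

```python
def batch_reduce_matmul_ref(A, B, C):
    """Reference semantics of linalg.batch_reduce_matmul with identity maps:
    the batch dimension is a reduction dimension folded into C:
    C[m, n] += sum over b, k of A[b, m, k] * B[b, k, n]."""
    batch, M, K = len(A), len(A[0]), len(A[0][0])
    N = len(B[0][0])
    for b in range(batch):
        for m in range(M):
            for n in range(N):
                for k in range(K):
                    C[m][n] += A[b][m][k] * B[b][k][n]
    return C
```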
OPERATION_NAME = 'linalg.batch_reduce_matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir | None
cast() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.batch_reduce_matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | BatchReduceMatmulOp
class mlir.dialects._linalg_ops_gen.BatchVecmatOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.batch_vecmat'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.batch_vecmat(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | BatchVecmatOp
class mlir.dialects._linalg_ops_gen.BroadcastOp(result, input, init, dimensions, *, loc=None, ip=None)

Bases: _ods_ir

Broadcast the input into the given shape by adding dimensions.

Example:

%bcast = linalg.broadcast
    ins(%input:tensor<16xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]
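The example above can be modeled in plain Python. This sketch (illustrative only; the dict-of-index-tuples tensor representation and the helper name are assumptions) shows how each result index maps back to a source index by dropping the added dimensions:

```python
import itertools

def broadcast_ref(src, out_shape, dimensions):
    """Reference semantics of linalg.broadcast: tensors are dicts mapping
    index tuples to values. Each result index tuple maps to the source
    index obtained by dropping the positions listed in `dimensions`."""
    dims = set(dimensions)
    out = {}
    for idx in itertools.product(*(range(d) for d in out_shape)):
        src_idx = tuple(i for pos, i in enumerate(idx) if pos not in dims)
        out[idx] = src[src_idx]
    return out
```

For the example, broadcasting a tensor<16xf32> into tensor<16x64xf32> with dimensions = [1] replicates each source element across the 64 entries of the added dimension.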
OPERATION_NAME = 'linalg.broadcast'
_ODS_REGIONS = (1, True)
input() _ods_ir
init() _ods_ir
dimensions() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

region() _ods_ir
mlir.dialects._linalg_ops_gen.broadcast(result, input, init, dimensions, *, loc=None, ip=None) _ods_ir | _ods_ir | BroadcastOp
class mlir.dialects._linalg_ops_gen.CeilOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.ceil'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.ceil(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | CeilOp
class mlir.dialects._linalg_ops_gen.ContractOp(result_tensors, inputs, outputs, indexing_maps, *, cast=None, loc=None, ip=None)

Bases: _ods_ir

The semantics of contracting inputs A and B on top of C to produce output D is given by

D[H] = (SUM_{(I ∪ J) \ H} A[I] * B[J]) + C[H]

where I, J, and H are tuples of (pairwise distinct) dimension identifiers - meant to range over valid indices - corresponding to the results of the mandatory (projected permutation) indexing_maps for A, B and C. SUM_{dims} means reduce over all valid indices for the dimensions in the set dims (with I, J, and H treated as sets of dim identifiers).

The iteration space consists of all dimensions in I, J and H, i.e. the domain of each of the affine_maps. Like for einsums, the iteration type of each dim is inferred and is either:

  • reduction: the dim is used to index into A and B but not C. Per the above semantics, these dims will be contracted, i.e. reduced over.

  • parallel: the dim is used to index into C and at least one of A and B and - deriving from matmul terminology - is either an "M-like" dim (if used on A and C), an "N-like" dim (if used on B and C) or a "batch"-dim (if used to index into A, B, and C).

For example, batch-matmul is given by I = (batch, m, k), J = (batch, k, n), H = (batch, m, n) (with k as a contracting reduction-dimension while m, n and batch have parallel iteration-type) and gets represented as:

%D = linalg.contract
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, m, k)>,
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%A, %B: tensor<?x?x?xf32>, tensor<?x?x?xf32>)
    outs(%C: tensor<?x?x?xf32>) -> tensor<?x?x?xf32>

Note that by permuting dims in the affine_maps' results, accesses to the inputs and output can be arbitrarily transposed. Similarly, arbitrary broadcasts can be achieved through leaving out dims on either input operand. For example, the following is a variant of batch-matmul with a transposition applied to A while B's 2D-matrix gets broadcast along the batch dim:

linalg.contract
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>,
                     affine_map<(batch, m, n, k) -> (k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%A, %B: memref<?x?x?xf32>, memref<?x?xf32>)
    outs(%C: memref<?x?x?xf32>)

Numeric casting is performed on the operands to the inner multiplication, promoting/truncating them to the same data type as the accumulator/output.

TODO: Allow control over the combining/accumulating op and possibly the multiplication op.
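The formula D[H] = (SUM over (I ∪ J) setminus H of A[I] * B[J]) + C[H] can be sketched directly as a plain-Python reference model. Everything here (function name, dict-of-index-tuples tensors, the `dom_sizes` mapping from dimension name to extent) is an assumption made for illustration; it is not the bindings API:

```python
import itertools

def contract_ref(A, B, C, map_a, map_b, map_c, dom_sizes):
    """Reference semantics of linalg.contract.
    A, B, C are dicts from index tuples to values. map_a/map_b/map_c
    list, per operand, which iteration-space dims (by name) index it,
    playing the role of the projected-permutation indexing_maps."""
    order = sorted(dom_sizes)  # fixed traversal order of the domain dims
    D = dict(C)
    for point in itertools.product(*(range(dom_sizes[d]) for d in order)):
        env = dict(zip(order, point))
        ia = tuple(env[d] for d in map_a)
        ib = tuple(env[d] for d in map_b)
        ic = tuple(env[d] for d in map_c)
        # dims absent from map_c are reduced over by this accumulation
        D[ic] = D.get(ic, 0) + A[ia] * B[ib]
    return D
```

With map_a = ('b', 'm', 'k'), map_b = ('b', 'k', 'n'), map_c = ('b', 'm', 'n') this reproduces batch-matmul, mirroring the first example above.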

OPERATION_NAME = 'linalg.contract'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir
cast() _ods_ir | None
result_tensors() _ods_ir
combiner() _ods_ir
mlir.dialects._linalg_ops_gen.contract(result_tensors, inputs, outputs, indexing_maps, *, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | ContractOp
class mlir.dialects._linalg_ops_gen.Conv1DNcwFcwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCW.

  • Kernel: FCW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_1d_ncw_fcw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_1d_ncw_fcw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv1DNcwFcwOp
class mlir.dialects._linalg_ops_gen.Conv1DNwcWcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_1d_nwc_wcf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_1d_nwc_wcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv1DNwcWcfOp
class mlir.dialects._linalg_ops_gen.Conv1DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_1d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_1d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Conv1DOp
class mlir.dialects._linalg_ops_gen.Conv2DNchwFchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCHW.

  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_nchw_fchw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_nchw_fchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNchwFchwOp
class mlir.dialects._linalg_ops_gen.Conv2DNchwFchwQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCHW.

  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_nchw_fchw_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_nchw_fchw_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNchwFchwQOp
class mlir.dialects._linalg_ops_gen.Conv2DNgchwFgchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NGCHW.

  • Kernel: FGCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_ngchw_fgchw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_ngchw_fgchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNgchwFgchwOp
class mlir.dialects._linalg_ops_gen.Conv2DNgchwGfchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NGCHW.

  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_ngchw_gfchw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_ngchw_gfchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNgchwGfchwOp
class mlir.dialects._linalg_ops_gen.Conv2DNgchwGfchwQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NGCHW.

  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_ngchw_gfchw_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_ngchw_gfchw_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNgchwGfchwQOp
class mlir.dialects._linalg_ops_gen.Conv2DNhwcFhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_nhwc_fhwc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_nhwc_fhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcFhwcOp
class mlir.dialects._linalg_ops_gen.Conv2DNhwcFhwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_nhwc_fhwc_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_nhwc_fhwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcFhwcQOp
class mlir.dialects._linalg_ops_gen.Conv2DNhwcHwcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_nhwc_hwcf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_nhwc_hwcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcHwcfOp
class mlir.dialects._linalg_ops_gen.Conv2DNhwcHwcfQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_nhwc_hwcf_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_nhwc_hwcf_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcHwcfQOp
class mlir.dialects._linalg_ops_gen.Conv2DNhwgcGfhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWGC.

  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_nhwgc_gfhwc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_nhwgc_gfhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwgcGfhwcOp
class mlir.dialects._linalg_ops_gen.Conv2DNhwgcGfhwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWGC.

  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_nhwgc_gfhwc_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d_nhwgc_gfhwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwgcGfhwcQOp
class mlir.dialects._linalg_ops_gen.Conv2DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_2d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DOp
class mlir.dialects._linalg_ops_gen.Conv3DNcdhwFcdhwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_3d_ncdhw_fcdhw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_3d_ncdhw_fcdhw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DNcdhwFcdhwOp
class mlir.dialects._linalg_ops_gen.Conv3DNdhwcDhwcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_3d_ndhwc_dhwcf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_3d_ndhwc_dhwcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DNdhwcDhwcfOp
class mlir.dialects._linalg_ops_gen.Conv3DNdhwcDhwcfQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_3d_ndhwc_dhwcf_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_3d_ndhwc_dhwcf_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DNdhwcDhwcfQOp
class mlir.dialects._linalg_ops_gen.Conv3DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_3d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.conv_3d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DOp
class mlir.dialects._linalg_ops_gen.CopyOp(result_tensors, inputs, outputs, *, cast=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.copy'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
cast() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.copy(result_tensors, inputs, outputs, *, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | CopyOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv1DNcwCwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_1d_ncw_cw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_1d_ncw_cw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv1DNcwCwOp
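The multiplier-1 semantics can be illustrated with a pure-Python reference for depthwise 1-D convolution: each channel is convolved with its own 1-D kernel and channels never mix. This is an illustrative sketch, not the bindings' API; batch handling is omitted.

```python
def depthwise_conv1d(x, k, stride=1, dilation=1):
    """x: [C][W] input, k: [C][Kw] kernel -> [C][Wo] output (multiplier = 1)."""
    C, W, Kw = len(x), len(x[0]), len(k[0])
    Wo = (W - dilation * (Kw - 1) - 1) // stride + 1
    out = [[0] * Wo for _ in range(C)]
    for c in range(C):                      # one kernel per channel
        for ow in range(Wo):
            for kw in range(Kw):
                out[c][ow] += x[c][ow * stride + kw * dilation] * k[c][kw]
    return out

print(depthwise_conv1d([[1, 2, 3, 4]], [[1, 1]]))  # [[3, 5, 7]]
```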
class mlir.dialects._linalg_ops_gen.DepthwiseConv1DNwcWcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_1d_nwc_wc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_1d_nwc_wc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv1DNwcWcOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv1DNwcWcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_1d_nwc_wcm'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_1d_nwc_wcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv1DNwcWcmOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNchwChwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nchw_chw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nchw_chw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNchwChwOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNhwcHwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nhwc_hwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNhwcHwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwc_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nhwc_hwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcQOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNhwcHwcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwcm'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nhwc_hwcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcmOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNhwcHwcmQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwcm_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nhwc_hwcm_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcmQOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv3DNcdhwCdhwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_3d_ncdhw_cdhw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_3d_ncdhw_cdhw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv3DNcdhwCdhwOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv3DNdhwcDhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_3d_ndhwc_dhwc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_3d_ndhwc_dhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv3DNdhwcDhwcOp
class mlir.dialects._linalg_ops_gen.DepthwiseConv3DNdhwcDhwcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_3d_ndhwc_dhwcm'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.depthwise_conv_3d_ndhwc_dhwcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv3DNdhwcDhwcmOp
class mlir.dialects._linalg_ops_gen.DivOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element-cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.div'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.div(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | DivOp
class mlir.dialects._linalg_ops_gen.DivUnsignedOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element-cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.div_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.div_unsigned(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | DivUnsignedOp
class mlir.dialects._linalg_ops_gen.DotOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.dot'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.dot(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | DotOp
class mlir.dialects._linalg_ops_gen.ElementwiseOp(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None)

Bases: _ods_ir

The kind attribute describes the arithmetic operation to perform. The operation kind can be unary (e.g. max), binary (e.g. add) or ternary (e.g. select).

By default, all indexing maps are identities. With default indexing maps, all input and output shapes must match. The number of dims in each of the identity maps is equal to the rank of the output type.

Affine-maps for operands and result are required to be provided by the user when a transpose and/or broadcast is needed on any operand. When a map is not provided, default identity maps are inferred for each operand.

Iterator-types are always all parallel. Iterator-types are needed for constructing the underlying structured op.

The number of dims of the iterator-types is inferred from the rank of the result type.

Example:

Defining a unary linalg.elementwise with default indexing-map:

%exp = linalg.elementwise
    kind=#linalg.elementwise_kind<exp>
    ins(%x : tensor<4x16x8xf32>)
    outs(%y: tensor<4x16x8xf32>) -> tensor<4x16x8xf32>

Defining a binary linalg.elementwise with user-defined indexing-map:

%add = linalg.elementwise
    kind=#linalg.elementwise_kind<add>
    indexing_maps = [#transpose, #broadcast, #identity]
    ins(%exp, %arg1 : tensor<4x16x8xf32>, tensor<4x16xf32>)
    outs(%arg2: tensor<4x8x16xf32>) -> tensor<4x8x16xf32>
OPERATION_NAME = 'linalg.elementwise'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
kind() _ods_ir
indexing_maps() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.elementwise(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None) _ods_ir | _ods_ir | ElementwiseOp
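The binary example above can be modeled in plain Python: the output iteration space is (d0, d1, d2), operand A is read through the transpose map (d0, d2, d1), operand B through the broadcast map (d0, d2), and the output through the identity map. Illustrative only; the real op constructs a structured linalg operation.

```python
def elementwise_add(A, B):
    """A: [D0][D2][D1] (transposed operand), B: [D0][D2] (broadcast operand)
    -> out: [D0][D1][D2], mirroring indexing_maps = [#transpose, #broadcast, #identity]."""
    D0, D1, D2 = len(A), len(A[0][0]), len(A[0])
    out = [[[0] * D2 for _ in range(D1)] for _ in range(D0)]
    for d0 in range(D0):
        for d1 in range(D1):
            for d2 in range(D2):
                # A indexed by (d0, d2, d1); B broadcast along d1.
                out[d0][d1][d2] = A[d0][d2][d1] + B[d0][d2]
    return out
```

For example, `elementwise_add([[[10], [20]]], [[1, 2]])` transposes the 1x2x1 operand and broadcasts the 1x2 operand into a 1x1x2 result.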
class mlir.dialects._linalg_ops_gen.ErfOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.erf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.erf(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | ErfOp
class mlir.dialects._linalg_ops_gen.ExpOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.exp'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.exp(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | ExpOp
class mlir.dialects._linalg_ops_gen.FillOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Works for arbitrary ranked output tensors since the operation performs scalar accesses only and is thus rank polymorphic. Numeric casting is performed on the value operand, promoting it to the same data type as the output.

OPERATION_NAME = 'linalg.fill'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.fill(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | FillOp
class mlir.dialects._linalg_ops_gen.FillRng2DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The operation generates pseudo-random numbers using a linear congruential generator. It provides no guarantees regarding the distribution of the generated random numbers. Instead of generating the random numbers sequentially, it instantiates one random number generator per data element and runs them in parallel. The seed operand and the indices of the data element seed the random number generation. The min and max operands limit the range of the generated random numbers.

OPERATION_NAME = 'linalg.fill_rng_2d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.fill_rng_2d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | FillRng2DOp
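The idea behind this op can be sketched as follows: one linear congruential generator per element, seeded from the global seed and the element's indices, so every element can be produced in parallel. The constants and seed mixing below are illustrative; like the op itself, this makes no distribution guarantees.

```python
def fill_rng_2d(rows, cols, seed, lo, hi):
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            state = (seed + i * cols + j) & 0xFFFFFFFF          # per-element seed
            state = (1103515245 * state + 12345) & 0xFFFFFFFF   # one LCG step
            out[i][j] = lo + (hi - lo) * (state / 2**32)        # scale into [lo, hi)
    return out

vals = fill_rng_2d(2, 3, seed=42, lo=-1.0, hi=1.0)
assert all(-1.0 <= v < 1.0 for row in vals for v in row)
```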
class mlir.dialects._linalg_ops_gen.FloorOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.floor'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.floor(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | FloorOp
class mlir.dialects._linalg_ops_gen.GenericOp(result_tensors, inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None)

Bases: _ods_ir

Generic Linalg op form where the key properties of the computation are specified as attributes. In pretty form, a linalg.generic op is written as:

linalg.generic #trait_attribute
    ins(%A, %B : memref<?x?xf32, stride_specification>,
                 memref<?x?xf32, stride_specification>)
    outs(%C : memref<?x?xf32, stride_specification>)
    attrs = {other-optional-attributes}
    {region}

Where #trait_attributes is an alias of a dictionary attribute containing:

  • doc [optional]: a documentation string

  • indexing_maps: a list of AffineMapAttr, one AffineMapAttr per each input and output view. Such AffineMapAttr specifies the mapping between the loops and the indexing within each view.

  • library_call [optional]: a StringAttr containing the name of an external library function that the linalg.generic operation maps to. The external library is assumed to be dynamically linked and no strong compile-time guarantees are provided. In the absence of such a library call, linalg.generic will always lower to loops.

  • iterator_types: an ArrayAttr specifying the type of the enclosing loops. Each element of the list represents an iterator of one of the following types: parallel, reduction, window.

Example: Defining a #matmul_trait attribute in MLIR can be done as follows:

#matmul_accesses = [
  (m, n, k) -> (m, k),
  (m, n, k) -> (k, n),
  (m, n, k) -> (m, n)
]
#matmul_trait = {
  doc = "C(m, n) += A(m, k) * B(k, n)",
  indexing_maps = #matmul_accesses,
  library_call = "linalg_matmul",
  iterator_types = ["parallel", "parallel", "reduction"]
}

And can be reused in multiple places as:

linalg.generic #matmul_trait
  ins(%A, %B : memref<?x?xf32, stride_specification>,
               memref<?x?xf32, stride_specification>)
  outs(%C : memref<?x?xf32, stride_specification>)
  {other-optional-attributes} {
  ^bb0(%a: f32, %b: f32, %c: f32) :
    %d = arith.mulf %a, %b: f32
    %e = arith.addf %c, %d: f32
    linalg.yield %e : f32
}

This may lower to either:

call @linalg_matmul(%A, %B, %C) :
  (memref<?x?xf32, stride_specification>,
   memref<?x?xf32, stride_specification>,
   memref<?x?xf32, stride_specification>)
  -> ()

or IR resembling:

scf.for %m = %c0 to %M step %c1 {
  scf.for %n = %c0 to %N step %c1 {
    scf.for %k = %c0 to %K step %c1 {
      %a = load %A[%m, %k] : memref<?x?xf32, stride_specification>
      %b = load %B[%k, %n] : memref<?x?xf32, stride_specification>
      %c = load %C[%m, %n] : memref<?x?xf32, stride_specification>
      %d = arith.mulf %a, %b: f32
      %e = arith.addf %c, %d: f32
      store %e, %C[%m, %n] : memref<?x?xf32, stride_specification>
    }
  }
}
OPERATION_NAME = 'linalg.generic'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir
iterator_types() _ods_ir
doc() _ods_ir | None
library_call() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.generic(result_tensors, inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None) _ods_ir | _ods_ir | GenericOp
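The scf.for lowering of the matmul trait above can be mirrored in Python: the three iterator types ("parallel", "parallel", "reduction") become three nested loops, and the indexing maps select which loop indices address each operand. A sketch, not generated by the bindings.

```python
def matmul_generic(A, B, C):
    """C(m, n) += A(m, k) * B(k, n), mirroring the loop lowering of #matmul_trait."""
    M, K, N = len(A), len(A[0]), len(B[0])
    for m in range(M):            # parallel
        for n in range(N):        # parallel
            for k in range(K):    # reduction
                # Region body: arith.mulf, arith.addf, linalg.yield.
                C[m][n] += A[m][k] * B[k][n]
    return C

print(matmul_generic([[1, 2]], [[3], [4]], [[0]]))  # [[11]]
```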
class mlir.dialects._linalg_ops_gen.IndexOp(dim, *, results=None, loc=None, ip=None)

Bases: _ods_ir

The linalg.index operation returns the iteration index of the immediately enclosing linalg structured operation for the iteration dimension dim. The dim attribute specifies the position of the accessed dimension in the indexing map domain.

Example:

#map = affine_map<(i, j) -> (i, j)>
linalg.generic {indexing_maps = [#map, #map],
                iterator_types = ["parallel", "parallel"]}
  outs(%I, %J : memref<?x?xindex>, memref<?x?xindex>) {
  ^bb0(%arg0 : index, %arg1 : index):
  // Access the outer iteration dimension i
  %i = linalg.index 0 : index
  // Access the inner iteration dimension j
  %j = linalg.index 1 : index
  linalg.yield %i, %j : index, index
}

This may lower to IR resembling:

%0 = dim %I, %c0 : memref<?x?xindex>
%1 = dim %I, %c1 : memref<?x?xindex>
scf.for %i = %c0 to %0 step %c1 {
  scf.for %j = %c0 to %1 step %c1 {
    store %i, %I[%i, %j] : memref<?x?xindex>
    store %j, %J[%i, %j] : memref<?x?xindex>
  }
}
OPERATION_NAME = 'linalg.index'
_ODS_REGIONS = (0, True)
dim() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._linalg_ops_gen.index(dim, *, results=None, loc=None, ip=None) _ods_ir
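The lowering shown above can be mirrored in Python: every element of I receives its outer iteration index and every element of J its inner one.

```python
def index_fill(rows, cols):
    """Mirror of the linalg.index example: I[i][j] = i, J[i][j] = j."""
    I = [[i for _ in range(cols)] for i in range(rows)]   # linalg.index 0
    J = [[j for j in range(cols)] for _ in range(rows)]   # linalg.index 1
    return I, J

print(index_fill(2, 3))  # ([[0, 0, 0], [1, 1, 1]], [[0, 1, 2], [0, 1, 2]])
```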
class mlir.dialects._linalg_ops_gen.PackOp(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, padding_value=None, outer_dims_perm=None, results=None, loc=None, ip=None)

Bases: _ods_ir

The “pack” operation converts a source tensor of rank n into a result tensor of rank n + k with a tiled and packed layout (maybe with padding) and optionally transposes the tiled source tensor dimensions.

inner_tiles (mandatory) specifies k tile sizes. These tile sizes correspond to the least significant (“inner”) result tensor dimension sizes, in the same order. Tile sizes can be static or dynamic.

inner_dims_pos (mandatory) specifies k source tensor dimensions that are being tiled, where 0 <= k <= n.

  • inner_dims_pos[i] specifies the source tensor dimension tiled by inner_tiles[i] where 0 <= i < k. All the values in inner_dims_pos are within [0, n).

  • The tiled dimensions (of size inner_tiles) are added to the end of the result tensor in the order in which they appear, i.e. shape(result)[rank(source) + i] = inner_tiles[i] for 0 <= i < k.

  • The following relationship for the tiled dimensions holds: shape(result)[inner_dims_pos[i]] = shape(source)[inner_dims_pos[i]] / inner_tiles[i], where ⌈/⌉ indicates CeilDiv.

Example: If inner_tiles = [16, 32], the result tensor has a shape of ...x16x32. If inner_dims_pos = [0, 1], the 0th source dimension is tiled by 16 and the 1st source dimension is tiled by 32. Other source dimensions (if any) are not tiled. If inner_dims_pos = [1, 0], the 1st dimension is tiled by 16 and the 0th dimension is tiled by 32.

Example:

// NC to NCnc
%0 = linalg.pack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<128x256xf32> -> tensor<16x8 x 8x32 xf32>
//                                             \  /   \  /
//                                 Outer Dims: 16x8   Inner Dims: 8x32

// CHW to CHWhw
%0 = linalg.pack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2]
    into %dest : tensor<3x20x24xf32> -> tensor<3x10x6 x 4x2 xf32>
//                                              \  /    \ /
//                                 Outer Dims: 3x10x6  Inner Dims: 4x2

// HCW to HCWhw
%0 = linalg.pack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2]
    into %dest : tensor<18x3x32xf32> -> tensor<9x3x8 x 4x2 xf32>
//                                              \  /   \ /
//                                 Outer Dims: 9x3x8  Inner Dims: 4x2

outer_dims_perm (optional) specifies a permutation for the outer dimensions. If specified, it must have n elements.

Example:

// CK to KCck
%0 = linalg.pack %source outer_dims_perm = [1, 0] inner_dims_pos = [0, 1]
    inner_tiles = [8, 32] into %dest
    : tensor<128x256xf32> -> tensor<8x16 x 8x32 xf32>
//                                  \  /
//            compare with "NC to NCnc": outer dims are transposed

padding_value specifies a padding value at the boundary on non-perfectly divisible dimensions. Padding is optional:

  • If absent, it is assumed that for all inner tiles, shape(source)[inner_dims_pos[i]] % inner_tiles[i] == 0, i.e. all inner tiles divide perfectly the corresponding outer dimension in the result tensor. It is UB if the tile does not perfectly divide the dimension.

  • If present, it will pad along high dimensions (high-padding) to make the tile complete. Note that it is not allowed to have artificial padding that is not strictly required by linalg.pack (i.e., padding past what is needed to complete the last tile along each packed dimension). It is UB if extra padding is requested. It is not possible to verify the requirements statically with dynamic shapes, so they are treated as UB.

Example:

%0 = linalg.pack %arg0 padding_value(%pad : f32) outer_dims_perm = [2, 1, 0]
    inner_dims_pos = [1] inner_tiles = [2] into %arg1
    : tensor<200x127x256xf32> -> tensor<256x64x200x2xf32>
//                 \
//                padded and tiled dim
//
// Source dimension 1 is tiled. 64 does not divide 127 evenly, so 1 padded
// element is added at the end.
//
// Note: Only tiled dimensions can be padded.

Invalid example that has artificial padding:

%0 = linalg.pack %src padding_value(%cst : f32) inner_dims_pos = [0]
    inner_tiles = [8] into %dest
    : tensor<9xf32> -> tensor<3x8xf32>
//                             \
//            expect tensor<2x8xf32> because CeilDiv(9, 8) = 2
OPERATION_NAME = 'linalg.pack'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (0, True)
source() _ods_ir
dest() _ods_ir
padding_value() _ods_ir | None
inner_tiles() _ods_ir
outer_dims_perm() _ods_ir | None
inner_dims_pos() _ods_ir
static_inner_tiles() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._linalg_ops_gen.pack(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, padding_value=None, outer_dims_perm=None, results=None, loc=None, ip=None) _ods_ir
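The shape arithmetic implied by linalg.pack can be written out as a small helper: tiled outer dimensions are ceil-divided by their tile sizes, the outer dimensions are optionally permuted, and the inner tile sizes are appended in order. The helper name is illustrative, not part of the bindings.

```python
def packed_shape(source, inner_dims_pos, inner_tiles, outer_dims_perm=None):
    """Result shape of linalg.pack for a static source shape."""
    outer = list(source)
    for pos, tile in zip(inner_dims_pos, inner_tiles):
        outer[pos] = -(-outer[pos] // tile)            # CeilDiv
    if outer_dims_perm is not None:
        outer = [outer[p] for p in outer_dims_perm]    # permute outer dims
    return outer + list(inner_tiles)                   # append inner tiles in order

# "NC to NCnc" example from above:
print(packed_shape([128, 256], [0, 1], [8, 32]))          # [16, 8, 8, 32]
# "CK to KCck": same tiling with outer dims transposed:
print(packed_shape([128, 256], [0, 1], [8, 32], [1, 0]))  # [8, 16, 8, 32]
```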
class mlir.dialects._linalg_ops_gen.SoftmaxOp(result, input, output, dimension, *, loc=None, ip=None)

Bases: _ods_ir

linalg.softmax computes a numerically stable version of softmax.

For a given input tensor and a specified dimension d, compute:

  1. the max m along that dimension d

  2. f(x) = exp(x - m)

  3. sum f(x) along dimension d to get l(x).

  4. compute the final result f(x) / l(x).

This is an aggregate linalg operation that further reduces to a small DAG of structured operations.

Warning: Regarding the tiling capabilities, the implementation doesn't check that the provided dimensions make sense. It is the responsibility of the transformation calling the tiling to ensure that the provided sizes for each dimension make sense with respect to the semantics of softmax.

OPERATION_NAME = 'linalg.softmax'
_ODS_REGIONS = (0, True)
input() _ods_ir
output() _ods_ir
dimension() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._linalg_ops_gen.softmax(result, input, output, dimension, *, loc=None, ip=None) _ods_ir | _ods_ir | SoftmaxOp
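The four steps listed above amount to a numerically stable softmax, sketched here over a plain Python list (the op itself operates along one dimension of a tensor):

```python
import math

def softmax(xs):
    m = max(xs)                        # 1. max along the dimension
    f = [math.exp(x - m) for x in xs]  # 2. f(x) = exp(x - m)
    l = sum(f)                         # 3. l(x) = sum of f(x)
    return [v / l for v in f]          # 4. f(x) / l(x)

out = softmax([1000.0, 1001.0])        # naive exp(1000.0) would overflow
assert abs(sum(out) - 1.0) < 1e-12
```

Subtracting the maximum in step 2 is what makes the computation stable: the largest exponent becomes exp(0) = 1, so no intermediate overflows.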
class mlir.dialects._linalg_ops_gen.UnPackOp(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, outer_dims_perm=None, results=None, loc=None, ip=None)

Bases: _ods_ir

The “unpack” operation converts a source tensor of rank n with a tiled and packed layout to a result tensor of rank n - k.

inner_tiles (mandatory) specifies k tile sizes. These tile sizes correspond to the least significant (“inner”) source tensor dimension sizes. The behavior of this op is undefined if:

  • inner_tiles do not exactly match the corresponding source tensor dimension sizes.

  • Or, inner_tiles[i] does not evenly divide the size of dimension inner_dims_pos[i] (assuming that outer_dims_perm is not specified).

inner_dims_pos (mandatory) specifies k result tensor (i.e. unpacked tensor) dimensions that were tiled with the inner_tiles to create the packed source tensor. The source tensor (i.e. packed tensor) dimensions can be unpacked given inner_dims_pos as follows.

  • For 0 <= i < k the following relationship holds: shape(result)[inner_dims_pos[i]] <= shape(source)[n-k+i] * shape(source)[inner_dims_pos[i]].

  • For 0 <= j < n-k and j not in inner_dims_pos the following relationship holds: shape(result)[j] = shape(source)[j].

outer_dims_perm (optional) specifies a permutation for the outer dimensions. If specified, it must have n - k elements. If specified, this permutation is applied before combining any dimensions.

Note that the unpack operation may drop any padding introduced by the pack operation; hence the following holds: NumElementsOf(source) >= NumElementsOf(result).

Examples:

// NCnc to NC:
%0 = linalg.unpack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<16x8 x 8x32 xf32> -> tensor<128x256xf32>
//                      \  /   \  /
//          Outer Dims: 16x8  Inner Dims: 8x32

// KCck to CK:
%0 = linalg.unpack %source outer_dims_perm = [1, 0] inner_dims_pos = [0, 1]
    inner_tiles = [8, 32]
    into %dest : tensor<8x16 x 8x32 xf32> -> tensor<128x256xf32>
//                      \  /   \  /
//          Outer Dims: 8x16  Inner Dims: 8x32

// CHWhw to CHW:
%0 = linalg.unpack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2]
    into %dest : tensor<3x10x6 x 4x2 xf32> -> tensor<3x20x24xf32>
//                       \  /    \ /
//          Outer Dims: 3x10x6  Inner Dims: 4x2

// HCWhw to HCW
%0 = linalg.unpack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2]
    into %dest : tensor<9x3x8 x 4x2 xf32> -> tensor<18x3x32xf32>
//                       \  /   \ /
//          Outer Dims: 9x3x8   Inner Dims: 4x2
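As a model of the semantics (not of the bindings API), the NCnc-to-NC case above can be sketched in plain Python on nested lists. The helper name and the no-padding, no-outer_dims_perm assumptions are this sketch's own:

```python
def unpack_2d(source, inner_tiles):
    """Unpack a 4-D packed tensor [N_out, C_out, n, c] into a 2-D [N, C] tensor.

    Models linalg.unpack with inner_dims_pos = [0, 1], no outer_dims_perm,
    and no padding: result[i][j] = source[i // n][j // c][i % n][j % c].
    """
    n, c = inner_tiles
    rows = len(source) * n
    cols = len(source[0]) * c
    return [[source[i // n][j // c][i % n][j % c] for j in range(cols)]
            for i in range(rows)]

# A 4x4 matrix stored as 2x2 outer tiles, each tile of shape 2x2:
packed = [[[[0, 1], [4, 5]], [[2, 3], [6, 7]]],
          [[[8, 9], [12, 13]], [[10, 11], [14, 15]]]]
print(unpack_2d(packed, (2, 2)))  # row 0 reads 0, 1, 2, 3
```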
OPERATION_NAME = 'linalg.unpack'
_ODS_REGIONS = (0, True)
source() _ods_ir
dest() _ods_ir
inner_tiles() _ods_ir
outer_dims_perm() _ods_ir | None
inner_dims_pos() _ods_ir
static_inner_tiles() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._linalg_ops_gen.unpack(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, outer_dims_perm=None, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects._linalg_ops_gen.WinogradFilterTransformOp(result, filter, output, fmr, *, loc=None, ip=None)

Bases: _ods_ir

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.

The algorithm F(m x m, r x r) is

Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A

The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.

This operator is defined to represent the high level concept of filter transformation (G x g x G^T) in the Winograd Conv2D algorithm.
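The formula can be checked numerically for the 1-D analogue F(2, 3) in plain Python. The G, B^T, and A^T matrices below are the standard F(2, 3) transforms, shown only to illustrate the roles of the three transform ops; they are not taken from this module:

```python
# Standard 1-D Winograd F(2, 3) transforms: 2 outputs, 3-tap filter, 4 inputs.
G  = [[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]]
Bt = [[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]]
At = [[1, 1, 1, 0], [0, 1, -1, -1]]

def matvec(m, v):
    return [sum(r[i] * v[i] for i in range(len(v))) for r in m]

def winograd_f23(d, g):
    u = matvec(G, g)                       # filter transform (this op's role)
    v = matvec(Bt, d)                      # input transform
    m = [ui * vi for ui, vi in zip(u, v)]  # elementwise product
    return matvec(At, m)                   # output transform

g, d = [1.0, 2.0, 3.0], [1.0, 2.0, 3.0, 4.0]
direct = [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]
print(winograd_f23(d, g), direct)  # both [14.0, 20.0]
```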

OPERATION_NAME = 'linalg.winograd_filter_transform'
_ODS_REGIONS = (0, True)
filter() _ods_ir
output() _ods_ir
fmr() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._linalg_ops_gen.winograd_filter_transform(result, filter, output, fmr, *, loc=None, ip=None) _ods_ir
class mlir.dialects._linalg_ops_gen.WinogradInputTransformOp(result, input, output, fmr, *, loc=None, ip=None)

Bases: _ods_ir

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.

The algorithm F(m x m, r x r) is

Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A

The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.

This operator is defined to represent the high level concept of input transformation (B^T x d x B) in the Winograd Conv2D algorithm.

OPERATION_NAME = 'linalg.winograd_input_transform'
_ODS_REGIONS = (0, True)
input() _ods_ir
output() _ods_ir
fmr() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._linalg_ops_gen.winograd_input_transform(result, input, output, fmr, *, loc=None, ip=None) _ods_ir
class mlir.dialects._linalg_ops_gen.WinogradOutputTransformOp(result, value, output, fmr, *, loc=None, ip=None)

Bases: _ods_ir

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.

The algorithm F(m x m, r x r) is

Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A

The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.

This operator is defined to represent the high level concept of output transformation (A^T x y x A) in the Winograd Conv2D algorithm.

OPERATION_NAME = 'linalg.winograd_output_transform'
_ODS_REGIONS = (0, True)
value() _ods_ir
output() _ods_ir
fmr() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects._linalg_ops_gen.winograd_output_transform(result, value, output, fmr, *, loc=None, ip=None) _ods_ir
class mlir.dialects._linalg_ops_gen.YieldOp(values, *, loc=None, ip=None)

Bases: _ods_ir

linalg.yield is a special terminator operation for blocks inside regions in linalg generic ops. It returns values to the immediately enclosing linalg generic op.

Example:

linalg.yield %f0, %f1 : f32, f32
OPERATION_NAME = 'linalg.yield'
_ODS_REGIONS = (0, True)
values() _ods_ir
mlir.dialects._linalg_ops_gen.yield_(values, *, loc=None, ip=None) YieldOp
class mlir.dialects._linalg_ops_gen.LogOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.log'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.log(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | LogOp
class mlir.dialects._linalg_ops_gen.MapOp(result, inputs, init, *, loc=None, ip=None)

Bases: _ods_ir

Models elementwise operations on tensors in terms of arithmetic operations on the corresponding elements.

Example:

%add = linalg.map
    ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
    outs(%init: tensor<64xf32>)
    (%lhs_elem: f32, %rhs_elem: f32) {
      %0 = arith.addf %lhs_elem, %rhs_elem: f32
      linalg.yield %0: f32
    }

A shortened print form is available for simple maps where the body contains exactly two operations (the payload operation and a yield), the payload operation has the same number of operands as block arguments with operands matching block arguments in order, and the yield operand is the result of the payload operation.

The example above will be printed using the shortened form as:

%add = linalg.map { arith.addf }
    ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
    outs(%init: tensor<64xf32>)
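The semantics of the map example above can be modeled in plain Python. This is a sketch of what the op computes, with a hypothetical helper name, not of the bindings API:

```python
def linalg_map(fn, *inputs):
    """Apply fn elementwise across identically shaped 1-D inputs, like linalg.map."""
    return [fn(*elems) for elems in zip(*inputs)]

lhs, rhs = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
print(linalg_map(lambda a, b: a + b, lhs, rhs))  # [11.0, 22.0, 33.0]
```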
OPERATION_NAME = 'linalg.map'
_ODS_REGIONS = (1, True)
inputs() _ods_ir
init() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mapper() _ods_ir
mlir.dialects._linalg_ops_gen.map(result, inputs, init, *, loc=None, ip=None) _ods_ir | _ods_ir | MapOp
class mlir.dialects._linalg_ops_gen.MatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Broadcast and Transpose semantics can be applied by specifying the explicit attribute ‘indexing_maps’ as shown below. This is a list attribute, so the list must include all the maps if specified.

Example Transpose:

linalg.matmul
    indexing_maps = [affine_map<(m, n, k) -> (k, m)>, // transpose
                     affine_map<(m, n, k) -> (k, n)>,
                     affine_map<(m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<5x3xf32>, memref<5x7xf32>)
    outs(%arg2: memref<3x7xf32>)

Example Broadcast:

linalg.matmul
   indexing_maps = [affine_map<(m, n, k) -> (k)>,     // broadcast
                    affine_map<(m, n, k) -> (k, n)>,
                    affine_map<(m, n, k) -> (m, n)>]
   ins(%arg0, %arg1 : memref<3xf32>, memref<5x7xf32>)
   outs(%arg2: memref<3x7xf32>)

Example Broadcast and transpose:

linalg.matmul
    indexing_maps = [affine_map<(m, n, k) -> (k, m)>, // transpose
                     affine_map<(m, n, k) -> (k)>,    // broadcast
                     affine_map<(m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<5x3xf32>, memref<7xf32>)
    outs(%arg2: memref<3x7xf32>)
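The transpose variant can be modeled in plain Python: with the map affine_map<(m, n, k) -> (k, m)>, the LHS is read through (k, m) rather than (m, k). The helper name is this sketch's own:

```python
def matmul_transposed_lhs(a, b):
    """C[m][n] += A[k][m] * B[k][n]: the LHS is indexed as (k, m)."""
    K, M = len(a), len(a[0])
    N = len(b[0])
    c = [[0.0] * N for _ in range(M)]
    for m in range(M):
        for n in range(N):
            for k in range(K):
                c[m][n] += a[k][m] * b[k][n]
    return c

a = [[1.0, 2.0], [3.0, 4.0]]  # 2x2, interpreted as A[k][m]
b = [[5.0, 6.0], [7.0, 8.0]]  # 2x2, B[k][n]
print(matmul_transposed_lhs(a, b))  # [[26.0, 30.0], [38.0, 44.0]]
```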
OPERATION_NAME = 'linalg.matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir | None
cast() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | MatmulOp
class mlir.dialects._linalg_ops_gen.MatvecOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.matvec'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.matvec(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MatvecOp
class mlir.dialects._linalg_ops_gen.MaxOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.max sequence can be lowered to a linalg.generic with different affine maps for the two operands.
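The explicit-broadcast contract can be modeled in plain Python: the broadcast is materialized before the elementwise max, which is exactly the structure a lowering may later fuse. Helper names here are this sketch's own:

```python
def broadcast_rows(vec, nrows):
    """Explicit broadcast: materialize vec as nrows identical rows."""
    return [list(vec) for _ in range(nrows)]

def elementwise_max(a, b):
    """linalg.max-style elementwise max over identically shaped 2-D operands."""
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

lhs = [[1, 5], [4, 2]]
rhs = broadcast_rows([3, 3], 2)   # broadcast done before calling the max
print(elementwise_max(lhs, rhs))  # [[3, 5], [4, 3]]
```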

OPERATION_NAME = 'linalg.max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.max(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MaxOp
class mlir.dialects._linalg_ops_gen.MinOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.min sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.min'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.min(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MinOp
class mlir.dialects._linalg_ops_gen.Mmt4DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Differences from linalg.matmul:

  • The right hand side is transposed, whence the ‘t’ in ‘mmt’.

  • The input and output tensors have a 4D shape instead of a 2D shape. They are interpreted as 2D matrices with one level of 2D tile subdivision, whence the 2+2=4 dimensions. The inner tile dimensions are identified with ‘0’ suffixes below; for instance, the LHS matrix shape (M, K, M0, K0) reads as: MxK tiles, each of shape M0xK0.
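The tiled contraction can be sketched in plain Python with a hypothetical helper; at tile granularity each inner product is A_tile @ B_tile^T, reflecting the transposed RHS:

```python
def mmt4d(a, b, M, N, K, M0, N0, K0):
    """C[m][n][m0][n0] += A[m][k][m0][k0] * B[n][k][n0][k0] (RHS transposed)."""
    c = [[[[0.0] * N0 for _ in range(M0)] for _ in range(N)] for _ in range(M)]
    for m in range(M):
        for n in range(N):
            for k in range(K):
                for m0 in range(M0):
                    for n0 in range(N0):
                        for k0 in range(K0):
                            c[m][n][m0][n0] += a[m][k][m0][k0] * b[n][k][n0][k0]
    return c

a = [[[[1.0, 2.0], [3.0, 4.0]]]]  # M = K = 1, M0 = K0 = 2
b = [[[[5.0, 6.0], [7.0, 8.0]]]]  # N = K = 1, N0 = K0 = 2
print(mmt4d(a, b, 1, 1, 1, 2, 2, 2))  # single tile equal to A_tile @ B_tile^T
```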

OPERATION_NAME = 'linalg.mmt4d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.mmt4d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Mmt4DOp
class mlir.dialects._linalg_ops_gen.MulOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.mul sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.mul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.mul(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MulOp
class mlir.dialects._linalg_ops_gen.NegFOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.negf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.negf(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | NegFOp
class mlir.dialects._linalg_ops_gen.PoolingNchwMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nchw_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nchw_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNchwMaxOp
class mlir.dialects._linalg_ops_gen.PoolingNchwSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCHW.

  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nchw_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nchw_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNchwSumOp
class mlir.dialects._linalg_ops_gen.PoolingNcwMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ncw_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_ncw_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNcwMaxOp
class mlir.dialects._linalg_ops_gen.PoolingNcwSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCW.

  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ncw_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_ncw_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNcwSumOp
class mlir.dialects._linalg_ops_gen.PoolingNdhwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ndhwc_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_ndhwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNdhwcMaxOp
class mlir.dialects._linalg_ops_gen.PoolingNdhwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ndhwc_min'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_ndhwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNdhwcMinOp
class mlir.dialects._linalg_ops_gen.PoolingNdhwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ndhwc_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_ndhwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNdhwcSumOp
class mlir.dialects._linalg_ops_gen.PoolingNhwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nhwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMaxOp
class mlir.dialects._linalg_ops_gen.PoolingNhwcMaxUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_max_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nhwc_max_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMaxUnsignedOp
class mlir.dialects._linalg_ops_gen.PoolingNhwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_min'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nhwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMinOp
class mlir.dialects._linalg_ops_gen.PoolingNhwcMinUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_min_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nhwc_min_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMinUnsignedOp
class mlir.dialects._linalg_ops_gen.PoolingNhwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nhwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcSumOp
class mlir.dialects._linalg_ops_gen.PoolingNwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nwc_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMaxOp
class mlir.dialects._linalg_ops_gen.PoolingNwcMaxUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nwc_max_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nwc_max_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMaxUnsignedOp
class mlir.dialects._linalg_ops_gen.PoolingNwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nwc_min'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMinOp
class mlir.dialects._linalg_ops_gen.PoolingNwcMinUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nwc_min_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nwc_min_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMinUnsignedOp
class mlir.dialects._linalg_ops_gen.PoolingNwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NWC.

  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
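The window arithmetic shared by the pooling ops can be sketched for one W dimension in plain Python, ignoring the N and C dimensions. The helper name and defaults are this sketch's own:

```python
def pool_sum_1d(x, ksize, stride=1, dilation=1):
    """Sum pooling over W: out[w] = sum over k of x[w*stride + k*dilation]."""
    span = (ksize - 1) * dilation + 1  # effective extent of one window
    out_len = (len(x) - span) // stride + 1
    return [sum(x[w * stride + k * dilation] for k in range(ksize))
            for w in range(out_len)]

print(pool_sum_1d([1, 2, 3, 4, 5, 6], ksize=2, stride=2))    # [3, 7, 11]
print(pool_sum_1d([1, 2, 3, 4, 5, 6], ksize=2, dilation=2))  # [4, 6, 8, 10]
```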

OPERATION_NAME = 'linalg.pooling_nwc_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.pooling_nwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcSumOp
class mlir.dialects._linalg_ops_gen.PowFOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Only applies to floating point values.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.powf sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.powf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.powf(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | PowFOp
class mlir.dialects._linalg_ops_gen.QuantizedBatchMatmulOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.

OPERATION_NAME = 'linalg.quantized_batch_matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.quantized_batch_matmul(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | QuantizedBatchMatmulOp
class mlir.dialects._linalg_ops_gen.QuantizedMatmulOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.
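The zero-point adjustment can be modeled in plain Python: each operand's zero point is subtracted before the multiply-accumulate. This is a sketch of the arithmetic with a hypothetical helper, not of the bindings API:

```python
def quantized_matmul(a, b, a_zp, b_zp):
    """C[m][n] += (A[m][k] - a_zp) * (B[k][n] - b_zp), accumulated as ints."""
    M, K, N = len(a), len(b), len(b[0])
    c = [[0] * N for _ in range(M)]
    for m in range(M):
        for n in range(N):
            for k in range(K):
                c[m][n] += (a[m][k] - a_zp) * (b[k][n] - b_zp)
    return c

# With both zero points at 0 this reduces to an ordinary integer matmul:
print(quantized_matmul([[1, 2]], [[3], [4]], a_zp=0, b_zp=0))  # [[11]]
```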

OPERATION_NAME = 'linalg.quantized_matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.quantized_matmul(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | QuantizedMatmulOp
class mlir.dialects._linalg_ops_gen.ReciprocalOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.reciprocal'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.reciprocal(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | ReciprocalOp
class mlir.dialects._linalg_ops_gen.ReduceOp(result, inputs, inits, dimensions, *, loc=None, ip=None)

Bases: _ods_ir

Executes combiner on the dimensions of inputs and returns the reduced result. The dimensions attribute needs to list the reduction dimensions in increasing order.

Example:

%reduce = linalg.reduce
    ins(%input:tensor<16x32x64xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]
    (%in: f32, %out: f32) {
      %0 = arith.addf %out, %in: f32
      linalg.yield %0: f32
    }

A shortened print form is available for simple reduces where the body contains exactly two operations (the payload operation and a yield), the payload operation has the same number of operands as block arguments, the first block argument (init) is the last operand of the payload operation with the remaining operands matching the remaining block arguments in order, and the yield operand is the result of the payload operation.

The example above will be printed using the shortened form as:

%reduce = linalg.reduce { arith.addf }
    ins(%input:tensor<16x32x64xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]
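The reduction semantics for the dimensions = [1] case above can be modeled in plain Python. The helper name is this sketch's own:

```python
def reduce_dim1(inp, init, combiner):
    """Reduce a 3-D nested list over dimension 1 into a 2-D init."""
    out = [row[:] for row in init]  # start from the init values
    for i, plane in enumerate(inp):
        for row in plane:           # iterates over the reduced dimension
            for k, val in enumerate(row):
                out[i][k] = combiner(out[i][k], val)
    return out

inp = [[[1.0, 2.0], [3.0, 4.0]]]  # shape 1x2x2; reduce the middle dimension
print(reduce_dim1(inp, [[0.0, 0.0]], lambda out, x: out + x))  # [[4.0, 6.0]]
```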
OPERATION_NAME = 'linalg.reduce'
_ODS_REGIONS = (1, True)
inputs() _ods_ir
inits() _ods_ir
dimensions() _ods_ir
combiner() _ods_ir
mlir.dialects._linalg_ops_gen.reduce(result, inputs, inits, dimensions, *, loc=None, ip=None) _ods_ir | _ods_ir | ReduceOp
class mlir.dialects._linalg_ops_gen.RoundOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.round'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.round(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | RoundOp
class mlir.dialects._linalg_ops_gen.RsqrtOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.rsqrt'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.rsqrt(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | RsqrtOp
class mlir.dialects._linalg_ops_gen.SelectOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.select sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.select'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.select(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SelectOp
class mlir.dialects._linalg_ops_gen.SqrtOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.sqrt'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.sqrt(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SqrtOp
class mlir.dialects._linalg_ops_gen.SquareOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.square'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.square(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SquareOp
class mlir.dialects._linalg_ops_gen.SubOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. Any required casts, broadcasts and reductions should be applied before calling this op.

This means that reduction, broadcast and element-cast semantics are explicit, and later passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.sub sequence can be lowered to a linalg.generic with different affine maps for the two operands.
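Example (a minimal sketch in the linalg textual IR; the value names and shapes are illustrative):

    %diff = linalg.sub
        ins(%lhs, %rhs : tensor<4x8xf32>, tensor<4x8xf32>)
        outs(%init : tensor<4x8xf32>) -> tensor<4x8xf32>

Both inputs and the output share the same shape and element type; any broadcast must be done by a preceding op.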

OPERATION_NAME = 'linalg.sub'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.sub(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SubOp
class mlir.dialects._linalg_ops_gen.TanhOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.tanh'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.tanh(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | TanhOp
class mlir.dialects._linalg_ops_gen.TransposeOp(result, input, init, permutation, *, loc=None, ip=None)

Bases: _ods_ir

Permutes the dimensions of input according to the given permutation. dim(result, i) = dim(input, permutation[i])

This op actually moves data, unlike memref.transpose, which is a metadata-only operation that produces a transposed "view" without touching the underlying data.

Example:

%transpose = linalg.transpose
    ins(%input:tensor<16x64xf32>)
    outs(%init:tensor<64x16xf32>)
    permutation = [1, 0]
OPERATION_NAME = 'linalg.transpose'
_ODS_REGIONS = (1, True)
input() _ods_ir
init() _ods_ir
permutation() _ods_ir
result() _ods_ir

Shortcut to get the op's single result (raises an error if the op does not have exactly one result).

region() _ods_ir
mlir.dialects._linalg_ops_gen.transpose(result, input, init, permutation, *, loc=None, ip=None) _ods_ir | _ods_ir | TransposeOp
class mlir.dialects._linalg_ops_gen.VecmatOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
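Example (a minimal sketch in the linalg textual IR; the value names and shapes are illustrative). A vector of length K is multiplied by a K x N matrix, accumulating into a length-N output:

    %res = linalg.vecmat
        ins(%vec, %mat : tensor<16xf32>, tensor<16x32xf32>)
        outs(%init : tensor<32xf32>) -> tensor<32xf32>

If the inputs have a narrower element type than the accumulator, they are promoted to the accumulator type before the inner multiply.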

OPERATION_NAME = 'linalg.vecmat'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects._linalg_ops_gen.vecmat(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | VecmatOp