mlir.dialects.linalg

Submodules

Attributes

Classes

_Dialect

AbsOp

No numeric casting is performed on the input operand.

AddOp

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

BatchMatmulOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

BatchMatvecOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

BatchMmt4DOp

The outermost batch dimension has the same semantics as linalg.batch_matmul; in the non-batch dimensions, the differences from linalg.batch_matmul are the same as those between linalg.mmt4d and linalg.matmul.

BatchReduceMatmulOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

BatchVecmatOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

BroadcastOp

Broadcast the input into the given shape by adding dimensions.

CeilOp

No numeric casting is performed on the input operand.

ContractOp

The semantics of contracting inputs A and B on top of C to produce

Conv1DNcwFcwOp

Layout:

Conv1DNwcWcfOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Conv1DOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Conv2DNchwFchwOp

Layout:

Conv2DNchwFchwQOp

Layout:

Conv2DNgchwFgchwOp

Layout:

Conv2DNgchwGfchwOp

Layout:

Conv2DNgchwGfchwQOp

Layout:

Conv2DNhwcFhwcOp

Layout:

Conv2DNhwcFhwcQOp

Layout:

Conv2DNhwcHwcfOp

Layout:

Conv2DNhwcHwcfQOp

Layout:

Conv2DNhwgcGfhwcOp

Layout:

Conv2DNhwgcGfhwcQOp

Layout:

Conv2DOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Conv3DNcdhwFcdhwOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Conv3DNdhwcDhwcfOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Conv3DNdhwcDhwcfQOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Conv3DOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

CopyOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

DepthwiseConv1DNcwCwOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv1DNwcWcOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv1DNwcWcmOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv2DNchwChwOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv2DNhwcHwcOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv2DNhwcHwcQOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv2DNhwcHwcmOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv2DNhwcHwcmQOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv3DNcdhwCdhwOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv3DNdhwcDhwcOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DepthwiseConv3DNdhwcDhwcmOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

DivOp

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

DivUnsignedOp

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

DotOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

ElementwiseOp

ErfOp

No numeric casting is performed on the input operand.

ExpOp

No numeric casting is performed on the input operand.

FillOp

Works for arbitrary ranked output tensors since the operation performs scalar accesses of the output tensor.

FillRng2DOp

The operation generates pseudo-random numbers using a linear congruential algorithm.

FloorOp

No numeric casting is performed on the input operand.

GenericOp

Generic Linalg op form where the key properties of the computation are specified as attributes.

IndexOp

The linalg.index operation returns the iteration index of the immediately enclosing linalg structured op for the given iteration dimension.

PackOp

The "pack" operation converts a source tensor of rank n into a result tensor of rank n + k with a tiled and packed layout, optionally transposing the tiled source tensor dimensions.

SoftmaxOp

linalg.softmax computes a numerically stable version of softmax.

UnPackOp

The "unpack" operation converts a source tensor of rank n with a tiled and packed layout back into a result tensor of rank n - k.

WinogradFilterTransformOp

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply.

WinogradInputTransformOp

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply.

WinogradOutputTransformOp

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply.

YieldOp

linalg.yield is a special terminator operation for blocks inside regions in linalg generic ops.

LogOp

No numeric casting is performed on the input operand.

MapOp

Models elementwise operations on tensors in terms of arithmetic operations on the corresponding elements.

MatmulOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

MatvecOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

MaxOp

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

MinOp

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

Mmt4DOp

Differences from linalg.matmul:

MulOp

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

NegFOp

No numeric casting is performed on the input operand.

PoolingNchwMaxOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNchwSumOp

Layout:

PoolingNcwMaxOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNcwSumOp

Layout:

PoolingNdhwcMaxOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNdhwcMinOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNdhwcSumOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNhwcMaxOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNhwcMaxUnsignedOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNhwcMinOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNhwcMinUnsignedOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNhwcSumOp

Layout:

PoolingNwcMaxOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNwcMaxUnsignedOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNwcMinOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNwcMinUnsignedOp

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

PoolingNwcSumOp

Layout:

PowFOp

Only applies to floating point values.

QuantizedBatchMatmulOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

QuantizedMatmulOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

ReciprocalOp

No numeric casting is performed on the input operand.

ReduceOp

Executes combiner on the dimensions of inputs and returns the reduced result.

RoundOp

No numeric casting is performed on the input operand.

RsqrtOp

No numeric casting is performed on the input operand.

SelectOp

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

SqrtOp

No numeric casting is performed on the input operand.

SquareOp

No numeric casting is performed on the input operand.

SubOp

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

TanhOp

No numeric casting is performed on the input operand.

TransposeOp

Permutes the dimensions of input according to the given permutation.

VecmatOp

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

BinaryFn

Binary function namespace.

ElementwiseArityGroup

allowed 32-bit signless integer cases: 1, 2, 3

ElementwiseCaseLimits

allowed 32-bit signless integer cases:

ElementwiseKind

allowed 32-bit signless integer cases: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23

IteratorType

Iterator type

TernaryFn

Ternary function namespace.

TypeFn

Type conversion function namespace.

UnaryFn

Unary function namespace.

WinogradConv2DFmr

allowed 32-bit signless integer cases: 0, 1, 2

DefinedOpCallable

Callable that wraps any defined op function.

TensorExpression

An expression that can appear on the RHS of a comprehension.

TensorUse

A used tensor represented by its (tensor_name, indices).

TensorFn

Application of a tensor function.

TensorReduceFn

Application of a reduction function.

const

Returns the given constant floating point or integer value.

index

Returns the iteration index for a given dimension name.

FunctionKind

Generic enumeration.

UnaryFnType

Unary function.

UnaryFn

Unary function namespace.

BinaryFnType

Binary function.

BinaryFn

Binary function namespace.

TernaryFnType

Ternary function.

TernaryFn

Ternary function namespace.

TypeFnType

Type conversion function.

TypeFn

Type conversion function namespace.

ReduceFnUse

Reduction function use.

ReduceFnType

Reduction function.

ReduceFn

OperandKind

Generic enumeration.

OperandDef

Definition of an operand passed to an operation.

TensorDef

Tensor operand definition.

ScalarDef

Scalar operand definition.

IndexAttrDef

Index attribute definition.

UnaryFnAttrDef

Unary function attribute definition.

BinaryFnAttrDef

Binary function attribute definition.

TernaryFnAttrDef

Ternary function attribute definition.

TypeFnAttrDef

Type conversion function attribute definition.

Comprehension

Represents a single comprehension.

OpInterfaceDef

An interface that an op implements.

OpDefinitionDef

A method that an op implements.

OpMetadataDef

Metadata about the op (generally not behavior impacting).

LinalgOpDef

Definition of a linalg op.

AffineBuildState

Internal state for the AffineExprDef._create impls.

AffineExprDef

Base class for an affine expression being defined.

DimDef

Represents a named dimension.

SymbolDef

Represents a named symbol.

ScalarAssign

An assignment to a named argument (LHS of a comprehension).

ScalarFn

A type of ScalarExpression that applies a function.

ScalarArg

A type of ScalarExpression that references a named argument.

ScalarConst

A type of ScalarExpression representing a constant.

ScalarIndex

A type of ScalarExpression accessing an iteration index.

ScalarExpression

An expression on scalar values.

TypeVar

A replaceable type variable.

YAMLObject

An object that can dump itself to a YAML stream

LinalgStructuredOpConfig

Configuration for metadata sufficient to construct a linalg named op.

LinalgOpConfig

Container for any supported linalg op type.

OperandDefConfig

Wrapper containing an operand definition with additional state.

GenericOp_

Generic Linalg op form where the key properties of the computation are

ElementwiseOp_

The attribute kind describes arithmetic operation to perform. The

Functions

_ods_equally_sized_accessor(elements, n_simple, ...)

Returns a starting position and a number of elements per variadic group assuming equally-sized groups and the given numbers of preceding groups.

_ods_get_default_loc_context([location])

Returns a context in which the defaulted location is created. If the location is None, takes the current location from the stack.

_get_op_results_or_values(...)

Returns the given sequence of values or the results of the given op.

_ods_segmented_accessor(elements, raw_segments, idx)

Returns a slice of elements corresponding to the idx-th segment.

abs([I, O])

Applies abs(x) elementwise.

add([lhs, rhs, O])

Adds two tensors elementwise.

batch_matmul(*ins, outs[, indexing_maps, cast])

batch_matvec([A, B, C])

Performs a batched matrix-vector multiplication.

batch_mmt4d([lhs, rhs, accum])

Performs a batched matrix-matrix-transpose multiplication of two batched 4D inputs.

batch_reduce_matmul(*ins, outs[, indexing_maps, cast])

batch_vecmat([A, B, C])

Performs a batched matrix-vector multiplication.

broadcast(input, *, outs, dimensions)

ceil([I, O])

Applies ceil(x) elementwise.

contract(*ins, outs, indexing_maps[, cast])

conv_1d_ncw_fcw([I, K, O, strides, dilations])

Performs 1-D convolution.

conv_1d_nwc_wcf([I, K, O, strides, dilations])

Performs 1-D convolution.

conv_1d([I, K, O])

Performs 1-D convolution with no channels.

conv_2d_nchw_fchw([I, K, O, strides, dilations])

Performs 2-D convolution.

conv_2d_nchw_fchw_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D convolution with zero point offsets.

conv_2d_ngchw_fgchw([I, K, O, strides, dilations])

Performs 2-D grouped convolution.

conv_2d_ngchw_gfchw([I, K, O, strides, dilations])

Performs 2-D grouped convolution.

conv_2d_ngchw_gfchw_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D grouped convolution with zero-point offsets.

conv_2d_nhwc_fhwc([I, K, O, strides, dilations])

Performs 2-D convolution.

conv_2d_nhwc_fhwc_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D convolution with zero point offsets.

conv_2d_nhwc_hwcf([I, K, O, strides, dilations])

Performs 2-D convolution.

conv_2d_nhwc_hwcf_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D convolution with zero point offsets.

conv_2d_nhwgc_gfhwc([I, K, O, strides, dilations])

Performs 2-D grouped convolution.

conv_2d_nhwgc_gfhwc_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D grouped convolution with zero point offsets.

conv_2d([I, K, O])

Performs 2-D convolution with no channels.

conv_3d_ncdhw_fcdhw([I, K, O, strides, dilations])

Performs 3-D convolution.

conv_3d_ndhwc_dhwcf([I, K, O, strides, dilations])

Performs 3-D convolution.

conv_3d_ndhwc_dhwcf_q([I, K, IZp, KZp, O, strides, ...])

Performs 3-D convolution with zero point offsets.

conv_3d([I, K, O])

Performs 3-D convolution with no channels.

copy([I, O, cast])

Copies the tensor elementwise.

depthwise_conv_1d_ncw_cw([I, K, O, strides, dilations])

Performs depth-wise 1-D convolution.

depthwise_conv_1d_nwc_wc([I, K, O, strides, dilations])

Performs depth-wise 1-D convolution.

depthwise_conv_1d_nwc_wcm([I, K, O, strides, dilations])

Performs depth-wise 1-D convolution.

depthwise_conv_2d_nchw_chw([I, K, O, strides, dilations])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwc([I, K, O, strides, dilations])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwc_q([I, K, IZp, KZp, O, ...])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwcm([I, K, O, strides, dilations])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwcm_q([I, K, IZp, KZp, O, ...])

Performs depth-wise 2-D convolution.

depthwise_conv_3d_ncdhw_cdhw([I, K, O, strides, dilations])

Performs depth-wise 3-D convolution.

depthwise_conv_3d_ndhwc_dhwc([I, K, O, strides, dilations])

Performs depth-wise 3-D convolution.

depthwise_conv_3d_ndhwc_dhwcm([I, K, O, strides, ...])

Performs depth-wise 3-D convolution.

div([lhs, rhs, O])

Divides the first tensor by the second tensor, elementwise.

div_unsigned([lhs, rhs, O])

Divides the first tensor by the second tensor, elementwise. For integer types, performs an unsigned division.

dot([A, B, C])

Performs a dot product of two vectors to a scalar result.

elementwise(*ins, outs, kind[, indexing_maps])

erf([I, O])

Applies erf(x) elementwise.

exp([I, O])

Applies exp(x) elementwise.

fill([value, O])

Fills the output tensor with the given value.

fill_rng_2d([min, max, seed, O])

Fills the output tensor with pseudo random numbers.

floor([I, O])

Applies floor(x) elementwise.

generic

index

Returns the iteration index for a given dimension name.

pack(→ opdsl.ops.core_named_ops.ir.Value)

softmax(→ Union[_ods_ir, _ods_ir, SoftmaxOp])

unpack(→ opdsl.ops.core_named_ops.ir.Value)

winograd_filter_transform(→ _ods_ir)

winograd_input_transform(→ _ods_ir)

winograd_output_transform(→ _ods_ir)

yield_(→ YieldOp)

log([I, O])

Applies log(x) elementwise.

map

matmul(*ins, outs[, indexing_maps, cast])

matvec([A, y, x])

Performs a matrix-vector multiplication.

max([lhs, rhs, O])

Takes the max (signed) between two inputs, elementwise.

min([lhs, rhs, O])

Takes the min (signed) between two inputs, elementwise.

mmt4d([lhs, rhs, accum])

Performs a matrix-matrix-transpose multiplication of two 4D inputs.

mul([lhs, rhs, O])

Multiplies two tensors elementwise.

negf([I, O])

Applies negf(x) elementwise.

pooling_nchw_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nchw_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_ncw_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_ncw_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_ndhwc_max([I, K, O, strides, dilations])

Performs 3D max pooling.

pooling_ndhwc_min([I, K, O, strides, dilations])

Performs 3D min pooling.

pooling_ndhwc_sum([I, K, O, strides, dilations])

Performs 3D sum pooling.

pooling_nhwc_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nhwc_max_unsigned([I, K, O, strides, dilations])

Performs unsigned max pooling.

pooling_nhwc_min([I, K, O, strides, dilations])

Performs min pooling.

pooling_nhwc_min_unsigned([I, K, O, strides, dilations])

Performs unsigned min pooling.

pooling_nhwc_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_nwc_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nwc_max_unsigned([I, K, O, strides, dilations])

Performs unsigned max pooling.

pooling_nwc_min([I, K, O, strides, dilations])

Performs min pooling.

pooling_nwc_min_unsigned([I, K, O, strides, dilations])

Performs unsigned min pooling.

pooling_nwc_sum([I, K, O, strides, dilations])

Performs sum pooling.

powf([lhs, rhs, O])

Takes the powf(lhs, rhs) between two inputs, elementwise. For powf(arg, 2) use linalg.square.

quantized_batch_matmul([A, B, AZp, BZp, C])

Performs a batched matrix multiplication of two 3D inputs.

quantized_matmul([A, B, AZp, BZp, C])

Performs a matrix multiplication of two 2D inputs.

reciprocal([I, O])

Applies reciprocal(x) elementwise.

reduce

round([I, O])

Applies round(x) elementwise.

rsqrt([I, O])

Applies rsqrt(x) elementwise.

select([cond, lhs, rhs, O])

Chooses one value based on a binary condition supplied as its first operand.

sqrt([I, O])

Applies sqrt(x) elementwise.

square([I, O])

Applies square(x) elementwise.

sub([lhs, rhs, O])

Subtracts two tensors elementwise.

tanh([I, O])

Applies tanh(x) elementwise.

transpose(input, *, outs, permutation)

vecmat([y, A, x])

Performs a vector-matrix multiplication.

register_attribute_builder(kind[, replace])

_binaryfn(x, context)

_elementwisearitygroup(x, context)

_elementwisecaselimits(x, context)

_elementwisekind(x, context)

_iteratortype(x, context)

_ternaryfn(x, context)

_typefn(x, context)

_unaryfn(x, context)

_winogradconv2dfmr(x, context)

_binaryfnattr(x, context)

_elementwisekindattr(x, context)

_iteratortypeenum(x, context)

_ternaryfnattr(x, context)

_typefnattr(x, context)

_unaryfnattr(x, context)

copy([I, O, cast])

Copies the tensor elementwise.

exp([I, O])

Applies exp(x) elementwise.

log([I, O])

Applies log(x) elementwise.

abs([I, O])

Applies abs(x) elementwise.

ceil([I, O])

Applies ceil(x) elementwise.

floor([I, O])

Applies floor(x) elementwise.

negf([I, O])

Applies negf(x) elementwise.

reciprocal([I, O])

Applies reciprocal(x) elementwise.

round([I, O])

Applies round(x) elementwise.

sqrt([I, O])

Applies sqrt(x) elementwise.

rsqrt([I, O])

Applies rsqrt(x) elementwise.

square([I, O])

Applies square(x) elementwise.

tanh([I, O])

Applies tanh(x) elementwise.

erf([I, O])

Applies erf(x) elementwise.

add([lhs, rhs, O])

Adds two tensors elementwise.

sub([lhs, rhs, O])

Subtracts two tensors elementwise.

mul([lhs, rhs, O])

Multiplies two tensors elementwise.

div([lhs, rhs, O])

Divides the first tensor by the second tensor, elementwise.

div_unsigned([lhs, rhs, O])

Divides the first tensor by the second tensor, elementwise. For integer types, performs an unsigned division.

max([lhs, rhs, O])

Takes the max (signed) between two inputs, elementwise.

min([lhs, rhs, O])

Takes the min (signed) between two inputs, elementwise.

powf([lhs, rhs, O])

Takes the powf(lhs, rhs) between two inputs, elementwise. For powf(arg, 2) use linalg.square.

select([cond, lhs, rhs, O])

Chooses one value based on a binary condition supplied as its first operand.

quantized_matmul([A, B, AZp, BZp, C])

Performs a matrix multiplication of two 2D inputs.

mmt4d([lhs, rhs, accum])

Performs a matrix-matrix-transpose multiplication of two 4D inputs.

batch_mmt4d([lhs, rhs, accum])

Performs a batched matrix-matrix-transpose multiplication of two batched 4D inputs.

quantized_batch_matmul([A, B, AZp, BZp, C])

Performs a batched matrix multiplication of two 3D inputs.

matvec([A, y, x])

Performs a matrix-vector multiplication.

vecmat([y, A, x])

Performs a vector-matrix multiplication.

batch_matvec([A, B, C])

Performs a batched matrix-vector multiplication.

batch_vecmat([A, B, C])

Performs a batched matrix-vector multiplication.

dot([A, B, C])

Performs a dot product of two vectors to a scalar result.

conv_1d([I, K, O])

Performs 1-D convolution with no channels.

conv_2d([I, K, O])

Performs 2-D convolution with no channels.

conv_3d([I, K, O])

Performs 3-D convolution with no channels.

conv_1d_nwc_wcf([I, K, O, strides, dilations])

Performs 1-D convolution.

conv_1d_ncw_fcw([I, K, O, strides, dilations])

Performs 1-D convolution.

conv_2d_nhwc_hwcf([I, K, O, strides, dilations])

Performs 2-D convolution.

conv_2d_nhwc_fhwc([I, K, O, strides, dilations])

Performs 2-D convolution.

conv_2d_nhwc_hwcf_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D convolution with zero point offsets.

conv_2d_nhwc_fhwc_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D convolution with zero point offsets.

conv_2d_nchw_fchw_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D convolution with zero point offsets.

conv_2d_nchw_fchw([I, K, O, strides, dilations])

Performs 2-D convolution.

conv_2d_ngchw_fgchw([I, K, O, strides, dilations])

Performs 2-D grouped convolution.

conv_2d_ngchw_gfchw([I, K, O, strides, dilations])

Performs 2-D grouped convolution.

conv_2d_nhwgc_gfhwc([I, K, O, strides, dilations])

Performs 2-D grouped convolution.

conv_2d_nhwgc_gfhwc_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D grouped convolution with zero point offsets.

conv_2d_ngchw_gfchw_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D grouped convolution with zero-point offsets.

conv_3d_ndhwc_dhwcf([I, K, O, strides, dilations])

Performs 3-D convolution.

conv_3d_ndhwc_dhwcf_q([I, K, IZp, KZp, O, strides, ...])

Performs 3-D convolution with zero point offsets.

conv_3d_ncdhw_fcdhw([I, K, O, strides, dilations])

Performs 3-D convolution.

depthwise_conv_1d_nwc_wc([I, K, O, strides, dilations])

Performs depth-wise 1-D convolution.

depthwise_conv_1d_ncw_cw([I, K, O, strides, dilations])

Performs depth-wise 1-D convolution.

depthwise_conv_1d_nwc_wcm([I, K, O, strides, dilations])

Performs depth-wise 1-D convolution.

depthwise_conv_2d_nhwc_hwc([I, K, O, strides, dilations])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nchw_chw([I, K, O, strides, dilations])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwc_q([I, K, IZp, KZp, O, ...])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwcm([I, K, O, strides, dilations])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwcm_q([I, K, IZp, KZp, O, ...])

Performs depth-wise 2-D convolution.

depthwise_conv_3d_ndhwc_dhwc([I, K, O, strides, dilations])

Performs depth-wise 3-D convolution.

depthwise_conv_3d_ncdhw_cdhw([I, K, O, strides, dilations])

Performs depth-wise 3-D convolution.

depthwise_conv_3d_ndhwc_dhwcm([I, K, O, strides, ...])

Performs depth-wise 3-D convolution.

pooling_nhwc_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_nchw_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_nhwc_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nhwc_max_unsigned([I, K, O, strides, dilations])

Performs unsigned max pooling.

pooling_nchw_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nhwc_min([I, K, O, strides, dilations])

Performs min pooling.

pooling_nhwc_min_unsigned([I, K, O, strides, dilations])

Performs unsigned min pooling.

pooling_nwc_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_ncw_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_nwc_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nwc_max_unsigned([I, K, O, strides, dilations])

Performs unsigned max pooling.

pooling_ncw_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nwc_min([I, K, O, strides, dilations])

Performs min pooling.

pooling_nwc_min_unsigned([I, K, O, strides, dilations])

Performs unsigned min pooling.

pooling_ndhwc_sum([I, K, O, strides, dilations])

Performs 3D sum pooling.

pooling_ndhwc_max([I, K, O, strides, dilations])

Performs 3D max pooling.

pooling_ndhwc_min([I, K, O, strides, dilations])

Performs 3D min pooling.

fill([value, O])

Fills the output tensor with the given value.

fill_rng_2d([min, max, seed, O])

Fills the output tensor with pseudo random numbers.

_get_op_result_or_value(→ mlir._mlir_libs._mlir.ir.Value)

Returns the given value or the single result of the given op.

_get_op_results_or_values(...)

Returns the given sequence of values or the results of the given op.

bind_op_def(op_def)

current_op_def(...)

_prepare_structured_op_outs(...)

linalg_structured_op(→ DefinedOpCallable)

domain(*dimensions)

implements(*interfaces)

defines(*definitions)

yaml_dump(data[, sort_keys])

yaml_dump_all(data[, sort_keys, explicit_start])

emit_generic_structured_op(op_config, *ins, outs, **attrs)

emit_named_structured_op(op_config, op_name, ...)

loc_tracebacks(→ collections.abc.Iterable[None])

Enables automatic traceback-based locations for MLIR operations.

register_attribute_builder(kind[, replace])

_affineMapAttr(x, context)

_integerSetAttr(x, context)

_boolAttr(x, context)

_dictAttr(x, context)

_indexAttr(x, context)

_i1Attr(x, context)

_i8Attr(x, context)

_i16Attr(x, context)

_i32Attr(x, context)

_i64Attr(x, context)

_si1Attr(x, context)

_si8Attr(x, context)

_si16Attr(x, context)

_si32Attr(x, context)

_si64Attr(x, context)

_ui1Attr(x, context)

_ui8Attr(x, context)

_ui16Attr(x, context)

_ui32Attr(x, context)

_ui64Attr(x, context)

_f32Attr(x, context)

_f64Attr(x, context)

_stringAttr(x, context)

_symbolNameAttr(x, context)

_symbolRefAttr(x, context)

_flatSymbolRefAttr(x, context)

_unitAttr(x, context)

_arrayAttr(x, context)

_affineMapArrayAttr(x, context)

_boolArrayAttr(x, context)

_dictArrayAttr(x, context)

_flatSymbolRefArrayAttr(x, context)

_i32ArrayAttr(x, context)

_i64ArrayAttr(x, context)

_i64SmallVectorArrayAttr(x, context)

_indexListArrayAttr(x, context)

_f32ArrayAttr(x, context)

_f64ArrayAttr(x, context)

_strArrayAttr(x, context)

_symbolRefArrayAttr(x, context)

_denseF32ArrayAttr(x, context)

_denseF64ArrayAttr(x, context)

_denseI8ArrayAttr(x, context)

_denseI16ArrayAttr(x, context)

_denseI32ArrayAttr(x, context)

_denseI64ArrayAttr(x, context)

_denseBoolArrayAttr(x, context)

_typeAttr(x, context)

_typeArrayAttr(x, context)

_memref_type_attr(x, context)

_f64ElementsAttr(x, context)

_get_op_result_or_value(→ mlir._mlir_libs._mlir.ir.Value)

Returns the given value or the single result of the given op.

_get_op_result_or_op_results(...)

_dispatch_mixed_values(→ Tuple[List[mlir.ir.Value], ...)

region_op(op_constructor[, terminator])

Decorator to define an MLIR Op specified as a python function.

transpose(input, *, outs, permutation)

broadcast(input, *, outs, dimensions)

_IteratorTypeArrayAttr(x, context)

_create_matmul_like_op(op_type, *ins, outs[, ...])

matmul(*ins, outs[, indexing_maps, cast])

batch_matmul(*ins, outs[, indexing_maps, cast])

batch_reduce_matmul(*ins, outs[, indexing_maps, cast])

contract(*ins, outs, indexing_maps[, cast])

elementwise(*ins, outs, kind[, indexing_maps])

pack(→ opdsl.ops.core_named_ops.ir.Value)

unpack(→ opdsl.ops.core_named_ops.ir.Value)

Package Contents

mlir.dialects.linalg._ods_equally_sized_accessor(elements, n_simple, n_variadic, n_preceding_simple, n_preceding_variadic)

Returns a starting position and a number of elements per variadic group assuming equally-sized groups and the given numbers of preceding groups.

elements: a sequential container. n_simple: the number of non-variadic groups in the container. n_variadic: the number of variadic groups in the container. n_preceding_simple: the number of non-variadic groups preceding the current group. n_preceding_variadic: the number of variadic groups preceding the current group.
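The arithmetic this helper performs can be sketched in plain Python (a hypothetical re-implementation for illustration, not the actual ODS helper):

```python
def equally_sized_accessor(elements, n_simple, n_variadic,
                           n_preceding_simple, n_preceding_variadic):
    # All variadic groups share the non-simple elements equally.
    elements_per_group = (len(elements) - n_simple) // n_variadic
    # Skip the simple operands and the variadic groups that come first.
    start = n_preceding_simple + n_preceding_variadic * elements_per_group
    return start, elements_per_group

# 10 operands, 2 simple and 2 variadic groups (so 4 elements per group);
# the current group is preceded by 1 simple operand and 1 variadic group.
start, size = equally_sized_accessor(list(range(10)), 2, 2, 1, 1)
```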

mlir.dialects.linalg._ods_get_default_loc_context(location=None)

Returns a context in which the defaulted location is created. If the location is None, takes the current location from the stack.

mlir.dialects.linalg._get_op_results_or_values(arg: mlir._mlir_libs._mlir.ir.OpView | mlir._mlir_libs._mlir.ir.Operation | Sequence[mlir._mlir_libs._mlir.ir.OpView | mlir._mlir_libs._mlir.ir.Operation | mlir._mlir_libs._mlir.ir.Value]) → Sequence[mlir._mlir_libs._mlir.ir.OpView | mlir._mlir_libs._mlir.ir.Operation | mlir._mlir_libs._mlir.ir.Value] | mlir._mlir_libs._mlir.ir.OpResultList

Returns the given sequence of values or the results of the given op.

This is useful to implement op constructors so that they can take other ops as lists of arguments instead of requiring the caller to extract results for every op.

mlir.dialects.linalg._ods_segmented_accessor(elements, raw_segments, idx)

Returns a slice of elements corresponding to the idx-th segment.

elements: a sliceable container (operands or results). raw_segments: an mlir.ir.Attribute, of DenseI32Array subclass containing sizes of the segments. idx: index of the segment.
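A plain-Python sketch of the segment arithmetic (hypothetical stand-in: a list of ints plays the role of the DenseI32Array attribute):

```python
def segmented_accessor(elements, segments, idx):
    # The idx-th segment starts after the sum of all preceding segment sizes.
    start = sum(segments[:idx])
    return elements[start:start + segments[idx]]

# Three segments of sizes 2, 3 and 1 over six operands; take the middle one.
middle = segmented_accessor(["a", "b", "c", "d", "e", "f"], [2, 3, 1], 1)
```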

mlir.dialects.linalg._ods_ir
class mlir.dialects.linalg._Dialect(descriptor: object)

Bases: _ods_ir

DIALECT_NAMESPACE = 'linalg'
class mlir.dialects.linalg.AbsOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.abs'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() → _ods_ir
outputs() → _ods_ir
result_tensors() → _ods_ir
region() → _ods_ir
mlir.dialects.linalg.abs(result_tensors, inputs, outputs, *, loc=None, ip=None) → _ods_ir | _ods_ir | AbsOp
class mlir.dialects.linalg.AddOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.add sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.add'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() → _ods_ir
outputs() → _ods_ir
result_tensors() → _ods_ir
region() → _ods_ir
mlir.dialects.linalg.add(result_tensors, inputs, outputs, *, loc=None, ip=None) → _ods_ir | _ods_ir | AddOp
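To make the explicit-broadcast contract concrete, here is a plain-Python sketch (illustrative only, not the bindings API): the broadcast is materialized first, so the add then sees two identically shaped operands, exactly as a linalg.broadcast + linalg.add sequence would.

```python
def broadcast_rows(vec, num_rows):
    # Explicit broadcast: materialize the vector once per row.
    return [list(vec) for _ in range(num_rows)]

def elementwise_add(lhs, rhs):
    # Shapes must already match; no implicit broadcast or cast happens here.
    assert len(lhs) == len(rhs) and all(len(a) == len(b) for a, b in zip(lhs, rhs))
    return [[a + b for a, b in zip(lr, rr)] for lr, rr in zip(lhs, rhs)]

bias = broadcast_rows([1, 2, 3], 2)                        # plays the role of linalg.broadcast
out = elementwise_add([[10, 20, 30], [40, 50, 60]], bias)  # plays the role of linalg.add
```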
class mlir.dialects.linalg.BatchMatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Broadcast and Transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so it must include maps for all arguments if specified.

Example Transpose:
```mlir
linalg.batch_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>, // transpose
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%arg0, %arg1 : memref<2x5x3xf32>,memref<2x5x7xf32>)
    outs(%arg2: memref<2x3x7xf32>)
```

Example Broadcast:
```mlir
linalg.batch_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (k)>,           // broadcast
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%arg0, %arg1 : memref<5xf32>, memref<2x5x7xf32>)
    outs(%arg2: memref<2x3x7xf32>)
```

Example Broadcast and Transpose:
```mlir
linalg.batch_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (m, k)>,        // broadcast
                     affine_map<(batch, m, n, k) -> (batch, n, k)>, // transpose
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%arg0, %arg1 : memref<3x5xf32>, memref<2x7x5xf32>)
    outs(%arg2: memref<2x3x7xf32>)
```
OPERATION_NAME = 'linalg.batch_matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir | None
cast() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.batch_matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | BatchMatmulOp
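The "Example Transpose" above can be cross-checked with a NumPy einsum, since each indexing map's result tuple corresponds directly to an einsum subscript. This is a sketch of the op's arithmetic only, not the MLIR bindings API:

```python
import numpy as np

# Shapes from the transpose example: A is (batch, k, m), B is (batch, k, n).
A = np.random.rand(2, 5, 3)
B = np.random.rand(2, 5, 7)

# The three indexing maps translate into einsum subscripts:
# (batch, k, m), (batch, k, n) -> (batch, m, n)
C = np.einsum('bkm,bkn->bmn', A, B)

# Reference: undo the transpose on A and use a plain batched matmul.
C_ref = np.matmul(A.transpose(0, 2, 1), B)
```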
class mlir.dialects.linalg.BatchMatvecOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.batch_matvec'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.batch_matvec(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | BatchMatvecOp
class mlir.dialects.linalg.BatchMmt4DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Apart from the outermost batch dimension, which has the same semantics as linalg.batch_matmul, the differences from linalg.batch_matmul in the non-batch dimensions are the same as those between linalg.mmt4d and linalg.matmul. See the description of linalg.mmt4d.

OPERATION_NAME = 'linalg.batch_mmt4d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.batch_mmt4d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | BatchMmt4DOp
class mlir.dialects.linalg.BatchReduceMatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Broadcast and Transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so it must include maps for all arguments if specified.

Example Transpose:

linalg.batch_reduce_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>, // transpose
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<2x5x3xf32>, memref<2x5x7xf32>)
    outs(%arg2: memref<3x7xf32>)

Example Broadcast:

linalg.batch_reduce_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (k)>,         // broadcast
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<5xf32>, memref<2x5x7xf32>)
    outs(%arg2: memref<3x7xf32>)

Example Broadcast and Transpose:

linalg.batch_reduce_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (m, k)>,        // broadcast
                     affine_map<(batch, m, n, k) -> (batch, n, k)>, // transpose
                     affine_map<(batch, m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<3x5xf32>, memref<2x7x5xf32>)
    outs(%arg2: memref<3x7xf32>)
OPERATION_NAME = 'linalg.batch_reduce_matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir | None
cast() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.batch_reduce_matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | BatchReduceMatmulOp
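In the default (no indexing_maps) form, both the batch dimension and k are reduced: they appear on the inputs but not on the 2-D output. A NumPy sketch of that arithmetic (an analogy, not the bindings API):

```python
import numpy as np

A = np.random.rand(2, 3, 5)   # (batch, m, k)
B = np.random.rand(2, 5, 7)   # (batch, k, n)

# batch and k are both reduction dims: present on the inputs, absent from the output.
C = np.einsum('bmk,bkn->mn', A, B)

# Reference: per-batch matmuls summed over the batch dimension.
C_ref = sum(A[b] @ B[b] for b in range(A.shape[0]))
```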
class mlir.dialects.linalg.BatchVecmatOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.batch_vecmat'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.batch_vecmat(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | BatchVecmatOp
class mlir.dialects.linalg.BroadcastOp(result, input, init, dimensions, *, loc=None, ip=None)

Bases: _ods_ir

Broadcast the input into the given shape by adding dimensions.

Example:

%bcast = linalg.broadcast
    ins(%input:tensor<16xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]
OPERATION_NAME = 'linalg.broadcast'
_ODS_REGIONS = (1, True)
input() _ods_ir
init() _ods_ir
dimensions() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

region() _ods_ir
mlir.dialects.linalg.broadcast(result, input, init, dimensions, *, loc=None, ip=None) _ods_ir | _ods_ir | BroadcastOp
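The example above can be mirrored in NumPy (a sketch of the semantics, not the bindings API). dimensions = [1] names the init dimension being added, so the 16-element input maps onto dimension 0 and is replicated along dimension 1:

```python
import numpy as np

x = np.arange(16.0)  # matches ins(%input : tensor<16xf32>)

# dimensions = [1]: insert a new axis at position 1, then replicate 64 times.
y = np.broadcast_to(x[:, np.newaxis], (16, 64))
```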
class mlir.dialects.linalg.CeilOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.ceil'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.ceil(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | CeilOp
class mlir.dialects.linalg.ContractOp(result_tensors, inputs, outputs, indexing_maps, *, cast=None, loc=None, ip=None)

Bases: _ods_ir

The semantics of contracting inputs A and B on top of C to produce output D is given by

D[H] = (SUM_{(I ∪ J) \ H} A[I] * B[J]) + C[H]

where I, J, and H are tuples of (pairwise distinct) dimension identifiers - meant to range over valid indices - corresponding to the results of the mandatory (projected permutation) indexing_maps for A, B and C. SUM_{dims} means reduce over all valid indices for the dimensions in the set dims (with I, J, and H treated as sets of dim identifiers).

The iteration space consists of all dimensions in I, J and H, i.e. the domain of each of the affine_maps. Like for einsums, the iteration type of each dim is inferred and is either:

  • reduction: the dim is used to index into A and B but not C. Per the above semantics, these dims will be contracted, i.e. reduced over.

  • parallel: the dim is used to index into C and at least one of A and B, and - deriving from matmul terminology - is either an “M-like” dim (if used on A and C), an “N-like” dim (if used on B and C) or a “batch”-dim (if used to index into A, B, and C).

For example, batch-matmul is given by I = (b, m, k), J = (b, k, n), H = (b, m, n) (with k as a contracting reduction-dimension while m, n and b have parallel iteration-type) and gets represented as:

%D = linalg.contract
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, m, k)>,
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%A, %B: tensor<?x?x?xf32>, tensor<?x?x?xf32>)
    outs(%C: tensor<?x?x?xf32>) -> tensor<?x?x?xf32>

Note that by permuting dims in the affine_maps' results, accesses to the inputs and output can be arbitrarily transposed. Similarly, arbitrary broadcasts can be achieved by leaving out dims on either input operand. For example, the following is a variant of batch-matmul with a transposition applied to A, while B's 2D matrix gets broadcast along the batch dim:

linalg.contract
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>,
                     affine_map<(batch, m, n, k) -> (k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%A, %B: memref<?x?x?xf32>, memref<?x?xf32>)
    outs(%C: memref<?x?x?xf32>)

Numeric casting is performed on the operands to the inner multiplication, promoting/truncating them to the same data type as the accumulator/output.

TODO: Allow control over the combining/accumulating op and possibly the multiplication op.

OPERATION_NAME = 'linalg.contract'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir
cast() _ods_ir | None
result_tensors() _ods_ir
combiner() _ods_ir
mlir.dialects.linalg.contract(result_tensors, inputs, outputs, indexing_maps, *, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | ContractOp
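For the batch-matmul case above, the defining formula reduces over k only (the single dim in (I ∪ J) \ H) and accumulates into C. A NumPy sketch of that arithmetic (not the bindings API):

```python
import numpy as np

A = np.random.rand(2, 3, 4)   # indexed by I = (batch, m, k)
B = np.random.rand(2, 4, 5)   # indexed by J = (batch, k, n)
C = np.random.rand(2, 3, 5)   # indexed by H = (batch, m, n)

# D[H] = (sum over (I ∪ J) \ H, here just k, of A[I] * B[J]) + C[H]
D = np.einsum('bmk,bkn->bmn', A, B) + C

# Reference: a plain batched matmul accumulated into C.
D_ref = np.matmul(A, B) + C
```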
class mlir.dialects.linalg.Conv1DNcwFcwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCW.

  • Kernel: FCW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_1d_ncw_fcw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_1d_ncw_fcw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv1DNcwFcwOp
class mlir.dialects.linalg.Conv1DNwcWcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_1d_nwc_wcf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_1d_nwc_wcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv1DNwcWcfOp
class mlir.dialects.linalg.Conv1DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_1d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_1d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Conv1DOp
class mlir.dialects.linalg.Conv2DNchwFchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCHW.

  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_nchw_fchw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_nchw_fchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNchwFchwOp
class mlir.dialects.linalg.Conv2DNchwFchwQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCHW.

  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_nchw_fchw_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_nchw_fchw_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNchwFchwQOp
class mlir.dialects.linalg.Conv2DNgchwFgchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NGCHW.

  • Kernel: FGCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_ngchw_fgchw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_ngchw_fgchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNgchwFgchwOp
class mlir.dialects.linalg.Conv2DNgchwGfchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NGCHW.

  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_ngchw_gfchw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_ngchw_gfchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNgchwGfchwOp
class mlir.dialects.linalg.Conv2DNgchwGfchwQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NGCHW.

  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_ngchw_gfchw_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_ngchw_gfchw_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNgchwGfchwQOp
class mlir.dialects.linalg.Conv2DNhwcFhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_nhwc_fhwc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_nhwc_fhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcFhwcOp
class mlir.dialects.linalg.Conv2DNhwcFhwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_nhwc_fhwc_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_nhwc_fhwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcFhwcQOp
class mlir.dialects.linalg.Conv2DNhwcHwcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_nhwc_hwcf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_nhwc_hwcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcHwcfOp
class mlir.dialects.linalg.Conv2DNhwcHwcfQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_nhwc_hwcf_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_nhwc_hwcf_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcHwcfQOp
class mlir.dialects.linalg.Conv2DNhwgcGfhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWGC.

  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d_nhwgc_gfhwc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_nhwgc_gfhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwgcGfhwcOp
class mlir.dialects.linalg.Conv2DNhwgcGfhwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWGC.

  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_2d_nhwgc_gfhwc_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d_nhwgc_gfhwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwgcGfhwcQOp
class mlir.dialects.linalg.Conv2DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_2d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_2d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DOp
class mlir.dialects.linalg.Conv3DNcdhwFcdhwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_3d_ncdhw_fcdhw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_3d_ncdhw_fcdhw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DNcdhwFcdhwOp
class mlir.dialects.linalg.Conv3DNdhwcDhwcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_3d_ndhwc_dhwcf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_3d_ndhwc_dhwcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DNdhwcDhwcfOp
class mlir.dialects.linalg.Conv3DNdhwcDhwcfQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

OPERATION_NAME = 'linalg.conv_3d_ndhwc_dhwcf_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_3d_ndhwc_dhwcf_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DNdhwcDhwcfQOp
class mlir.dialects.linalg.Conv3DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.conv_3d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.conv_3d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DOp
class mlir.dialects.linalg.CopyOp(result_tensors, inputs, outputs, *, cast=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.copy'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
cast() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.copy(result_tensors, inputs, outputs, *, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | CopyOp
class mlir.dialects.linalg.DepthwiseConv1DNcwCwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_1d_ncw_cw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_1d_ncw_cw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv1DNcwCwOp
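With the multiplier fixed at 1, each channel is convolved independently with its own 1-D kernel. The following plain-NumPy function is a sketch of that math under the NCW/CW layouts (an illustration with hypothetical helper names, not the bindings API):

```python
import numpy as np

def depthwise_conv_1d_ncw_cw(x, w, stride=1, dilation=1):
    """Sketch of the op's math, multiplier = 1:
    out[n, c, ow] = sum_kw x[n, c, ow*stride + kw*dilation] * w[c, kw]."""
    n, c, iw = x.shape
    _, kw = w.shape
    ow = (iw - dilation * (kw - 1) - 1) // stride + 1
    out = np.zeros((n, c, ow))
    for k in range(kw):
        # Strided window of the input that aligns with kernel tap k.
        sl = x[:, :, k * dilation : k * dilation + ow * stride : stride]
        out += sl * w[np.newaxis, :, k, np.newaxis]
    return out

# Tiny example: one batch, one channel, box filter of width 3.
out = depthwise_conv_1d_ncw_cw(np.arange(8.0).reshape(1, 1, 8), np.ones((1, 3)))
```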
class mlir.dialects.linalg.DepthwiseConv1DNwcWcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_1d_nwc_wc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_1d_nwc_wc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv1DNwcWcOp
class mlir.dialects.linalg.DepthwiseConv1DNwcWcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_1d_nwc_wcm'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_1d_nwc_wcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv1DNwcWcmOp
class mlir.dialects.linalg.DepthwiseConv2DNchwChwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nchw_chw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_2d_nchw_chw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNchwChwOp
class mlir.dialects.linalg.DepthwiseConv2DNhwcHwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcOp
class mlir.dialects.linalg.DepthwiseConv2DNhwcHwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwc_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcQOp
class mlir.dialects.linalg.DepthwiseConv2DNhwcHwcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwcm'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcmOp
class mlir.dialects.linalg.DepthwiseConv2DNhwcHwcmQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwcm_q'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwcm_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcmQOp
class mlir.dialects.linalg.DepthwiseConv3DNcdhwCdhwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_3d_ncdhw_cdhw'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_3d_ncdhw_cdhw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv3DNcdhwCdhwOp
class mlir.dialects.linalg.DepthwiseConv3DNdhwcDhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

OPERATION_NAME = 'linalg.depthwise_conv_3d_ndhwc_dhwc'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_3d_ndhwc_dhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv3DNdhwcDhwcOp
class mlir.dialects.linalg.DepthwiseConv3DNdhwcDhwcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.depthwise_conv_3d_ndhwc_dhwcm'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.depthwise_conv_3d_ndhwc_dhwcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv3DNdhwcDhwcmOp
class mlir.dialects.linalg.DivOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.div'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.div(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | DivOp
class mlir.dialects.linalg.DivUnsignedOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.div_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.div_unsigned(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | DivUnsignedOp
class mlir.dialects.linalg.DotOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.dot'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.dot(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | DotOp
class mlir.dialects.linalg.ElementwiseOp(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None)

Bases: _ods_ir

The attribute kind describes the arithmetic operation to perform. The operation kind can be unary (e.g. max), binary (e.g. add) or ternary (e.g. select).

By default, all indexing maps are identities. In the case of default indexing map, all input and output shapes must match. The number of dims in each of the identity maps is equal to the rank of the output type.

Affine-maps for operands and result are required to be provided by the user when a transpose and/or broadcast is needed on any operand. When a map is not provided, default identity maps are inferred for each operand.

Iterator-types are always all parallel. Iterator-types are needed for constructing the underlying structured op.

The number of dims of the iterator-types are inferred from the rank of the result type.

Example:

Defining a unary linalg.elementwise with default indexing-map:

%exp = linalg.elementwise
    kind=#linalg.elementwise_kind<exp>
    ins(%x : tensor<4x16x8xf32>)
    outs(%y: tensor<4x16x8xf32>) -> tensor<4x16x8xf32>

Defining a binary linalg.elementwise with user-defined indexing-map:

%add = linalg.elementwise
    kind=#linalg.elementwise_kind<add>
    indexing_maps = [#transpose, #broadcast, #identity]
    ins(%exp, %arg1 : tensor<4x16x8xf32>, tensor<4x16xf32>)
    outs(%arg2: tensor<4x8x16xf32>) -> tensor<4x8x16xf32>
OPERATION_NAME = 'linalg.elementwise'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
kind() _ods_ir
indexing_maps() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.elementwise(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None) _ods_ir | _ods_ir | ElementwiseOp
class mlir.dialects.linalg.ErfOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.erf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.erf(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | ErfOp
class mlir.dialects.linalg.ExpOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.exp'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.exp(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | ExpOp
class mlir.dialects.linalg.FillOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Works for arbitrary ranked output tensors since the operation performs scalar accesses only and is thus rank polymorphic. Numeric casting is performed on the value operand, promoting it to the same data type as the output.

OPERATION_NAME = 'linalg.fill'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.fill(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | FillOp
class mlir.dialects.linalg.FillRng2DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The operation generates pseudo-random numbers using a linear congruential generator. It provides no guarantees regarding the distribution of the generated random numbers. Instead of generating the random numbers sequentially, it instantiates one random number generator per data element and runs them in parallel. The seed operand and the indices of the data element seed the random number generation. The min and max operands limit the range of the generated random numbers.
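The per-element generator scheme can be sketched in plain Python. This is an illustrative model of the semantics only: the LCG constants and the exact seeding formula below are hypothetical, not the ones the actual lowering hardcodes.

```python
def fill_rng_2d(seed, minimum, maximum, rows, cols):
    """Model of linalg.fill_rng_2d: one LCG per element, seeded by the
    global seed and the element's indices, scaled into [minimum, maximum)."""
    # Hypothetical LCG constants; the real lowering uses its own.
    a, c, m = 1103515245, 12345, 2**31
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Seed each element independently so all elements can run in parallel.
            state = (seed + i * cols + j) % m
            state = (a * state + c) % m          # one LCG step
            out[i][j] = minimum + (maximum - minimum) * (state / m)
    return out

vals = fill_rng_2d(seed=42, minimum=0.0, maximum=1.0, rows=4, cols=4)
```

Because each element is seeded from its own indices, the result is deterministic and independent of iteration order, which is what makes the parallel instantiation valid.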

OPERATION_NAME = 'linalg.fill_rng_2d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.fill_rng_2d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | FillRng2DOp
class mlir.dialects.linalg.FloorOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.floor'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.floor(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | FloorOp
class mlir.dialects.linalg.GenericOp(result_tensors, inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None)

Bases: _ods_ir

Generic Linalg op form where the key properties of the computation are specified as attributes. In pretty form, a linalg.generic op is written as:

linalg.generic #trait_attribute
    ins(%A, %B : memref<?x?xf32, stride_specification>,
                 memref<?x?xf32, stride_specification>)
    outs(%C : memref<?x?xf32, stride_specification>)
    attrs = {other-optional-attributes}
    {region}

Where #trait_attribute is an alias of a dictionary attribute containing:

  • doc [optional]: a documentation string

  • indexing_maps: a list of AffineMapAttr, one AffineMapAttr per input and output view. Such an AffineMapAttr specifies the mapping between the loops and the indexing within each view.

  • library_call [optional]: a StringAttr containing the name of an external library function that the linalg.generic operation maps to. The external library is assumed to be dynamically linked and no strong compile-time guarantees are provided. In the absence of such a library call, linalg.generic will always lower to loops.

  • iterator_types: an ArrayAttr specifying the type of the enclosing loops. Each element of the list represents an iterator of one of the following types: parallel, reduction, window.

Example: Defining a #matmul_trait attribute in MLIR can be done as follows:

#matmul_accesses = [
  (m, n, k) -> (m, k),
  (m, n, k) -> (k, n),
  (m, n, k) -> (m, n)
]
#matmul_trait = {
  doc = "C(m, n) += A(m, k) * B(k, n)",
  indexing_maps = #matmul_accesses,
  library_call = "linalg_matmul",
  iterator_types = ["parallel", "parallel", "reduction"]
}

And can be reused in multiple places as:

linalg.generic #matmul_trait
  ins(%A, %B : memref<?x?xf32, stride_specification>,
               memref<?x?xf32, stride_specification>)
  outs(%C : memref<?x?xf32, stride_specification>)
  {other-optional-attributes} {
  ^bb0(%a: f32, %b: f32, %c: f32) :
    %d = arith.mulf %a, %b: f32
    %e = arith.addf %c, %d: f32
    linalg.yield %e : f32
}

This may lower to either:

call @linalg_matmul(%A, %B, %C) :
  (memref<?x?xf32, stride_specification>,
   memref<?x?xf32, stride_specification>,
   memref<?x?xf32, stride_specification>)
  -> ()

or IR resembling:

scf.for %m = %c0 to %M step %c1 {
  scf.for %n = %c0 to %N step %c1 {
    scf.for %k = %c0 to %K step %c1 {
      %a = load %A[%m, %k] : memref<?x?xf32, stride_specification>
      %b = load %B[%k, %n] : memref<?x?xf32, stride_specification>
      %c = load %C[%m, %n] : memref<?x?xf32, stride_specification>
      %d = arith.mulf %a, %b: f32
      %e = arith.addf %c, %d: f32
      store %e, %C[%m, %n] : memref<?x?xf32, stride_specification>
    }
  }
}
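The loop nest above is a plain accumulating matmul. A minimal Python model of that lowering (illustrative only, operating on nested lists rather than memrefs):

```python
def matmul_generic(A, B, C):
    """Reference model of the scf.for lowering: C(m, n) += A(m, k) * B(k, n)."""
    M, K = len(A), len(A[0])
    N = len(B[0])
    for m in range(M):          # scf.for %m
        for n in range(N):      # scf.for %n
            for k in range(K):  # scf.for %k (reduction)
                C[m][n] += A[m][k] * B[k][n]
    return C

C = matmul_generic([[1.0, 2.0], [3.0, 4.0]],
                   [[5.0, 6.0], [7.0, 8.0]],
                   [[0.0, 0.0], [0.0, 0.0]])  # → [[19.0, 22.0], [43.0, 50.0]]
```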
OPERATION_NAME = 'linalg.generic'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir
iterator_types() _ods_ir
doc() _ods_ir | None
library_call() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.generic(result_tensors, inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None) _ods_ir | _ods_ir | GenericOp
class mlir.dialects.linalg.IndexOp(dim, *, results=None, loc=None, ip=None)

Bases: _ods_ir

The linalg.index operation returns the iteration index of the immediately enclosing linalg structured operation for the iteration dimension dim. The dim attribute specifies the position of the accessed dimension in the indexing map domain.

Example:

#map = affine_map<(i, j) -> (i, j)>
linalg.generic {indexing_maps = [#map, #map],
                iterator_types = ["parallel", "parallel"]}
  outs(%I, %J : memref<?x?xindex>, memref<?x?xindex>) {
  ^bb0(%arg0 : index, %arg1 : index):
  // Access the outer iteration dimension i
  %i = linalg.index 0 : index
  // Access the inner iteration dimension j
  %j = linalg.index 1 : index
  linalg.yield %i, %j : index, index
}

This may lower to IR resembling:

%0 = dim %I, %c0 : memref<?x?xindex>
%1 = dim %I, %c1 : memref<?x?xindex>
scf.for %i = %c0 to %0 step %c1 {
  scf.for %j = %c0 to %1 step %c1 {
    store %i, %I[%i, %j] : memref<?x?xindex>
    store %j, %J[%i, %j] : memref<?x?xindex>
  }
}
OPERATION_NAME = 'linalg.index'
_ODS_REGIONS = (0, True)
dim() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects.linalg.index(dim, *, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects.linalg.PackOp(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, padding_value=None, outer_dims_perm=None, results=None, loc=None, ip=None)

Bases: _ods_ir

The “pack” operation converts a source tensor of rank n into a result tensor of rank n + k with a tiled and packed layout (maybe with padding) and optionally transposes the tiled source tensor dimensions.

inner_tiles (mandatory) specifies k tile sizes. These tile sizes correspond to the least significant (“inner”) result tensor dimension sizes, in the same order. Tile sizes can be static or dynamic.

inner_dims_pos (mandatory) specifies k source tensor dimensions that are being tiled, where 0 <= k <= n.

  • inner_dims_pos[i] specifies the source tensor dimension tiled by inner_tiles[i], where 0 <= i < k. All the values in inner_dims_pos are within [0, n).

  • The tiled dimensions (of size inner_tiles) are added to the end of the result tensor in the order in which they appear, i.e. shape(result)[rank(source) + i] = inner_tiles[i] for 0 <= i < k.

  • The following relationship for the tiled dimensions holds: shape(result)[inner_dims_pos[i]] = shape(source)[inner_dims_pos[i]] / inner_tiles[i], where ⌈/⌉ indicates CeilDiv.

Example: If inner_tiles = [16, 32], the result tensor has a shape of ...x16x32. If inner_dims_pos = [0, 1], the 0th source dimension is tiled by 16 and the 1st source dimension is tiled by 32. Other source dimensions (if any) are not tiled. If inner_dims_pos = [1, 0], the 1st dimension is tiled by 16 and the 0th dimension is tiled by 32.

Example:

// NC to NCnc
%0 = linalg.pack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<128x256xf32> -> tensor<16x8 x 8x32 xf32>
//                                             \  /   \  /
//                                 Outer Dims: 16x8   Inner Dims: 8x32

// CHW to CHWhw
%0 = linalg.pack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2]
    into %dest : tensor<3x20x24xf32> -> tensor<3x10x6 x 4x2 xf32>
//                                              \  /    \ /
//                                 Outer Dims: 3x10x6  Inner Dims: 4x2

// HCW to HCWhw
%0 = linalg.pack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2]
    into %dest : tensor<18x3x32xf32> -> tensor<9x3x8 x 4x2 xf32>
//                                              \  /   \ /
//                                 Outer Dims: 9x3x8  Inner Dims: 4x2

outer_dims_perm (optional) specifies a permutation for the outer dimensions. If specified, it must have n elements.

Example:

// CK to KCck
%0 = linalg.pack %source outer_dims_perm = [1, 0] inner_dims_pos = [0, 1]
    inner_tiles = [8, 32] into %dest
    : tensor<128x256xf32> -> tensor<8x16 x 8x32 xf32>
//                                  \  /
//            compare with "NC to NCnc": outer dims are transposed
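The shape rules above, including outer_dims_perm, can be modeled in a few lines of Python. The helper below is illustrative and not part of the bindings:

```python
def pack_result_shape(src_shape, inner_dims_pos, inner_tiles, outer_dims_perm=None):
    """Compute the linalg.pack result shape: each tiled outer dimension
    becomes CeilDiv(size, tile), outer dims are optionally permuted, and
    the tile sizes are appended at the end."""
    outer = list(src_shape)
    for pos, tile in zip(inner_dims_pos, inner_tiles):
        outer[pos] = -(-outer[pos] // tile)  # CeilDiv
    if outer_dims_perm is not None:
        outer = [outer[p] for p in outer_dims_perm]
    return outer + list(inner_tiles)

# NC to NCnc: tensor<128x256xf32> with inner_dims_pos = [0, 1], inner_tiles = [8, 32].
nc_to_ncnc = pack_result_shape([128, 256], [0, 1], [8, 32])        # [16, 8, 8, 32]
# CK to KCck: same tiling with outer_dims_perm = [1, 0].
ck_to_kcck = pack_result_shape([128, 256], [0, 1], [8, 32], [1, 0])  # [8, 16, 8, 32]
```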

padding_value specifies a padding value at the boundary on non-perfectly divisible dimensions. Padding is optional:

  • If absent, it is assumed that for all inner tiles, shape(source)[inner_dims_pos[i]] % inner_tiles[i] == 0, i.e. all inner tiles perfectly divide the corresponding outer dimension in the result tensor. It is UB if the tile does not perfectly divide the dimension.

  • If present, it will pad along high dimensions (high-padding) to make the tile complete. Note that it is not allowed to have artificial padding that is not strictly required by linalg.pack (i.e., padding past what is needed to complete the last tile along each packed dimension). It is UB if extra padding is requested. It is not possible to verify the requirements statically with dynamic shapes, so they are treated as UB.

Example:

%0 = linalg.pack %arg0 padding_value(%pad : f32) outer_dims_perm = [2, 1, 0]
    inner_dims_pos = [1] inner_tiles = [2] into %arg1
    : tensor<200x127x256xf32> -> tensor<256x64x200x2xf32>
//                 \
//                padded and tiled dim
//
// Source dimension 1 is tiled by 2. 2 does not divide 127 evenly, so 1
// padded element is added at the end (CeilDiv(127, 2) = 64).
//
// Note: Only tiled dimensions can be padded.

Invalid example that has artificial padding:

%0 = linalg.pack %src padding_value(%cst : f32) inner_dims_pos = [0]
    inner_tiles = [8] into %dest
    : tensor<9xf32> -> tensor<3x8xf32>
//                             \
//            expect tensor<2x8xf32> because CeilDiv(9, 8) = 2
OPERATION_NAME = 'linalg.pack'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (0, True)
source() _ods_ir
dest() _ods_ir
padding_value() _ods_ir | None
inner_tiles() _ods_ir
outer_dims_perm() _ods_ir | None
inner_dims_pos() _ods_ir
static_inner_tiles() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects.linalg.pack(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, padding_value=None, outer_dims_perm=None, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects.linalg.SoftmaxOp(result, input, output, dimension, *, loc=None, ip=None)

Bases: _ods_ir

linalg.softmax computes a numerically stable version of softmax.

For a given input tensor and a specified dimension d, compute:

  1. the max m along that dimension d

  2. f(x) = exp(x - m)

  3. sum f(x) along dimension d to get l(x).

  4. compute the final result f(x) / l(x).

This is an aggregate linalg operation that further reduces to a small DAG of structured operations.
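The four steps above are the standard numerically stable softmax. A rank-1 sketch in plain Python (illustrative; the op itself operates along dimension d of an n-D tensor):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a 1-D list, following the four steps:
    max, shifted exp, sum, divide."""
    m = max(xs)                          # 1. max m along the dimension
    f = [math.exp(x - m) for x in xs]    # 2. f(x) = exp(x - m)
    l = sum(f)                           # 3. l(x) = sum of f(x)
    return [v / l for v in f]            # 4. f(x) / l(x)

probs = softmax([1.0, 2.0, 3.0])
```

Subtracting the max keeps every exponent non-positive, so the computation cannot overflow even for large inputs such as `softmax([1000.0, 1000.0])`.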

Warning: Regarding the tiling capabilities, the implementation doesn’t check that the provided dimensions make sense. It is the responsibility of the transformation calling the tiling to ensure that the provided sizes for each dimension make sense with respect to the semantics of softmax.

OPERATION_NAME = 'linalg.softmax'
_ODS_REGIONS = (0, True)
input() _ods_ir
output() _ods_ir
dimension() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects.linalg.softmax(result, input, output, dimension, *, loc=None, ip=None) _ods_ir | _ods_ir | SoftmaxOp
class mlir.dialects.linalg.UnPackOp(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, outer_dims_perm=None, results=None, loc=None, ip=None)

Bases: _ods_ir

The “unpack” operation converts a source tensor of rank n with a tiled and packed layout to a result tensor of rank n - k.

inner_tiles (mandatory) specifies k tile sizes. These tile sizes correspond to the least significant (“inner”) source tensor dimension sizes. The behavior of this op is undefined if:

  • inner_tiles do not exactly match the corresponding source tensor dimension sizes.

  • Or, inner_tiles[i] does not evenly divide the size of dimension inner_dims_pos[i] (assuming that outer_dims_perm is not specified).

inner_dims_pos (mandatory) specifies k result tensor (i.e. unpacked tensor) dimensions that were tiled with the inner_tiles to create the packed source tensor. The source tensor (i.e. packed tensor) dimensions can be unpacked given inner_dims_pos as follows.

  • For 0 <= i < k, the following relationship holds: shape(result)[inner_dims_pos[i]] <= shape(source)[n-k+i] * shape(source)[inner_dims_pos[i]].

  • For 0 <= j < n-k and j not in inner_dims_pos, the following relationship holds: shape(result)[j] = shape(source)[j].

outer_dims_perm (optional) specifies a permutation for the outer dimensions. If specified, it must have n - k elements. If specified, this permutation is applied before combining any dimensions.

Note: the unpack operation may drop any padding introduced by the pack operation and hence the following holds: NumElementsOf(source) >= NumElementsOf(result).
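That padding-drop behavior can be sketched for the rank-1 case in plain Python (illustrative helper only):

```python
def unpack_1d(packed, result_len):
    """Model of a rank-1 linalg.unpack: flatten the (num_tiles x tile)
    layout back into a 1-D list and drop any padding past result_len."""
    flat = [x for tile in packed for x in tile]
    # NumElementsOf(source) >= NumElementsOf(result) must hold.
    assert len(flat) >= result_len
    return flat[:result_len]

# tensor<2x8xf32> -> tensor<9xf32>: the 7 padded trailing elements are dropped.
vals = unpack_1d([[0, 1, 2, 3, 4, 5, 6, 7],
                  [8, 0, 0, 0, 0, 0, 0, 0]], result_len=9)
```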

Examples:

// NCnc to NC:
%0 = linalg.unpack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<16x8 x 8x32 xf32> -> tensor<128x256xf32>
//                      \  /   \  /
//          Outer Dims: 16x8  Inner Dims: 8x32

// KCck to CK:
%0 = linalg.unpack %source outer_dims_perm = [1, 0] inner_dims_pos = [0, 1]
    inner_tiles = [8, 32]
    into %dest : tensor<8x16 x 8x32 xf32> -> tensor<128x256xf32>
//                      \  /   \  /
//          Outer Dims: 8x16  Inner Dims: 8x32

// CHWhw to CHW:
%0 = linalg.unpack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2]
    into %dest : tensor<3x10x6 x 4x2 xf32> -> tensor<3x20x24xf32>
//                       \  /    \ /
//          Outer Dims: 3x10x6  Inner Dims: 4x2

// HCWhw to HCW
%0 = linalg.unpack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2]
    into %dest : tensor<9x3x8 x 4x2 xf32> -> tensor<18x3x32xf32>
//                       \  /   \ /
//          Outer Dims: 9x3x8   Inner Dims: 4x2
OPERATION_NAME = 'linalg.unpack'
_ODS_REGIONS = (0, True)
source() _ods_ir
dest() _ods_ir
inner_tiles() _ods_ir
outer_dims_perm() _ods_ir | None
inner_dims_pos() _ods_ir
static_inner_tiles() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects.linalg.unpack(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, outer_dims_perm=None, results=None, loc=None, ip=None) _ods_ir
class mlir.dialects.linalg.WinogradFilterTransformOp(result, filter, output, fmr, *, loc=None, ip=None)

Bases: _ods_ir

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.

The algorithm F(m x m, r x r) is

Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A

The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.

This operator is defined to represent the high level concept of filter transformation (G x g x G^T) in the Winograd Conv2D algorithm.

OPERATION_NAME = 'linalg.winograd_filter_transform'
_ODS_REGIONS = (0, True)
filter() _ods_ir
output() _ods_ir
fmr() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects.linalg.winograd_filter_transform(result, filter, output, fmr, *, loc=None, ip=None) _ods_ir
class mlir.dialects.linalg.WinogradInputTransformOp(result, input, output, fmr, *, loc=None, ip=None)

Bases: _ods_ir

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.

The algorithm F(m x m, r x r) is

Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A

The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.

This operator is defined to represent the high level concept of input transformation (B^T x d x B) in the Winograd Conv2D algorithm.

OPERATION_NAME = 'linalg.winograd_input_transform'
_ODS_REGIONS = (0, True)
input() _ods_ir
output() _ods_ir
fmr() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects.linalg.winograd_input_transform(result, input, output, fmr, *, loc=None, ip=None) _ods_ir
class mlir.dialects.linalg.WinogradOutputTransformOp(result, value, output, fmr, *, loc=None, ip=None)

Bases: _ods_ir

The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.

The algorithm F(m x m, r x r) is

Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A

The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.

This operator is defined to represent the high level concept of output transformation (A^T x y x A) in the Winograd Conv2D algorithm.

OPERATION_NAME = 'linalg.winograd_output_transform'
_ODS_REGIONS = (0, True)
value() _ods_ir
output() _ods_ir
fmr() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mlir.dialects.linalg.winograd_output_transform(result, value, output, fmr, *, loc=None, ip=None) _ods_ir
class mlir.dialects.linalg.YieldOp(values, *, loc=None, ip=None)

Bases: _ods_ir

linalg.yield is a special terminator operation for blocks inside regions in linalg generic ops. It returns values to the immediately enclosing linalg generic op.

Example:

linalg.yield %f0, %f1 : f32, f32
OPERATION_NAME = 'linalg.yield'
_ODS_REGIONS = (0, True)
values() _ods_ir
mlir.dialects.linalg.yield_(values, *, loc=None, ip=None) YieldOp
class mlir.dialects.linalg.LogOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.log'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.log(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | LogOp
class mlir.dialects.linalg.MapOp(result, inputs, init, *, loc=None, ip=None)

Bases: _ods_ir

Models elementwise operations on tensors in terms of arithmetic operations on the corresponding elements.

Example:

%add = linalg.map
    ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
    outs(%init: tensor<64xf32>)
    (%lhs_elem: f32, %rhs_elem: f32) {
      %0 = arith.addf %lhs_elem, %rhs_elem: f32
      linalg.yield %0: f32
    }

Shortened print form is available for simple maps where the body contains exactly two operations (the payload operation and a yield), the payload operation has the same number of operands as block arguments with operands matching block arguments in order, and the yield operand is the result of the payload operation.

The example above will be printed using the shortened form as:

%add = linalg.map { arith.addf }
    ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
    outs(%init: tensor<64xf32>)
OPERATION_NAME = 'linalg.map'
_ODS_REGIONS = (1, True)
inputs() _ods_ir
init() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

mapper() _ods_ir
mlir.dialects.linalg.map(result, inputs, init, *, loc=None, ip=None) _ods_ir | _ods_ir | MapOp
class mlir.dialects.linalg.MatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

Broadcast and transpose semantics can be applied by specifying the explicit attribute ‘indexing_maps’ as shown below. This is a list attribute, so the list must include all the maps if specified.

Example Transpose:

linalg.matmul
    indexing_maps = [affine_map<(m, n, k) -> (k, m)>, // transpose
                     affine_map<(m, n, k) -> (k, n)>,
                     affine_map<(m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<5x3xf32>,memref<5x7xf32>)
    outs(%arg2: memref<3x7xf32>)

Example Broadcast:

linalg.matmul
   indexing_maps = [affine_map<(m, n, k) -> (k)>,     // broadcast
                    affine_map<(m, n, k) -> (k, n)>,
                    affine_map<(m, n, k) -> (m, n)>]
   ins(%arg0, %arg1 : memref<3xf32>, memref<5x7xf32>)
   outs(%arg2: memref<3x7xf32>)

Example Broadcast and transpose:

linalg.matmul
    indexing_maps = [affine_map<(m, n, k) -> (k, m)>, // transpose
                     affine_map<(m, n, k) -> (k)>,    // broadcast
                     affine_map<(m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<5x3xf32>, memref<7xf32>)
    outs(%arg2: memref<3x7xf32>)
OPERATION_NAME = 'linalg.matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
indexing_maps() _ods_ir | None
cast() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | MatmulOp
class mlir.dialects.linalg.MatvecOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.matvec'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.matvec(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MatvecOp
class mlir.dialects.linalg.MaxOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.max sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.max(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MaxOp
class mlir.dialects.linalg.MinOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.min sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.min'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.min(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MinOp
class mlir.dialects.linalg.Mmt4DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Differences from linalg.matmul:

  • The right-hand side is transposed, whence the ‘t’ in ‘mmt’.

  • The input and output tensors have a 4D shape instead of a 2D shape. They are interpreted as 2D matrices with one level of 2D tile subdivision, whence the 2+2=4 dimensions. The inner tile dimensions are identified with ‘0’ suffixes below; for instance, the LHS matrix shape (M, K, M0, K0) reads as: MxK tiles, each of shape M0xK0.
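A reference model of the tiled-and-transposed semantics on nested Python lists, with deliberately tiny sizes (illustrative only; real uses pick tile sizes to match hardware vector widths):

```python
def mmt4d(lhs, rhs, acc):
    """Reference model of linalg.mmt4d on nested lists.
    lhs: (M, K, M0, K0) tiles; rhs: (N, K, N0, K0) tiles (note the
    transposed, tile-major RHS layout); acc: (M, N, M0, N0)."""
    M, K = len(lhs), len(lhs[0])
    N = len(rhs)
    M0, K0 = len(lhs[0][0]), len(lhs[0][0][0])
    N0 = len(rhs[0][0])
    for m in range(M):
        for n in range(N):
            for k in range(K):
                for m0 in range(M0):
                    for n0 in range(N0):
                        for k0 in range(K0):
                            # Both operands index k0 last: RHS is transposed.
                            acc[m][n][m0][n0] += lhs[m][k][m0][k0] * rhs[n][k][n0][k0]
    return acc

# One 2x2 tile on each side (M = K = N = 1, M0 = K0 = N0 = 2).
out = mmt4d([[[[1, 2], [3, 4]]]],
            [[[[5, 6], [7, 8]]]],
            [[[[0, 0], [0, 0]]]])
```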

OPERATION_NAME = 'linalg.mmt4d'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.mmt4d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Mmt4DOp
class mlir.dialects.linalg.MulOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.mul sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.mul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.mul(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MulOp
class mlir.dialects.linalg.NegFOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.negf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.negf(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | NegFOp
class mlir.dialects.linalg.PoolingNchwMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nchw_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nchw_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNchwMaxOp
class mlir.dialects.linalg.PoolingNchwSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCHW.

  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nchw_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nchw_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNchwSumOp
class mlir.dialects.linalg.PoolingNcwMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ncw_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_ncw_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNcwMaxOp
class mlir.dialects.linalg.PoolingNcwSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NCW.

  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ncw_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_ncw_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNcwSumOp
class mlir.dialects.linalg.PoolingNdhwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ndhwc_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_ndhwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNdhwcMaxOp
class mlir.dialects.linalg.PoolingNdhwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ndhwc_min'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_ndhwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNdhwcMinOp
class mlir.dialects.linalg.PoolingNdhwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_ndhwc_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_ndhwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNdhwcSumOp
class mlir.dialects.linalg.PoolingNhwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nhwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMaxOp
class mlir.dialects.linalg.PoolingNhwcMaxUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_max_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nhwc_max_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMaxUnsignedOp
class mlir.dialects.linalg.PoolingNhwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_min'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nhwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMinOp
class mlir.dialects.linalg.PoolingNhwcMinUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_min_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nhwc_min_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMinUnsignedOp
class mlir.dialects.linalg.PoolingNhwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NHWC.

  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nhwc_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nhwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcSumOp
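The stride/dilation indexing shared by these pooling ops can be sketched in plain Python (an illustration of the semantics only, not the binding API; the helper name is hypothetical): window element (kh, kw) of output position (oh, ow) reads input position (oh*sh + kh*dh, ow*sw + kw*dw).

```python
# Pure-Python sketch of sum pooling over an NHWC input with an HW kernel.
# Semantics illustration only; `pool_nhwc_sum` is a hypothetical helper.
def pool_nhwc_sum(inp, kh, kw, sh=1, sw=1, dh=1, dw=1):
    n, h, w, c = len(inp), len(inp[0]), len(inp[0][0]), len(inp[0][0][0])
    oh = (h - dh * (kh - 1) - 1) // sh + 1
    ow = (w - dw * (kw - 1) - 1) // sw + 1
    out = [[[[0] * c for _ in range(ow)] for _ in range(oh)] for _ in range(n)]
    for b in range(n):
        for i in range(oh):
            for j in range(ow):
                for ki in range(kh):
                    for kj in range(kw):
                        for ch in range(c):
                            # Window element (ki, kj) reads the strided,
                            # dilated input position.
                            out[b][i][j][ch] += inp[b][i*sh + ki*dh][j*sw + kj*dw][ch]
    return out

inp = [[[[1]] * 4] * 4]                       # shape 1x4x4x1, all ones
out = pool_nhwc_sum(inp, kh=2, kw=2, sh=2, sw=2)
print(out)  # 1x2x2x1: each 2x2 window sums four ones
```

Swapping `+=` for `max`/`min` (with a suitable init value in `outs`) gives the max/min variants.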
class mlir.dialects.linalg.PoolingNwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nwc_max'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMaxOp
class mlir.dialects.linalg.PoolingNwcMaxUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nwc_max_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nwc_max_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMaxUnsignedOp
class mlir.dialects.linalg.PoolingNwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nwc_min'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMinOp
class mlir.dialects.linalg.PoolingNwcMinUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nwc_min_unsigned'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nwc_min_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMinUnsignedOp
class mlir.dialects.linalg.PoolingNwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)

Bases: _ods_ir

Layout:

  • Input: NWC.

  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.pooling_nwc_sum'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
strides() _ods_ir | None
dilations() _ods_ir | None
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.pooling_nwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcSumOp
class mlir.dialects.linalg.PowFOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Only applies to floating point values.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.powf sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.powf'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.powf(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | PowFOp
class mlir.dialects.linalg.QuantizedBatchMatmulOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.

OPERATION_NAME = 'linalg.quantized_batch_matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.quantized_batch_matmul(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | QuantizedBatchMatmulOp
class mlir.dialects.linalg.QuantizedMatmulOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.

OPERATION_NAME = 'linalg.quantized_matmul'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.quantized_matmul(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | QuantizedMatmulOp
class mlir.dialects.linalg.ReciprocalOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.reciprocal'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.reciprocal(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | ReciprocalOp
class mlir.dialects.linalg.ReduceOp(result, inputs, inits, dimensions, *, loc=None, ip=None)

Bases: _ods_ir

Executes combiner on the dimensions of inputs and returns the reduced result. The dimensions attribute needs to list the reduction dimensions in increasing order.

Example:

%reduce = linalg.reduce
    ins(%input:tensor<16x32x64xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]
    (%in: f32, %out: f32) {
      %0 = arith.addf %out, %in: f32
      linalg.yield %0: f32
    }

Shortened print form is available for simple reduces where:

  • the body contains exactly two operations: the payload operation and a yield;

  • the payload operation has the same number of operands as block arguments;

  • the first block argument (init) is the last operand of the payload operation, with the remaining operands matching the remaining block arguments in order;

  • the yield operand is the result of the payload operation.

The example above will be printed using the shortened form as:

%reduce = linalg.reduce { arith.addf }
    ins(%input:tensor<16x32x64xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]
OPERATION_NAME = 'linalg.reduce'
_ODS_REGIONS = (1, True)
inputs() _ods_ir
inits() _ods_ir
dimensions() _ods_ir
combiner() _ods_ir
mlir.dialects.linalg.reduce(result, inputs, inits, dimensions, *, loc=None, ip=None) _ods_ir | _ods_ir | ReduceOp
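The MLIR example above can be mirrored in plain Python to show what the op computes (a sketch of the semantics only, not the builder API; the helper name is hypothetical):

```python
# Reduce a 3-D nested list over dimension 1 into a 2-D init, applying the
# combiner elementwise -- the analogue of the linalg.reduce example above,
# with smaller shapes (2x3x4 -> 2x4).
def reduce_dim1(inp, init, combiner):
    out = [row[:] for row in init]          # copy the init (the `outs` operand)
    for i, plane in enumerate(inp):         # dimension 0
        for row in plane:                   # dimension 1: the reduction dim
            for k, v in enumerate(row):     # dimension 2
                out[i][k] = combiner(out[i][k], v)
    return out

inp = [[[1] * 4 for _ in range(3)] for _ in range(2)]   # 2x3x4 of ones
init = [[0] * 4 for _ in range(2)]                      # 2x4 of zeros
res = reduce_dim1(inp, init, lambda acc, x: acc + x)    # arith.addf analogue
print(res)  # each element sums three ones
```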
class mlir.dialects.linalg.RoundOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.round'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.round(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | RoundOp
class mlir.dialects.linalg.RsqrtOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.rsqrt'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.rsqrt(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | RsqrtOp
class mlir.dialects.linalg.SelectOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.select sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.select'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.select(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SelectOp
class mlir.dialects.linalg.SqrtOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.sqrt'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.sqrt(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SqrtOp
class mlir.dialects.linalg.SquareOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.square'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.square(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SquareOp
class mlir.dialects.linalg.SubOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.sub sequence can be lowered to a linalg.generic with different affine maps for the two operands.

OPERATION_NAME = 'linalg.sub'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.sub(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SubOp
class mlir.dialects.linalg.TanhOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

No numeric casting is performed on the input operand.

OPERATION_NAME = 'linalg.tanh'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.tanh(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | TanhOp
class mlir.dialects.linalg.TransposeOp(result, input, init, permutation, *, loc=None, ip=None)

Bases: _ods_ir

Permutes the dimensions of input according to the given permutation. dim(result, i) = dim(input, permutation[i])

This op actually moves data, unlike memref.transpose, which is a metadata-only operation that produces a transposed “view”.

Example:

%transpose = linalg.transpose
    ins(%input:tensor<16x64xf32>)
    outs(%init:tensor<64x16xf32>)
    permutation = [1, 0]
OPERATION_NAME = 'linalg.transpose'
_ODS_REGIONS = (1, True)
input() _ods_ir
init() _ods_ir
permutation() _ods_ir
result() _ods_ir

Shortcut to get an op result if it has only one (throws an error otherwise).

region() _ods_ir
mlir.dialects.linalg.transpose(result, input, init, permutation, *, loc=None, ip=None) _ods_ir | _ods_ir | TransposeOp
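The shape rule dim(result, i) = dim(input, permutation[i]) can be checked in plain Python (semantics illustration only, not the builder API; the helper name is hypothetical):

```python
# Materialize the transpose of a 2-D nested list -- data is moved, matching
# the op's semantics rather than a metadata-only view.
def transpose2d(inp):
    return [list(col) for col in zip(*inp)]

inp = [[1, 2, 3], [4, 5, 6]]    # shape 2x3
out = transpose2d(inp)           # shape 3x2
permutation = [1, 0]
shape = lambda m: (len(m), len(m[0]))
# dim(result, i) == dim(input, permutation[i])
assert all(shape(out)[i] == shape(inp)[permutation[i]] for i in range(2))
print(out)  # [[1, 4], [2, 5], [3, 6]]
```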
class mlir.dialects.linalg.VecmatOp(result_tensors, inputs, outputs, *, loc=None, ip=None)

Bases: _ods_ir

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

OPERATION_NAME = 'linalg.vecmat'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (1, True)
inputs() _ods_ir
outputs() _ods_ir
result_tensors() _ods_ir
region() _ods_ir
mlir.dialects.linalg.vecmat(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | VecmatOp
mlir.dialects.linalg.register_attribute_builder(kind, replace=False)
mlir.dialects.linalg._ods_ir
class mlir.dialects.linalg.BinaryFn

Bases: enum.IntEnum

allowed 32-bit signless integer cases: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9

add = 0
sub = 1
mul = 2
div = 3
div_unsigned = 4
max_signed = 5
min_signed = 6
max_unsigned = 7
min_unsigned = 8
powf = 9
__str__()

Return str(self).

mlir.dialects.linalg._binaryfn(x, context)
class mlir.dialects.linalg.ElementwiseArityGroup

Bases: enum.IntEnum

allowed 32-bit signless integer cases: 1, 2, 3

Unary = 1
Binary = 2
Ternary = 3
__str__()

Return str(self).

mlir.dialects.linalg._elementwisearitygroup(x, context)
class mlir.dialects.linalg.ElementwiseCaseLimits

Bases: enum.IntEnum

allowed 32-bit signless integer cases:

LastUnary = 13
LastBinary = 23
LastTernary = 24
__str__()

Return str(self).

mlir.dialects.linalg._elementwisecaselimits(x, context)
class mlir.dialects.linalg.ElementwiseKind

Bases: enum.IntEnum

allowed 32-bit signless integer cases: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23

exp = 0
log = 1
abs = 2
ceil = 3
floor = 4
negf = 5
reciprocal = 6
round = 7
sqrt = 8
rsqrt = 9
square = 10
tanh = 11
erf = 12
add = 13
sub = 14
mul = 15
div = 16
div_unsigned = 17
max_signed = 18
min_signed = 19
max_unsigned = 20
min_unsigned = 21
powf = 22
select = 23
__str__()

Return str(self).

mlir.dialects.linalg._elementwisekind(x, context)
class mlir.dialects.linalg.IteratorType

Bases: enum.IntEnum

Iterator type

parallel = 0
reduction = 1
__str__()

Return str(self).

mlir.dialects.linalg._iteratortype(x, context)
class mlir.dialects.linalg.TernaryFn

Bases: enum.IntEnum

allowed 32-bit signless integer cases: 0

select = 0
__str__()

Return str(self).

mlir.dialects.linalg._ternaryfn(x, context)
class mlir.dialects.linalg.TypeFn

Bases: enum.IntEnum

allowed 32-bit signless integer cases: 0, 1

cast_signed = 0
cast_unsigned = 1
__str__()

Return str(self).

mlir.dialects.linalg._typefn(x, context)
class mlir.dialects.linalg.UnaryFn

Bases: enum.IntEnum

allowed 32-bit signless integer cases: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

exp = 0
log = 1
abs = 2
ceil = 3
floor = 4
negf = 5
reciprocal = 6
round = 7
sqrt = 8
rsqrt = 9
square = 10
tanh = 11
erf = 12
__str__()

Return str(self).

mlir.dialects.linalg._unaryfn(x, context)
class mlir.dialects.linalg.WinogradConv2DFmr

Bases: enum.IntEnum

allowed 32-bit signless integer cases: 0, 1, 2

F_2_3 = 0
F_4_3 = 1
F_2_5 = 2
__str__()

Return str(self).

mlir.dialects.linalg._winogradconv2dfmr(x, context)
mlir.dialects.linalg._binaryfnattr(x, context)
mlir.dialects.linalg._elementwisekindattr(x, context)
mlir.dialects.linalg._iteratortypeenum(x, context)
mlir.dialects.linalg._ternaryfnattr(x, context)
mlir.dialects.linalg._typefnattr(x, context)
mlir.dialects.linalg._unaryfnattr(x, context)
mlir.dialects.linalg.T1
mlir.dialects.linalg.T2
mlir.dialects.linalg.Batch
mlir.dialects.linalg.copy(I=TensorDef(T1), O=TensorDef(U, output=True), cast=TypeFnAttrDef(default=TypeFn.cast_signed))

Copies the tensor elementwise.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.exp(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies exp(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.log(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies log(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.abs(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies abs(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.ceil(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies ceil(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.floor(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies floor(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.negf(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies negf(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.reciprocal(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies reciprocal(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.round(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies round(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.sqrt(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies sqrt(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.rsqrt(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies rsqrt(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.square(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies square(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.tanh(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies tanh(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.erf(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies erf(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.add(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Adds two tensors elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.add sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.sub(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Subtracts two tensors elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.sub sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.mul(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Multiplies two tensors elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.mul sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.div(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Divides the first tensor by the second tensor, elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.div_unsigned(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Divides the first tensor by the second tensor, elementwise. For integer types, performs an unsigned division.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.max(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Takes the max (signed) between two inputs, elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.max sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.min(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Takes the min (signed) between two inputs, elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.min sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.powf(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Computes powf(lhs, rhs) between two inputs, elementwise. For powf(arg, 2) use linalg.square.

Only applies to floating point values.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.powf sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.select(cond=TensorDef(U), lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Chooses one value based on a binary condition supplied as its first operand.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.select sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.quantized_matmul(A=TensorDef(T1, S.M, S.K), B=TensorDef(T2, S.K, S.N), AZp=ScalarDef(I32), BZp=ScalarDef(I32), C=TensorDef(U, S.M, S.N, output=True))

Performs a matrix multiplication of two 2D inputs.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.
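The zero-point adjustment can be sketched in plain Python (semantics illustration only; real code accumulates in a type wider than the quantized operands, per the casting described above):

```python
# C[m][n] += (A[m][k] - AZp) * (B[k][n] - BZp): each operand is shifted by
# its zero point before the multiply, then accumulated into C.
def quantized_matmul(A, B, a_zp, b_zp, C):
    for m in range(len(A)):
        for n in range(len(B[0])):
            for k in range(len(B)):
                C[m][n] += (A[m][k] - a_zp) * (B[k][n] - b_zp)
    return C

C = quantized_matmul([[10, 12], [14, 16]], [[3, 5], [7, 9]],
                     a_zp=8, b_zp=4, C=[[0, 0], [0, 0]])
print(C)  # [[10, 22], [18, 46]]
```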

mlir.dialects.linalg.mmt4d(lhs=TensorDef(TV.LhsType, S.M, S.K, S.M0, S.K0), rhs=TensorDef(TV.RhsType, S.N, S.K, S.N0, S.K0), accum=TensorDef(TV.AccumType, S.M, S.N, S.M0, S.N0, output=True))

Performs a matrix-matrix-transpose multiplication of two 4D inputs.

Differences from linalg.matmul:

  • The right hand side is transposed, whence the ‘t’ in ‘mmt’.

  • The input and output tensors have a 4D shape instead of a 2D shape. They are interpreted as 2D matrices with one level of 2D tile subdivision, whence the 2+2=4 dimensions. The inner tile dimensions are identified with ‘0’ suffixes below, for instance the LHS matrix shape (M, K, M0, K0) reads as: MxK tiles, each of shape M0xK0.
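The tiled indexing can be written out in plain Python (a sketch of the semantics only, not the builder API; the helper names are hypothetical). Note the RHS tile layout is (N, K, N0, K0), i.e. transposed relative to a plain matmul:

```python
# accum[m][n][m0][n0] accumulates lhs[m][k][m0][k0] * rhs[n][k][n0][k0].
def mmt4d(lhs, rhs, accum):
    M, K = len(lhs), len(lhs[0])
    N = len(rhs)
    M0, K0 = len(lhs[0][0]), len(lhs[0][0][0])
    N0 = len(rhs[0][0])
    for m in range(M):
        for n in range(N):
            for k in range(K):
                for m0 in range(M0):
                    for n0 in range(N0):
                        for k0 in range(K0):
                            accum[m][n][m0][n0] += lhs[m][k][m0][k0] * rhs[n][k][n0][k0]
    return accum

ones4 = lambda a, b, c, d: [[[[1] * d for _ in range(c)] for _ in range(b)] for _ in range(a)]
zeros4 = lambda a, b, c, d: [[[[0] * d for _ in range(c)] for _ in range(b)] for _ in range(a)]
# All-ones 2x2 grid of 2x2 tiles: every accumulator element sums K * K0 = 4 products.
out = mmt4d(ones4(2, 2, 2, 2), ones4(2, 2, 2, 2), zeros4(2, 2, 2, 2))
print(out[0][0])  # [[4, 4], [4, 4]]
```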

mlir.dialects.linalg.batch_mmt4d(lhs=TensorDef(TV.LhsType, Batch, S.M, S.K, S.M0, S.K0), rhs=TensorDef(TV.RhsType, Batch, S.N, S.K, S.N0, S.K0), accum=TensorDef(TV.AccumType, Batch, S.M, S.N, S.M0, S.N0, output=True))

Performs a batched matrix-matrix-transpose multiplication of two batched-4D (5D) inputs.

Besides the outermost batch dimension, which has the same semantics as linalg.batch_matmul, the differences from linalg.batch_matmul in the non-batch dimensions are the same as those between linalg.mmt4d and linalg.matmul. See the description of linalg.mmt4d.

mlir.dialects.linalg.quantized_batch_matmul(A=TensorDef(T1, Batch, S.M, S.K), B=TensorDef(T2, Batch, S.K, S.N), AZp=ScalarDef(I32), BZp=ScalarDef(I32), C=TensorDef(U, Batch, S.M, S.N, output=True))

Performs a batched matrix multiplication of two 3D inputs.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.

mlir.dialects.linalg.matvec(A=TensorDef(T1, S.M, S.N), y=TensorDef(T2, S.N), x=TensorDef(U, S.M, output=True))

Performs a matrix-vector multiplication.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.vecmat(y=TensorDef(T1, S.M), A=TensorDef(T2, S.M, S.N), x=TensorDef(U, S.N, output=True))

Performs a vector-matrix multiplication.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
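The vector-matrix indexing can be sketched in plain Python (semantics only, not the builder API; the helper name is hypothetical):

```python
# x[n] = sum over m of y[m] * A[m][n]: the vector multiplies the matrix
# from the left.
def vecmat(y, A):
    return [sum(y[m] * A[m][n] for m in range(len(y))) for n in range(len(A[0]))]

res = vecmat([1, 2], [[3, 4], [5, 6]])
print(res)  # [1*3 + 2*5, 1*4 + 2*6] = [13, 16]
```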

mlir.dialects.linalg.batch_matvec(A=TensorDef(T1, Batch, S.M, S.K), B=TensorDef(T2, Batch, S.K), C=TensorDef(U, Batch, S.M, output=True))

Performs a batched matrix-vector multiplication.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.batch_vecmat(A=TensorDef(T1, Batch, S.K), B=TensorDef(T2, Batch, S.K, S.N), C=TensorDef(U, Batch, S.N, output=True))

Performs a batched vector-matrix multiplication.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.dot(A=TensorDef(T1, S.M), B=TensorDef(T2, S.M), C=TensorDef(U, output=True))

Performs a dot product of two vectors to a scalar result.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_1d(I=TensorDef(T1, S.OW + S.KW), K=TensorDef(T2, S.KW), O=TensorDef(U, S.OW, output=True))

Performs 1-D convolution with no channels.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
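The channel-free, unstrided case can be sketched in plain Python (a reference model of the semantics, not the bindings API): each output position accumulates a sliding-window product of the input with the kernel.

```python
def conv_1d(I, K, O):
    # Reference semantics: O[ow] += sum over kw of I[ow + kw] * K[kw].
    # O is the accumulator and is updated in place.
    for ow in range(len(O)):
        for kw in range(len(K)):
            O[ow] += I[ow + kw] * K[kw]
    return O
```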

mlir.dialects.linalg.conv_2d(I=TensorDef(T1, S.OH + S.KH, S.OW + S.KW), K=TensorDef(T2, S.KH, S.KW), O=TensorDef(U, S.OH, S.OW, output=True))

Performs 2-D convolution with no channels.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_3d(I=TensorDef(T1, S.OD + S.KD, S.OH + S.KH, S.OW + S.KW), K=TensorDef(T2, S.KD, S.KH, S.KW), O=TensorDef(U, S.OD, S.OH, S.OW, output=True))

Performs 3-D convolution with no channels.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_1d_nwc_wcf(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, S.C, S.F), O=TensorDef(U, S.N, S.OW, S.F, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs 1-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
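The role of the strides and dilations attributes in the input indexing (the `S.OW * S.SW + S.KW * S.DW` term of the signature) can be sketched in plain Python; names and list-based tensors here are illustrative, not the bindings API:

```python
def conv_1d_nwc_wcf(I, K, O, stride=1, dilation=1):
    """O[n,ow,f] += I[n, ow*stride + kw*dilation, c] * K[kw,c,f].

    I: N x W x C, K: KW x C x F, O: N x OW x F accumulator (in place).
    The stride scales the output index, the dilation scales the kernel index.
    """
    for n in range(len(O)):
        for ow in range(len(O[0])):
            for f in range(len(O[0][0])):
                for kw in range(len(K)):
                    for c in range(len(K[0])):
                        O[n][ow][f] += I[n][ow * stride + kw * dilation][c] * K[kw][c][f]
    return O
```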

mlir.dialects.linalg.conv_1d_ncw_fcw(I=TensorDef(T1, S.N, S.C, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KW), O=TensorDef(U, S.N, S.F, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs 1-D convolution.

Layout:

  • Input: NCW.

  • Kernel: FCW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_2d_nhwc_hwcf(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, S.C, S.F), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution.

Layout:

  • Input: NHWC.

  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_2d_nhwc_fhwc(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.F, S.KH, S.KW, S.C), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution.

Layout:

  • Input: NHWC.

  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_2d_nhwc_hwcf_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, S.C, S.F), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution with zero point offsets.

Layout:

  • Input: NHWC.

  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.conv_2d_nhwc_fhwc_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.F, S.KH, S.KW, S.C), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution with zero point offsets.

Layout:

  • Input: NHWC.

  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.conv_2d_nchw_fchw_q(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KH, S.KW), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.F, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution with zero point offsets.

Layout:

  • Input: NCHW.

  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.conv_2d_nchw_fchw(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KH, S.KW), O=TensorDef(U, S.N, S.F, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution.

Layout:

  • Input: NCHW.

  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_2d_ngchw_fgchw(I=TensorDef(T1, S.N, S.G, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.FG, S.G, S.C, S.KH, S.KW), O=TensorDef(U, S.N, S.G, S.FG, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution.

Layout:

  • Input: NGCHW.

  • Kernel: FGCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_2d_ngchw_gfchw(I=TensorDef(T1, S.N, S.G, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.G, S.FG, S.C, S.KH, S.KW), O=TensorDef(U, S.N, S.G, S.FG, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution.

Layout:

  • Input: NGCHW.

  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_2d_nhwgc_gfhwc(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.G, S.C), K=TensorDef(T2, S.G, S.FG, S.KH, S.KW, S.C), O=TensorDef(U, S.N, S.OH, S.OW, S.G, S.FG, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution.

Layout:

  • Input: NHWGC.

  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_2d_nhwgc_gfhwc_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.G, S.C), K=TensorDef(T2, S.G, S.FG, S.KH, S.KW, S.C), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.G, S.FG, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution with zero point offsets.

Layout:

  • Input: NHWGC.

  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.conv_2d_ngchw_gfchw_q(I=TensorDef(T1, S.N, S.G, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.G, S.FG, S.C, S.KH, S.KW), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.G, S.FG, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution with zero point offsets.

Layout:

  • Input: NGCHW.

  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.conv_3d_ndhwc_dhwcf(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, S.C, S.F), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.conv_3d_ndhwc_dhwcf_q(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, S.C, S.F), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3-D convolution with zero point offsets.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.conv_3d_ncdhw_fcdhw(I=TensorDef(T1, S.N, S.C, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KD, S.KH, S.KW), O=TensorDef(U, S.N, S.F, S.OD, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.depthwise_conv_1d_nwc_wc(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KW, S.IC), O=TensorDef(U, S.N, S.OW, S.IC, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs depth-wise 1-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is fixed to 1, the common case for depthwise convolutions.
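The defining property of the depthwise form, that each channel is convolved independently with no cross-channel summation, can be sketched in plain Python (illustrative names and list-based tensors, not the bindings API):

```python
def depthwise_conv_1d_nwc_wc(I, K, O, stride=1, dilation=1):
    """O[n,ow,c] += I[n, ow*stride + kw*dilation, c] * K[kw,c].

    Unlike conv_1d_nwc_wcf there is no reduction over channels: channel c
    of the output reads only channel c of the input and kernel.
    """
    for n in range(len(O)):
        for ow in range(len(O[0])):
            for c in range(len(O[0][0])):
                for kw in range(len(K)):
                    O[n][ow][c] += I[n][ow * stride + kw * dilation][c] * K[kw][c]
    return O
```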

mlir.dialects.linalg.depthwise_conv_1d_ncw_cw(I=TensorDef(T1, S.N, S.IC, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.IC, S.KW), O=TensorDef(U, S.N, S.IC, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs depth-wise 1-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is fixed to 1, the common case for depthwise convolutions.

mlir.dialects.linalg.depthwise_conv_1d_nwc_wcm(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KW, S.IC, S.CM), O=TensorDef(U, S.N, S.OW, S.IC, S.CM, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs depth-wise 1-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwc(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is fixed to 1, the common case for depthwise convolutions.

mlir.dialects.linalg.depthwise_conv_2d_nchw_chw(I=TensorDef(T1, S.N, S.IC, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.IC, S.KH, S.KW), O=TensorDef(U, S.N, S.IC, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is fixed to 1, the common case for depthwise convolutions.

mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwc_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution with zero point offsets.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwcm(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC, S.CM), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, S.CM, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwcm_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC, S.CM), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, S.CM, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution with zero point offsets.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.depthwise_conv_3d_ndhwc_dhwc(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KD, S.KH, S.KW, S.IC), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs depth-wise 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is fixed to 1, the common case for depthwise convolutions.

mlir.dialects.linalg.depthwise_conv_3d_ncdhw_cdhw(I=TensorDef(T1, S.N, S.IC, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.IC, S.KD, S.KH, S.KW), O=TensorDef(U, S.N, S.IC, S.OD, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs depth-wise 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is fixed to 1, the common case for depthwise convolutions.

mlir.dialects.linalg.depthwise_conv_3d_ndhwc_dhwcm(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KD, S.KH, S.KW, S.IC, S.CM), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.CM, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs depth-wise 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nhwc_sum(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs sum pooling.

Layout:

  • Input: NHWC.

  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
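Note that the pooling ops read only the shape of the kernel operand (its `index_dims`), never its values. A plain-Python sketch of the semantics (illustrative names, not the bindings API):

```python
def pooling_nhwc_sum(I, kernel_shape, O, strides=(1, 1), dilations=(1, 1)):
    """O[n,oh,ow,c] += I[n, oh*sh + kh*dh, ow*sw + kw*dw, c].

    kernel_shape = (KH, KW) only defines the pooling window; no kernel
    values enter the computation, in contrast to convolution.
    """
    KH, KW = kernel_shape
    sh, sw = strides
    dh, dw = dilations
    for n in range(len(O)):
        for oh in range(len(O[0])):
            for ow in range(len(O[0][0])):
                for c in range(len(O[0][0][0])):
                    for kh in range(KH):
                        for kw in range(KW):
                            O[n][oh][ow][c] += I[n][oh * sh + kh * dh][ow * sw + kw * dw][c]
    return O
```

The max/min variants replace the `+=` accumulation with a comparison against the accumulator.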

mlir.dialects.linalg.pooling_nchw_sum(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.C, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs sum pooling.

Layout:

  • Input: NCHW.

  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nhwc_max(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nhwc_max_unsigned(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs unsigned max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nchw_max(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.C, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nhwc_min(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nhwc_min_unsigned(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs unsigned min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nwc_sum(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs sum pooling.

Layout:

  • Input: NWC.

  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_ncw_sum(I=TensorDef(T1, S.N, S.C, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.C, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs sum pooling.

Layout:

  • Input: NCW.

  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nwc_max(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nwc_max_unsigned(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs unsigned max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_ncw_max(I=TensorDef(T1, S.N, S.C, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.C, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nwc_min(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_nwc_min_unsigned(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs unsigned min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_ndhwc_sum(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, index_dims=[D.kd, D.kh, D.kw]), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3D sum pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_ndhwc_max(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, index_dims=[D.kd, D.kh, D.kw]), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3D max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.pooling_ndhwc_min(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, index_dims=[D.kd, D.kh, D.kw]), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3D min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.fill(value=ScalarDef(T1), O=TensorDef(U, output=True))

Fills the output tensor with the given value.

Works for arbitrary ranked output tensors since the operation performs scalar accesses only and is thus rank polymorphic. Numeric casting is performed on the value operand, promoting it to the same data type as the output.

mlir.dialects.linalg.fill_rng_2d(min=ScalarDef(F64), max=ScalarDef(F64), seed=ScalarDef(I32), O=TensorDef(T, S.M, S.N, output=True))

Fills the output tensor with pseudo random numbers.

The operation generates pseudo random numbers using a linear congruential generator. It provides no guarantees regarding the distribution of the generated random numbers. Instead of generating the random numbers sequentially, it instantiates one random number generator per data element and runs them in parallel. The seed operand and the indices of the data element seed the random number generation. The min and max operands limit the range of the generated random numbers.
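The per-element generator scheme can be sketched in plain Python. This is a hedged illustration only: the LCG constants below and the exact way the seed is mixed with the element indices are assumptions, and the real op's recurrence differs in detail.

```python
def fill_rng_2d(min_v, max_v, seed, rows, cols,
                multiplier=1103515245, increment=12345, modulus=2**32):
    """Sketch: one independent LCG per element, seeded from (seed, i, j).

    Every element's value depends only on the seed and its own indices,
    so all elements can be computed in parallel. Constants are illustrative.
    """
    out = [[0.0] * cols for _ in range(rows)]
    scale = (max_v - min_v) / modulus
    for i in range(rows):
        for j in range(cols):
            # mix the seed with the element indices, then step the LCG once
            state = (seed + i * cols + j) % modulus
            state = (multiplier * state + increment) % modulus
            out[i][j] = min_v + state * scale
    return out
```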

mlir.dialects.linalg._get_op_result_or_value(arg: mlir._mlir_libs._mlir.ir.OpView | mlir._mlir_libs._mlir.ir.Operation | mlir._mlir_libs._mlir.ir.Value | mlir._mlir_libs._mlir.ir.OpResultList) mlir._mlir_libs._mlir.ir.Value

Returns the given value or the single result of the given op.

This is useful to implement op constructors so that they can take other ops as arguments instead of requiring the caller to extract results for every op. Raises ValueError if provided with an op that doesn’t have a single result.

mlir.dialects.linalg._get_op_results_or_values(arg: mlir._mlir_libs._mlir.ir.OpView | mlir._mlir_libs._mlir.ir.Operation | Sequence[mlir._mlir_libs._mlir.ir.OpView | mlir._mlir_libs._mlir.ir.Operation | mlir._mlir_libs._mlir.ir.Value]) Sequence[mlir._mlir_libs._mlir.ir.OpView | mlir._mlir_libs._mlir.ir.Operation | mlir._mlir_libs._mlir.ir.Value] | mlir._mlir_libs._mlir.ir.OpResultList

Returns the given sequence of values or the results of the given op.

This is useful to implement op constructors so that they can take other ops as lists of arguments instead of requiring the caller to extract results for every op.

mlir.dialects.linalg._CONTEXT
mlir.dialects.linalg.StructuredOpOuts
mlir.dialects.linalg.bind_op_def(op_def: mlir.dialects.linalg.opdsl.lang.emitter.LinalgOpDef)
mlir.dialects.linalg.current_op_def() mlir.dialects.linalg.opdsl.lang.emitter.LinalgOpDef
mlir.dialects.linalg._prepare_structured_op_outs(outs: StructuredOpOuts) mlir.dialects.linalg.opdsl.lang.emitter.ValueList
class mlir.dialects.linalg.DefinedOpCallable(op_name: str, op_def: mlir.dialects.linalg.opdsl.lang.emitter.LinalgOpDef)

Callable that wraps any defined op function.

op_name
op_def
__call__(*ins: mlir.dialects.linalg.opdsl.lang.emitter.Union[mlir.ir.Operation, mlir.ir.OpView, mlir.ir.Value], outs: StructuredOpOuts, **kwargs)

Emits the corresponding op definition as IR.

Most arguments are passed through to the underlying emitter. The following keyword argument is interpreted here: emit_generic: Emits a generic form as appropriate (default True). If False, a named form is emitted (which must have been built in to the compiler).

mlir.dialects.linalg.linalg_structured_op(dsl_func=None, *, op_name=None, op_class_name=None) DefinedOpCallable
mlir.dialects.linalg.domain(*dimensions: mlir.dialects.linalg.opdsl.lang.emitter.DimDef)
mlir.dialects.linalg.implements(*interfaces: mlir.dialects.linalg.opdsl.lang.emitter.OpInterfaceDef)
mlir.dialects.linalg.defines(*definitions: mlir.dialects.linalg.opdsl.lang.emitter.OpDefinitionDef)
class mlir.dialects.linalg.TensorExpression

An expression that can appear on the RHS of a comprehension.

abstract to_scalar_expression() mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression
visit_tensor_exprs(callback: mlir.dialects.linalg.opdsl.lang.scalar_expr.Callable[[TensorExpression], None])

Visits all tensor expressions reachable from this expression.

collect_dim_uses(uses: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef])

Collects all DimDefs reachable through this expression.

collect_tensor_uses(uses: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[TensorUse])

Collects all TensorUses reachable through this expression.

collect_indices(indices: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[index])

Collects all index accesses reachable through this expression.

collect_scalar_uses(uses: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[ScalarDef])

Collects all ScalarDefs reachable through this expression.

__add__(rhs: TensorExpression) TensorExpression
__mul__(rhs) TensorExpression
__sub__(rhs) TensorExpression
__truediv__(rhs) TensorExpression
__hash__()
class mlir.dialects.linalg.TensorUse(operand_def: OperandDef, indices: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef])

Bases: TensorExpression

A used tensor represented by its (tensor_name, indices).

Note that forming a comprehension via direct assignment is performed through __setitem__ on the TensorDef level. However, performing a reduction with compound ops (+=) is done via: TensorDef.__getitem__ -> TensorUse.__iadd__ -> TensorDef.__setitem__.
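The __getitem__/__iadd__/__setitem__ chain relies on how Python desugars augmented assignment. A minimal plain-Python analogue (illustrative stand-in classes, not the opdsl ones):

```python
class Use:
    """Stands in for TensorUse: a tensor access that can request a reduction."""
    def __init__(self, name, key):
        self.name, self.key = name, key

    def __iadd__(self, rhs):
        # Instead of mutating in place, return a reduction record,
        # mirroring TensorUse.__iadd__ returning a TensorReduceFn.
        return ("reduce_add", self.name, self.key, rhs)

class Def:
    """Stands in for TensorDef: __getitem__ yields a Use, __setitem__ binds it."""
    def __init__(self, name):
        self.name, self.bound = name, None

    def __getitem__(self, key):
        return Use(self.name, key)

    def __setitem__(self, key, value):
        self.bound = value  # the reduction record produced by __iadd__

C = Def("C")
# Desugars to: tmp = C["m","n"]; tmp = tmp.__iadd__(rhs); C["m","n"] = tmp
C["m", "n"] += "A[m,k] * B[k,n]"
```

After the augmented assignment, `C.bound` holds the reduction record rather than a mutated value, which is exactly how the opdsl captures a reduction comprehension.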

operand_def
indices
to_scalar_expression() mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression
property tensor_name: str
_compute_reduce_dims(rhs: TensorExpression) mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]
__iadd__(rhs: TensorExpression) TensorReduceFn
__repr__()
class mlir.dialects.linalg.TensorFn(kind: FunctionKind, name: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str], operand_def: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[OperandDef], type_var: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.types.TypeVar], args: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[TensorExpression])

Bases: TensorExpression

Application of a tensor function.

name
kind
operand_def
type_var
args
to_scalar_expression() mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression
visit_tensor_exprs(callback: mlir.dialects.linalg.opdsl.lang.scalar_expr.Callable[[TensorExpression], None])

Visits all tensor expressions reachable from this expression.

__repr__()
class mlir.dialects.linalg.TensorReduceFn(reduce_use: ReduceFnUse, args: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[TensorExpression])

Bases: TensorExpression

Application of a reduction function.

This captures the lhs (initial value) separately from the rhs.

reduce_use
lhs: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[TensorUse] = None
args
to_scalar_expression() mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression
visit_tensor_exprs(callback: mlir.dialects.linalg.opdsl.lang.scalar_expr.Callable[[TensorExpression], None])

Visits all tensor expressions reachable from this expression.

__repr__()
class mlir.dialects.linalg.const(value: mlir.dialects.linalg.opdsl.lang.scalar_expr.Any)

Bases: TensorExpression

Returns the given constant floating point or integer value.

to_scalar_expression() mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression
__repr__()
class mlir.dialects.linalg.index(dim: mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef)

Bases: TensorExpression

Returns the iteration index for a given dimension name.

Resolves the given dimension name to obtain its position in the iteration domain of the operation.

dim_def
dim = -1
resolve_dimension_name(affine_state: mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineBuildState)
to_scalar_expression() mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression
__repr__()
class mlir.dialects.linalg.FunctionKind

Bases: mlir.dialects.linalg.opdsl.lang.types.Enum

Enumeration of the kinds of functions that can appear in a structured op definition: unary, binary, ternary, and type conversion.

UNARY = 0
BINARY = 1
TERNARY = 2
TYPE = 3
class mlir.dialects.linalg.UnaryFnType(fn_name: str)

Unary function.

A unary function takes one tensor expression and returns the function evaluation result.

fn_name
__call__(arg: TensorExpression) TensorFn
__repr__()
class mlir.dialects.linalg.UnaryFn

Unary function namespace.

exp
log
abs
ceil
floor
negf
reciprocal
round
sqrt
rsqrt
square
tanh
erf
class mlir.dialects.linalg.BinaryFnType(fn_name: str)

Binary function.

A binary function takes two tensor expressions and returns the function evaluation result.

fn_name
__call__(arg0: TensorExpression, arg1: TensorExpression) TensorFn
__repr__()
class mlir.dialects.linalg.BinaryFn

Binary function namespace.

As the integer types are signless, signedness is implemented by different functions that treat integers as signed or unsigned values.

Examples:

  • max -> arith.MaxSIOp

  • max_unsigned -> arith.MaxUIOp

add
sub
mul
div
div_unsigned
max_signed
min_signed
max_unsigned
min_unsigned
powf
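Because the bits are signless, the same bit pattern compares differently under max_signed and max_unsigned. A pure-Python sketch of the two interpretations (hypothetical helper names, not part of the bindings; the real ops lower to arith.MaxSIOp / arith.MaxUIOp):

```python
def as_signed(bits: int, width: int = 8) -> int:
    """Interpret a signless bit pattern as a two's-complement signed value."""
    mask = (1 << width) - 1
    bits &= mask
    return bits - (1 << width) if bits >= (1 << (width - 1)) else bits

def as_unsigned(bits: int, width: int = 8) -> int:
    """Interpret a signless bit pattern as an unsigned value."""
    return bits & ((1 << width) - 1)

a, b = 0xFF, 0x01        # identical bits, two readings
max_signed = max(as_signed(a), as_signed(b))        # 0xFF reads as -1, so 1 wins
max_unsigned = max(as_unsigned(a), as_unsigned(b))  # 0xFF reads as 255, so 255 wins
print(max_signed, max_unsigned)  # 1 255
```

This is why the namespace needs both max_signed and max_unsigned rather than a single max.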
class mlir.dialects.linalg.TernaryFnType(fn_name: str)

Ternary function.

A ternary function takes three tensor expressions and returns the function evaluation result.

fn_name
__call__(arg0: TensorExpression, arg1: TensorExpression, arg2: TensorExpression) TensorFn
__repr__()
class mlir.dialects.linalg.TernaryFn

Ternary function namespace.

select
class mlir.dialects.linalg.TypeFnType(fn_name: str)

Type conversion function.

A type conversion function takes a target type and a tensor expression and returns the casted tensor expression.

fn_name
__call__(type_var: mlir.dialects.linalg.opdsl.lang.types.TypeVar, arg: TensorExpression) TensorFn
__repr__()
class mlir.dialects.linalg.TypeFn

Type conversion function namespace.

As the integer types are signless, signedness is implemented by different cast functions that treat integers as signed (cast_signed) or unsigned (cast_unsigned) values.

Examples:

  • cast_signed(I32 -> I64) -> arith.ExtSIOp

  • cast_unsigned(I32 -> I64) -> arith.ExtUIOp

cast_signed
cast_unsigned
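For a widening integer cast, the two functions differ in how the upper bits are filled. A minimal sketch of the I32 -> I64 example above (hypothetical helper names; the real casts lower to arith.ExtSIOp / arith.ExtUIOp):

```python
def ext_si(bits: int, src: int, dst: int) -> int:
    """Sign-extend a src-bit pattern to dst bits (models cast_signed widening)."""
    mask = (1 << src) - 1
    bits &= mask
    if bits >= 1 << (src - 1):
        bits |= ((1 << dst) - 1) ^ mask  # replicate the sign bit upward
    return bits

def ext_ui(bits: int, src: int, dst: int) -> int:
    """Zero-extend a src-bit pattern to dst bits (models cast_unsigned widening)."""
    return bits & ((1 << src) - 1)

x = 0xFFFFFFFF  # i32 bits: -1 if read as signed, 4294967295 if unsigned
print(hex(ext_si(x, 32, 64)))  # 0xffffffffffffffff
print(hex(ext_ui(x, 32, 64)))  # 0xffffffff
```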
class mlir.dialects.linalg.ReduceFnUse(binary_fn: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[BinaryFnType], binary_attr: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[BinaryFnAttrDef], *reduce_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef)

Reduction function use.

A reduction use specifies the reduction function and dimensions.

binary_fn
binary_attr
reduce_dims = ()
__call__(*args: TensorExpression) TensorReduceFn
__repr__()
class mlir.dialects.linalg.ReduceFnType(binary_fn: BinaryFnType)

Reduction function.

A binary function that reduces its RHS into its LHS.

binary_fn
__getitem__(reduce_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]) ReduceFnUse
__repr__()
class mlir.dialects.linalg.ReduceFn
add
mul
max_signed
min_signed
max_unsigned
min_unsigned
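The ReduceFn spelling `ReduceFn.add[D.k](expr)` works because `__getitem__` selects the reduction dimensions and returns a ReduceFnUse, which is then called on the tensor expressions. A simplified stand-in (hypothetical MiniReduceFn, not the real class) showing that two-step currying:

```python
class MiniReduceFn:
    """Toy model of ReduceFnType: indexing picks reduction dims,
    calling applies the reduction to its arguments."""
    def __init__(self, name):
        self.name = name
    def __getitem__(self, reduce_dims):
        if not isinstance(reduce_dims, tuple):
            reduce_dims = (reduce_dims,)
        # Stands in for ReduceFnUse: remembers fn + dims, awaits arguments.
        def use(*args):
            return {"fn": self.name, "dims": reduce_dims, "args": args}
        return use

add = MiniReduceFn("add")
reduction = add["k"]("A[m, k] * B[k, n]")
print(reduction["dims"])  # ('k',)
```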
class mlir.dialects.linalg.OperandKind

Bases: mlir.dialects.linalg.opdsl.lang.types.Enum

Generic enumeration.

Derive from this class to define new enumerations.

INPUT_TENSOR = 0
SCALAR = 1
OUTPUT_TENSOR = 2
INDEX_ATTR = 3
UNARY_FN_ATTR = 4
BINARY_FN_ATTR = 5
TERNARY_FN_ATTR = 6
TYPE_FN_ATTR = 7
class mlir.dialects.linalg.OperandDef(kind: OperandKind, type_var: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.types.TypeVar] = None, size_exprs: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef]] = None, index_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]] = None, default_indices: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[int]] = None, default_fn: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str] = None)

Definition of an operand passed to an operation.

Keep the meta information of Tensor, Scalar, and Attribute operands and provide the shared registration functionality.

owner: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[LinalgOpDef] = None
type_var = None
size_exprs = None
index_dims = None
default_indices = None
default_fn = None
kind
name: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str] = None
registered_index: int = -1
attach(index: int, name: str, owner: LinalgOpDef)
is_input() bool
is_tensor() bool
is_attribute() bool
__hash__()
__repr__()
class mlir.dialects.linalg.TensorDef(type_var: mlir.dialects.linalg.opdsl.lang.types.TypeVar, *shape: mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef, index_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]] = None, output: bool = False)

Tensor operand definition.

Tensor operands are indexed using the associated indexing_map when forwarded to the body of the structured op. A unique name identifies the tensor operands and an index determines their position in the operation’s parameter list. A tensor definition takes type, a shape, and an optional flag to mark output tensors. Additionally, a tuple of index dimensions may be used to map the tensor to the loop dimensions of the operation. This mapping is needed to compute the indexing map of shape-only tensors that have no uses.

operand_def
__getitem__(dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef]) TensorUse
__setitem__(dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef], value: TensorExpression)

Creates a new 1:1 comprehension by binding this tensor to an expression.

Note that due to the way assignment works in Python, we have to capture direct assignment as a setitem on the TensorDef.

class mlir.dialects.linalg.ScalarDef(type_var: mlir.dialects.linalg.opdsl.lang.types.TypeVar)

Bases: TensorExpression

Scalar operand definition.

Scalar operands are forwarded to the body of the structured op as they are. A unique name identifies the scalars and an index determines their position in the operation’s parameter list.

operand_def
property scalar_name: str
to_scalar_expression() mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression
class mlir.dialects.linalg.IndexAttrDef(*sizes: mlir.dialects.linalg.opdsl.lang.scalar_expr.SymbolDef, default: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[int])

Index attribute definition.

Index attributes provide a way to define and set symbols that can be used in indexing expressions. Every attribute specifies a tuple of symbols that are replaced by integer values at compile time, along with their default values.

operand_def
class mlir.dialects.linalg.UnaryFnAttrDef(default: UnaryFnType)

Unary function attribute definition.

Unary function attributes provide a way to make the arithmetic computation parametrizable. Every attribute specifies a default unary function that may be overwritten at operation instantiation time.

operand_def
__call__(arg: TensorExpression) TensorFn
class mlir.dialects.linalg.BinaryFnAttrDef(default: BinaryFnType)

Binary function attribute definition.

Binary function attributes provide a way to make the arithmetic computation parametrizable. Every attribute specifies a default binary function that may be overwritten at operation instantiation time.

operand_def
__call__(arg0: TensorExpression, arg1: TensorExpression) TensorFn
__getitem__(reduce_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]) ReduceFnUse
class mlir.dialects.linalg.TernaryFnAttrDef(default: TernaryFnType)

Ternary function attribute definition.

Ternary function attributes provide a way to make the arithmetic computation parametrizable. Every attribute specifies a default ternary function that may be overwritten at operation instantiation time.

operand_def
__call__(arg0: TensorExpression, arg1: TensorExpression) TensorFn
__getitem__(reduce_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]) ReduceFnUse
class mlir.dialects.linalg.TypeFnAttrDef(default: TypeFnType)

Type conversion function attribute definition.

Type conversion function attributes provide a way to make type conversions parameterizable. Every attribute specifies a default type conversion function that may be overwritten at operation instantiation time.

operand_def
__call__(type_var: mlir.dialects.linalg.opdsl.lang.types.TypeVar, arg: TensorExpression) TensorFn
class mlir.dialects.linalg.Comprehension(*bindings: mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[TensorUse, TensorExpression])

Represents a single comprehension.

definitions = []
values = []
property all_reduction_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef, Ellipsis]]

Gets the reduction dims for the comprehension or None.

__repr__()
class mlir.dialects.linalg.OpInterfaceDef(cpp_name: str)

An interface that an op implements.

cpp_name
mlir.dialects.linalg.ContractionOpInterface
mlir.dialects.linalg.ConvolutionOpInterface
mlir.dialects.linalg.FillOpInterface
class mlir.dialects.linalg.OpDefinitionDef(def_name: str)

A method that an op implements.

def_name
mlir.dialects.linalg.Canonicalizer
class mlir.dialects.linalg.OpMetadataDef(name: str, cpp_class_name: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str], doc: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str])

Bases: mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject

Metadata about the op (generally not behavior impacting).

yaml_tag = '!LinalgOpMetadata'
name
cpp_class_name
doc
implements: mlir.dialects.linalg.opdsl.lang.scalar_expr.List[OpInterfaceDef] = []
defines: mlir.dialects.linalg.opdsl.lang.scalar_expr.List[OpDefinitionDef] = []
to_yaml_custom_dict()
class mlir.dialects.linalg.LinalgOpDef(name: str, cpp_class_name: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str] = None, doc: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str] = None)

Definition of a linalg op.

metadata
registered_operands: mlir.dialects.linalg.opdsl.lang.types.Dict[str, OperandDef]
domain: mlir.dialects.linalg.opdsl.lang.scalar_expr.List[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef] = []
comprehensions: mlir.dialects.linalg.opdsl.lang.scalar_expr.List[Comprehension] = []
_affine_state
add_operand(name: str, operand: OperandDef)

Registers an operand.

__repr__()
class mlir.dialects.linalg.AffineBuildState(*, global_state: AffineBuildState = None, allow_new_symbols: bool = True, allow_new_dims: bool = True)

Internal state for the AffineExprDef._create impls.

Note that a “local” AffineBuildState can be created relative to a “global” AffineBuildState. In that case, any affine expressions built will inherit symbol and dim bindings from the global state and will update both as new ones are discovered. This allows for building expressions across contexts which share a common symbol and dim space.

local_symbols: Dict[str, int]
local_dims: Dict[str, int]
allow_new_symbols = True
allow_new_dims = True
get_dim(dimname: str) int

Gets the dim position given a name.

get_symbol(symname: str) int

Gets a symbol position given a name.

property local_dim_count: int
property local_symbol_count: int
property dim_count: int
property symbol_count: int
__repr__()
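The local/global sharing described above can be sketched in a few lines: a local state reuses the global state's bindings, so expressions built in different local states agree on dim positions. A simplified model (hypothetical MiniBuildState; it collapses the real class's local/global bookkeeping into one shared dict and models only get_dim):

```python
class MiniBuildState:
    """Toy model of AffineBuildState's dim uniquing."""
    def __init__(self, global_state=None):
        self.global_state = global_state
        # Share the global binding table so discoveries propagate both ways.
        self.dims = global_state.dims if global_state else {}
    def get_dim(self, name):
        # A new name gets the next position; a known name keeps its position.
        if name not in self.dims:
            self.dims[name] = len(self.dims)
        return self.dims[name]

g = MiniBuildState()
local_a = MiniBuildState(global_state=g)
local_b = MiniBuildState(global_state=g)
print(local_a.get_dim("m"), local_a.get_dim("n"))  # 0 1
print(local_b.get_dim("n"))  # 1, inherited via the shared global state
```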
class mlir.dialects.linalg.AffineExprDef

Base class for an affine expression being defined.

build(state: AffineBuildState | None = None) mlir.ir.AffineExpr

Builds the corresponding _ir.AffineExpr from the definitions.

abstract _create(state: AffineBuildState) mlir.ir.AffineExpr
static coerce_from(py_value)
visit_affine_exprs(callback)

Visits all AffineExprDefs including self.

__add__(rhs)
__mul__(rhs)
__mod__(rhs)
__floordiv__(rhs)
__truediv__(rhs)
mlir.dialects.linalg.D
class mlir.dialects.linalg.DimDef

Bases: AffineExprDef

Represents a named dimension.

ALL_DIMS: Dict[str, DimDef]
__repr__()
_create(state: AffineBuildState) mlir.ir.AffineExpr
classmethod create_expando()

Create an expando class that creates unique symbols based on attr access.

mlir.dialects.linalg.S
class mlir.dialects.linalg.SymbolDef

Bases: AffineExprDef

Represents a named symbol.

>>> s1 = SymbolDef("s1")
>>> s1
Symbol(s1)
>>> s2 = SymbolDef("s2")
>>> s1 is s2
False
>>> s1 is SymbolDef("s1")
True

ALL_SYMBOLS: Dict[str, SymbolDef]
__repr__()
_create(state: AffineBuildState) mlir.ir.AffineExpr
classmethod create_expando()

Create an expando class that creates unique symbols based on attr access.
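The doctest above holds because symbols are uniqued by name, and create_expando additionally lets a spelling like S.K mint the symbol on attribute access. Both mechanisms can be modeled in plain Python (hypothetical MiniSymbol/Expando classes, not the real ones):

```python
class MiniSymbol:
    """Toy model: instances uniqued by name, like SymbolDef.ALL_SYMBOLS."""
    ALL = {}
    def __new__(cls, name):
        if name not in cls.ALL:
            cls.ALL[name] = super().__new__(cls)
        return cls.ALL[name]
    def __init__(self, name):
        self.name = name

class Expando:
    """Toy model of create_expando: attribute access creates/uniques symbols."""
    def __getattr__(self, name):
        return MiniSymbol(name)

S = Expando()
print(S.K is S.K)              # True: same symbol both times
print(S.K is MiniSymbol("K"))  # True: uniqued by name
```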

class mlir.dialects.linalg.ScalarAssign(arg: str, value: ScalarExpression)

Bases: mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject

An assignment to a named argument (LHS of a comprehension).

yaml_tag = '!ScalarAssign'
arg
value
to_yaml_custom_dict()
__repr__()
class mlir.dialects.linalg.ScalarFn(kind: mlir.dialects.linalg.opdsl.lang.comprehension.FunctionKind, fn_name: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[str], attr_name: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[str], type_var: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.dialects.linalg.opdsl.lang.types.TypeVar], operands: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[ScalarExpression])

A type of ScalarExpression that applies a function.

kind
fn_name
attr_name
type_var
operands
expr() ScalarExpression
__repr__()
class mlir.dialects.linalg.ScalarArg(arg: str)

A type of ScalarExpression that references a named argument.

arg
expr() ScalarExpression
__repr__()
class mlir.dialects.linalg.ScalarConst(value: str)

A type of ScalarExpression representing a constant.

value
expr() ScalarExpression
__repr__()
class mlir.dialects.linalg.ScalarIndex(dim: int)

A type of ScalarExpression accessing an iteration index.

dim
expr() ScalarExpression
__repr__()
class mlir.dialects.linalg.ScalarExpression(scalar_fn: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[ScalarFn] = None, scalar_arg: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[ScalarArg] = None, scalar_const: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[ScalarConst] = None, scalar_index: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[ScalarIndex] = None)

Bases: mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject

An expression on scalar values.

Can be one of:

  • ScalarFn

  • ScalarArg

  • ScalarConst

  • ScalarIndex

yaml_tag = '!ScalarExpression'
scalar_fn = None
scalar_arg = None
scalar_const = None
scalar_index = None
to_yaml_custom_dict()
class mlir.dialects.linalg.TypeVar

A replaceable type variable.

Type variables are uniqued by name.

ALL_TYPEVARS: Dict[str, TypeVar]
__repr__()
classmethod create_expando()

Create an expando class that creates unique type vars on attr access.

mlir.dialects.linalg.TV
mlir.dialects.linalg.I32
mlir.dialects.linalg.I64
mlir.dialects.linalg.F32
mlir.dialects.linalg.F64
mlir.dialects.linalg.T
mlir.dialects.linalg.U
mlir.dialects.linalg.V
mlir.dialects.linalg.yaml_dump(data, sort_keys=False, **kwargs)
mlir.dialects.linalg.yaml_dump_all(data, sort_keys=False, explicit_start=True, **kwargs)
class mlir.dialects.linalg.YAMLObject

Bases: yaml.YAMLObject

An object that can dump itself to a YAML stream and load itself from a YAML stream.

classmethod to_yaml(dumper, self)

Default to a custom dictionary mapping.

abstract to_yaml_custom_dict()
as_linalg_yaml()
class mlir.dialects.linalg.LinalgStructuredOpConfig(comprehension: mlir.dialects.linalg.opdsl.lang.comprehension.Comprehension, domain: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[mlir.dialects.linalg.opdsl.lang.comprehension.DimDef], registered_operands: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef], context: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.Context] = None)

Bases: mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject

Configuration for metadata sufficient to construct a linalg named op.

yaml_tag = '!LinalgStructuredOpConfig'
context = None
affine_state
writes: mlir.dialects.linalg.opdsl.lang.comprehension.List[mlir.dialects.linalg.opdsl.lang.comprehension.Tuple[mlir.dialects.linalg.opdsl.lang.comprehension.TensorUse, mlir.dialects.linalg.opdsl.lang.comprehension.TensorExpression]] = []
operands: mlir.dialects.linalg.opdsl.lang.comprehension.Dict[mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef, OperandDefConfig]
uses: mlir.dialects.linalg.opdsl.lang.comprehension.Dict[mlir.dialects.linalg.opdsl.lang.comprehension.TensorUse, TensorUseConfig]
reduction_dims
assignments
property ordered_operands: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[OperandDefConfig]
property ordered_dims: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[mlir.dialects.linalg.opdsl.lang.comprehension.Tuple[str, int]]

Gets the ordered list of dim bindings (symbolic name, position).

TODO: The original parser relies on parse ordering to arrive at the iterator types, but that ordering is not defined on the Python side, so this may be ambiguous.

property indexing_maps: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[mlir.ir.AffineMap]
property iterator_types: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[str]
add_operand(operand_def: mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef)
add_indexed_operand(operand_def: mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef)
add_tensor_use(tensor_use: mlir.dialects.linalg.opdsl.lang.comprehension.TensorUse)
_get_scalar_map() mlir.ir.AffineMap

Create an empty affine map used to index a scalar.

_normalize_affine_map(affine_map: mlir.ir.AffineMap, with_dims: bool = True) mlir.ir.AffineMap

Normalizes an indexing map to have the max known symbols and dims.

to_yaml_custom_dict()
__repr__()
class mlir.dialects.linalg.LinalgOpConfig(metadata: mlir.dialects.linalg.opdsl.lang.comprehension.OpMetadataDef, *, structured_op: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[LinalgStructuredOpConfig] = None)

Bases: mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject

Container for any supported linalg op type.

This includes the concrete type by name for ease of parsing by systems that ignore tags.

yaml_tag = '!LinalgOpConfig'
metadata
structured_op = None
to_yaml_custom_dict()
static from_linalg_op_def(op_def: mlir.dialects.linalg.opdsl.lang.comprehension.LinalgOpDef, context: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.Context] = None) mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[LinalgOpConfig]

Expands a LinalgOpDef into corresponding Linalg configured ops.

__repr__()
class mlir.dialects.linalg.OperandDefConfig(operand_def: mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef, shape_map: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] = None, index_attr_map: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] = None)

Bases: mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject

Wrapper containing an operand definition with additional state.

yaml_tag = '!LinalgOperandDefConfig'
operand_def
shape_map: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] = None
index_attr_map: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] = None
indexing_map: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] = None
property name: str
property kind: mlir.dialects.linalg.opdsl.lang.comprehension.OperandKind
property type_var: mlir.dialects.linalg.opdsl.lang.comprehension.TypeVar
to_yaml_custom_dict()
__repr__()
mlir.dialects.linalg.emit_generic_structured_op(op_config: mlir.dialects.linalg.opdsl.lang.config.LinalgStructuredOpConfig, *ins: Value, outs: ValueList, **attrs: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[int])
mlir.dialects.linalg.emit_named_structured_op(op_config: mlir.dialects.linalg.opdsl.lang.config.LinalgStructuredOpConfig, op_name: str, op_class_name: str, *ins: Value, outs: ValueList, **attrs: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[int])
mlir.dialects.linalg.ValueList
mlir.dialects.linalg.loc_tracebacks(*, max_depth: int | None = None) collections.abc.Iterable[None]

Enables automatic traceback-based locations for MLIR operations.

Operations created within this context will have their location automatically set based on the Python call stack.

Parameters:

max_depth – Maximum number of frames to include in the location. If None, the default limit is used.

mlir.dialects.linalg.register_attribute_builder(kind, replace=False)
mlir.dialects.linalg._affineMapAttr(x, context)
mlir.dialects.linalg._integerSetAttr(x, context)
mlir.dialects.linalg._boolAttr(x, context)
mlir.dialects.linalg._dictAttr(x, context)
mlir.dialects.linalg._indexAttr(x, context)
mlir.dialects.linalg._i1Attr(x, context)
mlir.dialects.linalg._i8Attr(x, context)
mlir.dialects.linalg._i16Attr(x, context)
mlir.dialects.linalg._i32Attr(x, context)
mlir.dialects.linalg._i64Attr(x, context)
mlir.dialects.linalg._si1Attr(x, context)
mlir.dialects.linalg._si8Attr(x, context)
mlir.dialects.linalg._si16Attr(x, context)
mlir.dialects.linalg._si32Attr(x, context)
mlir.dialects.linalg._si64Attr(x, context)
mlir.dialects.linalg._ui1Attr(x, context)
mlir.dialects.linalg._ui8Attr(x, context)
mlir.dialects.linalg._ui16Attr(x, context)
mlir.dialects.linalg._ui32Attr(x, context)
mlir.dialects.linalg._ui64Attr(x, context)
mlir.dialects.linalg._f32Attr(x, context)
mlir.dialects.linalg._f64Attr(x, context)
mlir.dialects.linalg._stringAttr(x, context)
mlir.dialects.linalg._symbolNameAttr(x, context)
mlir.dialects.linalg._symbolRefAttr(x, context)
mlir.dialects.linalg._flatSymbolRefAttr(x, context)
mlir.dialects.linalg._unitAttr(x, context)
mlir.dialects.linalg._arrayAttr(x, context)
mlir.dialects.linalg._affineMapArrayAttr(x, context)
mlir.dialects.linalg._boolArrayAttr(x, context)
mlir.dialects.linalg._dictArrayAttr(x, context)
mlir.dialects.linalg._flatSymbolRefArrayAttr(x, context)
mlir.dialects.linalg._i32ArrayAttr(x, context)
mlir.dialects.linalg._i64ArrayAttr(x, context)
mlir.dialects.linalg._i64SmallVectorArrayAttr(x, context)
mlir.dialects.linalg._indexListArrayAttr(x, context)
mlir.dialects.linalg._f32ArrayAttr(x, context)
mlir.dialects.linalg._f64ArrayAttr(x, context)
mlir.dialects.linalg._strArrayAttr(x, context)
mlir.dialects.linalg._symbolRefArrayAttr(x, context)
mlir.dialects.linalg._denseF32ArrayAttr(x, context)
mlir.dialects.linalg._denseF64ArrayAttr(x, context)
mlir.dialects.linalg._denseI8ArrayAttr(x, context)
mlir.dialects.linalg._denseI16ArrayAttr(x, context)
mlir.dialects.linalg._denseI32ArrayAttr(x, context)
mlir.dialects.linalg._denseI64ArrayAttr(x, context)
mlir.dialects.linalg._denseBoolArrayAttr(x, context)
mlir.dialects.linalg._typeAttr(x, context)
mlir.dialects.linalg._typeArrayAttr(x, context)
mlir.dialects.linalg._memref_type_attr(x, context)
mlir.dialects.linalg._f64ElementsAttr(x, context)
mlir.dialects.linalg._get_op_result_or_value(arg: mlir._mlir_libs._mlir.ir.OpView | mlir._mlir_libs._mlir.ir.Operation | mlir._mlir_libs._mlir.ir.Value | mlir._mlir_libs._mlir.ir.OpResultList) mlir._mlir_libs._mlir.ir.Value

Returns the given value or the single result of the given op.

This is useful to implement op constructors so that they can take other ops as arguments instead of requiring the caller to extract results for every op. Raises ValueError if provided with an op that doesn’t have a single result.
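The contract just described is independent of the bindings and can be modeled directly: a value passes through unchanged, while an op must carry exactly one result. A simplified sketch with hypothetical stand-in types (not the real mlir.ir classes):

```python
class FakeValue:
    """Stand-in for an ir.Value."""

class FakeOp:
    """Stand-in for an ir.Operation with a results list."""
    def __init__(self, *results):
        self.results = list(results)

def get_op_result_or_value(arg):
    """Mirrors the documented contract of _get_op_result_or_value."""
    if isinstance(arg, FakeValue):
        return arg  # already a value: pass through
    if len(arg.results) != 1:
        raise ValueError("expected an op with a single result")
    return arg.results[0]

v = FakeValue()
assert get_op_result_or_value(v) is v
one = FakeOp(FakeValue())
assert get_op_result_or_value(one) is one.results[0]
```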

mlir.dialects.linalg._get_op_result_or_op_results(op: mlir._mlir_libs._mlir.ir.OpView | mlir._mlir_libs._mlir.ir.Operation) mlir._mlir_libs._mlir.ir.Operation | mlir._mlir_libs._mlir.ir.OpResult | Sequence[mlir._mlir_libs._mlir.ir.OpResult]
mlir.dialects.linalg._dispatch_mixed_values(values: MixedValues) Tuple[List[mlir.ir.Value], mlir.ir.Operation | mlir.ir.Value | mlir.ir.OpView, mlir.ir.DenseI64ArrayAttr]
mlir.dialects.linalg.region_op(op_constructor, terminator=None)

Decorator to define an MLIR Op specified as a python function.

Requires that an mlir.ir.InsertionPoint and mlir.ir.Location are active for the current thread (i.e. established in a with block).

Supports “naked” usage, i.e., no parens are needed if no args need to be passed to the Op constructor.

When applied as a decorator to a Python function, an entry block will be constructed for the Op with types as specified as type hints on the args of the function. The block arguments will be passed positionally to the Python function.

If a terminator is specified then the return value of the decorated function will be passed to the terminator as the last statement in the entry block. Note that the terminator API takes a (possibly empty) list; a terminator accepting a single value should be wrapped as lambda args: term(args[0]).

The identifier (name) of the function will become:

  1. A single value result if the Op returns a single value;

  2. An OpResultList (as a list) if the Op returns multiple values;

  3. The Operation if the Op returns no results.

See examples in tensor.py and transform.extras.

mlir.dialects.linalg.transpose(input: opdsl.ops.core_named_ops.Union[Operation, OpView, opdsl.ops.core_named_ops.Sequence[Value]], *, outs: opdsl.ops.core_named_ops.List[opdsl.ops.core_named_ops.Union[Operation, OpView, opdsl.ops.core_named_ops.Sequence[Value]]], permutation: opdsl.ops.core_named_ops.Union[DenseI64ArrayAttr, opdsl.ops.core_named_ops.List[int]])
mlir.dialects.linalg.broadcast(input: opdsl.ops.core_named_ops.Union[Operation, OpView, opdsl.ops.core_named_ops.Sequence[Value]], *, outs: opdsl.ops.core_named_ops.List[opdsl.ops.core_named_ops.Union[Operation, OpView, opdsl.ops.core_named_ops.Sequence[Value]]], dimensions: opdsl.ops.core_named_ops.Union[DenseI64ArrayAttr, opdsl.ops.core_named_ops.List[int]])
mlir.dialects.linalg._IteratorTypeArrayAttr(x, context)
class mlir.dialects.linalg.GenericOp_(inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None)

Bases: mlir.dialects._linalg_ops_gen.GenericOp

Generic Linalg op form where the key properties of the computation are specified as attributes. In pretty form, a linalg.generic op is written as:

linalg.generic #trait_attribute
    ins(%A, %B : memref<?x?xf32, stride_specification>,
                 memref<?x?xf32, stride_specification>)
    outs(%C : memref<?x?xf32, stride_specification>)
    attrs = {other-optional-attributes}
    {region}

Where #trait_attribute is an alias of a dictionary attribute containing:

  • doc [optional]: a documentation string

  • indexing_maps: a list of AffineMapAttr, one AffineMapAttr per each input and output view. Such an AffineMapAttr specifies the mapping between the loops and the indexing within each view.

  • library_call [optional]: a StringAttr containing the name of an external library function that the linalg.generic operation maps to. The external library is assumed to be dynamically linked and no strong compile-time guarantees are provided. In the absence of such a library call, linalg.generic will always lower to loops.

  • iterator_types: an ArrayAttr specifying the type of the enclosing loops. Each element of the list represents an iterator of one of the following types: parallel, reduction, window.

Example: Defining a #matmul_trait attribute in MLIR can be done as follows:

#matmul_accesses = [
  (m, n, k) -> (m, k),
  (m, n, k) -> (k, n),
  (m, n, k) -> (m, n)
]
#matmul_trait = {
  doc = "C(m, n) += A(m, k) * B(k, n)",
  indexing_maps = #matmul_accesses,
  library_call = "linalg_matmul",
  iterator_types = ["parallel", "parallel", "reduction"]
}

And can be reused in multiple places as:

linalg.generic #matmul_trait
  ins(%A, %B : memref<?x?xf32, stride_specification>,
               memref<?x?xf32, stride_specification>)
  outs(%C : memref<?x?xf32, stride_specification>)
  {other-optional-attributes} {
  ^bb0(%a: f32, %b: f32, %c: f32) :
    %d = arith.mulf %a, %b: f32
    %e = arith.addf %c, %d: f32
    linalg.yield %e : f32
}

This may lower to either:

call @linalg_matmul(%A, %B, %C) :
  (memref<?x?xf32, stride_specification>,
   memref<?x?xf32, stride_specification>,
   memref<?x?xf32, stride_specification>)
  -> ()

or IR resembling:

scf.for %m = %c0 to %M step %c1 {
  scf.for %n = %c0 to %N step %c1 {
    scf.for %k = %c0 to %K step %c1 {
      %a = load %A[%m, %k] : memref<?x?xf32, stride_specification>
      %b = load %B[%k, %n] : memref<?x?xf32, stride_specification>
      %c = load %C[%m, %n] : memref<?x?xf32, stride_specification>
      %d = arith.mulf %a, %b: f32
      %e = arith.addf %c, %d: f32
      store %e, %C[%m, %n] : memref<?x?xf32, stride_specification>
    }
  }
}
mlir.dialects.linalg.generic
mlir.dialects.linalg._create_matmul_like_op(op_type, *ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None, cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None)
mlir.dialects.linalg.matmul(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None, cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None)
mlir.dialects.linalg.batch_matmul(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None, cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None)
mlir.dialects.linalg.batch_reduce_matmul(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None, cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None)
mlir.dialects.linalg.contract(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Sequence[AffineMapAttr], cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None)
class mlir.dialects.linalg.ElementwiseOp_(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None)

Bases: mlir.dialects._linalg_ops_gen.ElementwiseOp

The attribute kind describes the arithmetic operation to perform. The operation kind can be unary (e.g. max), binary (e.g. add) or ternary (e.g. select).

By default, all indexing maps are identities. In the case of default indexing maps, all input and output shapes must match. The number of dims in each of the identity maps is equal to the rank of the output type.

Affine-maps for operands and result are required to be provided by the user when a transpose and/or broadcast is needed on any operand. When a map is not provided, default identity maps are inferred for each operand.

Iterator-types are always all parallel. Iterator-types are needed for constructing the underlying structured op.

The number of dims of the iterator-types is inferred from the rank of the result type.

Example:

Defining a unary linalg.elementwise with default indexing-map:

%exp = linalg.elementwise
    kind=#linalg.elementwise_kind<exp>
    ins(%x : tensor<4x16x8xf32>)
    outs(%y: tensor<4x16x8xf32>) -> tensor<4x16x8xf32>

Defining a binary linalg.elementwise with user-defined indexing-map:

%add = linalg.elementwise
    kind=#linalg.elementwise_kind<add>
    indexing_maps = [#transpose, #broadcast, #identity]
    ins(%exp, %arg1 : tensor<4x16x8xf32>, tensor<4x16xf32>)
    outs(%arg2: tensor<4x8x16xf32>) -> tensor<4x8x16xf32>
mlir.dialects.linalg.ElementwiseOp
mlir.dialects.linalg.elementwise(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], kind: opdsl.ops.core_named_ops.Union[mlir.dialects._linalg_enum_gen.ElementwiseKind, Attribute], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None)
mlir.dialects.linalg.pack(source, dest, inner_dims_pos, inner_tiles, *, padding_value=None, outer_dims_perm=None, loc=None, ip=None) opdsl.ops.core_named_ops.ir.Value
mlir.dialects.linalg.unpack(source, dest, inner_dims_pos, inner_tiles, *, outer_dims_perm=None, loc=None, ip=None) opdsl.ops.core_named_ops.ir.Value
mlir.dialects.linalg.reduce
mlir.dialects.linalg.map