mlir.dialects.linalg.opdsl.ops.core_named_ops

Attributes

Functions

copy([I, O, cast])

Copies the tensor elementwise.

exp([I, O])

Applies exp(x) elementwise.

log([I, O])

Applies log(x) elementwise.

abs([I, O])

Applies abs(x) elementwise.

ceil([I, O])

Applies ceil(x) elementwise.

floor([I, O])

Applies floor(x) elementwise.

negf([I, O])

Applies negf(x) elementwise.

reciprocal([I, O])

Applies reciprocal(x) elementwise.

round([I, O])

Applies round(x) elementwise.

sqrt([I, O])

Applies sqrt(x) elementwise.

rsqrt([I, O])

Applies rsqrt(x) elementwise.

square([I, O])

Applies square(x) elementwise.

tanh([I, O])

Applies tanh(x) elementwise.

erf([I, O])

Applies erf(x) elementwise.

add([lhs, rhs, O])

Adds two tensors elementwise.

sub([lhs, rhs, O])

Subtracts two tensors elementwise.

mul([lhs, rhs, O])

Multiplies two tensors elementwise.

div([lhs, rhs, O])

Divides the first tensor by the second tensor, elementwise.

div_unsigned([lhs, rhs, O])

Divides the first tensor by the second tensor, elementwise. For integer types, performs an unsigned division.

max([lhs, rhs, O])

Takes the max (signed) between two inputs, elementwise.

min([lhs, rhs, O])

Takes the min (signed) between two inputs, elementwise.

powf([lhs, rhs, O])

Takes the powf(lhs, rhs) between two inputs, elementwise. For powf(arg, 2) use linalg.square.

select([cond, lhs, rhs, O])

Chooses one value based on a binary condition supplied as its first operand.

quantized_matmul([A, B, AZp, BZp, C])

Performs a matrix multiplication of two 2D inputs.

mmt4d([lhs, rhs, accum])

Performs a matrix-matrix-transpose multiplication of two 4D inputs.

batch_mmt4d([lhs, rhs, accum])

Performs a batched matrix-matrix-transpose multiplication of two batched-4D (5D) inputs.

quantized_batch_matmul([A, B, AZp, BZp, C])

Performs a batched matrix multiplication of two 3D inputs.

matvec([A, y, x])

Performs a matrix-vector multiplication.

vecmat([y, A, x])

Performs a vector-matrix multiplication.

batch_matvec([A, B, C])

Performs a batched matrix-vector multiplication.

batch_vecmat([A, B, C])

Performs a batched vector-matrix multiplication.

dot([A, B, C])

Performs a dot product of two vectors to a scalar result.

conv_1d([I, K, O])

Performs 1-D convolution with no channels.

conv_2d([I, K, O])

Performs 2-D convolution with no channels.

conv_3d([I, K, O])

Performs 3-D convolution with no channels.

conv_1d_nwc_wcf([I, K, O, strides, dilations])

Performs 1-D convolution.

conv_1d_ncw_fcw([I, K, O, strides, dilations])

Performs 1-D convolution.

conv_2d_nhwc_hwcf([I, K, O, strides, dilations])

Performs 2-D convolution.

conv_2d_nhwc_fhwc([I, K, O, strides, dilations])

Performs 2-D convolution.

conv_2d_nhwc_hwcf_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D convolution with zero point offsets.

conv_2d_nhwc_fhwc_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D convolution with zero point offsets.

conv_2d_nchw_fchw_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D convolution with zero point offsets.

conv_2d_nchw_fchw([I, K, O, strides, dilations])

Performs 2-D convolution.

conv_2d_ngchw_fgchw([I, K, O, strides, dilations])

Performs 2-D grouped convolution.

conv_2d_ngchw_gfchw([I, K, O, strides, dilations])

Performs 2-D grouped convolution.

conv_2d_nhwgc_gfhwc([I, K, O, strides, dilations])

Performs 2-D grouped convolution.

conv_2d_nhwgc_gfhwc_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D grouped convolution with zero point offsets.

conv_2d_ngchw_gfchw_q([I, K, IZp, KZp, O, strides, ...])

Performs 2-D grouped convolution with zero-point offsets.

conv_3d_ndhwc_dhwcf([I, K, O, strides, dilations])

Performs 3-D convolution.

conv_3d_ndhwc_dhwcf_q([I, K, IZp, KZp, O, strides, ...])

Performs 3-D convolution with zero point offsets.

conv_3d_ncdhw_fcdhw([I, K, O, strides, dilations])

Performs 3-D convolution.

depthwise_conv_1d_nwc_wc([I, K, O, strides, dilations])

Performs depth-wise 1-D convolution.

depthwise_conv_1d_ncw_cw([I, K, O, strides, dilations])

Performs depth-wise 1-D convolution.

depthwise_conv_1d_nwc_wcm([I, K, O, strides, dilations])

Performs depth-wise 1-D convolution.

depthwise_conv_2d_nhwc_hwc([I, K, O, strides, dilations])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nchw_chw([I, K, O, strides, dilations])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwc_q([I, K, IZp, KZp, O, ...])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwcm([I, K, O, strides, dilations])

Performs depth-wise 2-D convolution.

depthwise_conv_2d_nhwc_hwcm_q([I, K, IZp, KZp, O, ...])

Performs depth-wise 2-D convolution.

depthwise_conv_3d_ndhwc_dhwc([I, K, O, strides, dilations])

Performs depth-wise 3-D convolution.

depthwise_conv_3d_ncdhw_cdhw([I, K, O, strides, dilations])

Performs depth-wise 3-D convolution.

depthwise_conv_3d_ndhwc_dhwcm([I, K, O, strides, ...])

Performs depth-wise 3-D convolution.

pooling_nhwc_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_nchw_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_nhwc_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nhwc_max_unsigned([I, K, O, strides, dilations])

Performs unsigned max pooling.

pooling_nchw_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nhwc_min([I, K, O, strides, dilations])

Performs min pooling.

pooling_nhwc_min_unsigned([I, K, O, strides, dilations])

Performs unsigned min pooling.

pooling_nwc_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_ncw_sum([I, K, O, strides, dilations])

Performs sum pooling.

pooling_nwc_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nwc_max_unsigned([I, K, O, strides, dilations])

Performs unsigned max pooling.

pooling_ncw_max([I, K, O, strides, dilations])

Performs max pooling.

pooling_nwc_min([I, K, O, strides, dilations])

Performs min pooling.

pooling_nwc_min_unsigned([I, K, O, strides, dilations])

Performs unsigned min pooling.

pooling_ndhwc_sum([I, K, O, strides, dilations])

Performs 3D sum pooling.

pooling_ndhwc_max([I, K, O, strides, dilations])

Performs 3D max pooling.

pooling_ndhwc_min([I, K, O, strides, dilations])

Performs 3D min pooling.

fill([value, O])

Fills the output tensor with the given value.

fill_rng_2d([min, max, seed, O])

Fills the output tensor with pseudo random numbers.

Module Contents

mlir.dialects.linalg.opdsl.ops.core_named_ops.T1
mlir.dialects.linalg.opdsl.ops.core_named_ops.T2
mlir.dialects.linalg.opdsl.ops.core_named_ops.Batch
mlir.dialects.linalg.opdsl.ops.core_named_ops.copy(I=TensorDef(T1), O=TensorDef(U, output=True), cast=TypeFnAttrDef(default=TypeFn.cast_signed))

Copies the tensor elementwise.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.exp(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies exp(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.log(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies log(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.abs(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies abs(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.ceil(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies ceil(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.floor(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies floor(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.negf(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies negf(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.reciprocal(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies reciprocal(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.round(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies round(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.sqrt(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies sqrt(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.rsqrt(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies rsqrt(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.square(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies square(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.tanh(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies tanh(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.erf(I=TensorDef(T1), O=TensorDef(T1, output=True))

Applies erf(x) elementwise.

No numeric casting is performed on the input operand.

mlir.dialects.linalg.opdsl.ops.core_named_ops.add(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Adds two tensors elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.add sequence can be lowered to a linalg.generic with different affine maps for the two operands.
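As a concrete illustration of this contract, here is a NumPy sketch (NumPy merely models the op's semantics here; it is not the MLIR Python API, and the shapes and values are made up for the example):

```python
import numpy as np

# Any broadcast happens explicitly before the add, mirroring a
# linalg.broadcast + linalg.add sequence.
lhs = np.arange(6.0).reshape(2, 3)
rhs_row = np.array([10.0, 20.0, 30.0])        # shape (3,)
rhs = np.broadcast_to(rhs_row, lhs.shape)     # explicit broadcast to (2, 3)
out = lhs + rhs                               # shapes and element types now identical
```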

mlir.dialects.linalg.opdsl.ops.core_named_ops.sub(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Subtracts two tensors elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.sub sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.opdsl.ops.core_named_ops.mul(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Multiplies two tensors elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.mul sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.opdsl.ops.core_named_ops.div(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Divides the first tensor by the second tensor, elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.opdsl.ops.core_named_ops.div_unsigned(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Divides the first tensor by the second tensor, elementwise. For integer types, performs an unsigned division.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.opdsl.ops.core_named_ops.max(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Takes the max (signed) between two inputs, elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.max sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.opdsl.ops.core_named_ops.min(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Takes the min (signed) between two inputs, elementwise.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.min sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.opdsl.ops.core_named_ops.powf(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Takes the powf(lhs, rhs) between two inputs, elementwise. For powf(arg, 2) use linalg.square.

Only applies to floating point values.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.powf sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.opdsl.ops.core_named_ops.select(cond=TensorDef(U), lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True))

Chooses one value based on a binary condition supplied as its first operand.

The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op.

This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.select sequence can be lowered to a linalg.generic with different affine maps for the two operands.

mlir.dialects.linalg.opdsl.ops.core_named_ops.quantized_matmul(A=TensorDef(T1, S.M, S.K), B=TensorDef(T2, S.K, S.N), AZp=ScalarDef(I32), BZp=ScalarDef(I32), C=TensorDef(U, S.M, S.N, output=True))

Performs a matrix multiplication of two 2D inputs.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.
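A NumPy sketch of the zero-point adjustment (shapes, values, and zero points are hypothetical; NumPy models the semantics only, not the MLIR API):

```python
import numpy as np

A = np.array([[130, 140], [150, 160]], dtype=np.uint8)   # (M, K) input
B = np.array([[120, 121], [122, 123]], dtype=np.uint8)   # (K, N) input
AZp, BZp = 128, 120                                      # zero points

# Widen to the accumulator type first, subtract the zero points,
# then perform the matmul in i32.
C = (A.astype(np.int32) - AZp) @ (B.astype(np.int32) - BZp)
```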

mlir.dialects.linalg.opdsl.ops.core_named_ops.mmt4d(lhs=TensorDef(TV.LhsType, S.M, S.K, S.M0, S.K0), rhs=TensorDef(TV.RhsType, S.N, S.K, S.N0, S.K0), accum=TensorDef(TV.AccumType, S.M, S.N, S.M0, S.N0, output=True))

Performs a matrix-matrix-transpose multiplication of two 4D inputs.

Differences from linalg.matmul:

  • The right hand side is transposed, whence the ‘t’ in ‘mmt’.

  • The input and output tensors have a 4D shape instead of a 2D shape. They are interpreted as 2D matrices with one level of 2D tile subdivision, whence the 2+2=4 dimensions. The inner tile dimensions are identified with ‘0’ suffixes below; for instance, the LHS matrix shape (M, K, M0, K0) reads as: MxK tiles, each of shape M0xK0.
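The tiled indexing can be sketched in NumPy and cross-checked against a plain matmul on the untiled view (tile sizes below are arbitrary example values; this models the op's semantics, not the MLIR API):

```python
import numpy as np

# Tiled shapes: lhs (M, K, M0, K0), rhs (N, K, N0, K0) -- the RHS is
# transposed at the tile level, whence the 't' in 'mmt'.
M, N, K, M0, N0, K0 = 2, 3, 4, 5, 6, 7
rng = np.random.default_rng(0)
lhs = rng.standard_normal((M, K, M0, K0))
rhs = rng.standard_normal((N, K, N0, K0))

# accum[m, n, m0, n0] = sum_{k, k0} lhs[m, k, m0, k0] * rhs[n, k, n0, k0]
accum = np.einsum("mkac,nkbc->mnab", lhs, rhs)

# Cross-check against an ordinary matmul on the untiled 2D views.
lhs2d = lhs.transpose(0, 2, 1, 3).reshape(M * M0, K * K0)
rhs2d = rhs.transpose(0, 2, 1, 3).reshape(N * N0, K * K0)
ref = lhs2d @ rhs2d.T
```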

mlir.dialects.linalg.opdsl.ops.core_named_ops.batch_mmt4d(lhs=TensorDef(TV.LhsType, Batch, S.M, S.K, S.M0, S.K0), rhs=TensorDef(TV.RhsType, Batch, S.N, S.K, S.N0, S.K0), accum=TensorDef(TV.AccumType, Batch, S.M, S.N, S.M0, S.N0, output=True))

Performs a batched matrix-matrix-transpose multiplication of two batched-4D (5D) inputs.

Apart from the outermost batch dimension, which has the same semantics as in linalg.batch_matmul, the differences from linalg.batch_matmul in the non-batch dimensions are the same as the differences between linalg.mmt4d and linalg.matmul. See the description of linalg.mmt4d.

mlir.dialects.linalg.opdsl.ops.core_named_ops.quantized_batch_matmul(A=TensorDef(T1, Batch, S.M, S.K), B=TensorDef(T2, Batch, S.K, S.N), AZp=ScalarDef(I32), BZp=ScalarDef(I32), C=TensorDef(U, Batch, S.M, S.N, output=True))

Performs a batched matrix multiplication of two 3D inputs.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.

mlir.dialects.linalg.opdsl.ops.core_named_ops.matvec(A=TensorDef(T1, S.M, S.N), y=TensorDef(T2, S.N), x=TensorDef(U, S.M, output=True))

Performs a matrix-vector multiplication.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
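A NumPy sketch of this promotion behavior, assuming f16 inputs with an f32 accumulator purely for illustration (NumPy models the semantics, not the MLIR API):

```python
import numpy as np

# matvec: x[m] = sum_k A[m, k] * y[k], with operands promoted to the
# accumulator's element type before the multiply-accumulate.
A = np.arange(6, dtype=np.float16).reshape(2, 3)   # (M, N) matrix
y = np.ones(3, dtype=np.float16)                   # (N,) vector
x = A.astype(np.float32) @ y.astype(np.float32)    # promote, then accumulate in f32
```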

mlir.dialects.linalg.opdsl.ops.core_named_ops.vecmat(y=TensorDef(T1, S.M), A=TensorDef(T2, S.M, S.N), x=TensorDef(U, S.N, output=True))

Performs a vector-matrix multiplication.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.batch_matvec(A=TensorDef(T1, Batch, S.M, S.K), B=TensorDef(T2, Batch, S.K), C=TensorDef(U, Batch, S.M, output=True))

Performs a batched matrix-vector multiplication.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.batch_vecmat(A=TensorDef(T1, Batch, S.K), B=TensorDef(T2, Batch, S.K, S.N), C=TensorDef(U, Batch, S.N, output=True))

Performs a batched vector-matrix multiplication.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.dot(A=TensorDef(T1, S.M), B=TensorDef(T2, S.M), C=TensorDef(U, output=True))

Performs a dot product of two vectors to a scalar result.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_1d(I=TensorDef(T1, S.OW + S.KW), K=TensorDef(T2, S.KW), O=TensorDef(U, S.OW, output=True))

Performs 1-D convolution with no channels.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
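The computation can be written as an explicit loop nest (a NumPy sketch of the semantics with made-up values, not the MLIR API):

```python
import numpy as np

# conv_1d with no channels: O[ow] += I[ow + kw] * K[kw]
# (linalg convolutions are correlations: the kernel is not flipped).
I = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
K = np.array([1.0, 0.0, -1.0])
OW = I.size - K.size + 1
O = np.zeros(OW)
for ow in range(OW):
    for kw in range(K.size):
        O[ow] += I[ow + kw] * K[kw]
```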

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d(I=TensorDef(T1, S.OH + S.KH, S.OW + S.KW), K=TensorDef(T2, S.KH, S.KW), O=TensorDef(U, S.OH, S.OW, output=True))

Performs 2-D convolution with no channels.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_3d(I=TensorDef(T1, S.OD + S.KD, S.OH + S.KH, S.OW + S.KW), K=TensorDef(T2, S.KD, S.KH, S.KW), O=TensorDef(U, S.OD, S.OH, S.OW, output=True))

Performs 3-D convolution with no channels.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_1d_nwc_wcf(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, S.C, S.F), O=TensorDef(U, S.N, S.OW, S.F, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs 1-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_1d_ncw_fcw(I=TensorDef(T1, S.N, S.C, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KW), O=TensorDef(U, S.N, S.F, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs 1-D convolution.

Layout:

  • Input: NCW.

  • Kernel: FCW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_nhwc_hwcf(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, S.C, S.F), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution.

Layout:

  • Input: NHWC.

  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
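The full index map, including strides and dilations, can be spelled out as a loop nest (a NumPy reference sketch with arbitrary example sizes; it models the semantics, not the MLIR API):

```python
import numpy as np

# conv_2d_nhwc_hwcf:
#   O[n, oh, ow, f] += I[n, oh*SH + kh*DH, ow*SW + kw*DW, c] * K[kh, kw, c, f]
N, H, W, C, F, KH, KW = 1, 6, 6, 2, 3, 2, 2
SH, SW, DH, DW = 2, 2, 1, 1                          # strides and dilations
OH = (H - (KH - 1) * DH - 1) // SH + 1
OW = (W - (KW - 1) * DW - 1) // SW + 1
rng = np.random.default_rng(0)
I = rng.standard_normal((N, H, W, C))
K = rng.standard_normal((KH, KW, C, F))
O = np.zeros((N, OH, OW, F))
for n in range(N):
    for oh in range(OH):
        for ow in range(OW):
            for kh in range(KH):
                for kw in range(KW):
                    for c in range(C):
                        for f in range(F):
                            O[n, oh, ow, f] += (
                                I[n, oh * SH + kh * DH, ow * SW + kw * DW, c]
                                * K[kh, kw, c, f]
                            )
```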

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_nhwc_fhwc(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.F, S.KH, S.KW, S.C), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution.

Layout:

  • Input: NHWC.

  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_nhwc_hwcf_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, S.C, S.F), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution with zero point offsets.

Layout:

  • Input: NHWC.

  • Kernel: HWCF.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_nhwc_fhwc_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.F, S.KH, S.KW, S.C), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution with zero point offsets.

Layout:

  • Input: NHWC.

  • Kernel: FHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_nchw_fchw_q(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KH, S.KW), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.F, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution with zero point offsets.

Layout:

  • Input: NCHW.

  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_nchw_fchw(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KH, S.KW), O=TensorDef(U, S.N, S.F, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D convolution.

Layout:

  • Input: NCHW.

  • Kernel: FCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_ngchw_fgchw(I=TensorDef(T1, S.N, S.G, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.FG, S.G, S.C, S.KH, S.KW), O=TensorDef(U, S.N, S.G, S.FG, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution.

Layout:

  • Input: NGCHW.

  • Kernel: FGCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_ngchw_gfchw(I=TensorDef(T1, S.N, S.G, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.G, S.FG, S.C, S.KH, S.KW), O=TensorDef(U, S.N, S.G, S.FG, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution.

Layout:

  • Input: NGCHW.

  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_nhwgc_gfhwc(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.G, S.C), K=TensorDef(T2, S.G, S.FG, S.KH, S.KW, S.C), O=TensorDef(U, S.N, S.OH, S.OW, S.G, S.FG, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution.

Layout:

  • Input: NHWGC.

  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_nhwgc_gfhwc_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.G, S.C), K=TensorDef(T2, S.G, S.FG, S.KH, S.KW, S.C), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.G, S.FG, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution with zero point offsets.

Layout:

  • Input: NHWGC.

  • Kernel: GFHWC.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_2d_ngchw_gfchw_q(I=TensorDef(T1, S.N, S.G, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.G, S.FG, S.C, S.KH, S.KW), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.G, S.FG, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs 2-D grouped convolution with zero-point offsets.

Layout:

  • Input: NGCHW.

  • Kernel: GFCHW.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_3d_ndhwc_dhwcf(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, S.C, S.F), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_3d_ndhwc_dhwcf_q(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, S.C, S.F), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3-D convolution with zero point offsets.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.

mlir.dialects.linalg.opdsl.ops.core_named_ops.conv_3d_ncdhw_fcdhw(I=TensorDef(T1, S.N, S.C, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KD, S.KH, S.KW), O=TensorDef(U, S.N, S.F, S.OD, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_1d_nwc_wc(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KW, S.IC), O=TensorDef(U, S.N, S.OW, S.IC, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs depth-wise 1-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.
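A loop-nest sketch of the per-channel computation (NumPy with made-up sizes, unit stride and dilation for brevity; this models the semantics, not the MLIR API):

```python
import numpy as np

# depthwise_conv_1d_nwc_wc: each input channel is convolved with its own
# 1-D filter (channel multiplier 1):
#   O[n, ow, c] += I[n, ow + kw, c] * K[kw, c]
N, W, C, KW = 1, 5, 2, 3
I = np.arange(N * W * C, dtype=float).reshape(N, W, C)
K = np.ones((KW, C))
OW = W - KW + 1                                  # stride 1, dilation 1
O = np.zeros((N, OW, C))
for n in range(N):
    for ow in range(OW):
        for kw in range(KW):
            for c in range(C):
                O[n, ow, c] += I[n, ow + kw, c] * K[kw, c]
```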

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_1d_ncw_cw(I=TensorDef(T1, S.N, S.IC, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.IC, S.KW), O=TensorDef(U, S.N, S.IC, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs depth-wise 1-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_1d_nwc_wcm(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KW, S.IC, S.CM), O=TensorDef(U, S.N, S.OW, S.IC, S.CM, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs depth-wise 1-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_2d_nhwc_hwc(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_2d_nchw_chw(I=TensorDef(T1, S.N, S.IC, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.IC, S.KH, S.KW), O=TensorDef(U, S.N, S.IC, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_2d_nhwc_hwc_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
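The quantized variant additionally takes input and kernel zero points (`IZp`, `KZp`), which are subtracted from the promoted operands before the multiply, as is standard in quantized convolution. A hypothetical NumPy sketch of that semantics:

```python
import numpy as np

def depthwise_conv_2d_nhwc_hwc_q_ref(I, K, i_zp, k_zp, O,
                                     strides=(1, 1), dilations=(1, 1)):
    """Reference sketch of depthwise_conv_2d_nhwc_hwc_q.

    Zero points are subtracted after promoting the operands to the
    accumulator type, so the subtraction cannot overflow the narrow type.
    """
    I = I.astype(O.dtype) - i_zp
    K = K.astype(O.dtype) - k_zp
    n_, oh_, ow_, c_ = O.shape
    kh_, kw_, _ = K.shape
    (sh, sw), (dh, dw) = strides, dilations
    for n in range(n_):
        for oh in range(oh_):
            for ow in range(ow_):
                for kh in range(kh_):
                    for kw in range(kw_):
                        O[n, oh, ow, :] += (
                            I[n, oh * sh + kh * dh, ow * sw + kw * dw, :] * K[kh, kw, :]
                        )
    return O
```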

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_2d_nhwc_hwcm(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC, S.CM), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, S.CM, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_2d_nhwc_hwcm_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC, S.CM), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, S.CM, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs depth-wise 2-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_3d_ndhwc_dhwc(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KD, S.KH, S.KW, S.IC), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs depth-wise 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is set to 1, which is the common case for depthwise convolutions.

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_3d_ncdhw_cdhw(I=TensorDef(T1, S.N, S.IC, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.IC, S.KD, S.KH, S.KW), O=TensorDef(U, S.N, S.IC, S.OD, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs depth-wise 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is set to 1, which is the common case for depthwise convolutions.

mlir.dialects.linalg.opdsl.ops.core_named_ops.depthwise_conv_3d_ndhwc_dhwcm(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KD, S.KH, S.KW, S.IC, S.CM), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.CM, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs depth-wise 3-D convolution.

Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nhwc_sum(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs sum pooling.

Layout:

  • Input: NHWC.

  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
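The pooling ops take a kernel tensor whose values are unused: only its shape (and the `index_dims` it binds) defines the pooling window. The hypothetical NumPy sketch below therefore represents the kernel as a plain shape tuple:

```python
import numpy as np

def pooling_nhwc_sum_ref(I, O, kernel=(2, 2), strides=(1, 1), dilations=(1, 1)):
    """Reference sketch of pooling_nhwc_sum.

    I: (N, H, W, C) input, O: (N, OH, OW, C) accumulator, updated in place.
    `kernel` stands in for the shape-only K tensor of the op.
    """
    # Promote the input to the accumulator/output type, as the op does.
    I = I.astype(O.dtype)
    n_, oh_, ow_, c_ = O.shape
    (kh_, kw_), (sh, sw), (dh, dw) = kernel, strides, dilations
    for n in range(n_):
        for oh in range(oh_):
            for ow in range(ow_):
                for kh in range(kh_):
                    for kw in range(kw_):
                        # Sum every element of the (dilated, strided) window.
                        O[n, oh, ow, :] += I[n, oh * sh + kh * dh, ow * sw + kw * dw, :]
    return O
```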

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nchw_sum(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.C, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs sum pooling.

Layout:

  • Input: NCHW.

  • Kernel: HW.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nhwc_max(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
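Max pooling combines into the accumulator with `max` rather than `+`, so the caller is responsible for initializing `O` (typically to the smallest value of its element type). A hypothetical NumPy sketch:

```python
import numpy as np

def pooling_nhwc_max_ref(I, O, kernel=(2, 2), strides=(1, 1), dilations=(1, 1)):
    """Reference sketch of pooling_nhwc_max.

    O must be pre-initialized by the caller (e.g. to -inf for floats),
    since results are combined into the accumulator with max.
    """
    I = I.astype(O.dtype)
    n_, oh_, ow_, c_ = O.shape
    (kh_, kw_), (sh, sw), (dh, dw) = kernel, strides, dilations
    for n in range(n_):
        for oh in range(oh_):
            for ow in range(ow_):
                for kh in range(kh_):
                    for kw in range(kw_):
                        # Take the running max over the window.
                        O[n, oh, ow, :] = np.maximum(
                            O[n, oh, ow, :],
                            I[n, oh * sh + kh * dh, ow * sw + kw * dw, :],
                        )
    return O
```

The min-pooling and unsigned variants follow the same pattern with `np.minimum` or an unsigned comparison.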

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nhwc_max_unsigned(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs unsigned max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nchw_max(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.C, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nhwc_min(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nhwc_min_unsigned(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1]))

Performs unsigned min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nwc_sum(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs sum pooling.

Layout:

  • Input: NWC.

  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_ncw_sum(I=TensorDef(T1, S.N, S.C, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.C, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs sum pooling.

Layout:

  • Input: NCW.

  • Kernel: W.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nwc_max(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nwc_max_unsigned(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs unsigned max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_ncw_max(I=TensorDef(T1, S.N, S.C, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.C, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nwc_min(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_nwc_min_unsigned(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1]))

Performs unsigned min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_ndhwc_sum(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, index_dims=[D.kd, D.kh, D.kw]), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3D sum pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_ndhwc_max(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, index_dims=[D.kd, D.kh, D.kw]), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3D max pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.pooling_ndhwc_min(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, index_dims=[D.kd, D.kh, D.kw]), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1]))

Performs 3D min pooling.

Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.

mlir.dialects.linalg.opdsl.ops.core_named_ops.fill(value=ScalarDef(T1), O=TensorDef(U, output=True))

Fills the output tensor with the given value.

Works for arbitrary ranked output tensors since the operation performs scalar accesses only and is thus rank polymorphic. Numeric casting is performed on the value operand, promoting it to the same data type as the output.
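Rank polymorphism here just means the same scalar broadcast works for any output rank. A hypothetical NumPy sketch:

```python
import numpy as np

def fill_ref(value, O):
    """Reference sketch of linalg.fill: rank-polymorphic scalar broadcast.

    The scalar is cast to the output element type, then written to every
    element; the output's rank and shape are irrelevant to the computation.
    """
    O[...] = np.asarray(value).astype(O.dtype)
    return O
```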

mlir.dialects.linalg.opdsl.ops.core_named_ops.fill_rng_2d(min=ScalarDef(F64), max=ScalarDef(F64), seed=ScalarDef(I32), O=TensorDef(T, S.M, S.N, output=True))

Fills the output tensor with pseudo random numbers.

The operation generates pseudorandom numbers using a linear congruential generator. It provides no guarantees about the distribution of the generated numbers. Instead of generating the numbers sequentially, it instantiates one random number generator per data element and runs them in parallel: the seed operand and the indices of the data element seed each generator. The min and max operands bound the range of the generated numbers.
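The per-element scheme can be sketched in NumPy. This is an illustrative model only: the LCG constants below are the classic glibc-style ones and are not claimed to match the op's actual constants, and no particular distribution is guaranteed. What it does demonstrate is the key property: each element's value depends only on (seed, i, j), so all elements can be generated in parallel and the result is deterministic.

```python
import numpy as np

def fill_rng_2d_ref(min_v, max_v, seed, O):
    """Sketch of fill_rng_2d's per-element LCG scheme (constants illustrative)."""
    m, n = O.shape
    a, c = np.int64(1103515245), np.int64(12345)  # glibc-style LCG constants
    i = np.arange(m, dtype=np.int64)[:, None]
    j = np.arange(n, dtype=np.int64)[None, :]
    # Two chained LCG steps mix the seed with the element's (i, j) index.
    r1 = ((i + seed) * a + c) % (1 << 32)
    r2 = ((j + r1) * a + c) % (1 << 32)
    # Map the 32-bit state into [min_v, max_v).
    u = r2 / float(1 << 32)
    O[...] = (min_v + (max_v - min_v) * u).astype(O.dtype)
    return O
```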