mlir.dialects._linalg_ops_gen¶
Module Contents¶
- mlir.dialects._linalg_ops_gen._ods_ir¶
- class mlir.dialects._linalg_ops_gen._Dialect(descriptor: object)¶
Bases: _ods_ir
- DIALECT_NAMESPACE = 'linalg'¶
- class mlir.dialects._linalg_ops_gen.AbsOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases: _ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.abs'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.abs(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | AbsOp¶
- class mlir.dialects._linalg_ops_gen.AddOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases: _ods_ir
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.
This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.add sequence can be lowered to a linalg.generic with different affine maps for the two operands.
- OPERATION_NAME = 'linalg.add'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.add(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | AddOp¶
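As an illustrative sketch (operand names and shapes here are hypothetical, not from this page), the op built by this class appears in IR as:

```mlir
// Elementwise addition; shapes and element types of all three operands match.
%sum = linalg.add ins(%lhs, %rhs : tensor<4x8xf32>, tensor<4x8xf32>)
                  outs(%init : tensor<4x8xf32>) -> tensor<4x8xf32>
```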
- class mlir.dialects._linalg_ops_gen.BatchMatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
Broadcast and Transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so must include maps for all arguments if specified.
Example Transpose:

```mlir
linalg.batch_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>, // transpose
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%arg0, %arg1 : memref<2x5x3xf32>, memref<2x5x7xf32>)
    outs(%arg2 : memref<2x3x7xf32>)
```

Example Broadcast:

```mlir
linalg.batch_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (k)>, // broadcast
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%arg0, %arg1 : memref<5xf32>, memref<2x5x7xf32>)
    outs(%arg2 : memref<2x3x7xf32>)
```

Example Broadcast and Transpose:

```mlir
linalg.batch_matmul
    indexing_maps = [affine_map<(batch, m, n, k) -> (m, k)>, // broadcast
                     affine_map<(batch, m, n, k) -> (batch, n, k)>, // transpose
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%arg0, %arg1 : memref<3x5xf32>, memref<2x7x5xf32>)
    outs(%arg2 : memref<2x3x7xf32>)
```
- OPERATION_NAME = 'linalg.batch_matmul'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- indexing_maps() _ods_ir | None¶
- cast() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.batch_matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | BatchMatmulOp¶
- class mlir.dialects._linalg_ops_gen.BatchMatvecOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.batch_matvec'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.batch_matvec(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | BatchMatvecOp¶
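A sketch of the op this builds, with hypothetical shapes chosen to show the contraction structure (batch x M x K times batch x K yields batch x M):

```mlir
// Batched matrix-vector product: (2x3x5) * (2x5) -> (2x3).
linalg.batch_matvec ins(%A, %x : memref<2x3x5xf32>, memref<2x5xf32>)
                    outs(%y : memref<2x3xf32>)
```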
- class mlir.dialects._linalg_ops_gen.BatchMmt4DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases: _ods_ir
Aside from the outermost batch dimension, which has the same semantics as linalg.batch_matmul, the differences from linalg.batch_matmul in the non-batch dimensions are the same as those of linalg.mmt4d vs. linalg.matmul. See the description of linalg.mmt4d.
- OPERATION_NAME = 'linalg.batch_mmt4d'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.batch_mmt4d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | BatchMmt4DOp¶
- class mlir.dialects._linalg_ops_gen.BatchReduceMatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
Broadcast and Transpose semantics can be applied by specifying the explicit attribute ‘indexing_maps’ as shown below. This is a list attribute, so must include maps for all arguments if specified.
Example Transpose:
linalg.batch_reduce_matmul indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>, // transpose affine_map<(batch, m, n, k) -> (batch, k, n)>, affine_map<(batch, m, n, k) -> (m, n)>] ins(%arg0, %arg1 : memref<2x5x3xf32>,memref<2x5x7xf32>) outs(%arg2: memref<3x7xf32>)
Example Broadcast:
linalg.batch_reduce_matmul indexing_maps = [affine_map<(batch, m, n, k) -> (k)>, // broadcast affine_map<(batch, m, n, k) -> (batch, k, n)>, affine_map<(batch, m, n, k) -> (m, n)>] ins(%arg0, %arg1 : memref<5xf32>, memref<2x5x7xf32>) outs(%arg2: memref<3x7xf32>)
Example Broadcast and Transpose:
linalg.batch_reduce_matmul indexing_maps = [affine_map<(batch, m, n, k) -> (m, k)>, // broadcast affine_map<(batch, m, n, k) -> (batch, n, k)>, // transpose affine_map<(batch, m, n, k) -> (m, n)>] ins(%arg0, %arg1 : memref<3x5xf32>, memref<2x7x5xf32>) outs(%arg2: memref<3x7xf32>)
- OPERATION_NAME = 'linalg.batch_reduce_matmul'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- indexing_maps() _ods_ir | None¶
- cast() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.batch_reduce_matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | BatchReduceMatmulOp¶
- class mlir.dialects._linalg_ops_gen.BatchVecmatOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.batch_vecmat'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.batch_vecmat(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | BatchVecmatOp¶
- class mlir.dialects._linalg_ops_gen.BroadcastOp(result, input, init, dimensions, *, loc=None, ip=None)¶
Bases: _ods_ir
Broadcast the input into the given shape by adding dimensions.
Example:
%bcast = linalg.broadcast ins(%input:tensor<16xf32>) outs(%init:tensor<16x64xf32>) dimensions = [1]
- OPERATION_NAME = 'linalg.broadcast'¶
- _ODS_REGIONS = (1, True)¶
- input() _ods_ir¶
- init() _ods_ir¶
- dimensions() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.broadcast(result, input, init, dimensions, *, loc=None, ip=None) _ods_ir | _ods_ir | BroadcastOp¶
- class mlir.dialects._linalg_ops_gen.CeilOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases: _ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.ceil'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.ceil(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | CeilOp¶
- class mlir.dialects._linalg_ops_gen.ContractOp(result_tensors, inputs, outputs, indexing_maps, *, cast=None, loc=None, ip=None)¶
Bases: _ods_ir
The semantics of contracting inputs A and B on top of C to produce output D is given by

D[H] = (SUM_{(I ∪ J) \ H} A[I] * B[J]) + C[H]

where I, J, and H are tuples of (pairwise distinct) dimension identifiers - meant to range over valid indices - corresponding to the results of the mandatory (projected permutation) indexing_maps for A, B and C. SUM_{dims} means reduce over all valid indices for the dimensions in the set dims (with I, J, and H treated as sets of dim identifiers).
The iteration space consists of all dimensions in I, J and H, i.e. the domain of each of the affine_maps. Like for einsums, the iteration type of each dim is inferred and is either:
- reduction: the dim is used to index into A and B but not C. Per the above semantics, these dims will be contracted, i.e. reduced over.
- parallel: the dim is used to index into C and at least one of A and B, and - deriving from matmul terminology - is either an "M-like" dim (if used on A and C), an "N-like" dim (if used on B and C) or a "batch"-dim (if used to index into A, B, and C).
For example, batch-matmul is given by I = ⟨ b, m, k ⟩, J = ⟨ b, k, n ⟩, H = ⟨ b, m, n ⟩ (with k as a contracting reduction-dimension while m, n and b have parallel iteration-type) and gets represented as:

```mlir
%D = linalg.contract
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, m, k)>,
                     affine_map<(batch, m, n, k) -> (batch, k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%A, %B : tensor<?x?x?xf32>, tensor<?x?x?xf32>)
    outs(%C : tensor<?x?x?xf32>) -> tensor<?x?x?xf32>
```

Note that by permuting dims in the affine_maps' results, accesses to the inputs and output can be arbitrarily transposed. Similarly, arbitrary broadcasts can be achieved through leaving out dims on either input operand. For example, the following is a variant of batch-matmul with a transposition applied to A while B's 2D-matrix gets broadcasted along the batch dim:

```mlir
linalg.contract
    indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>,
                     affine_map<(batch, m, n, k) -> (k, n)>,
                     affine_map<(batch, m, n, k) -> (batch, m, n)>]
    ins(%A, %B : memref<?x?x?xf32>, memref<?x?xf32>)
    outs(%C : memref<?x?x?xf32>)
```
Numeric casting is performed on the operands to the inner multiplication, promoting/truncating them to the same data type as the accumulator/output.
TODO: Allow control over the combining/accumulating op and possibly the multiplication op.
- OPERATION_NAME = 'linalg.contract'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- indexing_maps() _ods_ir¶
- cast() _ods_ir | None¶
- result_tensors() _ods_ir¶
- combiner() _ods_ir¶
- mlir.dialects._linalg_ops_gen.contract(result_tensors, inputs, outputs, indexing_maps, *, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | ContractOp¶
- class mlir.dialects._linalg_ops_gen.Conv1DNcwFcwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NCW.
Kernel: FCW.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_1d_ncw_fcw'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_1d_ncw_fcw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv1DNcwFcwOp¶
- class mlir.dialects._linalg_ops_gen.Conv1DNwcWcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_1d_nwc_wcf'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_1d_nwc_wcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv1DNwcWcfOp¶
- class mlir.dialects._linalg_ops_gen.Conv1DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_1d'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_1d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Conv1DOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNchwFchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NCHW.
Kernel: FCHW.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_2d_nchw_fchw'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_nchw_fchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNchwFchwOp¶
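A sketch of how the optional strides and dilations attributes appear on this op in IR; the shapes below are hypothetical and chosen so the arithmetic is easy to check:

```mlir
// Stride 2, dilation 1 in both spatial dims; a 3x3 filter over a 32x32 input
// yields floor((32 - 3) / 2) + 1 = 15 output positions per spatial dim.
linalg.conv_2d_nchw_fchw
    {dilations = dense<1> : tensor<2xi64>, strides = dense<2> : tensor<2xi64>}
    ins(%input, %filter : memref<1x3x32x32xf32>, memref<8x3x3x3xf32>)
    outs(%output : memref<1x8x15x15xf32>)
```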
- class mlir.dialects._linalg_ops_gen.Conv2DNchwFchwQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NCHW.
Kernel: FCHW.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.
- OPERATION_NAME = 'linalg.conv_2d_nchw_fchw_q'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_nchw_fchw_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNchwFchwQOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNgchwFgchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NGCHW.
Kernel: FGCHW.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_2d_ngchw_fgchw'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_ngchw_fgchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNgchwFgchwOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNgchwGfchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NGCHW.
Kernel: GFCHW.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_2d_ngchw_gfchw'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_ngchw_gfchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNgchwGfchwOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNgchwGfchwQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NGCHW.
Kernel: GFCHW.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.
- OPERATION_NAME = 'linalg.conv_2d_ngchw_gfchw_q'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_ngchw_gfchw_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNgchwGfchwQOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNhwcFhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NHWC.
Kernel: FHWC.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_2d_nhwc_fhwc'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_nhwc_fhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcFhwcOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNhwcFhwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NHWC.
Kernel: FHWC.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.
- OPERATION_NAME = 'linalg.conv_2d_nhwc_fhwc_q'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_nhwc_fhwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcFhwcQOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNhwcHwcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NHWC.
Kernel: HWCF.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_2d_nhwc_hwcf'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_nhwc_hwcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcHwcfOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNhwcHwcfQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NHWC.
Kernel: HWCF.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.
- OPERATION_NAME = 'linalg.conv_2d_nhwc_hwcf_q'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_nhwc_hwcf_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwcHwcfQOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNhwgcGfhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NHWGC.
Kernel: GFHWC.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_2d_nhwgc_gfhwc'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_nhwgc_gfhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwgcGfhwcOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DNhwgcGfhwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Layout:
Input: NHWGC.
Kernel: GFHWC.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.
- OPERATION_NAME = 'linalg.conv_2d_nhwgc_gfhwc_q'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d_nhwgc_gfhwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DNhwgcGfhwcQOp¶
- class mlir.dialects._linalg_ops_gen.Conv2DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_2d'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_2d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Conv2DOp¶
- class mlir.dialects._linalg_ops_gen.Conv3DNcdhwFcdhwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_3d_ncdhw_fcdhw'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_3d_ncdhw_fcdhw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DNcdhwFcdhwOp¶
- class mlir.dialects._linalg_ops_gen.Conv3DNdhwcDhwcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_3d_ndhwc_dhwcf'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_3d_ndhwc_dhwcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DNdhwcDhwcfOp¶
- class mlir.dialects._linalg_ops_gen.Conv3DNdhwcDhwcfQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations.
- OPERATION_NAME = 'linalg.conv_3d_ndhwc_dhwcf_q'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_3d_ndhwc_dhwcf_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DNdhwcDhwcfQOp¶
- class mlir.dialects._linalg_ops_gen.Conv3DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.conv_3d'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.conv_3d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Conv3DOp¶
- class mlir.dialects._linalg_ops_gen.CopyOp(result_tensors, inputs, outputs, *, cast=None, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.copy'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- cast() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.copy(result_tensors, inputs, outputs, *, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | CopyOp¶
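A minimal sketch of the op this builds, assuming the element-type promotion described above (operand names and the f16-to-f32 promotion are illustrative):

```mlir
// Copy with the input promoted from f16 to the f32 element type of the output.
%0 = linalg.copy ins(%src : tensor<16xf16>)
                 outs(%dst : tensor<16xf32>) -> tensor<16xf32>
```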
- class mlir.dialects._linalg_ops_gen.DepthwiseConv1DNcwCwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases: _ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The multiplier is set to 1, which is a special case for most depthwise convolutions.
- OPERATION_NAME = 'linalg.depthwise_conv_1d_ncw_cw'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_1d_ncw_cw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv1DNcwCwOp¶
- class mlir.dialects._linalg_ops_gen.DepthwiseConv1DNwcWcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.
- OPERATION_NAME = 'linalg.depthwise_conv_1d_nwc_wc'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_1d_nwc_wc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv1DNwcWcOp¶
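The op name encodes the layouts: the input is N (batch) x W (width) x C (channels) and the filter is W x C, with one filter column per channel. As a rough reference for the semantics (not the bindings API), here is a minimal pure-Python sketch; the function name and list-of-lists representation are illustrative only:

```python
# Illustrative sketch of what linalg.depthwise_conv_1d_nwc_wc computes
# (channel multiplier 1): each channel is convolved independently.
def depthwise_conv_1d_nwc_wc(inp, filt, out, stride=1, dilation=1):
    N = len(inp)          # batch
    KW = len(filt)        # kernel width
    C = len(filt[0])      # channels
    OW = len(out[0])      # output width
    for n in range(N):
        for w in range(OW):
            for c in range(C):
                for kw in range(KW):
                    # Accumulate into the output, as linalg ops do.
                    out[n][w][c] += inp[n][w * stride + kw * dilation][c] * filt[kw][c]
    return out

# One batch, one channel, input width 4, kernel width 2:
inp = [[[1.0], [2.0], [3.0], [4.0]]]
filt = [[1.0], [1.0]]
out = [[[0.0], [0.0], [0.0]]]
depthwise_conv_1d_nwc_wc(inp, filt, out)
# out is [[[3.0], [5.0], [7.0]]]
```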
- class mlir.dialects._linalg_ops_gen.DepthwiseConv1DNwcWcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.depthwise_conv_1d_nwc_wcm'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_1d_nwc_wcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv1DNwcWcmOp¶
- class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNchwChwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.
- OPERATION_NAME = 'linalg.depthwise_conv_2d_nchw_chw'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nchw_chw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNchwChwOp¶
- class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNhwcHwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.
- OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwc'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nhwc_hwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcOp¶
- class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNhwcHwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwc_q'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nhwc_hwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcQOp¶
- class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNhwcHwcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwcm'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nhwc_hwcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcmOp¶
- class mlir.dialects._linalg_ops_gen.DepthwiseConv2DNhwcHwcmQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.depthwise_conv_2d_nhwc_hwcm_q'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_2d_nhwc_hwcm_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv2DNhwcHwcmQOp¶
- class mlir.dialects._linalg_ops_gen.DepthwiseConv3DNcdhwCdhwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.
- OPERATION_NAME = 'linalg.depthwise_conv_3d_ncdhw_cdhw'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_3d_ncdhw_cdhw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv3DNcdhwCdhwOp¶
- class mlir.dialects._linalg_ops_gen.DepthwiseConv3DNdhwcDhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions.
- OPERATION_NAME = 'linalg.depthwise_conv_3d_ndhwc_dhwc'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_3d_ndhwc_dhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv3DNdhwcDhwcOp¶
- class mlir.dialects._linalg_ops_gen.DepthwiseConv3DNdhwcDhwcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.depthwise_conv_3d_ndhwc_dhwcm'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.depthwise_conv_3d_ndhwc_dhwcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | DepthwiseConv3DNdhwcDhwcmOp¶
- class mlir.dialects._linalg_ops_gen.DivOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.
This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.
- OPERATION_NAME = 'linalg.div'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.div(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | DivOp¶
- class mlir.dialects._linalg_ops_gen.DivUnsignedOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op.
This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.div sequence can be lowered to a linalg.generic with different affine maps for the two operands.
- OPERATION_NAME = 'linalg.div_unsigned'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.div_unsigned(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | DivUnsignedOp¶
- class mlir.dialects._linalg_ops_gen.DotOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.dot'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.dot(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | DotOp¶
- class mlir.dialects._linalg_ops_gen.ElementwiseOp(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None)¶
Bases:
_ods_ir
The attribute kind describes the arithmetic operation to perform. The operation kind can be unary (e.g. max), binary (e.g. add) or ternary (e.g. select).
By default, all indexing maps are identities. In the case of default indexing maps, all input and output shapes must match. The number of dims in each of the identity maps is equal to the rank of the output type.
Affine maps for operands and result are required to be provided by the user when a transpose and/or broadcast is needed on any operand. When a map is not provided, default identity maps are inferred for each operand.
Iterator-types are always all parallel. Iterator-types are needed for constructing the underlying structured op. The number of dims of the iterator-types is inferred from the rank of the result type.
Example:
Defining a unary linalg.elementwise with default indexing-map:
%exp = linalg.elementwise kind=#linalg.elementwise_kind<exp> ins(%x : tensor<4x16x8xf32>) outs(%y: tensor<4x16x8xf32>) -> tensor<4x16x8xf32>
Defining a binary linalg.elementwise with user-defined indexing-map:
%add = linalg.elementwise kind=#linalg.elementwise_kind<add> indexing_maps = [#transpose, #broadcast, #identity] ins(%exp, %arg1 : tensor<4x16x8xf32>, tensor<4x16xf32>) outs(%arg2: tensor<4x8x16xf32>) -> tensor<4x8x16xf32>
- OPERATION_NAME = 'linalg.elementwise'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- kind() _ods_ir¶
- indexing_maps() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.elementwise(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None) _ods_ir | _ods_ir | ElementwiseOp¶
- class mlir.dialects._linalg_ops_gen.ErfOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.erf'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.erf(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | ErfOp¶
- class mlir.dialects._linalg_ops_gen.ExpOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.exp'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.exp(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | ExpOp¶
- class mlir.dialects._linalg_ops_gen.FillOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
Works for arbitrary ranked output tensors since the operation performs scalar accesses only and is thus rank polymorphic. Numeric casting is performed on the value operand, promoting it to the same data type as the output.
- OPERATION_NAME = 'linalg.fill'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.fill(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | FillOp¶
- class mlir.dialects._linalg_ops_gen.FillRng2DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
The operation generates pseudo-random numbers using a linear congruential generator. It provides no guarantees regarding the distribution of the generated random numbers. Instead of generating the random numbers sequentially, it instantiates one random number generator per data element and runs them in parallel. The seed operand and the indices of the data element seed the random number generation. The min and max operands limit the range of the generated random numbers.
- OPERATION_NAME = 'linalg.fill_rng_2d'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.fill_rng_2d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | FillRng2DOp¶
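The per-element structure described above can be sketched in pure Python. The LCG constants and the way the seed is combined with the indices below are hypothetical (the actual lowering defines its own); the sketch only shows that every element derives its value independently from (seed, i, j), which is what makes the op parallel:

```python
# Hypothetical sketch of fill_rng_2d's structure: one independent generator
# state per element, derived from the seed and the element's indices, then
# scaled into [lo, hi). Constants are illustrative, not the real ones.
def fill_rng_2d(rows, cols, seed, lo, hi):
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            state = (seed + i * cols + j) & 0xFFFFFFFF
            state = (1103515245 * state + 12345) & 0xFFFFFFFF  # one LCG step
            out[i][j] = lo + (hi - lo) * (state / 2**32)
    return out

vals = fill_rng_2d(2, 3, seed=42, lo=0.0, hi=1.0)
# Deterministic for a fixed seed, and every value lies in [0, 1).
```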
- class mlir.dialects._linalg_ops_gen.FloorOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.floor'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.floor(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | FloorOp¶
- class mlir.dialects._linalg_ops_gen.GenericOp(result_tensors, inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None)¶
Bases:
_ods_ir
Generic Linalg op form where the key properties of the computation are specified as attributes. In pretty form, a linalg.generic op is written as:
linalg.generic #trait_attribute ins(%A, %B : memref<?x?xf32, stride_specification>, memref<?x?xf32, stride_specification>) outs(%C : memref<?x?xf32, stride_specification>) attrs = {other-optional-attributes} {region}
Where #trait_attribute is an alias of a dictionary attribute containing:
- doc [optional]: a documentation string
- indexing_maps: a list of AffineMapAttr, one AffineMapAttr per each input and output view. Such AffineMapAttr specifies the mapping between the loops and the indexing within each view.
- library_call [optional]: a StringAttr containing the name of an external library function that the linalg.generic operation maps to. The external library is assumed to be dynamically linked and no strong compile-time guarantees are provided. In the absence of such a library call, linalg.generic will always lower to loops.
- iterator_types: an ArrayAttr specifying the type of the enclosing loops. Each element of the list represents an iterator of one of the following types: parallel, reduction, window.
Example: Defining a #matmul_trait attribute in MLIR can be done as follows:
#matmul_accesses = [ (m, n, k) -> (m, k), (m, n, k) -> (k, n), (m, n, k) -> (m, n) ] #matmul_trait = { doc = "C(m, n) += A(m, k) * B(k, n)", indexing_maps = #matmul_accesses, library_call = "linalg_matmul", iterator_types = ["parallel", "parallel", "reduction"] }
And can be reused in multiple places as:
linalg.generic #matmul_trait ins(%A, %B : memref<?x?xf32, stride_specification>, memref<?x?xf32, stride_specification>) outs(%C : memref<?x?xf32, stride_specification>) {other-optional-attributes} { ^bb0(%a: f32, %b: f32, %c: f32) : %d = arith.mulf %a, %b: f32 %e = arith.addf %c, %d: f32 linalg.yield %e : f32 }
This may lower to either:
call @linalg_matmul(%A, %B, %C) : (memref<?x?xf32, stride_specification>, memref<?x?xf32, stride_specification>, memref<?x?xf32, stride_specification>) -> ()
or IR resembling:
scf.for %m = %c0 to %M step %c1 { scf.for %n = %c0 to %N step %c1 { scf.for %k = %c0 to %K step %c1 { %a = load %A[%m, %k] : memref<?x?xf32, stride_specification> %b = load %B[%k, %n] : memref<?x?xf32, stride_specification> %c = load %C[%m, %n] : memref<?x?xf32, stride_specification> %d = arith.mulf %a, %b: f32 %e = arith.addf %c, %d: f32 store %e, %C[%m, %n] : memref<?x?x?xf32, stride_specification> } } }
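The scf.for loop nest above is the familiar matmul triple loop; as a point of comparison, its pure-Python equivalent (an illustration of the semantics, not bindings code) is:

```python
# Pure-Python equivalent of the loop nest the matmul linalg.generic may
# lower to: C[m][n] += A[m][k] * B[k][n], accumulating into C.
def matmul_accumulate(A, B, C):
    M, K, N = len(A), len(A[0]), len(B[0])
    for m in range(M):
        for n in range(N):
            for k in range(K):
                C[m][n] += A[m][k] * B[k][n]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[0.0, 0.0], [0.0, 0.0]]
matmul_accumulate(A, B, C)
# C is [[19.0, 22.0], [43.0, 50.0]]
```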
- OPERATION_NAME = 'linalg.generic'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- indexing_maps() _ods_ir¶
- iterator_types() _ods_ir¶
- doc() _ods_ir | None¶
- library_call() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.generic(result_tensors, inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None) _ods_ir | _ods_ir | GenericOp¶
- class mlir.dialects._linalg_ops_gen.IndexOp(dim, *, results=None, loc=None, ip=None)¶
Bases:
_ods_ir
The linalg.index operation returns the iteration index of the immediately enclosing linalg structured operation for the iteration dimension dim. The dim attribute specifies the position of the accessed dimension in the indexing map domain.
Example:
#map = affine_map<(i, j) -> (i, j)> linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel", "parallel"]} outs(%I, %J : memref<?x?xindex>, memref<?x?xindex>) { ^bb0(%arg0 : index, %arg1 : index): // Access the outer iteration dimension i %i = linalg.index 0 : index // Access the inner iteration dimension j %j = linalg.index 1 : index linalg.yield %i, %j : index, index }
This may lower to IR resembling:
%0 = dim %I, %c0 : memref<?x?xindex> %1 = dim %I, %c1 : memref<?x?xindex> scf.for %i = %c0 to %0 step %c1 { scf.for %j = %c0 to %1 step %c1 { store %i, %I[%i, %j] : memref<?x?xindex> store %j, %J[%i, %j] : memref<?x?xindex> } }
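The effect of that lowering, in plain Python terms, is simply to store each iteration index into the corresponding output element (helper name is ours, for illustration):

```python
# Pure-Python analogue of the linalg.index example: the generic op writes
# the iteration indices themselves into the two outputs.
def fill_indices(rows, cols):
    I = [[0] * cols for _ in range(rows)]
    J = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            I[i][j] = i  # linalg.index 0
            J[i][j] = j  # linalg.index 1
    return I, J

I, J = fill_indices(2, 3)
# I is [[0, 0, 0], [1, 1, 1]] and J is [[0, 1, 2], [0, 1, 2]]
```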
- OPERATION_NAME = 'linalg.index'¶
- _ODS_REGIONS = (0, True)¶
- dim() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._linalg_ops_gen.index(dim, *, results=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._linalg_ops_gen.PackOp(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, padding_value=None, outer_dims_perm=None, results=None, loc=None, ip=None)¶
Bases:
_ods_ir
The “pack” operation converts a source tensor of rank n into a result tensor of rank n + k with a tiled and packed layout (maybe with padding) and optionally transposes the tiled source tensor dimensions.
inner_tiles (mandatory) specifies k tile sizes. These tile sizes correspond to the least significant (“inner”) result tensor dimension sizes, in the same order. Tile sizes can be static or dynamic.
inner_dims_pos (mandatory) specifies k source tensor dimensions that are being tiled, where 0 <= k <= n:
- inner_dims_pos[i] specifies the source tensor dimension tiled by inner_tiles[i] where 0 <= i < k. All the values in inner_dims_pos are within [0, n).
- The tiled dimensions (of size inner_tiles) are added to the end of the result tensor in the order in which they appear, i.e. shape(result)[rank(source) + i] = inner_tiles[i] for 0 <= i < k.
- The following relationship for the tiled dimensions holds: shape(result)[inner_dims_pos[i]] = shape(source)[inner_dims_pos[i]] ⌈/⌉ inner_tiles[i], where ⌈/⌉ indicates CeilDiv.
Example: If inner_tiles = [16, 32], the result tensor has a shape of ...x16x32. If inner_dims_pos = [0, 1], the 0th source dimension is tiled by 16 and the 1st source dimension is tiled by 32. Other source dimensions (if any) are not tiled. If inner_dims_pos = [1, 0], the 1st dimension is tiled by 16 and the 0th dimension is tiled by 32.
Example:
// NC to NCnc %0 = linalg.pack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %dest : tensor<128x256xf32> -> tensor<16x8 x 8x32 xf32> // \ / \ / // Outer Dims: 16x8 Inner Dims: 8x32 // CHW to CHWhw %0 = linalg.pack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2] into %dest : tensor<3x20x24xf32> -> tensor<3x10x6 x 4x2 xf32> // \ / \ / // Outer Dims: 3x10x6 Inner Dims: 4x2 // HCW to HCWhw %0 = linalg.pack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2] into %dest : tensor<18x3x32xf32> -> tensor<9x3x8 x 4x2 xf32> // \ / \ / // Outer Dims: 9x3x8 Inner Dims: 4x2
outer_dims_perm (optional) specifies a permutation for the outer dimensions. If specified, it must have n elements.
Example:
// CK to KCck %0 = linalg.pack %source outer_dims_perm = [1, 0] inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %dest : tensor<128x256xf32> -> tensor<8x16 x 8x32 xf32> // \ / // compare with "NC to NCnc": outer dims are transposed
padding_value specifies a padding value at the boundary on non-perfectly divisible dimensions. Padding is optional:
- If absent, it is assumed that for all inner tiles, shape(source)[inner_dims_pos[i]] % inner_tiles[i] == 0, i.e. all inner tiles divide perfectly the corresponding outer dimension in the result tensor. It is UB if the tile does not perfectly divide the dimension.
- If present, it will pad along high dimensions (high-padding) to make the tile complete. Note that it is not allowed to have artificial padding that is not strictly required by linalg.pack (i.e., padding past what is needed to complete the last tile along each packed dimension). It is UB if extra padding is requested. It is not possible to verify the requirements statically with dynamic shapes, so they are treated as UB.
Example:
%0 = linalg.pack %arg0 padding_value(%pad : f32) outer_dims_perm = [2, 1, 0] inner_dims_pos = [1] inner_tiles = [2] into %arg1 : tensor<200x127x256xf32> -> tensor<256x64x200x2xf32> // \ // padded and tiled dim // // Source dimension 1 is tiled. 64 does not divide 127 evenly, so 1 padded // element is added at the end. // // Note: Only tiled dimensions can be padded.
Invalid example that has artificial padding:
%0 = linalg.pack %src padding_value(%cst : f32) inner_dims_pos = [0] inner_tiles = [8] into %dest : tensor<9xf32> -> tensor<3x8xf32> // \ // expect tensor<2x8xf32> because CeilDiv(9, 8) = 2
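For intuition, the NC to NCnc packing with high-padding can be modeled in a few lines of Python. The helper name and list-of-lists representation are ours; this sketch assumes inner_dims_pos = [0, 1] and no outer_dims_perm:

```python
import math

# Illustrative model of linalg.pack for a 2-D source, NC -> NCnc:
# result[r0][r1][t0][t1] = src[r0*tile0 + t0][r1*tile1 + t1],
# with pad_value filling out-of-bounds (high-padded) positions.
def pack_2d(src, tile0, tile1, pad_value=0):
    rows, cols = len(src), len(src[0])
    out_r, out_c = math.ceil(rows / tile0), math.ceil(cols / tile1)  # CeilDiv
    def elem(i, j):
        return src[i][j] if i < rows and j < cols else pad_value
    return [[[[elem(r0 * tile0 + t0, r1 * tile1 + t1)
               for t1 in range(tile1)]
              for t0 in range(tile0)]
             for r1 in range(out_c)]
            for r0 in range(out_r)]

src = [[1, 2, 3],
       [4, 5, 6]]
packed = pack_2d(src, 2, 2, pad_value=0)
# 2x3 source with 2x2 tiles packs into shape 1x2x2x2; the second tile is
# high-padded: [[[[1, 2], [4, 5]], [[3, 0], [6, 0]]]]
```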
- OPERATION_NAME = 'linalg.pack'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- source() _ods_ir¶
- dest() _ods_ir¶
- padding_value() _ods_ir | None¶
- inner_tiles() _ods_ir¶
- outer_dims_perm() _ods_ir | None¶
- inner_dims_pos() _ods_ir¶
- static_inner_tiles() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._linalg_ops_gen.pack(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, padding_value=None, outer_dims_perm=None, results=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._linalg_ops_gen.SoftmaxOp(result, input, output, dimension, *, loc=None, ip=None)¶
Bases:
_ods_ir
linalg.softmax computes a numerically stable version of softmax.
For a given input tensor and a specified dimension d, compute:
- the max m along that dimension d
- f(x) = exp(x - m)
- sum f(x) along dimension d to get l(x)
- compute the final result f(x) / l(x)
This is an aggregate linalg operation that further reduces to a small DAG of structured operations.
Warning: Regarding the tiling capabilities, the implementation doesn’t check that the provided dimensions make sense. It is the responsibility of the transformation calling the tiling to ensure that the provided sizes for each dimension make sense with respect to the semantics of softmax.
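The four steps above, applied along a single 1-D row, look like this in plain Python (a reference sketch of the semantics, not bindings code). Subtracting the max before exponentiating leaves the result unchanged but prevents overflow for large inputs:

```python
import math

# Numerically stable softmax over a 1-D row, following the op's four steps.
def softmax(row):
    m = max(row)                         # step 1: max m along d
    f = [math.exp(x - m) for x in row]   # step 2: f(x) = exp(x - m)
    l = sum(f)                           # step 3: sum f(x) to get l(x)
    return [v / l for v in f]            # step 4: f(x) / l(x)

probs = softmax([1.0, 2.0, 3.0])
# The result sums to 1, and shifting all inputs by a constant (here 1000,
# which would overflow a naive exp) gives the same probabilities.
shifted = softmax([1001.0, 1002.0, 1003.0])
```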
- OPERATION_NAME = 'linalg.softmax'¶
- _ODS_REGIONS = (0, True)¶
- input() _ods_ir¶
- output() _ods_ir¶
- dimension() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._linalg_ops_gen.softmax(result, input, output, dimension, *, loc=None, ip=None) _ods_ir | _ods_ir | SoftmaxOp¶
- class mlir.dialects._linalg_ops_gen.UnPackOp(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, outer_dims_perm=None, results=None, loc=None, ip=None)¶
Bases:
_ods_ir
The “unpack” operation converts a source tensor of rank n with a tiled and packed layout to a result tensor of rank n - k.
inner_tiles (mandatory) specifies k tile sizes. These tile sizes correspond to the least significant (“inner”) source tensor dimension sizes. The behavior of this op is undefined if:
- inner_tiles do not exactly match with the corresponding source tensor dimension sizes.
- Or, inner_tiles[i] does not divide the size of dimension inner_dims_pos[i] (assuming that outer_dims_perm is not specified) evenly.
inner_dims_pos (mandatory) specifies k result tensor (i.e. unpacked tensor) dimensions that were tiled with the inner_tiles to create the packed source tensor. The source tensor (i.e. packed tensor) dimensions can be unpacked given inner_dims_pos as follows:
- For 0 <= i < k the following relationship holds: shape(result)[inner_dims_pos[i]] <= shape(source)[n-k+i] * shape(source)[inner_dims_pos[i]].
- For 0 <= j < n-k and j not in inner_dims_pos the following relationship holds: shape(result)[j] = shape(source)[j].
outer_dims_perm (optional) specifies a permutation for the outer dimensions. If specified, it must have n - k elements. If specified, this permutation is applied before combining any dimensions.
Note, the unpack operation may drop any padding introduced by the pack operation and hence the following holds: NumElementsOf(source) >= NumElementsOf(result).
Examples:
// NCnc to NC: %0 = linalg.unpack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %dest : tensor<16x8 x 8x32 xf32> -> tensor<128x256xf32> // \ / \ / // Outer Dims: 16x8 Inner Dims: 8x32 // CK to KCck: %0 = linalg.unpack %source outer_dims_perm = [1, 0] inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %dest : tensor<8x16 x 8x32 xf32> -> tensor<128x256xf32> // \ / \ / // Outer Dims: 8x16 Inner Dims: 8x32 // CHW to CHWhw: %0 = linalg.unpack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2] into %dest : tensor<3x10x6 x 4x2 xf32> -> tensor<3x20x24xf32> // \ / \ / // Outer Dims: 3x10x6 Inner Dims: 4x2 // HCW to HCWhw %0 = linalg.unpack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2] into %dest : tensor<9x3x8 x 4x2 xf32> -> tensor<18x3x32xf32> // \ / \ / // Outer Dims: 9x3x8 Inner Dims: 4x2
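The NCnc to NC case can likewise be modeled in Python as the inverse of a 2-D packing, dropping any padding by only reading the first rows x cols elements (helper name is ours, inner_dims_pos = [0, 1] assumed, no outer_dims_perm):

```python
# Illustrative inverse of a 2-D NC -> NCnc packing: NCnc -> NC.
# result[i][j] = packed[i // tile0][j // tile1][i % tile0][j % tile1];
# padded positions beyond rows x cols are simply not read.
def unpack_2d(packed, rows, cols):
    tile0, tile1 = len(packed[0][0]), len(packed[0][0][0])
    return [[packed[i // tile0][j // tile1][i % tile0][j % tile1]
             for j in range(cols)]
            for i in range(rows)]

# The 1x2x2x2 packed tensor from the pack example, with one padded column:
packed = [[[[1, 2], [4, 5]], [[3, 0], [6, 0]]]]
restored = unpack_2d(packed, 2, 3)
# restored is [[1, 2, 3], [4, 5, 6]]: the padding zeros were dropped.
```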
- OPERATION_NAME = 'linalg.unpack'¶
- _ODS_REGIONS = (0, True)¶
- source() _ods_ir¶
- dest() _ods_ir¶
- inner_tiles() _ods_ir¶
- outer_dims_perm() _ods_ir | None¶
- inner_dims_pos() _ods_ir¶
- static_inner_tiles() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._linalg_ops_gen.unpack(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, outer_dims_perm=None, results=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._linalg_ops_gen.WinogradFilterTransformOp(result, filter, output, fmr, *, loc=None, ip=None)¶
Bases:
_ods_ir
The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.
The algorithm F(m x m, r x r) is
Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A
The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.
This operator is defined to represent the high level concept of filter transformation (G x g x G^T) in the Winograd Conv2D algorithm.
- OPERATION_NAME = 'linalg.winograd_filter_transform'¶
- _ODS_REGIONS = (0, True)¶
- filter() _ods_ir¶
- output() _ods_ir¶
- fmr() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._linalg_ops_gen.winograd_filter_transform(result, filter, output, fmr, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._linalg_ops_gen.WinogradInputTransformOp(result, input, output, fmr, *, loc=None, ip=None)¶
Bases:
_ods_ir
The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.
The algorithm F(m x m, r x r) is
Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A
The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.
This operator is defined to represent the high level concept of input transformation (B^T x d x B) in the Winograd Conv2D algorithm.
- OPERATION_NAME = 'linalg.winograd_input_transform'¶
- _ODS_REGIONS = (0, True)¶
- input() _ods_ir¶
- output() _ods_ir¶
- fmr() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._linalg_ops_gen.winograd_input_transform(result, input, output, fmr, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._linalg_ops_gen.WinogradOutputTransformOp(result, value, output, fmr, *, loc=None, ip=None)¶
Bases:
_ods_ir
The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor.
The algorithm F(m x m, r x r) is
Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A
The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.
This operator is defined to represent the high level concept of output transformation (A^T x y x A) in the Winograd Conv2D algorithm.
- OPERATION_NAME = 'linalg.winograd_output_transform'¶
- _ODS_REGIONS = (0, True)¶
- value() _ods_ir¶
- output() _ods_ir¶
- fmr() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._linalg_ops_gen.winograd_output_transform(result, value, output, fmr, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._linalg_ops_gen.YieldOp(values, *, loc=None, ip=None)¶
Bases:
_ods_ir
linalg.yield is a special terminator operation for blocks inside regions in linalg generic ops. It returns values to the immediately enclosing linalg generic op.
Example:
linalg.yield %f0, %f1 : f32, f32
- OPERATION_NAME = 'linalg.yield'¶
- _ODS_REGIONS = (0, True)¶
- values() _ods_ir¶
- class mlir.dialects._linalg_ops_gen.LogOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.log'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.log(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | LogOp¶
- class mlir.dialects._linalg_ops_gen.MapOp(result, inputs, init, *, loc=None, ip=None)¶
Bases:
_ods_ir
Models elementwise operations on tensors in terms of arithmetic operations on the corresponding elements.
Example:
%add = linalg.map
    ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
    outs(%init: tensor<64xf32>)
    (%lhs_elem: f32, %rhs_elem: f32) {
      %0 = arith.addf %lhs_elem, %rhs_elem: f32
      linalg.yield %0: f32
    }
Shortened print form is available for simple maps where the body contains exactly two operations (the payload operation and a yield), the payload operation has the same number of operands as block arguments with operands matching block arguments in order, and the yield operand is the result of the payload operation.
The example above will be printed using the shortened form as:
%add = linalg.map { arith.addf } ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>) outs(%init: tensor<64xf32>)
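The elementwise semantics can be mirrored in plain Python (illustrative reference only; linalg_map_ref is a hypothetical name, not part of the bindings):

```python
def linalg_map_ref(fn, *inputs):
    """Reference semantics of linalg.map: apply `fn` elementwise.
    All inputs must have identical length (identical shapes in MLIR);
    no casting or broadcasting is implied."""
    assert all(len(x) == len(inputs[0]) for x in inputs)
    return [fn(*elems) for elems in zip(*inputs)]

# Mirrors: linalg.map { arith.addf } ins(%lhs, %rhs) outs(%init)
result = linalg_map_ref(lambda a, b: a + b, [1.0, 2.0], [10.0, 20.0])  # → [11.0, 22.0]
```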
- OPERATION_NAME = 'linalg.map'¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- init() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mapper() _ods_ir¶
- mlir.dialects._linalg_ops_gen.map(result, inputs, init, *, loc=None, ip=None) _ods_ir | _ods_ir | MapOp¶
- class mlir.dialects._linalg_ops_gen.MatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
Broadcast and transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so the list must include all the maps if specified.
Example Transpose:
linalg.matmul
    indexing_maps = [affine_map<(m, n, k) -> (k, m)>, // transpose
                     affine_map<(m, n, k) -> (k, n)>,
                     affine_map<(m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<5x3xf32>, memref<5x7xf32>)
    outs(%arg2: memref<3x7xf32>)
Example Broadcast:
linalg.matmul
    indexing_maps = [affine_map<(m, n, k) -> (k)>, // broadcast
                     affine_map<(m, n, k) -> (k, n)>,
                     affine_map<(m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<3xf32>, memref<5x7xf32>)
    outs(%arg2: memref<3x7xf32>)
Example Broadcast and transpose:
linalg.matmul
    indexing_maps = [affine_map<(m, n, k) -> (k, m)>, // transpose
                     affine_map<(m, n, k) -> (k)>, // broadcast
                     affine_map<(m, n, k) -> (m, n)>]
    ins(%arg0, %arg1 : memref<5x3xf32>, memref<7xf32>)
    outs(%arg2: memref<3x7xf32>)
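As a reference for the transpose case, the first indexing map (m, n, k) -> (k, m) means the LHS is stored KxM and read as A[k][m]. A plain-Python sketch (hypothetical helper, not part of the bindings):

```python
def matmul_transpose_a_ref(A, B, C):
    """Reference for linalg.matmul with LHS indexing map
    (m, n, k) -> (k, m): A is stored K x M, so the contraction
    reads A[k][m], i.e. C = transpose(A) @ B accumulated into C."""
    K, M = len(A), len(A[0])
    N = len(B[0])
    for m in range(M):
        for n in range(N):
            for k in range(K):
                C[m][n] += A[k][m] * B[k][n]
    return C

# A is 2x2 stored K x M; result equals transpose(A) @ B.
C = matmul_transpose_a_ref([[1, 2], [3, 4]], [[5, 6], [7, 8]],
                           [[0, 0], [0, 0]])  # → [[26, 30], [38, 44]]
```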
- OPERATION_NAME = 'linalg.matmul'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- indexing_maps() _ods_ir | None¶
- cast() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) _ods_ir | _ods_ir | MatmulOp¶
- class mlir.dialects._linalg_ops_gen.MatvecOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.matvec'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.matvec(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MatvecOp¶
- class mlir.dialects._linalg_ops_gen.MaxOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done before calling this op.
This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.max sequence can be lowered to a linalg.generic with different affine maps for the two operands.
- OPERATION_NAME = 'linalg.max'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.max(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MaxOp¶
- class mlir.dialects._linalg_ops_gen.MinOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done before calling this op.
This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.min sequence can be lowered to a linalg.generic with different affine maps for the two operands.
- OPERATION_NAME = 'linalg.min'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.min(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MinOp¶
- class mlir.dialects._linalg_ops_gen.Mmt4DOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
Differences from linalg.matmul:
- The right-hand side is transposed, whence the 't' in 'mmt'.
- The input and output tensors have a 4D shape instead of a 2D shape. They are interpreted as 2D matrices with one level of 2D tile subdivision, whence the 2+2=4 dimensions. The inner tile dimensions are identified with '0' suffixes below; for instance, the LHS matrix shape (M, K, M0, K0) reads as: MxK tiles, each of shape M0xK0.
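The tiled contraction can be written out as a plain-Python reference (hypothetical helper, not part of the bindings) to make the index order concrete:

```python
def mmt4d_ref(A, B, C):
    """Reference for linalg.mmt4d. Shapes (as nested lists):
    A: (M, K, M0, K0), B: (N, K, N0, K0) -- RHS transposed, hence 'mmt',
    C: (M, N, M0, N0). The '0'-suffixed dims are the inner tiles."""
    M, K = len(A), len(A[0])
    M0, K0 = len(A[0][0]), len(A[0][0][0])
    N, N0 = len(B), len(B[0][0])
    for m in range(M):
        for n in range(N):
            for m0 in range(M0):
                for n0 in range(N0):
                    for k in range(K):
                        for k0 in range(K0):
                            C[m][n][m0][n0] += A[m][k][m0][k0] * B[n][k][n0][k0]
    return C

# 1x1 tile grid with 1x1 inner tiles: a plain scalar multiply-accumulate.
C = mmt4d_ref([[[[2]]]], [[[[3]]]], [[[[0]]]])  # → [[[[6]]]]
```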
- OPERATION_NAME = 'linalg.mmt4d'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.mmt4d(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | Mmt4DOp¶
- class mlir.dialects._linalg_ops_gen.MulOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done before calling this op.
This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.mul sequence can be lowered to a linalg.generic with different affine maps for the two operands.
- OPERATION_NAME = 'linalg.mul'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.mul(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | MulOp¶
- class mlir.dialects._linalg_ops_gen.NegFOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.negf'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.negf(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | NegFOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNchwMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nchw_max'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nchw_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNchwMaxOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNchwSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Layout:
Input: NCHW.
Kernel: HW.
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nchw_sum'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nchw_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNchwSumOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNcwMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_ncw_max'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_ncw_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNcwMaxOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNcwSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Layout:
Input: NCW.
Kernel: W.
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
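On a single (batch, channel) slice, the windowed sum over W with strides and dilations reduces to one loop over the output width. A plain-Python sketch (hypothetical helper, not part of the bindings):

```python
def pooling_ncw_sum_ref(inp, kernel_w, stride=1, dilation=1):
    """Reference for linalg.pooling_ncw_sum on one (n, c) slice:
    output[ow] = sum over kw of input[ow * stride + kw * dilation].
    For sum pooling the kernel carries no values -- only its width
    shapes the window."""
    W = len(inp)
    OW = (W - dilation * (kernel_w - 1) - 1) // stride + 1
    return [
        sum(inp[ow * stride + kw * dilation] for kw in range(kernel_w))
        for ow in range(OW)
    ]

# kernel width 2, stride 1: windows [1,2], [2,3], [3,4]
out = pooling_ncw_sum_ref([1, 2, 3, 4], kernel_w=2)  # → [3, 5, 7]
```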
- OPERATION_NAME = 'linalg.pooling_ncw_sum'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_ncw_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNcwSumOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNdhwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_ndhwc_max'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_ndhwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNdhwcMaxOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNdhwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_ndhwc_min'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_ndhwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNdhwcMinOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNdhwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_ndhwc_sum'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_ndhwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNdhwcSumOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNhwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nhwc_max'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nhwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMaxOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNhwcMaxUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nhwc_max_unsigned'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nhwc_max_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMaxUnsignedOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNhwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nhwc_min'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nhwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMinOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNhwcMinUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nhwc_min_unsigned'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nhwc_min_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcMinUnsignedOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNhwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Layout:
Input: NHWC.
Kernel: HW.
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nhwc_sum'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nhwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNhwcSumOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nwc_max'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMaxOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNwcMaxUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nwc_max_unsigned'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nwc_max_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMaxUnsignedOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nwc_min'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMinOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNwcMinUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nwc_min_unsigned'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nwc_min_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcMinUnsignedOp¶
- class mlir.dialects._linalg_ops_gen.PoolingNwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None)¶
Bases:
_ods_ir
Layout:
Input: NWC.
Kernel: W.
Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.pooling_nwc_sum'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- strides() _ods_ir | None¶
- dilations() _ods_ir | None¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.pooling_nwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) _ods_ir | _ods_ir | PoolingNwcSumOp¶
- class mlir.dialects._linalg_ops_gen.PowFOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
Only applies to floating point values.
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done before calling this op.
This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.powf sequence can be lowered to a linalg.generic with different affine maps for the two operands.
- OPERATION_NAME = 'linalg.powf'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.powf(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | PowFOp¶
- class mlir.dialects._linalg_ops_gen.QuantizedBatchMatmulOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.
- OPERATION_NAME = 'linalg.quantized_batch_matmul'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.quantized_batch_matmul(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | QuantizedBatchMatmulOp¶
- class mlir.dialects._linalg_ops_gen.QuantizedMatmulOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul.
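The zero-point adjustment means each operand has its zero point subtracted before the inner multiply, with the product accumulated at the output type. A plain-Python reference (hypothetical helper, not part of the bindings):

```python
def quantized_matmul_ref(A, B, a_zp, b_zp, C):
    """Reference for linalg.quantized_matmul: subtract each operand's
    zero point before the inner multiply, then accumulate into C
    (which models the wider accumulator/output type)."""
    M, K, N = len(A), len(A[0]), len(B[0])
    for m in range(M):
        for n in range(N):
            for k in range(K):
                C[m][n] += (A[m][k] - a_zp) * (B[k][n] - b_zp)
    return C

# uint8-style values with zero points 1 (lhs) and 2 (rhs):
# (3-1)*(4-2) + (5-1)*(6-2) = 20
C = quantized_matmul_ref([[3, 5]], [[4], [6]], 1, 2, [[0]])  # → [[20]]
```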
- OPERATION_NAME = 'linalg.quantized_matmul'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.quantized_matmul(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | QuantizedMatmulOp¶
- class mlir.dialects._linalg_ops_gen.ReciprocalOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.reciprocal'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.reciprocal(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | ReciprocalOp¶
- class mlir.dialects._linalg_ops_gen.ReduceOp(result, inputs, inits, dimensions, *, loc=None, ip=None)¶
Bases:
_ods_ir
Executes combiner on the dimensions of inputs and returns the reduced result. The dimensions attribute needs to list the reduction dimensions in increasing order.
Example:
%reduce = linalg.reduce
    ins(%input:tensor<16x32x64xf32>)
    outs(%init:tensor<16x64xf32>)
    dimensions = [1]
    (%in: f32, %out: f32) {
      %0 = arith.addf %out, %in: f32
      linalg.yield %0: f32
    }
Shortened print form is available for simple reduces where the body contains exactly two operations (the payload operation and a yield), the payload operation has the same number of operands as block arguments, the first block argument (init) is the last operand of the payload operation with remaining operands matching remaining block arguments in order, and the yield operand is the result of the payload operation.
The example above will be printed using the shortened form as:
%reduce = linalg.reduce { arith.addf } ins(%input:tensor<16x32x64xf32>) outs(%init:tensor<16x64xf32>) dimensions = [1]
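The reduction semantics can be mirrored in plain Python (hypothetical helper, not part of the bindings); here a rank-3 input is reduced over dimension 1 into the init operand:

```python
def linalg_reduce_ref(inp, init, combiner):
    """Reference for linalg.reduce over dimension 1 of a rank-3
    nested list: out[i][k] = fold of combiner over inp[i][j][k],
    seeded with the init (outs) operand."""
    out = [row[:] for row in init]  # copy init so it is not mutated
    for i, plane in enumerate(inp):
        for row in plane:
            for k, v in enumerate(row):
                out[i][k] = combiner(out[i][k], v)
    return out

# 2x2x2 input reduced over dim 1 with an arith.addf-style combiner
inp = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
out = linalg_reduce_ref(inp, [[0, 0], [0, 0]], lambda acc, x: acc + x)  # → [[4, 6], [12, 14]]
```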
- OPERATION_NAME = 'linalg.reduce'¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- inits() _ods_ir¶
- dimensions() _ods_ir¶
- combiner() _ods_ir¶
- mlir.dialects._linalg_ops_gen.reduce(result, inputs, inits, dimensions, *, loc=None, ip=None) _ods_ir | _ods_ir | ReduceOp¶
- class mlir.dialects._linalg_ops_gen.RoundOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.round'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.round(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | RoundOp¶
- class mlir.dialects._linalg_ops_gen.RsqrtOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.rsqrt'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.rsqrt(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | RsqrtOp¶
- class mlir.dialects._linalg_ops_gen.SelectOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done before calling this op.
This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.select sequence can be lowered to a linalg.generic with different affine maps for the two operands.
- OPERATION_NAME = 'linalg.select'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.select(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SelectOp¶
- class mlir.dialects._linalg_ops_gen.SqrtOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.sqrt'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.sqrt(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SqrtOp¶
- class mlir.dialects._linalg_ops_gen.SquareOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.square'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.square(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SquareOp¶
- class mlir.dialects._linalg_ops_gen.SubOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done before calling this op.
This means reduction/broadcast/element cast semantics are explicit. Further passes can take that into account when lowering this code. For example, a linalg.broadcast + linalg.sub sequence can be lowered to a linalg.generic with different affine maps for the two operands.
- OPERATION_NAME = 'linalg.sub'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.sub(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | SubOp¶
- class mlir.dialects._linalg_ops_gen.TanhOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
No numeric casting is performed on the input operand.
- OPERATION_NAME = 'linalg.tanh'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.tanh(result_tensors, inputs, outputs, *, loc=None, ip=None) _ods_ir | _ods_ir | TanhOp¶
- class mlir.dialects._linalg_ops_gen.TransposeOp(result, input, init, permutation, *, loc=None, ip=None)¶
Bases:
_ods_ir
Permutes the dimensions of input according to the given permutation.
dim(result, i) = dim(input, permutation[i])
This op actually moves data, unlike memref.transpose, which is a metadata-only operation that produces a transposed "view".
Example:
%transpose = linalg.transpose ins(%input:tensor<16x64xf32>) outs(%init:tensor<64x16xf32>) permutation = [1, 0]
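The shape rule dim(result, i) = dim(input, permutation[i]) can be sketched in plain Python (hypothetical helper, not part of the bindings):

```python
def transpose_shape(input_shape, permutation):
    """Reference for the linalg.transpose shape rule:
    dim(result, i) = dim(input, permutation[i])."""
    return tuple(input_shape[p] for p in permutation)

# Matches the example: tensor<16x64xf32> with permutation = [1, 0]
shape = transpose_shape((16, 64), [1, 0])  # → (64, 16)
```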
- OPERATION_NAME = 'linalg.transpose'¶
- _ODS_REGIONS = (1, True)¶
- input() _ods_ir¶
- init() _ods_ir¶
- permutation() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- region() _ods_ir¶
- mlir.dialects._linalg_ops_gen.transpose(result, input, init, permutation, *, loc=None, ip=None) _ods_ir | _ods_ir | TransposeOp¶
- class mlir.dialects._linalg_ops_gen.VecmatOp(result_tensors, inputs, outputs, *, loc=None, ip=None)¶
Bases:
_ods_ir
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output.
- OPERATION_NAME = 'linalg.vecmat'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (1, True)¶
- inputs() _ods_ir¶
- outputs() _ods_ir¶
- result_tensors() _ods_ir¶
- region() _ods_ir¶