mlir.dialects.linalg ==================== .. py:module:: mlir.dialects.linalg Submodules ---------- .. toctree:: :maxdepth: 1 /autoapi/mlir/dialects/linalg/opdsl/index /autoapi/mlir/dialects/linalg/passes/index Attributes ---------- .. autoapisummary:: mlir.dialects.linalg._ods_ir mlir.dialects.linalg.T1 mlir.dialects.linalg.T2 mlir.dialects.linalg.Batch mlir.dialects.linalg._CONTEXT mlir.dialects.linalg.StructuredOpOuts mlir.dialects.linalg.ContractionOpInterface mlir.dialects.linalg.ConvolutionOpInterface mlir.dialects.linalg.FillOpInterface mlir.dialects.linalg.Canonicalizer mlir.dialects.linalg.D mlir.dialects.linalg.S mlir.dialects.linalg.TV mlir.dialects.linalg.I32 mlir.dialects.linalg.I64 mlir.dialects.linalg.F32 mlir.dialects.linalg.F64 mlir.dialects.linalg.T mlir.dialects.linalg.U mlir.dialects.linalg.V mlir.dialects.linalg.ValueList mlir.dialects.linalg.generic mlir.dialects.linalg.ElementwiseOp mlir.dialects.linalg.reduce mlir.dialects.linalg.map Classes ------- ..
autoapisummary:: mlir.dialects.linalg._Dialect mlir.dialects.linalg.AbsOp mlir.dialects.linalg.AddOp mlir.dialects.linalg.BatchMatmulOp mlir.dialects.linalg.BatchMatvecOp mlir.dialects.linalg.BatchMmt4DOp mlir.dialects.linalg.BatchReduceMatmulOp mlir.dialects.linalg.BatchVecmatOp mlir.dialects.linalg.BroadcastOp mlir.dialects.linalg.CeilOp mlir.dialects.linalg.ContractOp mlir.dialects.linalg.Conv1DNcwFcwOp mlir.dialects.linalg.Conv1DNwcWcfOp mlir.dialects.linalg.Conv1DOp mlir.dialects.linalg.Conv2DNchwFchwOp mlir.dialects.linalg.Conv2DNchwFchwQOp mlir.dialects.linalg.Conv2DNgchwFgchwOp mlir.dialects.linalg.Conv2DNgchwGfchwOp mlir.dialects.linalg.Conv2DNgchwGfchwQOp mlir.dialects.linalg.Conv2DNhwcFhwcOp mlir.dialects.linalg.Conv2DNhwcFhwcQOp mlir.dialects.linalg.Conv2DNhwcHwcfOp mlir.dialects.linalg.Conv2DNhwcHwcfQOp mlir.dialects.linalg.Conv2DNhwgcGfhwcOp mlir.dialects.linalg.Conv2DNhwgcGfhwcQOp mlir.dialects.linalg.Conv2DOp mlir.dialects.linalg.Conv3DNcdhwFcdhwOp mlir.dialects.linalg.Conv3DNdhwcDhwcfOp mlir.dialects.linalg.Conv3DNdhwcDhwcfQOp mlir.dialects.linalg.Conv3DOp mlir.dialects.linalg.CopyOp mlir.dialects.linalg.DepthwiseConv1DNcwCwOp mlir.dialects.linalg.DepthwiseConv1DNwcWcOp mlir.dialects.linalg.DepthwiseConv1DNwcWcmOp mlir.dialects.linalg.DepthwiseConv2DNchwChwOp mlir.dialects.linalg.DepthwiseConv2DNhwcHwcOp mlir.dialects.linalg.DepthwiseConv2DNhwcHwcQOp mlir.dialects.linalg.DepthwiseConv2DNhwcHwcmOp mlir.dialects.linalg.DepthwiseConv2DNhwcHwcmQOp mlir.dialects.linalg.DepthwiseConv3DNcdhwCdhwOp mlir.dialects.linalg.DepthwiseConv3DNdhwcDhwcOp mlir.dialects.linalg.DepthwiseConv3DNdhwcDhwcmOp mlir.dialects.linalg.DivOp mlir.dialects.linalg.DivUnsignedOp mlir.dialects.linalg.DotOp mlir.dialects.linalg.ElementwiseOp mlir.dialects.linalg.ErfOp mlir.dialects.linalg.ExpOp mlir.dialects.linalg.FillOp mlir.dialects.linalg.FillRng2DOp mlir.dialects.linalg.FloorOp mlir.dialects.linalg.GenericOp mlir.dialects.linalg.IndexOp mlir.dialects.linalg.PackOp 
mlir.dialects.linalg.SoftmaxOp mlir.dialects.linalg.UnPackOp mlir.dialects.linalg.WinogradFilterTransformOp mlir.dialects.linalg.WinogradInputTransformOp mlir.dialects.linalg.WinogradOutputTransformOp mlir.dialects.linalg.YieldOp mlir.dialects.linalg.LogOp mlir.dialects.linalg.MapOp mlir.dialects.linalg.MatmulOp mlir.dialects.linalg.MatvecOp mlir.dialects.linalg.MaxOp mlir.dialects.linalg.MinOp mlir.dialects.linalg.Mmt4DOp mlir.dialects.linalg.MulOp mlir.dialects.linalg.NegFOp mlir.dialects.linalg.PoolingNchwMaxOp mlir.dialects.linalg.PoolingNchwSumOp mlir.dialects.linalg.PoolingNcwMaxOp mlir.dialects.linalg.PoolingNcwSumOp mlir.dialects.linalg.PoolingNdhwcMaxOp mlir.dialects.linalg.PoolingNdhwcMinOp mlir.dialects.linalg.PoolingNdhwcSumOp mlir.dialects.linalg.PoolingNhwcMaxOp mlir.dialects.linalg.PoolingNhwcMaxUnsignedOp mlir.dialects.linalg.PoolingNhwcMinOp mlir.dialects.linalg.PoolingNhwcMinUnsignedOp mlir.dialects.linalg.PoolingNhwcSumOp mlir.dialects.linalg.PoolingNwcMaxOp mlir.dialects.linalg.PoolingNwcMaxUnsignedOp mlir.dialects.linalg.PoolingNwcMinOp mlir.dialects.linalg.PoolingNwcMinUnsignedOp mlir.dialects.linalg.PoolingNwcSumOp mlir.dialects.linalg.PowFOp mlir.dialects.linalg.QuantizedBatchMatmulOp mlir.dialects.linalg.QuantizedMatmulOp mlir.dialects.linalg.ReciprocalOp mlir.dialects.linalg.ReduceOp mlir.dialects.linalg.RoundOp mlir.dialects.linalg.RsqrtOp mlir.dialects.linalg.SelectOp mlir.dialects.linalg.SqrtOp mlir.dialects.linalg.SquareOp mlir.dialects.linalg.SubOp mlir.dialects.linalg.TanhOp mlir.dialects.linalg.TransposeOp mlir.dialects.linalg.VecmatOp mlir.dialects.linalg.BinaryFn mlir.dialects.linalg.ElementwiseArityGroup mlir.dialects.linalg.ElementwiseCaseLimits mlir.dialects.linalg.ElementwiseKind mlir.dialects.linalg.IteratorType mlir.dialects.linalg.TernaryFn mlir.dialects.linalg.TypeFn mlir.dialects.linalg.UnaryFn mlir.dialects.linalg.WinogradConv2DFmr mlir.dialects.linalg.DefinedOpCallable mlir.dialects.linalg.TensorExpression 
mlir.dialects.linalg.TensorUse mlir.dialects.linalg.TensorFn mlir.dialects.linalg.TensorReduceFn mlir.dialects.linalg.const mlir.dialects.linalg.index mlir.dialects.linalg.FunctionKind mlir.dialects.linalg.UnaryFnType mlir.dialects.linalg.UnaryFn mlir.dialects.linalg.BinaryFnType mlir.dialects.linalg.BinaryFn mlir.dialects.linalg.TernaryFnType mlir.dialects.linalg.TernaryFn mlir.dialects.linalg.TypeFnType mlir.dialects.linalg.TypeFn mlir.dialects.linalg.ReduceFnUse mlir.dialects.linalg.ReduceFnType mlir.dialects.linalg.ReduceFn mlir.dialects.linalg.OperandKind mlir.dialects.linalg.OperandDef mlir.dialects.linalg.TensorDef mlir.dialects.linalg.ScalarDef mlir.dialects.linalg.IndexAttrDef mlir.dialects.linalg.UnaryFnAttrDef mlir.dialects.linalg.BinaryFnAttrDef mlir.dialects.linalg.TernaryFnAttrDef mlir.dialects.linalg.TypeFnAttrDef mlir.dialects.linalg.Comprehension mlir.dialects.linalg.OpInterfaceDef mlir.dialects.linalg.OpDefinitionDef mlir.dialects.linalg.OpMetadataDef mlir.dialects.linalg.LinalgOpDef mlir.dialects.linalg.AffineBuildState mlir.dialects.linalg.AffineExprDef mlir.dialects.linalg.DimDef mlir.dialects.linalg.SymbolDef mlir.dialects.linalg.ScalarAssign mlir.dialects.linalg.ScalarFn mlir.dialects.linalg.ScalarArg mlir.dialects.linalg.ScalarConst mlir.dialects.linalg.ScalarIndex mlir.dialects.linalg.ScalarExpression mlir.dialects.linalg.TypeVar mlir.dialects.linalg.YAMLObject mlir.dialects.linalg.LinalgStructuredOpConfig mlir.dialects.linalg.LinalgOpConfig mlir.dialects.linalg.OperandDefConfig mlir.dialects.linalg.GenericOp_ mlir.dialects.linalg.ElementwiseOp_ Functions --------- .. 
autoapisummary:: mlir.dialects.linalg._ods_equally_sized_accessor mlir.dialects.linalg._ods_get_default_loc_context mlir.dialects.linalg._get_op_results_or_values mlir.dialects.linalg._ods_segmented_accessor mlir.dialects.linalg.abs mlir.dialects.linalg.add mlir.dialects.linalg.batch_matmul mlir.dialects.linalg.batch_matvec mlir.dialects.linalg.batch_mmt4d mlir.dialects.linalg.batch_reduce_matmul mlir.dialects.linalg.batch_vecmat mlir.dialects.linalg.broadcast mlir.dialects.linalg.ceil mlir.dialects.linalg.contract mlir.dialects.linalg.conv_1d_ncw_fcw mlir.dialects.linalg.conv_1d_nwc_wcf mlir.dialects.linalg.conv_1d mlir.dialects.linalg.conv_2d_nchw_fchw mlir.dialects.linalg.conv_2d_nchw_fchw_q mlir.dialects.linalg.conv_2d_ngchw_fgchw mlir.dialects.linalg.conv_2d_ngchw_gfchw mlir.dialects.linalg.conv_2d_ngchw_gfchw_q mlir.dialects.linalg.conv_2d_nhwc_fhwc mlir.dialects.linalg.conv_2d_nhwc_fhwc_q mlir.dialects.linalg.conv_2d_nhwc_hwcf mlir.dialects.linalg.conv_2d_nhwc_hwcf_q mlir.dialects.linalg.conv_2d_nhwgc_gfhwc mlir.dialects.linalg.conv_2d_nhwgc_gfhwc_q mlir.dialects.linalg.conv_2d mlir.dialects.linalg.conv_3d_ncdhw_fcdhw mlir.dialects.linalg.conv_3d_ndhwc_dhwcf mlir.dialects.linalg.conv_3d_ndhwc_dhwcf_q mlir.dialects.linalg.conv_3d mlir.dialects.linalg.copy mlir.dialects.linalg.depthwise_conv_1d_ncw_cw mlir.dialects.linalg.depthwise_conv_1d_nwc_wc mlir.dialects.linalg.depthwise_conv_1d_nwc_wcm mlir.dialects.linalg.depthwise_conv_2d_nchw_chw mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwc mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwc_q mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwcm mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwcm_q mlir.dialects.linalg.depthwise_conv_3d_ncdhw_cdhw mlir.dialects.linalg.depthwise_conv_3d_ndhwc_dhwc mlir.dialects.linalg.depthwise_conv_3d_ndhwc_dhwcm mlir.dialects.linalg.div mlir.dialects.linalg.div_unsigned mlir.dialects.linalg.dot mlir.dialects.linalg.elementwise mlir.dialects.linalg.erf mlir.dialects.linalg.exp 
mlir.dialects.linalg.fill mlir.dialects.linalg.fill_rng_2d mlir.dialects.linalg.floor mlir.dialects.linalg.generic mlir.dialects.linalg.index mlir.dialects.linalg.pack mlir.dialects.linalg.softmax mlir.dialects.linalg.unpack mlir.dialects.linalg.winograd_filter_transform mlir.dialects.linalg.winograd_input_transform mlir.dialects.linalg.winograd_output_transform mlir.dialects.linalg.yield_ mlir.dialects.linalg.log mlir.dialects.linalg.map mlir.dialects.linalg.matmul mlir.dialects.linalg.matvec mlir.dialects.linalg.max mlir.dialects.linalg.min mlir.dialects.linalg.mmt4d mlir.dialects.linalg.mul mlir.dialects.linalg.negf mlir.dialects.linalg.pooling_nchw_max mlir.dialects.linalg.pooling_nchw_sum mlir.dialects.linalg.pooling_ncw_max mlir.dialects.linalg.pooling_ncw_sum mlir.dialects.linalg.pooling_ndhwc_max mlir.dialects.linalg.pooling_ndhwc_min mlir.dialects.linalg.pooling_ndhwc_sum mlir.dialects.linalg.pooling_nhwc_max mlir.dialects.linalg.pooling_nhwc_max_unsigned mlir.dialects.linalg.pooling_nhwc_min mlir.dialects.linalg.pooling_nhwc_min_unsigned mlir.dialects.linalg.pooling_nhwc_sum mlir.dialects.linalg.pooling_nwc_max mlir.dialects.linalg.pooling_nwc_max_unsigned mlir.dialects.linalg.pooling_nwc_min mlir.dialects.linalg.pooling_nwc_min_unsigned mlir.dialects.linalg.pooling_nwc_sum mlir.dialects.linalg.powf mlir.dialects.linalg.quantized_batch_matmul mlir.dialects.linalg.quantized_matmul mlir.dialects.linalg.reciprocal mlir.dialects.linalg.reduce mlir.dialects.linalg.round mlir.dialects.linalg.rsqrt mlir.dialects.linalg.select mlir.dialects.linalg.sqrt mlir.dialects.linalg.square mlir.dialects.linalg.sub mlir.dialects.linalg.tanh mlir.dialects.linalg.transpose mlir.dialects.linalg.vecmat mlir.dialects.linalg.register_attribute_builder mlir.dialects.linalg._binaryfn mlir.dialects.linalg._elementwisearitygroup mlir.dialects.linalg._elementwisecaselimits mlir.dialects.linalg._elementwisekind mlir.dialects.linalg._iteratortype mlir.dialects.linalg._ternaryfn 
mlir.dialects.linalg._typefn mlir.dialects.linalg._unaryfn mlir.dialects.linalg._winogradconv2dfmr mlir.dialects.linalg._binaryfnattr mlir.dialects.linalg._elementwisekindattr mlir.dialects.linalg._iteratortypeenum mlir.dialects.linalg._ternaryfnattr mlir.dialects.linalg._typefnattr mlir.dialects.linalg._unaryfnattr mlir.dialects.linalg._iteratortypeenum mlir.dialects.linalg.copy mlir.dialects.linalg.exp mlir.dialects.linalg.log mlir.dialects.linalg.abs mlir.dialects.linalg.ceil mlir.dialects.linalg.floor mlir.dialects.linalg.negf mlir.dialects.linalg.reciprocal mlir.dialects.linalg.round mlir.dialects.linalg.sqrt mlir.dialects.linalg.rsqrt mlir.dialects.linalg.square mlir.dialects.linalg.tanh mlir.dialects.linalg.erf mlir.dialects.linalg.add mlir.dialects.linalg.sub mlir.dialects.linalg.mul mlir.dialects.linalg.div mlir.dialects.linalg.div_unsigned mlir.dialects.linalg.max mlir.dialects.linalg.min mlir.dialects.linalg.powf mlir.dialects.linalg.select mlir.dialects.linalg.quantized_matmul mlir.dialects.linalg.mmt4d mlir.dialects.linalg.batch_mmt4d mlir.dialects.linalg.quantized_batch_matmul mlir.dialects.linalg.matvec mlir.dialects.linalg.vecmat mlir.dialects.linalg.batch_matvec mlir.dialects.linalg.batch_vecmat mlir.dialects.linalg.dot mlir.dialects.linalg.conv_1d mlir.dialects.linalg.conv_2d mlir.dialects.linalg.conv_3d mlir.dialects.linalg.conv_1d_nwc_wcf mlir.dialects.linalg.conv_1d_ncw_fcw mlir.dialects.linalg.conv_2d_nhwc_hwcf mlir.dialects.linalg.conv_2d_nhwc_fhwc mlir.dialects.linalg.conv_2d_nhwc_hwcf_q mlir.dialects.linalg.conv_2d_nhwc_fhwc_q mlir.dialects.linalg.conv_2d_nchw_fchw_q mlir.dialects.linalg.conv_2d_nchw_fchw mlir.dialects.linalg.conv_2d_ngchw_fgchw mlir.dialects.linalg.conv_2d_ngchw_gfchw mlir.dialects.linalg.conv_2d_nhwgc_gfhwc mlir.dialects.linalg.conv_2d_nhwgc_gfhwc_q mlir.dialects.linalg.conv_2d_ngchw_gfchw_q mlir.dialects.linalg.conv_3d_ndhwc_dhwcf mlir.dialects.linalg.conv_3d_ndhwc_dhwcf_q mlir.dialects.linalg.conv_3d_ncdhw_fcdhw 
mlir.dialects.linalg.depthwise_conv_1d_nwc_wc mlir.dialects.linalg.depthwise_conv_1d_ncw_cw mlir.dialects.linalg.depthwise_conv_1d_nwc_wcm mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwc mlir.dialects.linalg.depthwise_conv_2d_nchw_chw mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwc_q mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwcm mlir.dialects.linalg.depthwise_conv_2d_nhwc_hwcm_q mlir.dialects.linalg.depthwise_conv_3d_ndhwc_dhwc mlir.dialects.linalg.depthwise_conv_3d_ncdhw_cdhw mlir.dialects.linalg.depthwise_conv_3d_ndhwc_dhwcm mlir.dialects.linalg.pooling_nhwc_sum mlir.dialects.linalg.pooling_nchw_sum mlir.dialects.linalg.pooling_nhwc_max mlir.dialects.linalg.pooling_nhwc_max_unsigned mlir.dialects.linalg.pooling_nchw_max mlir.dialects.linalg.pooling_nhwc_min mlir.dialects.linalg.pooling_nhwc_min_unsigned mlir.dialects.linalg.pooling_nwc_sum mlir.dialects.linalg.pooling_ncw_sum mlir.dialects.linalg.pooling_nwc_max mlir.dialects.linalg.pooling_nwc_max_unsigned mlir.dialects.linalg.pooling_ncw_max mlir.dialects.linalg.pooling_nwc_min mlir.dialects.linalg.pooling_nwc_min_unsigned mlir.dialects.linalg.pooling_ndhwc_sum mlir.dialects.linalg.pooling_ndhwc_max mlir.dialects.linalg.pooling_ndhwc_min mlir.dialects.linalg.fill mlir.dialects.linalg.fill_rng_2d mlir.dialects.linalg._get_op_result_or_value mlir.dialects.linalg._get_op_results_or_values mlir.dialects.linalg.bind_op_def mlir.dialects.linalg.current_op_def mlir.dialects.linalg._prepare_structured_op_outs mlir.dialects.linalg.linalg_structured_op mlir.dialects.linalg.domain mlir.dialects.linalg.implements mlir.dialects.linalg.defines mlir.dialects.linalg.yaml_dump mlir.dialects.linalg.yaml_dump_all mlir.dialects.linalg.emit_generic_structured_op mlir.dialects.linalg.emit_named_structured_op mlir.dialects.linalg.loc_tracebacks mlir.dialects.linalg.register_attribute_builder mlir.dialects.linalg._affineMapAttr mlir.dialects.linalg._integerSetAttr mlir.dialects.linalg._boolAttr mlir.dialects.linalg._dictAttr 
mlir.dialects.linalg._indexAttr mlir.dialects.linalg._i1Attr mlir.dialects.linalg._i8Attr mlir.dialects.linalg._i16Attr mlir.dialects.linalg._i32Attr mlir.dialects.linalg._i64Attr mlir.dialects.linalg._si1Attr mlir.dialects.linalg._si8Attr mlir.dialects.linalg._si16Attr mlir.dialects.linalg._si32Attr mlir.dialects.linalg._si64Attr mlir.dialects.linalg._ui1Attr mlir.dialects.linalg._ui8Attr mlir.dialects.linalg._ui16Attr mlir.dialects.linalg._ui32Attr mlir.dialects.linalg._ui64Attr mlir.dialects.linalg._f32Attr mlir.dialects.linalg._f64Attr mlir.dialects.linalg._stringAttr mlir.dialects.linalg._symbolNameAttr mlir.dialects.linalg._symbolRefAttr mlir.dialects.linalg._flatSymbolRefAttr mlir.dialects.linalg._unitAttr mlir.dialects.linalg._arrayAttr mlir.dialects.linalg._affineMapArrayAttr mlir.dialects.linalg._boolArrayAttr mlir.dialects.linalg._dictArrayAttr mlir.dialects.linalg._flatSymbolRefArrayAttr mlir.dialects.linalg._i32ArrayAttr mlir.dialects.linalg._i64ArrayAttr mlir.dialects.linalg._i64SmallVectorArrayAttr mlir.dialects.linalg._indexListArrayAttr mlir.dialects.linalg._f32ArrayAttr mlir.dialects.linalg._f64ArrayAttr mlir.dialects.linalg._strArrayAttr mlir.dialects.linalg._symbolRefArrayAttr mlir.dialects.linalg._denseF32ArrayAttr mlir.dialects.linalg._denseF64ArrayAttr mlir.dialects.linalg._denseI8ArrayAttr mlir.dialects.linalg._denseI16ArrayAttr mlir.dialects.linalg._denseI32ArrayAttr mlir.dialects.linalg._denseI64ArrayAttr mlir.dialects.linalg._denseBoolArrayAttr mlir.dialects.linalg._typeAttr mlir.dialects.linalg._typeArrayAttr mlir.dialects.linalg._memref_type_attr mlir.dialects.linalg._f64ElementsAttr mlir.dialects.linalg._get_op_result_or_value mlir.dialects.linalg._get_op_result_or_op_results mlir.dialects.linalg._dispatch_mixed_values mlir.dialects.linalg.region_op mlir.dialects.linalg.transpose mlir.dialects.linalg.broadcast mlir.dialects.linalg._IteratorTypeArrayAttr mlir.dialects.linalg._create_matmul_like_op mlir.dialects.linalg.matmul 
mlir.dialects.linalg.batch_matmul mlir.dialects.linalg.batch_reduce_matmul mlir.dialects.linalg.contract mlir.dialects.linalg.elementwise mlir.dialects.linalg.pack mlir.dialects.linalg.unpack Package Contents ---------------- .. py:function:: _ods_equally_sized_accessor(elements, n_simple, n_variadic, n_preceding_simple, n_preceding_variadic) Returns a starting position and a number of elements per variadic group assuming equally-sized groups and the given numbers of preceding groups. elements: a sequential container. n_simple: the number of non-variadic groups in the container. n_variadic: the number of variadic groups in the container. n_preceding_simple: the number of non-variadic groups preceding the current group. n_preceding_variadic: the number of variadic groups preceding the current group. .. py:function:: _ods_get_default_loc_context(location=None) Returns a context in which the defaulted location is created. If the location is None, takes the current location from the stack. .. py:function:: _get_op_results_or_values(arg: Union[mlir._mlir_libs._mlir.ir.OpView, mlir._mlir_libs._mlir.ir.Operation, Sequence[Union[mlir._mlir_libs._mlir.ir.OpView, mlir._mlir_libs._mlir.ir.Operation, mlir._mlir_libs._mlir.ir.Value]]]) -> Union[Sequence[Union[mlir._mlir_libs._mlir.ir.OpView, mlir._mlir_libs._mlir.ir.Operation, mlir._mlir_libs._mlir.ir.Value]], mlir._mlir_libs._mlir.ir.OpResultList] Returns the given sequence of values or the results of the given op. This is useful to implement op constructors so that they can take other ops as lists of arguments instead of requiring the caller to extract results for every op. .. py:function:: _ods_segmented_accessor(elements, raw_segments, idx) Returns a slice of elements corresponding to the idx-th segment. elements: a sliceable container (operands or results). raw_segments: an mlir.ir.Attribute, of DenseI32Array subclass containing sizes of the segments. idx: index of the segment. .. py:data:: _ods_ir .. 
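The segmented-accessor arithmetic described above can be sketched in pure Python (a minimal illustration only: a plain list stands in for the operand range, a list of ints for the DenseI32Array attribute, and ``segmented_slice`` is a hypothetical helper name, not part of this module):

```python
def segmented_slice(elements, segment_sizes, idx):
    # The idx-th segment starts after the sum of all preceding segment sizes.
    start = sum(segment_sizes[:idx])
    return elements[start:start + segment_sizes[idx]]

# An op whose operands form two groups (inputs, outputs) of sizes (2, 1):
operands = ["in0", "in1", "out0"]
inputs = segmented_slice(operands, [2, 1], 0)   # ["in0", "in1"]
outputs = segmented_slice(operands, [2, 1], 1)  # ["out0"]
```

..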
py:class:: _Dialect(descriptor: object) Bases: :py:obj:`_ods_ir` .. py:attribute:: DIALECT_NAMESPACE :value: 'linalg' .. py:class:: AbsOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.abs' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: abs(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, AbsOp] .. py:class:: AddOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.add`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:attribute:: OPERATION_NAME :value: 'linalg.add' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: add(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, AddOp] .. py:class:: BatchMatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. 
Broadcast and Transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so must include maps for all arguments if specified. Example Transpose: .. code:: mlir linalg.batch_matmul indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>, // transpose affine_map<(batch, m, n, k) -> (batch, k, n)>, affine_map<(batch, m, n, k) -> (batch, m, n)>] ins(%arg0, %arg1 : memref<2x5x3xf32>, memref<2x5x7xf32>) outs(%arg2: memref<2x3x7xf32>) Example Broadcast: .. code:: mlir linalg.batch_matmul indexing_maps = [affine_map<(batch, m, n, k) -> (k)>, // broadcast affine_map<(batch, m, n, k) -> (batch, k, n)>, affine_map<(batch, m, n, k) -> (batch, m, n)>] ins(%arg0, %arg1 : memref<5xf32>, memref<2x5x7xf32>) outs(%arg2: memref<2x3x7xf32>) Example Broadcast and Transpose: .. code:: mlir linalg.batch_matmul indexing_maps = [affine_map<(batch, m, n, k) -> (m, k)>, // broadcast affine_map<(batch, m, n, k) -> (batch, n, k)>, // transpose affine_map<(batch, m, n, k) -> (batch, m, n)>] ins(%arg0, %arg1 : memref<3x5xf32>, memref<2x7x5xf32>) outs(%arg2: memref<2x3x7xf32>) .. py:attribute:: OPERATION_NAME :value: 'linalg.batch_matmul' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: indexing_maps() -> Optional[_ods_ir] .. py:method:: cast() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: batch_matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, BatchMatmulOp] .. py:class:: BatchMatvecOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.batch_matvec' ..
py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: batch_matvec(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, BatchMatvecOp] .. py:class:: BatchMmt4DOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Apart from the outermost batch dimension, which has the same semantics as linalg.batch_matmul, the non-batch dimensions differ from linalg.batch_matmul in the same way that linalg.mmt4d differs from linalg.matmul. See the description of linalg.mmt4d. .. py:attribute:: OPERATION_NAME :value: 'linalg.batch_mmt4d' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: batch_mmt4d(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, BatchMmt4DOp] .. py:class:: BatchReduceMatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Broadcast and Transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so must include maps for all arguments if specified. Example Transpose: .. code:: mlir linalg.batch_reduce_matmul indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>, // transpose affine_map<(batch, m, n, k) -> (batch, k, n)>, affine_map<(batch, m, n, k) -> (m, n)>] ins(%arg0, %arg1 : memref<2x5x3xf32>, memref<2x5x7xf32>) outs(%arg2: memref<3x7xf32>) Example Broadcast: ..
code:: mlir linalg.batch_reduce_matmul indexing_maps = [affine_map<(batch, m, n, k) -> (k)>, // broadcast affine_map<(batch, m, n, k) -> (batch, k, n)>, affine_map<(batch, m, n, k) -> (m, n)>] ins(%arg0, %arg1 : memref<5xf32>, memref<2x5x7xf32>) outs(%arg2: memref<3x7xf32>) Example Broadcast and Transpose: .. code:: mlir linalg.batch_reduce_matmul indexing_maps = [affine_map<(batch, m, n, k) -> (m, k)>, // broadcast affine_map<(batch, m, n, k) -> (batch, n, k)>, // transpose affine_map<(batch, m, n, k) -> (m, n)>] ins(%arg0, %arg1 : memref<3x5xf32>, memref<2x7x5xf32>) outs(%arg2: memref<3x7xf32>) .. py:attribute:: OPERATION_NAME :value: 'linalg.batch_reduce_matmul' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: indexing_maps() -> Optional[_ods_ir] .. py:method:: cast() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: batch_reduce_matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, BatchReduceMatmulOp] .. py:class:: BatchVecmatOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.batch_vecmat' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: batch_vecmat(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, BatchVecmatOp] .. 
py:class:: BroadcastOp(result, input, init, dimensions, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Broadcast the input into the given shape by adding ``dimensions``. Example: .. code:: mlir %bcast = linalg.broadcast ins(%input:tensor<16xf32>) outs(%init:tensor<16x64xf32>) dimensions = [1] .. py:attribute:: OPERATION_NAME :value: 'linalg.broadcast' .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: input() -> _ods_ir .. py:method:: init() -> _ods_ir .. py:method:: dimensions() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:method:: region() -> _ods_ir .. py:function:: broadcast(result, input, init, dimensions, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, BroadcastOp] .. py:class:: CeilOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.ceil' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: ceil(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, CeilOp] .. py:class:: ContractOp(result_tensors, inputs, outputs, indexing_maps, *, cast=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The semantics of contracting inputs ``A`` and ``B`` on top of ``C`` to produce output ``D`` is given by ``D[H] = (SUM_{(I ∪ J) \ H} A[I] * B[J]) + C[H]`` where ``I``, ``J``, and ``H`` are tuples of (pairwise distinct) dimension identifiers - meant to range over valid indices - corresponding to the results of the mandatory (projected permutation) ``indexing_maps`` for ``A``, ``B`` and ``C``. 
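As a concrete check of this formula, plain matmul is the instance ``I = ⟨ m, k ⟩``, ``J = ⟨ k, n ⟩``, ``H = ⟨ m, n ⟩``, so ``(I ∪ J) \ H = {k}``. A minimal pure-Python sketch of that case (illustrative only, not part of this module):

```python
def contract_matmul(A, B, C):
    # D[m][n] = (sum over the contracted dim k of A[m][k] * B[k][n]) + C[m][n]
    M, K, N = len(A), len(B), len(B[0])
    return [
        [sum(A[m][k] * B[k][n] for k in range(K)) + C[m][n] for n in range(N)]
        for m in range(M)
    ]
```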
``SUM_{dims}`` means reduce over all valid indices for the dimensions in the set ``dims`` (with ``I``, ``J``, and ``H`` treated as *sets* of dim identifiers). The iteration space consists of all dimensions in ``I``, ``J`` and ``H``, i.e. the domain of each of the ``affine_map``s. Like for einsums, the iteration type of each dim is inferred and is either: * reduction: the dim is used to index into ``A`` and ``B`` but not ``C``. Per the above semantics, these dims will be contracted, i.e. reduced over. * parallel: the dim is used to index into ``C`` and at least one of ``A`` and ``B``, and - deriving from matmul terminology - is either an "M-like" dim (if used on ``A`` and ``C``), an "N-like" dim (if used on ``B`` and ``C``) or a "batch"-dim (if used to index into ``A``, ``B``, and ``C``). For example, batch-matmul is given by ``I = ⟨ b, m, k ⟩``, ``J = ⟨ b, k, n ⟩``, ``H = ⟨ b, m, n ⟩`` (with ``k`` as a contracting reduction-dimension while ``m``, ``n`` and ``b`` have parallel iteration-type) and gets represented as: .. code:: mlir %D = linalg.contract indexing_maps = [affine_map<(batch, m, n, k) -> (batch, m, k)>, affine_map<(batch, m, n, k) -> (batch, k, n)>, affine_map<(batch, m, n, k) -> (batch, m, n)>] ins(%A, %B: tensor<?x?x?xf32>, tensor<?x?x?xf32>) outs(%C: tensor<?x?x?xf32>) -> tensor<?x?x?xf32> Note that by permuting dims in the ``affine_map``s' results, accesses to the inputs and output can be arbitrarily transposed. Similarly, arbitrary broadcasts can be achieved through leaving out dims on either input operand. For example, the following is a variant of batch-matmul with a transposition applied to ``A`` while ``B``'s 2D-matrix gets broadcasted along the batch dim: ..
code:: mlir linalg.contract indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>, affine_map<(batch, m, n, k) -> (k, n)>, affine_map<(batch, m, n, k) -> (batch, m, n)>] ins(%A, %B: memref<?x?x?xf32>, memref<?x?xf32>) outs(%C: memref<?x?x?xf32>) Numeric casting is performed on the operands to the inner multiplication, promoting/truncating them to the same data type as the accumulator/output. TODO: Allow control over the combining/accumulating op and possibly the multiplication op. .. py:attribute:: OPERATION_NAME :value: 'linalg.contract' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: indexing_maps() -> _ods_ir .. py:method:: cast() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: combiner() -> _ods_ir .. py:function:: contract(result_tensors, inputs, outputs, indexing_maps, *, cast=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, ContractOp] .. py:class:: Conv1DNcwFcwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NCW. * Kernel: FCW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_1d_ncw_fcw' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_1d_ncw_fcw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv1DNcwFcwOp] ..
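The reference arithmetic of ``conv_1d_ncw_fcw``, including the role of the optional ``strides`` and ``dilations`` attributes, can be sketched in pure Python (an illustrative loop nest only; the actual op additionally accumulates into the value of its ``outs`` operand, which this sketch assumes is zero-filled):

```python
def conv1d_ncw_fcw(x, w, stride=1, dilation=1):
    # x: [N][C][W_in] input, w: [F][C][KW] kernel -> [N][F][W_out] output.
    N, C, W_in = len(x), len(x[0]), len(x[0][0])
    F, KW = len(w), len(w[0][0])
    W_out = (W_in - dilation * (KW - 1) - 1) // stride + 1
    out = [[[0.0] * W_out for _ in range(F)] for _ in range(N)]
    for n in range(N):
        for f in range(F):
            for ow in range(W_out):
                for c in range(C):
                    for kw in range(KW):
                        # Strided output step, dilated kernel step.
                        out[n][f][ow] += (
                            x[n][c][ow * stride + kw * dilation] * w[f][c][kw]
                        )
    return out
```

..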
py:class:: Conv1DNwcWcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_1d_nwc_wcf' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_1d_nwc_wcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv1DNwcWcfOp] .. py:class:: Conv1DOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_1d' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_1d(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv1DOp] .. py:class:: Conv2DNchwFchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NCHW. * Kernel: FCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_nchw_fchw' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. 
py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d_nchw_fchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNchwFchwOp] .. py:class:: Conv2DNchwFchwQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NCHW. * Kernel: FCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_nchw_fchw_q' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d_nchw_fchw_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNchwFchwQOp] .. py:class:: Conv2DNgchwFgchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NGCHW. * Kernel: FGCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_ngchw_fgchw' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. 
py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d_ngchw_fgchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNgchwFgchwOp] .. py:class:: Conv2DNgchwGfchwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NGCHW. * Kernel: GFCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_ngchw_gfchw' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d_ngchw_gfchw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNgchwGfchwOp] .. py:class:: Conv2DNgchwGfchwQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NGCHW. * Kernel: GFCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_ngchw_gfchw_q' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. 
py:function:: conv_2d_ngchw_gfchw_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNgchwGfchwQOp] .. py:class:: Conv2DNhwcFhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NHWC. * Kernel: FHWC. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_nhwc_fhwc' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d_nhwc_fhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNhwcFhwcOp] .. py:class:: Conv2DNhwcFhwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NHWC. * Kernel: FHWC. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_nhwc_fhwc_q' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. 
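As a hedged sketch of the quantized variant (hypothetical values and shapes; the two trailing ``i32`` scalars are the input and filter zero points):

.. code:: mlir

   // Input: NHWC = 1x8x8x4 (i8), kernel: FHWC = 16x3x3x4 (i8), output: NHWF = 1x6x6x16 (i32).
   %res = linalg.conv_2d_nhwc_fhwc_q
            {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
            ins(%input, %filter, %izp, %fzp : tensor<1x8x8x4xi8>, tensor<16x3x3x4xi8>, i32, i32)
            outs(%init : tensor<1x6x6x16xi32>) -> tensor<1x6x6x16xi32>

..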
py:function:: conv_2d_nhwc_fhwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNhwcFhwcQOp] .. py:class:: Conv2DNhwcHwcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NHWC. * Kernel: HWCF. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_nhwc_hwcf' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d_nhwc_hwcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNhwcHwcfOp] .. py:class:: Conv2DNhwcHwcfQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NHWC. * Kernel: HWCF. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_nhwc_hwcf_q' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d_nhwc_hwcf_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNhwcHwcfQOp] .. 
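A sketch of ``linalg.conv_2d_nhwc_hwcf`` with a non-unit stride (hypothetical shapes; each output spatial size is floor((9 - 3) / 2) + 1 = 4):

.. code:: mlir

   // Input: NHWC = 1x9x9x4, kernel: HWCF = 3x3x4x16, output: NHWF = 1x4x4x16.
   %res = linalg.conv_2d_nhwc_hwcf
            {dilations = dense<1> : tensor<2xi64>, strides = dense<2> : tensor<2xi64>}
            ins(%input, %filter : tensor<1x9x9x4xf32>, tensor<3x3x4x16xf32>)
            outs(%init : tensor<1x4x4x16xf32>) -> tensor<1x4x4x16xf32>

..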
py:class:: Conv2DNhwgcGfhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NHWGC. * Kernel: GFHWC. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_nhwgc_gfhwc' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d_nhwgc_gfhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNhwgcGfhwcOp] .. py:class:: Conv2DNhwgcGfhwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NHWGC. * Kernel: GFHWC. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d_nhwgc_gfhwc_q' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d_nhwgc_gfhwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DNhwgcGfhwcQOp] .. 
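A sketch of the grouped layout (hypothetical shapes; each of the G = 2 groups convolves its own C = 4 input channels into F = 8 output channels):

.. code:: mlir

   // Input: NHWGC = 1x8x8x2x4, kernel: GFHWC = 2x8x3x3x4, output: NHWGF = 1x6x6x2x8.
   %res = linalg.conv_2d_nhwgc_gfhwc
            {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
            ins(%input, %filter : tensor<1x8x8x2x4xf32>, tensor<2x8x3x3x4xf32>)
            outs(%init : tensor<1x6x6x2x8xf32>) -> tensor<1x6x6x2x8xf32>

..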
py:class:: Conv2DOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_2d' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_2d(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv2DOp] .. py:class:: Conv3DNcdhwFcdhwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_3d_ncdhw_fcdhw' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_3d_ncdhw_fcdhw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv3DNcdhwFcdhwOp] .. py:class:: Conv3DNdhwcDhwcfOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_3d_ndhwc_dhwcf' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. 
py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_3d_ndhwc_dhwcf(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv3DNdhwcDhwcfOp] .. py:class:: Conv3DNdhwcDhwcfQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_3d_ndhwc_dhwcf_q' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_3d_ndhwc_dhwcf_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv3DNdhwcDhwcfQOp] .. py:class:: Conv3DOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.conv_3d' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: conv_3d(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Conv3DOp] .. 
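A sketch of the plain rank-3 convolution, which carries no batch or channel dimensions and no strides/dilations attributes (hypothetical shapes):

.. code:: mlir

   %res = linalg.conv_3d
            ins(%input, %filter : tensor<8x8x8xf32>, tensor<3x3x3xf32>)
            outs(%init : tensor<6x6x6xf32>) -> tensor<6x6x6xf32>

..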
py:class:: CopyOp(result_tensors, inputs, outputs, *, cast=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.copy' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: cast() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: copy(result_tensors, inputs, outputs, *, cast=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, CopyOp] .. py:class:: DepthwiseConv1DNcwCwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_1d_ncw_cw' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: depthwise_conv_1d_ncw_cw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv1DNcwCwOp] .. py:class:: DepthwiseConv1DNwcWcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions. .. 
py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_1d_nwc_wc' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: depthwise_conv_1d_nwc_wc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv1DNwcWcOp] .. py:class:: DepthwiseConv1DNwcWcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_1d_nwc_wcm' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: depthwise_conv_1d_nwc_wcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv1DNwcWcmOp] .. py:class:: DepthwiseConv2DNchwChwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_2d_nchw_chw' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. 
py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: depthwise_conv_2d_nchw_chw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv2DNchwChwOp] .. py:class:: DepthwiseConv2DNhwcHwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_2d_nhwc_hwc' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: depthwise_conv_2d_nhwc_hwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv2DNhwcHwcOp] .. py:class:: DepthwiseConv2DNhwcHwcQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_2d_nhwc_hwc_q' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. 
py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: depthwise_conv_2d_nhwc_hwc_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv2DNhwcHwcQOp] .. py:class:: DepthwiseConv2DNhwcHwcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_2d_nhwc_hwcm' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: depthwise_conv_2d_nhwc_hwcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv2DNhwcHwcmOp] .. py:class:: DepthwiseConv2DNhwcHwcmQOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_2d_nhwc_hwcm_q' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. 
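A sketch of a depthwise convolution with an explicit channel multiplier (hypothetical shapes; each of the C = 4 input channels produces M = 2 output channels):

.. code:: mlir

   // Input: NHWC = 1x8x8x4, kernel: HWCM = 3x3x4x2, output: NHWCM = 1x6x6x4x2.
   %res = linalg.depthwise_conv_2d_nhwc_hwcm
            {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
            ins(%input, %filter : tensor<1x8x8x4xf32>, tensor<3x3x4x2xf32>)
            outs(%init : tensor<1x6x6x4x2xf32>) -> tensor<1x6x6x4x2xf32>

..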
py:function:: depthwise_conv_2d_nhwc_hwcm_q(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv2DNhwcHwcmQOp] .. py:class:: DepthwiseConv3DNcdhwCdhwOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_3d_ncdhw_cdhw' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: depthwise_conv_3d_ncdhw_cdhw(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv3DNcdhwCdhwOp] .. py:class:: DepthwiseConv3DNdhwcDhwcOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. Multiplier is set to 1 which is a special case for most depthwise convolutions. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_3d_ndhwc_dhwc' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. 
py:function:: depthwise_conv_3d_ndhwc_dhwc(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv3DNdhwcDhwcOp] .. py:class:: DepthwiseConv3DNdhwcDhwcmOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.depthwise_conv_3d_ndhwc_dhwcm' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: depthwise_conv_3d_ndhwc_dhwcm(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DepthwiseConv3DNdhwcDhwcmOp] .. py:class:: DivOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.div`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:attribute:: OPERATION_NAME :value: 'linalg.div' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: div(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DivOp] .. 
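Since casts, broadcasts, and reductions must already have been materialized, ``linalg.div`` is a plain same-shape elementwise op (a sketch with hypothetical shapes):

.. code:: mlir

   %res = linalg.div
            ins(%lhs, %rhs : tensor<8x16xf32>, tensor<8x16xf32>)
            outs(%init : tensor<8x16xf32>) -> tensor<8x16xf32>

..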
py:class:: DivUnsignedOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.div`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:attribute:: OPERATION_NAME :value: 'linalg.div_unsigned' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: div_unsigned(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DivUnsignedOp] .. py:class:: DotOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.dot' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: dot(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, DotOp] .. py:class:: ElementwiseOp(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The attribute ``kind`` describes the arithmetic operation to perform. The operation kind can be unary (e.g. exp), binary (e.g. add) or ternary (e.g. select). By default, all indexing maps are identities.
In the case of default indexing maps, all input and output shapes must match. The number of dims in each of the identity maps is equal to the rank of the output type. Affine-maps for operands and result are required to be provided by the user when a transpose and/or broadcast is needed on any operand. When a map is not provided, default identity maps are inferred for each operand. Iterator-types are always all ``parallel``. Iterator-types are needed for constructing the underlying structured op. The number of dims of the iterator-types is inferred from the rank of the result type. Example: Defining a unary linalg.elementwise with default indexing-map: .. code:: mlir %exp = linalg.elementwise kind=#linalg.elementwise_kind<exp> ins(%x : tensor<4x16x8xf32>) outs(%y: tensor<4x16x8xf32>) -> tensor<4x16x8xf32> Defining a binary linalg.elementwise with user-defined indexing-map: .. code:: mlir %add = linalg.elementwise kind=#linalg.elementwise_kind<add> indexing_maps = [#transpose, #broadcast, #identity] ins(%exp, %arg1 : tensor<4x16x8xf32>, tensor<4x16xf32>) outs(%arg2: tensor<4x8x16xf32>) -> tensor<4x8x16xf32> .. py:attribute:: OPERATION_NAME :value: 'linalg.elementwise' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: kind() -> _ods_ir .. py:method:: indexing_maps() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: elementwise(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, ElementwiseOp] .. py:class:: ErfOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.erf' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir ..
py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: erf(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, ErfOp] .. py:class:: ExpOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.exp' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: exp(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, ExpOp] .. py:class:: FillOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Works for arbitrary ranked output tensors since the operation performs scalar accesses only and is thus rank polymorphic. Numeric casting is performed on the value operand, promoting it to the same data type as the output. .. py:attribute:: OPERATION_NAME :value: 'linalg.fill' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: fill(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, FillOp] .. py:class:: FillRng2DOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The operation generates pseudo random numbers using a linear congruential generator. It provides no guarantees regarding the distribution of the generated random numbers. Instead of generating the random numbers sequentially, it instantiates one random number generator per data element and runs them in parallel.
The seed operand and the indices of the data element seed the random number generation. The min and max operands limit the range of the generated random numbers. .. py:attribute:: OPERATION_NAME :value: 'linalg.fill_rng_2d' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: fill_rng_2d(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, FillRng2DOp] .. py:class:: FloorOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.floor' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: floor(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, FloorOp] .. py:class:: GenericOp(result_tensors, inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Generic Linalg op form where the key properties of the computation are specified as attributes. In pretty form, a ``linalg.generic`` op is written as: .. code:: mlir linalg.generic #trait_attribute ins(%A, %B : memref<?x?xf32, stride_specification>, memref<?x?xf32, stride_specification>) outs(%C : memref<?x?xf32, stride_specification>) attrs = {other-optional-attributes} {region} Where #trait_attribute is an alias of a dictionary attribute containing: * doc [optional]: a documentation string * indexing_maps: a list of AffineMapAttr, one AffineMapAttr per each input and output view. Such AffineMapAttr specifies the mapping between the loops and the indexing within each view.
* library_call [optional]: a StringAttr containing the name of an external library function that the linalg.generic operation maps to. The external library is assumed to be dynamically linked and no strong compile-time guarantees are provided. In the absence of such a library call, linalg.generic will always lower to loops. * iterator_types: an ArrayAttr specifying the type of the enclosing loops. Each element of the list represents an iterator of one of the following types: parallel, reduction, window. Example: Defining a #matmul_trait attribute in MLIR can be done as follows: .. code:: mlir #matmul_accesses = [ (m, n, k) -> (m, k), (m, n, k) -> (k, n), (m, n, k) -> (m, n) ] #matmul_trait = { doc = "C(m, n) += A(m, k) * B(k, n)", indexing_maps = #matmul_accesses, library_call = "linalg_matmul", iterator_types = ["parallel", "parallel", "reduction"] } And can be reused in multiple places as: .. code:: mlir linalg.generic #matmul_trait ins(%A, %B : memref, memref) outs(%C : memref) {other-optional-attributes} { ^bb0(%a: f32, %b: f32, %c: f32) : %d = arith.mulf %a, %b: f32 %e = arith.addf %c, %d: f32 linalg.yield %e : f32 } This may lower to either: .. code:: mlir call @linalg_matmul(%A, %B, %C) : (memref, memref, memref) -> () or IR resembling: .. code:: mlir scf.for %m = %c0 to %M step %c1 { scf.for %n = %c0 to %N step %c1 { scf.for %k = %c0 to %K step %c1 { %a = load %A[%m, %k] : memref %b = load %B[%k, %n] : memref %c = load %C[%m, %n] : memref %d = arith.mulf %a, %b: f32 %e = arith.addf %c, %d: f32 store %e, %C[%m, %n] : memref } } } .. py:attribute:: OPERATION_NAME :value: 'linalg.generic' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: indexing_maps() -> _ods_ir .. py:method:: iterator_types() -> _ods_ir .. py:method:: doc() -> Optional[_ods_ir] .. py:method:: library_call() -> Optional[_ods_ir] ..
py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: generic(result_tensors, inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, GenericOp] .. py:class:: IndexOp(dim, *, results=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The ``linalg.index`` operation returns the iteration index of the immediately enclosing linalg structured operation for the iteration dimension ``dim``. The ``dim`` attribute specifies the position of the accessed dimension in the indexing map domain. Example: .. code:: mlir #map = affine_map<(i, j) -> (i, j)> linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel", "parallel"]} outs(%I, %J : memref, memref) { ^bb0(%arg0 : index, %arg1 : index): // Access the outer iteration dimension i %i = linalg.index 0 : index // Access the inner iteration dimension j %j = linalg.index 1 : index linalg.yield %i, %j : index, index } This may lower to IR resembling: .. code:: mlir %0 = dim %I, %c0 : memref %1 = dim %I, %c1 : memref scf.for %i = %c0 to %0 step %c1 { scf.for %j = %c0 to %1 step %c1 { store %i, %I[%i, %j] : memref store %j, %J[%i, %j] : memref } } .. py:attribute:: OPERATION_NAME :value: 'linalg.index' .. py:attribute:: _ODS_REGIONS :value: (0, True) .. py:method:: dim() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:function:: index(dim, *, results=None, loc=None, ip=None) -> _ods_ir .. py:class:: PackOp(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, padding_value=None, outer_dims_perm=None, results=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The "pack" operation converts a source tensor of rank ``n`` into a result tensor of rank ``n + k`` with a tiled and packed layout (maybe with padding) and optionally transposes the tiled source tensor dimensions. ``inner_tiles`` (mandatory) specifies ``k`` tile sizes. 
These tile sizes correspond to the least significant ("inner") result tensor dimension sizes, in the same order. Tile sizes can be static or dynamic. ``inner_dims_pos`` (mandatory) specifies ``k`` source tensor dimensions that are being tiled, where ``0 <= k <= n``. * ``inner_dims_pos[i]`` specifies the source tensor dimension tiled by ``inner_tiles[i]`` where ``0 <= i < k``. All the values in ``inner_dims_pos`` are within [0, n). * The tiled dimensions (of size ``inner_tiles``) are added to the end of the result tensor in the order in which they appear, i.e. ``shape(result)[rank(source) + i] = inner_tiles[i]`` for ``0 <= i < k``. * The following relationship for the tiled dimensions holds: ``shape(result)[inner_dims_pos[i]] = shape(source)[inner_dims_pos[i]] / inner_tiles[i]``, where ``/`` indicates CeilDiv (ceiling division). Example: If ``inner_tiles = [16, 32]``, the result tensor has a shape of ``...x16x32``. If ``inner_dims_pos = [0, 1]``, the 0th source dimension is tiled by 16 and the 1st source dimension is tiled by 32. Other source dimensions (if any) are not tiled. If ``inner_dims_pos = [1, 0]``, the 1st dimension is tiled by 16 and the 0th dimension is tiled by 32. Example: .. code:: mlir // NC to NCnc %0 = linalg.pack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %dest : tensor<128x256xf32> -> tensor<16x8 x 8x32 xf32> // \ / \ / // Outer Dims: 16x8 Inner Dims: 8x32 // CHW to CHWhw %0 = linalg.pack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2] into %dest : tensor<3x20x24xf32> -> tensor<3x10x6 x 4x2 xf32> // \ / \ / // Outer Dims: 3x10x6 Inner Dims: 4x2 // HCW to HCWhw %0 = linalg.pack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2] into %dest : tensor<18x3x32xf32> -> tensor<9x3x8 x 4x2 xf32> // \ / \ / // Outer Dims: 9x3x8 Inner Dims: 4x2 ``outer_dims_perm`` (optional) specifies a permutation for the outer dimensions. If specified, it must have ``n`` elements. Example: ..
code:: mlir // CK to KCck %0 = linalg.pack %source outer_dims_perm = [1, 0] inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %dest : tensor<128x256xf32> -> tensor<8x16 x 8x32 xf32> // \ / // compare with "NC to NCnc": outer dims are transposed ``padding_value`` specifies a padding value at the boundary on non-perfectly divisible dimensions. Padding is optional: * If absent, it is assumed that for all inner tiles, ``shape(source)[inner_dims_pos[i]] % inner_tiles[i] == 0``, i.e. all inner tiles divide perfectly the corresponding outer dimension in the result tensor. It is UB if the tile does not perfectly divide the dimension. * If present, it will pad along high dimensions (high-padding) to make the tile complete. Note that it is not allowed to have artificial padding that is not strictly required by linalg.pack (i.e., padding past what is needed to complete the last tile along each packed dimension). It is UB if extra padding is requested. It is not possible to verify the requirements statically with dynamic shapes, so they are treated as UB. Example: .. code:: mlir %0 = linalg.pack %arg0 padding_value(%pad : f32) outer_dims_perm = [2, 1, 0] inner_dims_pos = [1] inner_tiles = [2] into %arg1 : tensor<200x127x256xf32> -> tensor<256x64x200x2xf32> // \ // padded and tiled dim // // Source dimension 1 is tiled. 64 does not divide 127 evenly, so 1 padded // element is added at the end. // // Note: Only tiled dimensions can be padded. Invalid example that has artificial padding: .. code:: mlir %0 = linalg.pack %src padding_value(%cst : f32) inner_dims_pos = [0] inner_tiles = [8] into %dest : tensor<9xf32> -> tensor<3x8xf32> // \ // expect tensor<2x8xf32> because CeilDiv(9, 8) = 2 .. py:attribute:: OPERATION_NAME :value: 'linalg.pack' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (0, True) .. py:method:: source() -> _ods_ir .. py:method:: dest() -> _ods_ir .. py:method:: padding_value() -> Optional[_ods_ir] .. 
py:method:: inner_tiles() -> _ods_ir .. py:method:: outer_dims_perm() -> Optional[_ods_ir] .. py:method:: inner_dims_pos() -> _ods_ir .. py:method:: static_inner_tiles() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:function:: pack(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, padding_value=None, outer_dims_perm=None, results=None, loc=None, ip=None) -> _ods_ir .. py:class:: SoftmaxOp(result, input, output, dimension, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` linalg.softmax computes a numerically stable version of softmax. For a given input tensor and a specified dimension ``d``, compute: #. the max ``m`` along that dimension ``d`` #. f(x) = exp(x - m) #. sum f(x) along dimension d to get l(x). #. compute the final result f(x) / l(x). This is an aggregate linalg operation that further reduces to a small DAG of structured operations. Warning: Regarding the tiling capabilities, the implementation doesn't check that the provided dimensions make sense. It is the responsibility of the transformation that calls the tiling to ensure that the provided sizes for each dimension make sense with respect to the semantics of softmax. .. py:attribute:: OPERATION_NAME :value: 'linalg.softmax' .. py:attribute:: _ODS_REGIONS :value: (0, True) .. py:method:: input() -> _ods_ir .. py:method:: output() -> _ods_ir .. py:method:: dimension() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:function:: softmax(result, input, output, dimension, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, SoftmaxOp] .. py:class:: UnPackOp(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, outer_dims_perm=None, results=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The "unpack" operation converts a source tensor of rank ``n`` with a tiled and packed layout to a result tensor of rank ``n - k``.
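The shape relationships for ``linalg.pack`` described above can be checked with a small plain-Python helper. This is a sketch of the shape arithmetic only, not the MLIR Python API; the helper name is made up for illustration.

```python
import math

def packed_shape(src_shape, inner_dims_pos, inner_tiles, outer_dims_perm=None):
    """Illustrative sketch of linalg.pack's result-shape computation."""
    outer = list(src_shape)
    for pos, tile in zip(inner_dims_pos, inner_tiles):
        # CeilDiv: with a padding_value, a partial last tile is rounded up.
        outer[pos] = math.ceil(src_shape[pos] / tile)
    if outer_dims_perm is not None:
        # The optional permutation reorders the outer dims.
        outer = [outer[p] for p in outer_dims_perm]
    # The k tile dimensions are appended at the end, in order.
    return outer + list(inner_tiles)
```

The examples from the op description check out: ``packed_shape([128, 256], [0, 1], [8, 32])`` gives ``[16, 8, 8, 32]`` (NC to NCnc), and with padding, ``packed_shape([9], [0], [8])`` gives ``[2, 8]`` since CeilDiv(9, 8) = 2.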
``inner_tiles`` (mandatory) specifies ``k`` tile sizes. These tile sizes correspond to the least significant ("inner") source tensor dimension sizes. The behavior of this op is undefined if: * ``inner_tiles`` do not exactly match the corresponding source tensor dimension sizes. * Or, ``inner_tiles[i]`` does not divide the size of dimension ``inner_dims_pos[i]`` (assuming that ``outer_dims_perm`` is not specified) evenly. ``inner_dims_pos`` (mandatory) specifies ``k`` result tensor (i.e. unpacked tensor) dimensions that were tiled with the ``inner_tiles`` to create the packed source tensor. The source tensor (i.e. packed tensor) dimensions can be unpacked given ``inner_dims_pos`` as follows. * For ``0 <= i < k`` the following relationship holds: ``shape(result)[inner_dims_pos[i]] <= shape(source)[n-k+i] * shape(source)[inner_dims_pos[i]]``. * For ``0 <= j < n-k`` and ``j`` not in ``inner_dims_pos`` the following relationship holds: ``shape(result)[j] = shape(source)[j]``. ``outer_dims_perm`` (optional) specifies a permutation for the outer dimensions. If specified, it must have ``n - k`` elements. If specified, this permutation is applied before combining any dimensions. Note that the unpack operation may drop any padding introduced by the pack operation, and hence the following holds: ``NumElementsOf(source) >= NumElementsOf(result)``. Examples: ..
code:: mlir // NCnc to NC: %0 = linalg.unpack %source inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %dest : tensor<16x8 x 8x32 xf32> -> tensor<128x256xf32> // \ / \ / // Outer Dims: 16x8 Inner Dims: 8x32 // CK to KCck: %0 = linalg.unpack %source outer_dims_perm = [1, 0] inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %dest : tensor<8x16 x 8x32 xf32> -> tensor<128x256xf32> // \ / \ / // Outer Dims: 8x16 Inner Dims: 8x32 // CHW to CHWhw: %0 = linalg.unpack %source inner_dims_pos = [2, 1] inner_tiles = [4, 2] into %dest : tensor<3x10x6 x 4x2 xf32> -> tensor<3x20x24xf32> // \ / \ / // Outer Dims: 3x10x6 Inner Dims: 4x2 // HCW to HCWhw %0 = linalg.unpack %source inner_dims_pos = [2, 0] inner_tiles = [4, 2] into %dest : tensor<9x3x8 x 4x2 xf32> -> tensor<18x3x32xf32> // \ / \ / // Outer Dims: 9x3x8 Inner Dims: 4x2 .. py:attribute:: OPERATION_NAME :value: 'linalg.unpack' .. py:attribute:: _ODS_REGIONS :value: (0, True) .. py:method:: source() -> _ods_ir .. py:method:: dest() -> _ods_ir .. py:method:: inner_tiles() -> _ods_ir .. py:method:: outer_dims_perm() -> Optional[_ods_ir] .. py:method:: inner_dims_pos() -> _ods_ir .. py:method:: static_inner_tiles() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:function:: unpack(source, dest, inner_dims_pos, inner_tiles, static_inner_tiles, *, outer_dims_perm=None, results=None, loc=None, ip=None) -> _ods_ir .. py:class:: WinogradFilterTransformOp(result, filter, output, fmr, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor. The algorithm F(m x m, r x r) is Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A The size of output Y is m x m. The size of filter g is r x r.
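The formula above can be exercised in its 1D analogue, F(2, 3), where each 2D transform reduces to a single matrix-vector product: Y = A^T [(G g) * (B^T d)]. The sketch below uses the standard F(2, 3) constant matrices and plain Python lists; it illustrates the concept only and is not the MLIR Python API.

```python
# Standard 1D Winograd F(2, 3) transform matrices.
BT = [[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]]
G  = [[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]]
AT = [[1, 1, 1, 0], [0, 1, -1, -1]]

def matvec(mat, vec):
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

def winograd_f2_3(d, g):
    """Length-2 output from a length-4 input tile d and a length-3 filter g."""
    u = matvec(G, g)                    # filter transform (role of G x g x G^T)
    v = matvec(BT, d)                   # input transform (role of B^T x d x B)
    m = [a * b for a, b in zip(u, v)]   # elementwise product
    return matvec(AT, m)                # output transform (role of A^T x y x A)
```

Note that the input tile length is m + r - 1 = 4, matching the tile-size relationship stated for the 2D case.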
The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices. This operator is defined to represent the high-level concept of filter transformation (G x g x G^T) in the Winograd Conv2D algorithm. .. py:attribute:: OPERATION_NAME :value: 'linalg.winograd_filter_transform' .. py:attribute:: _ODS_REGIONS :value: (0, True) .. py:method:: filter() -> _ods_ir .. py:method:: output() -> _ods_ir .. py:method:: fmr() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:function:: winograd_filter_transform(result, filter, output, fmr, *, loc=None, ip=None) -> _ods_ir .. py:class:: WinogradInputTransformOp(result, input, output, fmr, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor. The algorithm F(m x m, r x r) is Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices. This operator is defined to represent the high-level concept of input transformation (B^T x d x B) in the Winograd Conv2D algorithm. .. py:attribute:: OPERATION_NAME :value: 'linalg.winograd_input_transform' .. py:attribute:: _ODS_REGIONS :value: (0, True) .. py:method:: input() -> _ods_ir .. py:method:: output() -> _ods_ir .. py:method:: fmr() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:function:: winograd_input_transform(result, input, output, fmr, *, loc=None, ip=None) -> _ods_ir ..
py:class:: WinogradOutputTransformOp(result, value, output, fmr, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The Winograd Conv2D algorithm converts a linalg Conv2D operator into a batched matrix multiply. Before the matrix multiply, it converts the filter and input into a format suitable for batched matrix multiply. After the matrix multiply, it converts the output to the final result tensor. The algorithm F(m x m, r x r) is Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices. This operator is defined to represent the high-level concept of output transformation (A^T x y x A) in the Winograd Conv2D algorithm. .. py:attribute:: OPERATION_NAME :value: 'linalg.winograd_output_transform' .. py:attribute:: _ODS_REGIONS :value: (0, True) .. py:method:: value() -> _ods_ir .. py:method:: output() -> _ods_ir .. py:method:: fmr() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:function:: winograd_output_transform(result, value, output, fmr, *, loc=None, ip=None) -> _ods_ir .. py:class:: YieldOp(values, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` ``linalg.yield`` is a special terminator operation for blocks inside regions in ``linalg`` generic ops. It returns values to the immediately enclosing ``linalg`` generic op. Example: .. code:: mlir linalg.yield %f0, %f1 : f32, f32 .. py:attribute:: OPERATION_NAME :value: 'linalg.yield' .. py:attribute:: _ODS_REGIONS :value: (0, True) .. py:method:: values() -> _ods_ir .. py:function:: yield_(values, *, loc=None, ip=None) -> YieldOp .. py:class:: LogOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.log' .. py:attribute:: _ODS_OPERAND_SEGMENTS ..
py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: log(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, LogOp] .. py:class:: MapOp(result, inputs, init, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Models elementwise operations on tensors in terms of arithmetic operations on the corresponding elements. Example: .. code:: mlir %add = linalg.map ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>) outs(%init: tensor<64xf32>) (%lhs_elem: f32, %rhs_elem: f32) { %0 = arith.addf %lhs_elem, %rhs_elem: f32 linalg.yield %0: f32 } Shortened print form is available for simple maps where the body contains exactly two operations (the payload operation and a yield), the payload operation has the same number of operands as block arguments with operands matching block arguments in order, and the yield operand is the result of the payload operation. The example above will be printed using the shortened form as: .. code:: mlir %add = linalg.map { arith.addf } ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>) outs(%init: tensor<64xf32>) .. py:attribute:: OPERATION_NAME :value: 'linalg.map' .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: init() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:method:: mapper() -> _ods_ir .. py:function:: map(result, inputs, init, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, MapOp] .. py:class:: MatmulOp(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. 
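The cast-to-accumulator behavior just described can be sketched in plain Python. This is a semantic illustration only, not the MLIR API; the `cast` parameter here stands in for the promotion to the accumulator element type.

```python
def matmul_with_cast(A, B, C, cast=float):
    """Accumulating matmul: C += A @ B with operands promoted before multiply."""
    M, K, N = len(A), len(A[0]), len(B[0])
    for m in range(M):
        for n in range(N):
            acc = C[m][n]
            for k in range(K):
                # Promote each operand element to the accumulator type (here
                # modeled by `cast`) before the inner multiply, as
                # linalg.matmul's casting semantics describe.
                acc += cast(A[m][k]) * cast(B[k][n])
            C[m][n] = acc
    return C
```

For example, with integer inputs and `cast=float` the products are accumulated in floating point, mirroring how narrow operands are widened to the output element type.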
Broadcast and Transpose semantics can be applied by specifying the explicit attribute 'indexing_maps' as shown below. This is a list attribute, so the list must include all the maps if specified. Example Transpose: .. code:: mlir linalg.matmul indexing_maps = [affine_map<(m, n, k) -> (k, m)>, // transpose affine_map<(m, n, k) -> (k, n)>, affine_map<(m, n, k) -> (m, n)>] ins(%arg0, %arg1 : memref<5x3xf32>, memref<5x7xf32>) outs(%arg2: memref<3x7xf32>) Example Broadcast: .. code:: mlir linalg.matmul indexing_maps = [affine_map<(m, n, k) -> (k)>, // broadcast affine_map<(m, n, k) -> (k, n)>, affine_map<(m, n, k) -> (m, n)>] ins(%arg0, %arg1 : memref<3xf32>, memref<5x7xf32>) outs(%arg2: memref<3x7xf32>) Example Broadcast and Transpose: .. code:: mlir linalg.matmul indexing_maps = [affine_map<(m, n, k) -> (k, m)>, // transpose affine_map<(m, n, k) -> (k)>, // broadcast affine_map<(m, n, k) -> (m, n)>] ins(%arg0, %arg1 : memref<5x3xf32>, memref<7xf32>) outs(%arg2: memref<3x7xf32>) .. py:attribute:: OPERATION_NAME :value: 'linalg.matmul' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: indexing_maps() -> Optional[_ods_ir] .. py:method:: cast() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: matmul(result_tensors, inputs, outputs, *, indexing_maps=None, cast=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, MatmulOp] .. py:class:: MatvecOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.matvec' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir ..
py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: matvec(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, MatvecOp] .. py:class:: MaxOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.max`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:attribute:: OPERATION_NAME :value: 'linalg.max' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: max(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, MaxOp] .. py:class:: MinOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.min`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:attribute:: OPERATION_NAME :value: 'linalg.min' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir ..
py:function:: min(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, MinOp] .. py:class:: Mmt4DOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Differences from linalg.matmul: * The right hand side is transposed, whence the 't' in 'mmt'. * The input and output tensors have a 4D shape instead of a 2D shape. They are interpreted as 2D matrices with one level of 2D tile subdivision, whence the 2+2=4 dimensions. The inner tile dimensions are identified with '0' suffixes below, for instance the LHS matrix shape (M, K, M0, K0) reads as: MxK tiles, each of shape M0xK0. .. py:attribute:: OPERATION_NAME :value: 'linalg.mmt4d' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: mmt4d(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, Mmt4DOp] .. py:class:: MulOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done prior to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.mul`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:attribute:: OPERATION_NAME :value: 'linalg.mul' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: mul(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, MulOp] ..
py:class:: NegFOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.negf' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: negf(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, NegFOp] .. py:class:: PoolingNchwMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nchw_max' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nchw_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNchwMaxOp] .. py:class:: PoolingNchwSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NCHW. * Kernel: HW. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nchw_sum' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. 
py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nchw_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNchwSumOp] .. py:class:: PoolingNcwMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_ncw_max' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_ncw_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNcwMaxOp] .. py:class:: PoolingNcwSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NCW. * Kernel: W. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_ncw_sum' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_ncw_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNcwSumOp] .. 
py:class:: PoolingNdhwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_ndhwc_max' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_ndhwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNdhwcMaxOp] .. py:class:: PoolingNdhwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_ndhwc_min' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_ndhwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNdhwcMinOp] .. py:class:: PoolingNdhwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_ndhwc_sum' .. 
py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_ndhwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNdhwcSumOp] .. py:class:: PoolingNhwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nhwc_max' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nhwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNhwcMaxOp] .. py:class:: PoolingNhwcMaxUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nhwc_max_unsigned' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. 
py:function:: pooling_nhwc_max_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNhwcMaxUnsignedOp] .. py:class:: PoolingNhwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nhwc_min' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nhwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNhwcMinOp] .. py:class:: PoolingNhwcMinUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nhwc_min_unsigned' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nhwc_min_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNhwcMinUnsignedOp] .. 
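The pooling ops above all share one structure: the kernel operand supplies only the window shape (pooling kernels carry no weights), while ``strides`` and ``dilations`` control how the window walks the input. As a rough illustration, here is a plain-Python sketch of the computation ``linalg.pooling_nhwc_max`` performs; the function name and nested-list layout are illustrative only, not part of the bindings API:

```python
def pooling_nhwc_max(inp, kernel_hw, strides=(1, 1), dilations=(1, 1)):
    # inp is indexed [n][h][w][c]; kernel_hw is just the (KH, KW) window
    # shape -- pooling kernels carry no weights.
    N, H, W, C = len(inp), len(inp[0]), len(inp[0][0]), len(inp[0][0][0])
    KH, KW = kernel_hw
    SH, SW = strides
    DH, DW = dilations
    OH = (H - DH * (KH - 1) - 1) // SH + 1
    OW = (W - DW * (KW - 1) - 1) // SW + 1
    out = [[[[None] * C for _ in range(OW)] for _ in range(OH)] for _ in range(N)]
    for n in range(N):
        for oh in range(OH):
            for ow in range(OW):
                for c in range(C):
                    # Input index formula: oh * SH + kh * DH (likewise for w).
                    out[n][oh][ow][c] = max(
                        inp[n][oh * SH + kh * DH][ow * SW + kw * DW][c]
                        for kh in range(KH)
                        for kw in range(KW)
                    )
    return out
```

The min and sum variants only swap the reduction; the signed/unsigned variants differ in how the input is promoted to the accumulator type.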
py:class:: PoolingNhwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NHWC. * Kernel: HW. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nhwc_sum' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nhwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNhwcSumOp] .. py:class:: PoolingNwcMaxOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nwc_max' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nwc_max(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNwcMaxOp] .. py:class:: PoolingNwcMaxUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. 
py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nwc_max_unsigned' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nwc_max_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNwcMaxUnsignedOp] .. py:class:: PoolingNwcMinOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nwc_min' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nwc_min(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNwcMinOp] .. py:class:: PoolingNwcMinUnsignedOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nwc_min_unsigned' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. 
py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nwc_min_unsigned(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNwcMinUnsignedOp] .. py:class:: PoolingNwcSumOp(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Layout: * Input: NWC. * Kernel: W. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.pooling_nwc_sum' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: strides() -> Optional[_ods_ir] .. py:method:: dilations() -> Optional[_ods_ir] .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: pooling_nwc_sum(result_tensors, inputs, outputs, *, strides=None, dilations=None, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PoolingNwcSumOp] .. py:class:: PowFOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Only applies to floating point values. The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.powf`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:attribute:: OPERATION_NAME :value: 'linalg.powf' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. 
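The contract stated above means ``linalg.powf`` is purely elementwise: no implicit broadcast, reduction, or cast happens inside the op, so operand shapes must already agree. A minimal sketch of that semantics in plain Python (illustrative only):

```python
import math

def powf_elementwise(lhs, rhs, out):
    # linalg.powf-style semantics: no implicit broadcast or cast; all three
    # operands must already have identical shape and floating point elements.
    assert len(lhs) == len(rhs) == len(out)
    for i in range(len(lhs)):
        out[i] = math.pow(lhs[i], rhs[i])
    return out
```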
py:function:: powf(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, PowFOp] .. py:class:: QuantizedBatchMatmulOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul. .. py:attribute:: OPERATION_NAME :value: 'linalg.quantized_batch_matmul' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: quantized_batch_matmul(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, QuantizedBatchMatmulOp] .. py:class:: QuantizedMatmulOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul. .. py:attribute:: OPERATION_NAME :value: 'linalg.quantized_matmul' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: quantized_matmul(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, QuantizedMatmulOp] .. py:class:: ReciprocalOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.reciprocal' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. 
py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: reciprocal(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, ReciprocalOp] .. py:class:: ReduceOp(result, inputs, inits, dimensions, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Executes ``combiner`` on the ``dimensions`` of ``inputs`` and returns the reduced result. The ``dimensions`` attribute needs to list the reduction dimensions in increasing order. Example: .. code:: mlir %reduce = linalg.reduce ins(%input:tensor<16x32x64xf32>) outs(%init:tensor<16x64xf32>) dimensions = [1] (%in: f32, %out: f32) { %0 = arith.addf %out, %in: f32 linalg.yield %0: f32 } Shortened print form is available for simple reduces where the body contains exactly two operations (the payload operation and a yield), the payload operation has the same number of operands as block arguments, the first block argument (init) is the last operand of the payload operation with remaining operands matching remaining block arguments in order, and the yield operand is the result of the payload operation. The example above will be printed using the shortened form as: .. code:: mlir %reduce = linalg.reduce { arith.addf } ins(%input:tensor<16x32x64xf32>) outs(%init:tensor<16x64xf32>) dimensions = [1] .. py:attribute:: OPERATION_NAME :value: 'linalg.reduce' .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: inits() -> _ods_ir .. py:method:: dimensions() -> _ods_ir .. py:method:: combiner() -> _ods_ir .. py:function:: reduce(result, inputs, inits, dimensions, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, ReduceOp] .. py:class:: RoundOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. 
py:attribute:: OPERATION_NAME :value: 'linalg.round' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: round(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, RoundOp] .. py:class:: RsqrtOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.rsqrt' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: rsqrt(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, RsqrtOp] .. py:class:: SelectOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.select`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:attribute:: OPERATION_NAME :value: 'linalg.select' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: select(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, SelectOp] .. 
py:class:: SqrtOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.sqrt' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: sqrt(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, SqrtOp] .. py:class:: SquareOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.square' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: square(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, SquareOp] .. py:class:: SubOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.sub`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:attribute:: OPERATION_NAME :value: 'linalg.sub' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. 
py:function:: sub(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, SubOp] .. py:class:: TanhOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` No numeric casting is performed on the input operand. .. py:attribute:: OPERATION_NAME :value: 'linalg.tanh' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: tanh(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, TanhOp] .. py:class:: TransposeOp(result, input, init, permutation, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Permutes the dimensions of ``input`` according to the given ``permutation``. ``dim(result, i) = dim(input, permutation[i])`` This op actually moves data, unlike ``memref.transpose`` which is a metadata operation only that produces a transposed "view". Example: .. code:: mlir %transpose = linalg.transpose ins(%input:tensor<16x64xf32>) outs(%init:tensor<64x16xf32>) permutation = [1, 0] .. py:attribute:: OPERATION_NAME :value: 'linalg.transpose' .. py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: input() -> _ods_ir .. py:method:: init() -> _ods_ir .. py:method:: permutation() -> _ods_ir .. py:method:: result() -> _ods_ir Shortcut to get an op result if it has only one (throws an error otherwise). .. py:method:: region() -> _ods_ir .. py:function:: transpose(result, input, init, permutation, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, TransposeOp] .. py:class:: VecmatOp(result_tensors, inputs, outputs, *, loc=None, ip=None) Bases: :py:obj:`_ods_ir` Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:attribute:: OPERATION_NAME :value: 'linalg.vecmat' .. py:attribute:: _ODS_OPERAND_SEGMENTS .. 
py:attribute:: _ODS_REGIONS :value: (1, True) .. py:method:: inputs() -> _ods_ir .. py:method:: outputs() -> _ods_ir .. py:method:: result_tensors() -> _ods_ir .. py:method:: region() -> _ods_ir .. py:function:: vecmat(result_tensors, inputs, outputs, *, loc=None, ip=None) -> Union[_ods_ir, _ods_ir, VecmatOp] .. py:function:: register_attribute_builder(kind, replace=False) .. py:data:: _ods_ir .. py:class:: BinaryFn Bases: :py:obj:`enum.IntEnum` allowed 32-bit signless integer cases: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 .. py:attribute:: add :value: 0 .. py:attribute:: sub :value: 1 .. py:attribute:: mul :value: 2 .. py:attribute:: div :value: 3 .. py:attribute:: div_unsigned :value: 4 .. py:attribute:: max_signed :value: 5 .. py:attribute:: min_signed :value: 6 .. py:attribute:: max_unsigned :value: 7 .. py:attribute:: min_unsigned :value: 8 .. py:attribute:: powf :value: 9 .. py:method:: __str__() Return str(self). .. py:function:: _binaryfn(x, context) .. py:class:: ElementwiseArityGroup Bases: :py:obj:`enum.IntEnum` allowed 32-bit signless integer cases: 1, 2, 3 .. py:attribute:: Unary :value: 1 .. py:attribute:: Binary :value: 2 .. py:attribute:: Ternary :value: 3 .. py:method:: __str__() Return str(self). .. py:function:: _elementwisearitygroup(x, context) .. py:class:: ElementwiseCaseLimits Bases: :py:obj:`enum.IntEnum` allowed 32-bit signless integer cases: .. py:attribute:: LastUnary :value: 13 .. py:attribute:: LastBinary :value: 23 .. py:attribute:: LastTernary :value: 24 .. py:method:: __str__() Return str(self). .. py:function:: _elementwisecaselimits(x, context) .. py:class:: ElementwiseKind Bases: :py:obj:`enum.IntEnum` allowed 32-bit signless integer cases: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 .. py:attribute:: exp :value: 0 .. py:attribute:: log :value: 1 .. py:attribute:: abs :value: 2 .. py:attribute:: ceil :value: 3 .. py:attribute:: floor :value: 4 .. py:attribute:: negf :value: 5 .. 
py:attribute:: reciprocal :value: 6 .. py:attribute:: round :value: 7 .. py:attribute:: sqrt :value: 8 .. py:attribute:: rsqrt :value: 9 .. py:attribute:: square :value: 10 .. py:attribute:: tanh :value: 11 .. py:attribute:: erf :value: 12 .. py:attribute:: add :value: 13 .. py:attribute:: sub :value: 14 .. py:attribute:: mul :value: 15 .. py:attribute:: div :value: 16 .. py:attribute:: div_unsigned :value: 17 .. py:attribute:: max_signed :value: 18 .. py:attribute:: min_signed :value: 19 .. py:attribute:: max_unsigned :value: 20 .. py:attribute:: min_unsigned :value: 21 .. py:attribute:: powf :value: 22 .. py:attribute:: select :value: 23 .. py:method:: __str__() Return str(self). .. py:function:: _elementwisekind(x, context) .. py:class:: IteratorType Bases: :py:obj:`enum.IntEnum` Iterator type .. py:attribute:: parallel :value: 0 .. py:attribute:: reduction :value: 1 .. py:method:: __str__() Return str(self). .. py:function:: _iteratortype(x, context) .. py:class:: TernaryFn Bases: :py:obj:`enum.IntEnum` allowed 32-bit signless integer cases: 0 .. py:attribute:: select :value: 0 .. py:method:: __str__() Return str(self). .. py:function:: _ternaryfn(x, context) .. py:class:: TypeFn Bases: :py:obj:`enum.IntEnum` allowed 32-bit signless integer cases: 0, 1 .. py:attribute:: cast_signed :value: 0 .. py:attribute:: cast_unsigned :value: 1 .. py:method:: __str__() Return str(self). .. py:function:: _typefn(x, context) .. py:class:: UnaryFn Bases: :py:obj:`enum.IntEnum` allowed 32-bit signless integer cases: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 .. py:attribute:: exp :value: 0 .. py:attribute:: log :value: 1 .. py:attribute:: abs :value: 2 .. py:attribute:: ceil :value: 3 .. py:attribute:: floor :value: 4 .. py:attribute:: negf :value: 5 .. py:attribute:: reciprocal :value: 6 .. py:attribute:: round :value: 7 .. py:attribute:: sqrt :value: 8 .. py:attribute:: rsqrt :value: 9 .. py:attribute:: square :value: 10 .. py:attribute:: tanh :value: 11 .. 
py:attribute:: erf :value: 12 .. py:method:: __str__() Return str(self). .. py:function:: _unaryfn(x, context) .. py:class:: WinogradConv2DFmr Bases: :py:obj:`enum.IntEnum` allowed 32-bit signless integer cases: 0, 1, 2 .. py:attribute:: F_2_3 :value: 0 .. py:attribute:: F_4_3 :value: 1 .. py:attribute:: F_2_5 :value: 2 .. py:method:: __str__() Return str(self). .. py:function:: _winogradconv2dfmr(x, context) .. py:function:: _binaryfnattr(x, context) .. py:function:: _elementwisekindattr(x, context) .. py:function:: _iteratortypeenum(x, context) .. py:function:: _ternaryfnattr(x, context) .. py:function:: _typefnattr(x, context) .. py:function:: _unaryfnattr(x, context) .. py:function:: _iteratortypeenum(x, context) .. py:data:: T1 .. py:data:: T2 .. py:data:: Batch .. py:function:: copy(I=TensorDef(T1), O=TensorDef(U, output=True), cast=TypeFnAttrDef(default=TypeFn.cast_signed)) Copies the tensor elementwise. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: exp(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies exp(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: log(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies log(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: abs(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies abs(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: ceil(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies ceil(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: floor(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies floor(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: negf(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies negf(x) elementwise. No numeric casting is performed on the input operand. .. 
py:function:: reciprocal(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies reciprocal(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: round(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies round(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: sqrt(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies sqrt(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: rsqrt(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies rsqrt(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: square(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies square(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: tanh(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies tanh(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: erf(I=TensorDef(T1), O=TensorDef(T1, output=True)) Applies erf(x) elementwise. No numeric casting is performed on the input operand. .. py:function:: add(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True)) Adds two tensors elementwise. The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.add`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:function:: sub(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True)) Subtracts two tensors elementwise. The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. 
Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.sub`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:function:: mul(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True)) Multiplies two tensors elementwise. The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.mul`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:function:: div(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True)) Divides the first tensor by the second tensor, elementwise. The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.div`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:function:: div_unsigned(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True)) Divides the first tensor by the second tensor, elementwise. For integer types, performs an unsigned division. The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.div`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. 
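Because the binary ops above never broadcast implicitly, a mismatched operand has to be materialized first, as a ``linalg.broadcast`` would do. A small plain-Python sketch of that contract, with hypothetical helper names:

```python
def broadcast_rows(vec, rows):
    # Materialize the broadcast explicitly, as linalg.broadcast would,
    # before applying the elementwise op.
    return [list(vec) for _ in range(rows)]

def elementwise_div(lhs, rhs):
    # linalg.div-style semantics: shapes must already be identical.
    assert len(lhs) == len(rhs)
    return [[a / b for a, b in zip(lr, rr)] for lr, rr in zip(lhs, rhs)]

# Explicit broadcast followed by the elementwise op; the docs note a
# lowering may fuse such a pair into one linalg.generic.
lhs = [[2.0, 4.0], [6.0, 8.0]]
rhs = broadcast_rows([2.0, 4.0], rows=2)
result = elementwise_div(lhs, rhs)
```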
py:function:: max(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True)) Takes the max (signed) between two inputs, elementwise. The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.max`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:function:: min(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True)) Takes the min (signed) between two inputs, elementwise. The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.min`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:function:: powf(lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True)) Takes the powf(lhs, rhs) between two inputs, elementwise. For powf(arg, 2) use ``linalg.square``. Only applies to floating point values. The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.powf`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:function:: select(cond=TensorDef(U), lhs=TensorDef(T1), rhs=TensorDef(T1), O=TensorDef(T1, output=True)) Chooses one value based on a binary condition supplied as its first operand. 
The shapes and element types must be identical. The appropriate casts, broadcasts and reductions should be done previously to calling this op. This means reduction/broadcast/element cast semantics is explicit. Further passes can take that into account when lowering this code. For example, a ``linalg.broadcast`` + ``linalg.select`` sequence can be lowered to a ``linalg.generic`` with different affine maps for the two operands. .. py:function:: quantized_matmul(A=TensorDef(T1, S.M, S.K), B=TensorDef(T2, S.K, S.N), AZp=ScalarDef(I32), BZp=ScalarDef(I32), C=TensorDef(U, S.M, S.N, output=True)) Performs a matrix multiplication of two 2D inputs. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul. .. py:function:: mmt4d(lhs=TensorDef(TV.LhsType, S.M, S.K, S.M0, S.K0), rhs=TensorDef(TV.RhsType, S.N, S.K, S.N0, S.K0), accum=TensorDef(TV.AccumType, S.M, S.N, S.M0, S.N0, output=True)) Performs a matrix-matrix-transpose multiplication of two 4D inputs. Differences from linalg.matmul: * The right hand side is transposed, whence the 't' in 'mmt'. * The input and output tensors have a 4D shape instead of a 2D shape. They are interpreted as 2D matrices with one level of 2D tile subdivision, whence the 2+2=4 dimensions. The inner tile dimensions are identified with '0' suffixes below, for instance the LHS matrix shape (M, K, M0, K0) reads as: MxK tiles, each of shape M0xK0. .. py:function:: batch_mmt4d(lhs=TensorDef(TV.LhsType, Batch, S.M, S.K, S.M0, S.K0), rhs=TensorDef(TV.RhsType, Batch, S.N, S.K, S.N0, S.K0), accum=TensorDef(TV.AccumType, Batch, S.M, S.N, S.M0, S.N0, output=True)) Performs a batched matrix-matrix-transpose multiplication of two batched-4D (5D) inputs. 
Aside from the outermost batch dimension, which has the same semantics as in linalg.batch_matmul, the differences from linalg.batch_matmul in the non-batch dimensions are the same as those between linalg.mmt4d and linalg.matmul. See the description of linalg.mmt4d. .. py:function:: quantized_batch_matmul(A=TensorDef(T1, Batch, S.M, S.K), B=TensorDef(T2, Batch, S.K, S.N), AZp=ScalarDef(I32), BZp=ScalarDef(I32), C=TensorDef(U, Batch, S.M, S.N, output=True)) Performs a batched matrix multiplication of two 3D inputs. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The quantized variant includes zero-point adjustments for the left and right operands of the matmul. .. py:function:: matvec(A=TensorDef(T1, S.M, S.N), y=TensorDef(T2, S.N), x=TensorDef(U, S.M, output=True)) Performs a matrix-vector multiplication. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: vecmat(y=TensorDef(T1, S.M), A=TensorDef(T2, S.M, S.N), x=TensorDef(U, S.N, output=True)) Performs a vector-matrix multiplication. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: batch_matvec(A=TensorDef(T1, Batch, S.M, S.K), B=TensorDef(T2, Batch, S.K), C=TensorDef(U, Batch, S.M, output=True)) Performs a batched matrix-vector multiplication. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: batch_vecmat(A=TensorDef(T1, Batch, S.K), B=TensorDef(T2, Batch, S.K, S.N), C=TensorDef(U, Batch, S.N, output=True)) Performs a batched vector-matrix multiplication. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. ..
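The ``mmt4d`` tiled layout described above can be spelled out in plain Python. Note that the RHS inner tile is ``N0xK0``, i.e. stored transposed, so the inner kernel multiplies each LHS tile by the transpose of the matching RHS tile. Names and the nested-list layout here are illustrative, not the bindings API:

```python
def mmt4d(lhs, rhs, acc):
    # lhs: [M][K][M0][K0], rhs: [N][K][N0][K0] (inner tile transposed),
    # acc: [M][N][M0][N0], updated in place and returned.
    M, K = len(lhs), len(lhs[0])
    N = len(rhs)
    M0, K0 = len(lhs[0][0]), len(lhs[0][0][0])
    N0 = len(rhs[0][0])
    for m in range(M):
        for n in range(N):
            for k in range(K):
                for m0 in range(M0):
                    for n0 in range(N0):
                        for k0 in range(K0):
                            # rhs is indexed [n0][k0]: a matmul against the
                            # transpose of the RHS tile, hence the 't' in mmt.
                            acc[m][n][m0][n0] += lhs[m][k][m0][k0] * rhs[n][k][n0][k0]
    return acc
```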
py:function:: dot(A=TensorDef(T1, S.M), B=TensorDef(T2, S.M), C=TensorDef(U, output=True)) Performs a dot product of two vectors to a scalar result. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_1d(I=TensorDef(T1, S.OW + S.KW), K=TensorDef(T2, S.KW), O=TensorDef(U, S.OW, output=True)) Performs 1-D convolution with no channels. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_2d(I=TensorDef(T1, S.OH + S.KH, S.OW + S.KW), K=TensorDef(T2, S.KH, S.KW), O=TensorDef(U, S.OH, S.OW, output=True)) Performs 2-D convolution with no channels. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_3d(I=TensorDef(T1, S.OD + S.KD, S.OH + S.KH, S.OW + S.KW), K=TensorDef(T2, S.KD, S.KH, S.KW), O=TensorDef(U, S.OD, S.OH, S.OW, output=True)) Performs 3-D convolution with no channels. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_1d_nwc_wcf(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, S.C, S.F), O=TensorDef(U, S.N, S.OW, S.F, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs 1-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_1d_ncw_fcw(I=TensorDef(T1, S.N, S.C, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KW), O=TensorDef(U, S.N, S.F, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs 1-D convolution. Layout: * Input: NCW. * Kernel: FCW. 
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_2d_nhwc_hwcf(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, S.C, S.F), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D convolution. Layout: * Input: NHWC. * Kernel: HWCF. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_2d_nhwc_fhwc(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.F, S.KH, S.KW, S.C), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D convolution. Layout: * Input: NHWC. * Kernel: FHWC. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_2d_nhwc_hwcf_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, S.C, S.F), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D convolution with zero point offsets. Layout: * Input: NHWC. * Kernel: HWCF. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. 
py:function:: conv_2d_nhwc_fhwc_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.F, S.KH, S.KW, S.C), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D convolution with zero point offsets. Layout: * Input: NHWC. * Kernel: FHWC. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:function:: conv_2d_nchw_fchw_q(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KH, S.KW), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.F, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D convolution with zero point offsets. Layout: * Input: NCHW. * Kernel: FCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:function:: conv_2d_nchw_fchw(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KH, S.KW), O=TensorDef(U, S.N, S.F, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D convolution. Layout: * Input: NCHW. * Kernel: FCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. 
py:function:: conv_2d_ngchw_fgchw(I=TensorDef(T1, S.N, S.G, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.FG, S.G, S.C, S.KH, S.KW), O=TensorDef(U, S.N, S.G, S.FG, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D grouped convolution. Layout: * Input: NGCHW. * Kernel: FGCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_2d_ngchw_gfchw(I=TensorDef(T1, S.N, S.G, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.G, S.FG, S.C, S.KH, S.KW), O=TensorDef(U, S.N, S.G, S.FG, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D grouped convolution. Layout: * Input: NGCHW. * Kernel: GFCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: conv_2d_nhwgc_gfhwc(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.G, S.C), K=TensorDef(T2, S.G, S.FG, S.KH, S.KW, S.C), O=TensorDef(U, S.N, S.OH, S.OW, S.G, S.FG, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D grouped convolution. Layout: * Input: NHWGC. * Kernel: GFHWC. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. 
py:function:: conv_2d_nhwgc_gfhwc_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.G, S.C), K=TensorDef(T2, S.G, S.FG, S.KH, S.KW, S.C), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.G, S.FG, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D grouped convolution with zero point offsets. Layout: * Input: NHWGC. * Kernel: GFHWC. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:function:: conv_2d_ngchw_gfchw_q(I=TensorDef(T1, S.N, S.G, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.G, S.FG, S.C, S.KH, S.KW), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.G, S.FG, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs 2-D grouped convolution with zero-point offsets. Layout: * Input: NGCHW. * Kernel: GFCHW. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:function:: conv_3d_ndhwc_dhwcf(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, S.C, S.F), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1])) Performs 3-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. 
py:function:: conv_3d_ndhwc_dhwcf_q(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, S.C, S.F), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.F, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1])) Performs 3-D convolution with zero point offsets. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:function:: conv_3d_ncdhw_fcdhw(I=TensorDef(T1, S.N, S.C, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.F, S.C, S.KD, S.KH, S.KW), O=TensorDef(U, S.N, S.F, S.OD, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1])) Performs 3-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: depthwise_conv_1d_nwc_wc(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KW, S.IC), O=TensorDef(U, S.N, S.OW, S.IC, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs depth-wise 1-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is set to 1, which is the common case for depthwise convolutions. .. py:function:: depthwise_conv_1d_ncw_cw(I=TensorDef(T1, S.N, S.IC, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.IC, S.KW), O=TensorDef(U, S.N, S.IC, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs depth-wise 1-D convolution.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is set to 1, which is the common case for depthwise convolutions. .. py:function:: depthwise_conv_1d_nwc_wcm(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KW, S.IC, S.CM), O=TensorDef(U, S.N, S.OW, S.IC, S.CM, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs depth-wise 1-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: depthwise_conv_2d_nhwc_hwc(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs depth-wise 2-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is set to 1, which is the common case for depthwise convolutions. .. py:function:: depthwise_conv_2d_nchw_chw(I=TensorDef(T1, S.N, S.IC, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.IC, S.KH, S.KW), O=TensorDef(U, S.N, S.IC, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs depth-wise 2-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is set to 1, which is the common case for depthwise convolutions. ..
py:function:: depthwise_conv_2d_nhwc_hwc_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs depth-wise 2-D convolution with zero point offsets. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:function:: depthwise_conv_2d_nhwc_hwcm(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC, S.CM), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, S.CM, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs depth-wise 2-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: depthwise_conv_2d_nhwc_hwcm_q(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KH, S.KW, S.IC, S.CM), IZp=ScalarDef(I32), KZp=ScalarDef(I32), O=TensorDef(U, S.N, S.OH, S.OW, S.IC, S.CM, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs depth-wise 2-D convolution with zero point offsets. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. This includes the zero point offsets common to quantized operations. .. py:function:: depthwise_conv_3d_ndhwc_dhwc(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KD, S.KH, S.KW, S.IC), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1])) Performs depth-wise 3-D convolution.
Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is set to 1, which is the common case for depthwise convolutions. .. py:function:: depthwise_conv_3d_ncdhw_cdhw(I=TensorDef(T1, S.N, S.IC, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.IC, S.KD, S.KH, S.KW), O=TensorDef(U, S.N, S.IC, S.OD, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1])) Performs depth-wise 3-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. The channel multiplier is set to 1, which is the common case for depthwise convolutions. .. py:function:: depthwise_conv_3d_ndhwc_dhwcm(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.IC), K=TensorDef(T2, S.KD, S.KH, S.KW, S.IC, S.CM), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.CM, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1])) Performs depth-wise 3-D convolution. Numeric casting is performed on the operands to the inner multiply, promoting them to the same data type as the accumulator/output. .. py:function:: pooling_nhwc_sum(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs sum pooling. Layout: * Input: NHWC. * Kernel: HW. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. ..
py:function:: pooling_nchw_sum(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.C, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs sum pooling. Layout: * Input: NCHW. * Kernel: HW. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_nhwc_max(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs max pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_nhwc_max_unsigned(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs unsigned max pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_nchw_max(I=TensorDef(T1, S.N, S.C, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.C, S.OH, S.OW, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs max pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. 
py:function:: pooling_nhwc_min(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs min pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_nhwc_min_unsigned(I=TensorDef(T1, S.N, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KH, S.KW, index_dims=[D.kh, D.kw]), O=TensorDef(U, S.N, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]), dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])) Performs unsigned min pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_nwc_sum(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs sum pooling. Layout: * Input: NWC. * Kernel: W. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_ncw_sum(I=TensorDef(T1, S.N, S.C, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.C, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs sum pooling. Layout: * Input: NCW. * Kernel: W. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. 
py:function:: pooling_nwc_max(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs max pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_nwc_max_unsigned(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs unsigned max pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_ncw_max(I=TensorDef(T1, S.N, S.C, S.OW * S.SW + S.KW * S.DW), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.C, S.OW, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs max pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_nwc_min(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs min pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_nwc_min_unsigned(I=TensorDef(T1, S.N, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KW, index_dims=[D.kw]), O=TensorDef(U, S.N, S.OW, S.C, output=True), strides=IndexAttrDef(S.SW, default=[1]), dilations=IndexAttrDef(S.DW, default=[1])) Performs unsigned min pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. 
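The strided and dilated pooling variants above all share one indexing scheme: each output element reduces over a window of the input read at position ``ow * SW + kw * DW``, as the symbolic shapes (e.g. ``S.OW * S.SW + S.KW * S.DW``) suggest. As a rough illustration of that indexing, the following is a hypothetical pure-Python reference for 1-D NWC max pooling (the helper name and the unpadded output-size formula are assumptions for this sketch, not part of the MLIR API):

```python
def pooling_nwc_max_ref(I, KW, SW=1, DW=1):
    """Reference 1-D max pooling over an NWC-layout nested list.

    Each output element O[n][ow][c] is the max over the window
    I[n][ow * SW + kw * DW][c] for kw in range(KW).
    """
    N, IW, C = len(I), len(I[0]), len(I[0][0])
    # Assumed valid (unpadded) output width: last window must fit in the input.
    OW = (IW - (KW - 1) * DW - 1) // SW + 1
    O = [[[None] * C for _ in range(OW)] for _ in range(N)]
    for n in range(N):
        for ow in range(OW):
            for c in range(C):
                # Strided, dilated window read: ow * SW + kw * DW.
                O[n][ow][c] = max(I[n][ow * SW + kw * DW][c] for kw in range(KW))
    return O
```

With ``KW=2, SW=2`` on a width-5 input, for example, the two windows start at input positions 0 and 2; the min, sum, and higher-rank variants differ only in the reduction applied and the number of spatial dimensions.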
.. py:function:: pooling_ndhwc_sum(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, index_dims=[D.kd, D.kh, D.kw]), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1])) Performs 3D sum pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_ndhwc_max(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, index_dims=[D.kd, D.kh, D.kw]), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1])) Performs 3D max pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: pooling_ndhwc_min(I=TensorDef(T1, S.N, S.OD * S.SD + S.KD * S.DD, S.OH * S.SH + S.KH * S.DH, S.OW * S.SW + S.KW * S.DW, S.C), K=TensorDef(T2, S.KD, S.KH, S.KW, index_dims=[D.kd, D.kh, D.kw]), O=TensorDef(U, S.N, S.OD, S.OH, S.OW, S.C, output=True), strides=IndexAttrDef(S.SD, S.SH, S.SW, default=[1, 1, 1]), dilations=IndexAttrDef(S.DD, S.DH, S.DW, default=[1, 1, 1])) Performs 3D min pooling. Numeric casting is performed on the input operand, promoting it to the same data type as the accumulator/output. .. py:function:: fill(value=ScalarDef(T1), O=TensorDef(U, output=True)) Fills the output tensor with the given value. Works for arbitrary ranked output tensors since the operation performs scalar accesses only and is thus rank polymorphic. Numeric casting is performed on the value operand, promoting it to the same data type as the output. .. 
py:function:: fill_rng_2d(min=ScalarDef(F64), max=ScalarDef(F64), seed=ScalarDef(I32), O=TensorDef(T, S.M, S.N, output=True)) Fills the output tensor with pseudo random numbers. The operation generates pseudo random numbers using a linear congruential generator. It provides no guarantees regarding the distribution of the generated random numbers. Instead of generating the random numbers sequentially, it instantiates one random number generator per data element and runs them in parallel. The seed operand and the indices of the data element seed the random number generation. The min and max operands limit the range of the generated random numbers. .. py:function:: _get_op_result_or_value(arg: Union[mlir._mlir_libs._mlir.ir.OpView, mlir._mlir_libs._mlir.ir.Operation, mlir._mlir_libs._mlir.ir.Value, mlir._mlir_libs._mlir.ir.OpResultList]) -> mlir._mlir_libs._mlir.ir.Value Returns the given value or the single result of the given op. This is useful to implement op constructors so that they can take other ops as arguments instead of requiring the caller to extract results for every op. Raises ValueError if provided with an op that doesn't have a single result. .. py:function:: _get_op_results_or_values(arg: Union[mlir._mlir_libs._mlir.ir.OpView, mlir._mlir_libs._mlir.ir.Operation, Sequence[Union[mlir._mlir_libs._mlir.ir.OpView, mlir._mlir_libs._mlir.ir.Operation, mlir._mlir_libs._mlir.ir.Value]]]) -> Union[Sequence[Union[mlir._mlir_libs._mlir.ir.OpView, mlir._mlir_libs._mlir.ir.Operation, mlir._mlir_libs._mlir.ir.Value]], mlir._mlir_libs._mlir.ir.OpResultList] Returns the given sequence of values or the results of the given op. This is useful to implement op constructors so that they can take other ops as lists of arguments instead of requiring the caller to extract results for every op. .. py:data:: _CONTEXT .. py:data:: StructuredOpOuts .. py:function:: bind_op_def(op_def: mlir.dialects.linalg.opdsl.lang.emitter.LinalgOpDef) ..
py:function:: current_op_def() -> mlir.dialects.linalg.opdsl.lang.emitter.LinalgOpDef .. py:function:: _prepare_structured_op_outs(outs: StructuredOpOuts) -> mlir.dialects.linalg.opdsl.lang.emitter.ValueList .. py:class:: DefinedOpCallable(op_name: str, op_def: mlir.dialects.linalg.opdsl.lang.emitter.LinalgOpDef) Callable that wraps any defined op function. .. py:attribute:: op_name .. py:attribute:: op_def .. py:method:: __call__(*ins: mlir.dialects.linalg.opdsl.lang.emitter.Union[mlir.ir.Operation, mlir.ir.OpView, mlir.ir.Value], outs: StructuredOpOuts, **kwargs) Emits the corresponding op definition as IR. Most arguments are passed through to the underlying emitter. The following keyword argument is interpreted here: emit_generic: Emits a generic form as appropriate (default True). If False, a named form is emitted (which must have been built into the compiler). .. py:function:: linalg_structured_op(dsl_func=None, *, op_name=None, op_class_name=None) -> DefinedOpCallable .. py:function:: domain(*dimensions: mlir.dialects.linalg.opdsl.lang.emitter.DimDef) .. py:function:: implements(*interfaces: mlir.dialects.linalg.opdsl.lang.emitter.OpInterfaceDef) .. py:function:: defines(*definitions: mlir.dialects.linalg.opdsl.lang.emitter.OpDefinitionDef) .. py:class:: TensorExpression An expression that can appear on the RHS of a comprehension. .. py:method:: to_scalar_expression() -> mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression :abstractmethod: .. py:method:: visit_tensor_exprs(callback: mlir.dialects.linalg.opdsl.lang.scalar_expr.Callable[[TensorExpression], None]) Visits all tensor expressions reachable from this expression. .. py:method:: collect_dim_uses(uses: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]) Collects all DimDefs reachable through this expression. ..
py:method:: collect_tensor_uses(uses: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[TensorUse]) Collects all TensorUses reachable through this expression. .. py:method:: collect_indices(indices: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[index]) Collects all index accesses reachable through this expression. .. py:method:: collect_scalar_uses(uses: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[ScalarDef]) Collects all ScalarDefs reachable through this expression. .. py:method:: __add__(rhs: TensorExpression) -> TensorExpression .. py:method:: __mul__(rhs) -> TensorExpression .. py:method:: __sub__(rhs) -> TensorExpression .. py:method:: __truediv__(rhs) -> TensorExpression .. py:method:: __hash__() .. py:class:: TensorUse(operand_def: OperandDef, indices: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef]) Bases: :py:obj:`TensorExpression` A used tensor represented by its (tensor_name, indices). Note that forming a comprehension via direct assignment is performed through ``__setitem__`` on the TensorDef level. However, performing a reduction with compound ops (+=, *=, etc.) is done via the sequence ``TensorDef.__getitem__``, ``TensorUse.__iadd__``, ``TensorDef.__setitem__``. .. py:attribute:: operand_def .. py:attribute:: indices .. py:method:: to_scalar_expression() -> mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression .. py:property:: tensor_name :type: str .. py:method:: _compute_reduce_dims(rhs: TensorExpression) -> mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef] .. py:method:: __iadd__(rhs: TensorExpression) -> TensorReduceFn .. py:method:: __repr__() ..
py:class:: TensorFn(kind: FunctionKind, name: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str], operand_def: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[OperandDef], type_var: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.types.TypeVar], args: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[TensorExpression]) Bases: :py:obj:`TensorExpression` Application of a tensor function. .. py:attribute:: name .. py:attribute:: kind .. py:attribute:: operand_def .. py:attribute:: type_var .. py:attribute:: args .. py:method:: to_scalar_expression() -> mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression .. py:method:: visit_tensor_exprs(callback: mlir.dialects.linalg.opdsl.lang.scalar_expr.Callable[[TensorExpression], None]) Visits all tensor expressions reachable from this expression. .. py:method:: __repr__() .. py:class:: TensorReduceFn(reduce_use: ReduceFnUse, args: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[TensorExpression]) Bases: :py:obj:`TensorExpression` Application of a reduction function. This captures the lhs (initial value) separately from the rhs. .. py:attribute:: reduce_use .. py:attribute:: lhs :type: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[TensorUse] :value: None .. py:attribute:: args .. py:method:: to_scalar_expression() -> mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression .. py:method:: visit_tensor_exprs(callback: mlir.dialects.linalg.opdsl.lang.scalar_expr.Callable[[TensorExpression], None]) Visits all tensor expressions reachable from this expression. .. py:method:: __repr__() .. py:class:: const(value: mlir.dialects.linalg.opdsl.lang.scalar_expr.Any) Bases: :py:obj:`TensorExpression` Returns the given constant floating point or integer value. .. py:method:: to_scalar_expression() -> mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression .. py:method:: __repr__() ..
py:class:: index(dim: mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef) Bases: :py:obj:`TensorExpression` Returns the iteration index for a given dimension name. Resolves the given dimension name to obtain its position in the iteration domain of the operation. .. py:attribute:: dim_def .. py:attribute:: dim :value: -1 .. py:method:: resolve_dimension_name(affine_state: mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineBuildState) .. py:method:: to_scalar_expression() -> mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression .. py:method:: __repr__() .. py:class:: FunctionKind Bases: :py:obj:`mlir.dialects.linalg.opdsl.lang.types.Enum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: UNARY :value: 0 .. py:attribute:: BINARY :value: 1 .. py:attribute:: TERNARY :value: 2 .. py:attribute:: TYPE :value: 3 .. py:class:: UnaryFnType(fn_name: str) Unary function. A unary function takes one tensor expression and returns the function evaluation result. .. py:attribute:: fn_name .. py:method:: __call__(arg: TensorExpression) -> TensorFn .. py:method:: __repr__() .. py:class:: UnaryFn Unary function namespace. .. py:attribute:: exp .. py:attribute:: log .. py:attribute:: abs .. py:attribute:: ceil .. py:attribute:: floor .. py:attribute:: negf .. py:attribute:: reciprocal .. py:attribute:: round .. py:attribute:: sqrt .. py:attribute:: rsqrt .. py:attribute:: square .. py:attribute:: tanh .. py:attribute:: erf .. py:class:: BinaryFnType(fn_name: str) Binary function. A binary function takes two tensor expressions and returns the function evaluation result. .. py:attribute:: fn_name .. py:method:: __call__(arg0: TensorExpression, arg1: TensorExpression) -> TensorFn .. py:method:: __repr__() .. py:class:: BinaryFn Binary function namespace. As the integer types are signless, signedness is implemented by different functions that treat integers as signed or unsigned values.
Examples: * max_signed -> ``arith.MaxSIOp`` * max_unsigned -> ``arith.MaxUIOp`` .. py:attribute:: add .. py:attribute:: sub .. py:attribute:: mul .. py:attribute:: div .. py:attribute:: div_unsigned .. py:attribute:: max_signed .. py:attribute:: min_signed .. py:attribute:: max_unsigned .. py:attribute:: min_unsigned .. py:attribute:: powf .. py:class:: TernaryFnType(fn_name: str) Ternary function. A ternary function takes three tensor expressions and returns the function evaluation result. .. py:attribute:: fn_name .. py:method:: __call__(arg0: TensorExpression, arg1: TensorExpression, arg2: TensorExpression) -> TensorFn .. py:method:: __repr__() .. py:class:: TernaryFn Ternary function namespace. .. py:attribute:: select .. py:class:: TypeFnType(fn_name: str) Type conversion function. A type conversion function takes a target type and a tensor expression and returns the cast tensor expression. .. py:attribute:: fn_name .. py:method:: __call__(type_var: mlir.dialects.linalg.opdsl.lang.types.TypeVar, arg: TensorExpression) -> TensorFn .. py:method:: __repr__() .. py:class:: TypeFn Type conversion function namespace. As the integer types are signless, signedness is implemented by different cast functions that treat integers as signed (``cast_signed``) or unsigned (``cast_unsigned``) values. Examples: * cast_signed(I32 -> I64) -> ``arith.ExtSIOp`` * cast_unsigned(I32 -> I64) -> ``arith.ExtUIOp`` .. py:attribute:: cast_signed .. py:attribute:: cast_unsigned .. py:class:: ReduceFnUse(binary_fn: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[BinaryFnType], binary_attr: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[BinaryFnAttrDef], *reduce_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef) Reduction function use. A reduction use specifies the reduction function and dimensions. .. py:attribute:: binary_fn .. py:attribute:: binary_attr .. py:attribute:: reduce_dims :value: () .. py:method:: __call__(*args: TensorExpression) -> TensorReduceFn ..
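Because the integer types are signless, the *operation*, not the type, decides signedness. A plain-Python illustration (independent of the MLIR bindings) of why ``max_signed`` and ``max_unsigned`` must be distinct functions even though they see the same bit pattern:

```python
def as_signed(bits: int, width: int = 32) -> int:
    # Reinterpret a raw bit pattern as a two's-complement signed integer.
    sign_bit = 1 << (width - 1)
    return (bits & (sign_bit - 1)) - (bits & sign_bit)

raw = 0xFFFFFFFF                       # all-ones 32-bit pattern
max_unsigned = max(raw, 1)             # unsigned view: 4294967295 wins
max_signed = max(as_signed(raw), 1)    # signed view: the pattern is -1, so 1 wins
```

The same applies to ``div`` vs ``div_unsigned`` and to ``cast_signed`` vs ``cast_unsigned`` below: each pair picks a different interpretation of identical bits.

..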
py:method:: __repr__() .. py:class:: ReduceFnType(binary_fn: BinaryFnType) Reduction function. A binary function that reduces its RHS into its LHS. .. py:attribute:: binary_fn .. py:method:: __getitem__(reduce_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]) -> ReduceFnUse .. py:method:: __repr__() .. py:class:: ReduceFn .. py:attribute:: add .. py:attribute:: mul .. py:attribute:: max_signed .. py:attribute:: min_signed .. py:attribute:: max_unsigned .. py:attribute:: min_unsigned .. py:class:: OperandKind Bases: :py:obj:`mlir.dialects.linalg.opdsl.lang.types.Enum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: INPUT_TENSOR :value: 0 .. py:attribute:: SCALAR :value: 1 .. py:attribute:: OUTPUT_TENSOR :value: 2 .. py:attribute:: INDEX_ATTR :value: 3 .. py:attribute:: UNARY_FN_ATTR :value: 4 .. py:attribute:: BINARY_FN_ATTR :value: 5 .. py:attribute:: TERNARY_FN_ATTR :value: 6 .. py:attribute:: TYPE_FN_ATTR :value: 7 .. py:class:: OperandDef(kind: OperandKind, type_var: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.types.TypeVar] = None, size_exprs: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef]] = None, index_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]] = None, default_indices: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[int]] = None, default_fn: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str] = None) Definition of an operand passed to an operation. Keep the meta information of Tensor, Scalar, and Attribute operands and provide the shared registration functionality. .. 
py:attribute:: owner :type: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[LinalgOpDef] :value: None .. py:attribute:: type_var :value: None .. py:attribute:: size_exprs :value: None .. py:attribute:: index_dims :value: None .. py:attribute:: default_indices :value: None .. py:attribute:: default_fn :value: None .. py:attribute:: kind .. py:attribute:: name :type: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str] :value: None .. py:attribute:: registered_index :type: int :value: -1 .. py:method:: attach(index: int, name: str, owner: LinalgOpDef) .. py:method:: is_input() -> bool .. py:method:: is_tensor() -> bool .. py:method:: is_attribute() -> bool .. py:method:: __hash__() .. py:method:: __repr__() .. py:class:: TensorDef(type_var: mlir.dialects.linalg.opdsl.lang.types.TypeVar, *shape: mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef, index_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]] = None, output: bool = False) Tensor operand definition. Tensor operands are indexed using the associated indexing_map when forwarded to the body of the structured op. A unique name identifies the tensor operands and an index determines their position in the operation's parameter list. A tensor definition takes type, a shape, and an optional flag to mark output tensors. Additionally, a tuple of index dimensions may be used to map the tensor to the loop dimensions of the operation. This mapping is needed to compute the indexing map of shape-only tensors that have no uses. .. py:attribute:: operand_def .. py:method:: __getitem__(dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef]) -> TensorUse .. 
py:method:: __setitem__(dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[mlir.dialects.linalg.opdsl.lang.scalar_expr.AffineExprDef], value: TensorExpression) Creates a new 1:1 comprehension by binding this tensor to an expression. Note that due to the way assignment works in Python, we have to capture direct assignment as a setitem on the TensorDef. .. py:class:: ScalarDef(type_var: mlir.dialects.linalg.opdsl.lang.types.TypeVar) Bases: :py:obj:`TensorExpression` Scalar operand definition. Scalar operands are forwarded to the body of the structured op as they are. A unique name identifies the scalars and an index determines their position in the operation's parameter list. .. py:attribute:: operand_def .. py:property:: scalar_name :type: str .. py:method:: to_scalar_expression() -> mlir.dialects.linalg.opdsl.lang.scalar_expr.ScalarExpression .. py:class:: IndexAttrDef(*sizes: mlir.dialects.linalg.opdsl.lang.scalar_expr.SymbolDef, default: mlir.dialects.linalg.opdsl.lang.scalar_expr.Sequence[int]) Index attribute definition. Index attributes provide a way to define and set symbols that can be used in indexing expressions. Every attribute specifies a tuple of symbols that at compile-time are replaced by integer values as well as their default values. .. py:attribute:: operand_def .. py:class:: UnaryFnAttrDef(default: UnaryFnType) Unary function attribute definition. Unary function attributes provide a way to make the arithmetic computation parametrizable. Every attribute specifies a default unary function that may be overwritten at operation instantiation time. .. py:attribute:: operand_def .. py:method:: __call__(arg: TensorExpression) -> TensorFn .. py:class:: BinaryFnAttrDef(default: BinaryFnType) Binary function attribute definition. Binary function attributes provide a way to make the arithmetic computation parametrizable. Every attribute specifies a default binary function that may be overwritten at operation instantiation time. .. 
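The assignment-capture trick described for ``TensorDef.__setitem__`` above can be sketched in plain Python. This is a hypothetical ``RecordingTensor`` class, not the opdsl implementation: because plain ``=`` cannot be overridden in Python, ``C[dims] = expr`` is intercepted via ``__setitem__`` and recorded as a binding instead of storing a value.

```python
class RecordingTensor:
    """Toy stand-in for TensorDef: subscript assignment records a binding."""

    def __init__(self, name: str):
        self.name = name
        self.bindings = []  # (dims, expression) pairs captured so far

    def __getitem__(self, dims):
        # Reading A[dims] yields a symbolic use, not a stored value.
        return ("use", self.name, dims)

    def __setitem__(self, dims, expr):
        # Writing C[dims] = expr captures the comprehension instead.
        self.bindings.append((dims, expr))

A, B, C = (RecordingTensor(n) for n in "ABC")
# Mimics a matmul-style comprehension body: C[m, n] = mul(A[m, k], B[k, n]).
C["m", "n"] = ("mul", A["m", "k"], B["k", "n"])
```

..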
py:attribute:: operand_def .. py:method:: __call__(arg0: TensorExpression, arg1: TensorExpression) -> TensorFn .. py:method:: __getitem__(reduce_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]) -> ReduceFnUse .. py:class:: TernaryFnAttrDef(default: TernaryFnType) Ternary function attribute definition. Ternary function attributes provide a way to make the arithmetic computation parametrizable. Every attribute specifies a default ternary function that may be overwritten at operation instantiation time. .. py:attribute:: operand_def .. py:method:: __call__(arg0: TensorExpression, arg1: TensorExpression) -> TensorFn .. py:method:: __getitem__(reduce_dims: mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef]) -> ReduceFnUse .. py:class:: TypeFnAttrDef(default: TypeFnType) Type conversion function attribute definition. Type conversion function attributes provide a way to make type conversions parameterizable. Every attribute specifies a default type conversion function that may be overwritten at operation instantiation time. .. py:attribute:: operand_def .. py:method:: __call__(type_var: mlir.dialects.linalg.opdsl.lang.types.TypeVar, arg: TensorExpression) -> TensorFn .. py:class:: Comprehension(*bindings: mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[TensorUse, TensorExpression]) Represents a single comprehension. .. py:attribute:: definitions :value: [] .. py:attribute:: values :value: [] .. py:property:: all_reduction_dims :type: mlir.dialects.linalg.opdsl.lang.scalar_expr.Set[mlir.dialects.linalg.opdsl.lang.scalar_expr.Tuple[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef, Ellipsis]] Gets the reduction dims for the comprehension or None. .. py:method:: __repr__() .. py:class:: OpInterfaceDef(cpp_name: str) An interface that an op implements. .. py:attribute:: cpp_name .. py:data:: ContractionOpInterface .. py:data:: ConvolutionOpInterface ..
py:data:: FillOpInterface .. py:class:: OpDefinitionDef(def_name: str) A method that an op implements. .. py:attribute:: def_name .. py:data:: Canonicalizer .. py:class:: OpMetadataDef(name: str, cpp_class_name: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str], doc: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str]) Bases: :py:obj:`mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject` Metadata about the op (generally not behavior impacting). .. py:attribute:: yaml_tag :value: '!LinalgOpMetadata' .. py:attribute:: name .. py:attribute:: cpp_class_name .. py:attribute:: doc .. py:attribute:: implements :type: mlir.dialects.linalg.opdsl.lang.scalar_expr.List[OpInterfaceDef] :value: [] .. py:attribute:: defines :type: mlir.dialects.linalg.opdsl.lang.scalar_expr.List[OpDefinitionDef] :value: [] .. py:method:: to_yaml_custom_dict() .. py:class:: LinalgOpDef(name: str, cpp_class_name: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str] = None, doc: mlir.dialects.linalg.opdsl.lang.scalar_expr.Optional[str] = None) Definition of a linalg op. .. py:attribute:: metadata .. py:attribute:: registered_operands :type: mlir.dialects.linalg.opdsl.lang.types.Dict[str, OperandDef] .. py:attribute:: domain :type: mlir.dialects.linalg.opdsl.lang.scalar_expr.List[mlir.dialects.linalg.opdsl.lang.scalar_expr.DimDef] :value: [] .. py:attribute:: comprehensions :type: mlir.dialects.linalg.opdsl.lang.scalar_expr.List[Comprehension] :value: [] .. py:attribute:: _affine_state .. py:method:: add_operand(name: str, operand: OperandDef) Registers an operand. .. py:method:: __repr__() .. py:class:: AffineBuildState(*, global_state: AffineBuildState = None, allow_new_symbols: bool = True, allow_new_dims: bool = True) Internal state for the AffineExprDef._create impls. Note that a "local" AffineBuildState can be created relative to a "global" AffineBuildState.
In that case, any affine expressions built will inherit symbol and dim bindings from the global state and will update both as new ones are discovered. This allows for building expressions across contexts which share a common symbol and dim space. .. py:attribute:: local_symbols :type: Dict[str, int] .. py:attribute:: local_dims :type: Dict[str, int] .. py:attribute:: allow_new_symbols :value: True .. py:attribute:: allow_new_dims :value: True .. py:method:: get_dim(dimname: str) -> int Gets the dim position given a name. .. py:method:: get_symbol(symname: str) -> int Gets a symbol position given a name. .. py:property:: local_dim_count :type: int .. py:property:: local_symbol_count :type: int .. py:property:: dim_count :type: int .. py:property:: symbol_count :type: int .. py:method:: __repr__() .. py:class:: AffineExprDef Base class for an affine expression being defined. .. py:method:: build(state: Optional[AffineBuildState] = None) -> mlir.ir.AffineExpr Builds the corresponding _ir.AffineExpr from the definitions. .. py:method:: _create(state: AffineBuildState) -> mlir.ir.AffineExpr :abstractmethod: .. py:method:: coerce_from(py_value) :staticmethod: .. py:method:: visit_affine_exprs(callback) Visits all AffineExprDefs including self. .. py:method:: __add__(rhs) .. py:method:: __mul__(rhs) .. py:method:: __mod__(rhs) .. py:method:: __floordiv__(rhs) .. py:method:: __truediv__(rhs) .. py:data:: D .. py:class:: DimDef Bases: :py:obj:`AffineExprDef` Represents a named dimension. .. py:attribute:: ALL_DIMS :type: Dict[str, DimDef] .. py:method:: __repr__() .. py:method:: _create(state: AffineBuildState) -> mlir.ir.AffineExpr .. py:method:: create_expando() :classmethod: Create an expando class that creates unique symbols based on attr access. .. py:data:: S .. py:class:: SymbolDef Bases: :py:obj:`AffineExprDef` Represents a named symbol.

>>> s1 = SymbolDef("s1")
>>> s1
Symbol(s1)
>>> s2 = SymbolDef("s2")
>>> s1 is s2
False
>>> s1 is SymbolDef("s1")
True

..
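The uniquing behaviour shown in the ``SymbolDef`` doctest can be reproduced with a small registry pattern. This is an illustrative sketch (a hypothetical ``Sym`` class), not the actual ``SymbolDef`` implementation:

```python
class Sym:
    """Name-uniqued symbol: constructing the same name twice returns
    the identical object (cf. SymbolDef.ALL_SYMBOLS)."""

    _registry: dict = {}

    def __new__(cls, name: str):
        # Return the cached instance if this name was already built.
        if name not in cls._registry:
            inst = super().__new__(cls)
            inst.name = name
            cls._registry[name] = inst
        return cls._registry[name]

    def __repr__(self):
        return f"Symbol({self.name})"

s1 = Sym("s1")
assert s1 is Sym("s1")       # same name -> same object
assert s1 is not Sym("s2")   # different name -> different object
```

The ``create_expando`` classmethods above layer attribute access on top of the same idea, so that ``S.K`` transparently means "the unique symbol named K".

..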
py:attribute:: ALL_SYMBOLS :type: Dict[str, SymbolDef] .. py:method:: __repr__() .. py:method:: _create(state: AffineBuildState) -> mlir.ir.AffineExpr .. py:method:: create_expando() :classmethod: Create an expando class that creates unique symbols based on attr access. .. py:class:: ScalarAssign(arg: str, value: ScalarExpression) Bases: :py:obj:`mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject` An assignment to a named argument (LHS of a comprehension). .. py:attribute:: yaml_tag :value: '!ScalarAssign' .. py:attribute:: arg .. py:attribute:: value .. py:method:: to_yaml_custom_dict() .. py:method:: __repr__() .. py:class:: ScalarFn(kind: mlir.dialects.linalg.opdsl.lang.comprehension.FunctionKind, fn_name: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[str], attr_name: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[str], type_var: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.dialects.linalg.opdsl.lang.types.TypeVar], operands: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[ScalarExpression]) A type of ScalarExpression that applies a function. .. py:attribute:: kind .. py:attribute:: fn_name .. py:attribute:: attr_name .. py:attribute:: type_var .. py:attribute:: operands .. py:method:: expr() -> ScalarExpression .. py:method:: __repr__() .. py:class:: ScalarArg(arg: str) A type of ScalarExpression that references a named argument. .. py:attribute:: arg .. py:method:: expr() -> ScalarExpression .. py:method:: __repr__() .. py:class:: ScalarConst(value: str) A type of ScalarExpression representing a constant. .. py:attribute:: value .. py:method:: expr() -> ScalarExpression .. py:method:: __repr__() .. py:class:: ScalarIndex(dim: int) A type of ScalarExpression accessing an iteration index. .. py:attribute:: dim .. py:method:: expr() -> ScalarExpression .. py:method:: __repr__() .. 
py:class:: ScalarExpression(scalar_fn: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[ScalarFn] = None, scalar_arg: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[ScalarArg] = None, scalar_const: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[ScalarConst] = None, scalar_index: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[ScalarIndex] = None) Bases: :py:obj:`mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject` An expression on scalar values. Can be one of: * ScalarFn * ScalarArg * ScalarConst * ScalarIndex .. py:attribute:: yaml_tag :value: '!ScalarExpression' .. py:attribute:: scalar_fn :value: None .. py:attribute:: scalar_arg :value: None .. py:attribute:: scalar_const :value: None .. py:attribute:: scalar_index :value: None .. py:method:: to_yaml_custom_dict() .. py:class:: TypeVar A replaceable type variable. Type variables are uniqued by name. .. py:attribute:: ALL_TYPEVARS :type: Dict[str, TypeVar] .. py:method:: __repr__() .. py:method:: create_expando() :classmethod: Create an expando class that creates unique type vars on attr access. .. py:data:: TV .. py:data:: I32 .. py:data:: I64 .. py:data:: F32 .. py:data:: F64 .. py:data:: T .. py:data:: U .. py:data:: V .. py:function:: yaml_dump(data, sort_keys=False, **kwargs) .. py:function:: yaml_dump_all(data, sort_keys=False, explicit_start=True, **kwargs) .. py:class:: YAMLObject Bases: :py:obj:`yaml.YAMLObject` An object that can dump itself to a YAML stream and load itself from a YAML stream. .. py:method:: to_yaml(dumper, self) :classmethod: Default to a custom dictionary mapping. .. py:method:: to_yaml_custom_dict() :abstractmethod: .. py:method:: as_linalg_yaml() .. 
py:class:: LinalgStructuredOpConfig(comprehension: mlir.dialects.linalg.opdsl.lang.comprehension.Comprehension, domain: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[mlir.dialects.linalg.opdsl.lang.comprehension.DimDef], registered_operands: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef], context: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.Context] = None) Bases: :py:obj:`mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject` Configuration for metadata sufficient to construct a linalg named op. .. py:attribute:: yaml_tag :value: '!LinalgStructuredOpConfig' .. py:attribute:: context :value: None .. py:attribute:: affine_state .. py:attribute:: writes :type: mlir.dialects.linalg.opdsl.lang.comprehension.List[mlir.dialects.linalg.opdsl.lang.comprehension.Tuple[mlir.dialects.linalg.opdsl.lang.comprehension.TensorUse, mlir.dialects.linalg.opdsl.lang.comprehension.TensorExpression]] :value: [] .. py:attribute:: operands :type: mlir.dialects.linalg.opdsl.lang.comprehension.Dict[mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef, OperandDefConfig] .. py:attribute:: uses :type: mlir.dialects.linalg.opdsl.lang.comprehension.Dict[mlir.dialects.linalg.opdsl.lang.comprehension.TensorUse, TensorUseConfig] .. py:attribute:: reduction_dims .. py:attribute:: assignments .. py:property:: ordered_operands :type: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[OperandDefConfig] .. py:property:: ordered_dims :type: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[mlir.dialects.linalg.opdsl.lang.comprehension.Tuple[str, int]] Gets the ordered list of dim bindings (symbolic name, position). TODO: The original parser relies on parse ordering to arrive at the iterator types, but that ordering is not defined on the Python side, so this may be ambiguous. .. py:property:: indexing_maps :type: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[mlir.ir.AffineMap] .. 
py:property:: iterator_types :type: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[str] .. py:method:: add_operand(operand_def: mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef) .. py:method:: add_indexed_operand(operand_def: mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef) .. py:method:: add_tensor_use(tensor_use: mlir.dialects.linalg.opdsl.lang.comprehension.TensorUse) .. py:method:: _get_scalar_map() -> mlir.ir.AffineMap Create an empty affine map used to index a scalar. .. py:method:: _normalize_affine_map(affine_map: mlir.ir.AffineMap, with_dims: bool = True) -> mlir.ir.AffineMap Normalizes an indexing map to have the max known symbols and dims. .. py:method:: to_yaml_custom_dict() .. py:method:: __repr__() .. py:class:: LinalgOpConfig(metadata: mlir.dialects.linalg.opdsl.lang.comprehension.OpMetadataDef, *, structured_op: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[LinalgStructuredOpConfig] = None) Bases: :py:obj:`mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject` Container for any supported linalg op type. This includes the concrete type by name for ease of parsing by systems that ignore tags. .. py:attribute:: yaml_tag :value: '!LinalgOpConfig' .. py:attribute:: metadata .. py:attribute:: structured_op :value: None .. py:method:: to_yaml_custom_dict() .. py:method:: from_linalg_op_def(op_def: mlir.dialects.linalg.opdsl.lang.comprehension.LinalgOpDef, context: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.Context] = None) -> mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[LinalgOpConfig] :staticmethod: Expands a LinalgOpDef into corresponding Linalg configured ops. .. py:method:: __repr__() .. 
py:class:: OperandDefConfig(operand_def: mlir.dialects.linalg.opdsl.lang.comprehension.OperandDef, shape_map: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] = None, index_attr_map: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] = None) Bases: :py:obj:`mlir.dialects.linalg.opdsl.lang.yaml_helper.YAMLObject` Wrapper containing an operand definition with additional state. .. py:attribute:: yaml_tag :value: '!LinalgOperandDefConfig' .. py:attribute:: operand_def .. py:attribute:: shape_map :type: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] :value: None .. py:attribute:: index_attr_map :type: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] :value: None .. py:attribute:: indexing_map :type: mlir.dialects.linalg.opdsl.lang.comprehension.Optional[mlir.ir.AffineMap] :value: None .. py:property:: name :type: str .. py:property:: kind :type: mlir.dialects.linalg.opdsl.lang.comprehension.OperandKind .. py:property:: type_var :type: mlir.dialects.linalg.opdsl.lang.comprehension.TypeVar .. py:method:: to_yaml_custom_dict() .. py:method:: __repr__() .. py:function:: emit_generic_structured_op(op_config: mlir.dialects.linalg.opdsl.lang.config.LinalgStructuredOpConfig, *ins: Value, outs: ValueList, **attrs: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[int]) .. py:function:: emit_named_structured_op(op_config: mlir.dialects.linalg.opdsl.lang.config.LinalgStructuredOpConfig, op_name: str, op_class_name: str, *ins: Value, outs: ValueList, **attrs: mlir.dialects.linalg.opdsl.lang.comprehension.Sequence[int]) .. py:data:: ValueList .. py:function:: loc_tracebacks(*, max_depth: int | None = None) -> collections.abc.Iterable[None] Enables automatic traceback-based locations for MLIR operations. Operations created within this context will have their location automatically set based on the Python call stack. 
:param max_depth: Maximum number of frames to include in the location. If None, the default limit is used. .. py:function:: register_attribute_builder(kind, replace=False) .. py:function:: _affineMapAttr(x, context) .. py:function:: _integerSetAttr(x, context) .. py:function:: _boolAttr(x, context) .. py:function:: _dictAttr(x, context) .. py:function:: _indexAttr(x, context) .. py:function:: _i1Attr(x, context) .. py:function:: _i8Attr(x, context) .. py:function:: _i16Attr(x, context) .. py:function:: _i32Attr(x, context) .. py:function:: _i64Attr(x, context) .. py:function:: _si1Attr(x, context) .. py:function:: _si8Attr(x, context) .. py:function:: _si16Attr(x, context) .. py:function:: _si32Attr(x, context) .. py:function:: _si64Attr(x, context) .. py:function:: _ui1Attr(x, context) .. py:function:: _ui8Attr(x, context) .. py:function:: _ui16Attr(x, context) .. py:function:: _ui32Attr(x, context) .. py:function:: _ui64Attr(x, context) .. py:function:: _f32Attr(x, context) .. py:function:: _f64Attr(x, context) .. py:function:: _stringAttr(x, context) .. py:function:: _symbolNameAttr(x, context) .. py:function:: _symbolRefAttr(x, context) .. py:function:: _flatSymbolRefAttr(x, context) .. py:function:: _unitAttr(x, context) .. py:function:: _arrayAttr(x, context) .. py:function:: _affineMapArrayAttr(x, context) .. py:function:: _boolArrayAttr(x, context) .. py:function:: _dictArrayAttr(x, context) .. py:function:: _flatSymbolRefArrayAttr(x, context) .. py:function:: _i32ArrayAttr(x, context) .. py:function:: _i64ArrayAttr(x, context) .. py:function:: _i64SmallVectorArrayAttr(x, context) .. py:function:: _indexListArrayAttr(x, context) .. py:function:: _f32ArrayAttr(x, context) .. py:function:: _f64ArrayAttr(x, context) .. py:function:: _strArrayAttr(x, context) .. py:function:: _symbolRefArrayAttr(x, context) .. py:function:: _denseF32ArrayAttr(x, context) .. py:function:: _denseF64ArrayAttr(x, context) .. py:function:: _denseI8ArrayAttr(x, context) .. 
py:function:: _denseI16ArrayAttr(x, context) .. py:function:: _denseI32ArrayAttr(x, context) .. py:function:: _denseI64ArrayAttr(x, context) .. py:function:: _denseBoolArrayAttr(x, context) .. py:function:: _typeAttr(x, context) .. py:function:: _typeArrayAttr(x, context) .. py:function:: _memref_type_attr(x, context) .. py:function:: _f64ElementsAttr(x, context) .. py:function:: _get_op_result_or_value(arg: Union[mlir._mlir_libs._mlir.ir.OpView, mlir._mlir_libs._mlir.ir.Operation, mlir._mlir_libs._mlir.ir.Value, mlir._mlir_libs._mlir.ir.OpResultList]) -> mlir._mlir_libs._mlir.ir.Value Returns the given value or the single result of the given op. This is useful to implement op constructors so that they can take other ops as arguments instead of requiring the caller to extract results for every op. Raises ValueError if provided with an op that doesn't have a single result. .. py:function:: _get_op_result_or_op_results(op: Union[mlir._mlir_libs._mlir.ir.OpView, mlir._mlir_libs._mlir.ir.Operation]) -> Union[mlir._mlir_libs._mlir.ir.Operation, mlir._mlir_libs._mlir.ir.OpResult, Sequence[mlir._mlir_libs._mlir.ir.OpResult]] .. py:function:: _dispatch_mixed_values(values: MixedValues) -> Tuple[List[mlir.ir.Value], Union[mlir.ir.Operation, mlir.ir.Value, mlir.ir.OpView], mlir.ir.DenseI64ArrayAttr] .. py:function:: region_op(op_constructor, terminator=None) Decorator to define an MLIR Op specified as a python function. Requires that an ``mlir.ir.InsertionPoint`` and ``mlir.ir.Location`` are active for the current thread (i.e. established in a ``with`` block). Supports "naked" usage i.e., no parens if no args need to be passed to the Op constructor. When applied as a decorator to a Python function, an entry block will be constructed for the Op with types as specified **as type hints on the args of the function**. The block arguments will be passed positionally to the Python function. 
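The entry-block construction from type hints can be illustrated with a stripped-down decorator. The names here (``entry_block_from_hints``, ``block_arg_types``) are hypothetical; the real ``region_op`` builds MLIR blocks and requires an active ``InsertionPoint``, whereas this sketch only records the annotations:

```python
import inspect

def entry_block_from_hints(func):
    """Read the decorated function's annotations to decide the entry
    block's argument 'types'; block args are then passed positionally."""
    params = inspect.signature(func).parameters
    func.block_arg_types = [p.annotation for p in params.values()]
    return func

@entry_block_from_hints
def body(lhs: "f32", rhs: "f32"):
    # Stands in for the op's region body; lhs/rhs play the block arguments.
    return lhs + rhs
```
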
If a terminator is specified then the return from the decorated function will be passed to the terminator as the last statement in the entry block. Note, the API for the terminator is a (possibly empty) list; terminator accepting single values should be wrapped in a ``lambda args: term(args[0])`` The identifier (name) of the function will become: #. A single value result if the Op returns a single value; #. An OpResultList (as a list) if the Op returns multiple values; #. The Operation if the Op returns no results. See examples in tensor.py and transform.extras. .. py:function:: transpose(input: opdsl.ops.core_named_ops.Union[Operation, OpView, opdsl.ops.core_named_ops.Sequence[Value]], *, outs: opdsl.ops.core_named_ops.List[opdsl.ops.core_named_ops.Union[Operation, OpView, opdsl.ops.core_named_ops.Sequence[Value]]], permutation: opdsl.ops.core_named_ops.Union[DenseI64ArrayAttr, opdsl.ops.core_named_ops.List[int]]) .. py:function:: broadcast(input: opdsl.ops.core_named_ops.Union[Operation, OpView, opdsl.ops.core_named_ops.Sequence[Value]], *, outs: opdsl.ops.core_named_ops.List[opdsl.ops.core_named_ops.Union[Operation, OpView, opdsl.ops.core_named_ops.Sequence[Value]]], dimensions: opdsl.ops.core_named_ops.Union[DenseI64ArrayAttr, opdsl.ops.core_named_ops.List[int]]) .. py:function:: _IteratorTypeArrayAttr(x, context) .. py:class:: GenericOp_(inputs, outputs, indexing_maps, iterator_types, *, doc=None, library_call=None, loc=None, ip=None) Bases: :py:obj:`mlir.dialects._linalg_ops_gen.GenericOp` Generic Linalg op form where the key properties of the computation are specified as attributes. In pretty form, a ``linalg.generic`` op is written as: .. 
code:: mlir

   linalg.generic #trait_attribute
       ins(%A, %B : memref<?x?xf32, stride_specification>,
                    memref<?x?xf32, stride_specification>)
       outs(%C : memref<?x?xf32, stride_specification>)
       attrs = {other-optional-attributes}
       {region}

Where #trait_attribute is an alias of a dictionary attribute containing:

* doc [optional]: a documentation string
* indexing_maps: a list of AffineMapAttr, one AffineMapAttr per each input and output view. Such an AffineMapAttr specifies the mapping between the loops and the indexing within each view.
* library_call [optional]: a StringAttr containing the name of an external library function that the linalg.generic operation maps to. The external library is assumed to be dynamically linked and no strong compile-time guarantees are provided. In the absence of such a library call, linalg.generic will always lower to loops.
* iterator_types: an ArrayAttr specifying the type of the enclosing loops. Each element of the list represents an iterator of one of the following types: parallel, reduction, window.

Example: Defining a #matmul_trait attribute in MLIR can be done as follows:

.. code:: mlir

   #matmul_accesses = [
     (m, n, k) -> (m, k),
     (m, n, k) -> (k, n),
     (m, n, k) -> (m, n)
   ]
   #matmul_trait = {
     doc = "C(m, n) += A(m, k) * B(k, n)",
     indexing_maps = #matmul_accesses,
     library_call = "linalg_matmul",
     iterator_types = ["parallel", "parallel", "reduction"]
   }

And can be reused in multiple places as:

.. code:: mlir

   linalg.generic #matmul_trait
       ins(%A, %B : memref<?x?xf32, stride_specification>,
                    memref<?x?xf32, stride_specification>)
       outs(%C : memref<?x?xf32, stride_specification>)
       {other-optional-attributes} {
     ^bb0(%a: f32, %b: f32, %c: f32) :
       %d = arith.mulf %a, %b: f32
       %e = arith.addf %c, %d: f32
       linalg.yield %e : f32
   }

This may lower to either:

.. code:: mlir

   call @linalg_matmul(%A, %B, %C) :
     (memref<?x?xf32, stride_specification>,
      memref<?x?xf32, stride_specification>,
      memref<?x?xf32, stride_specification>)
     -> ()

or IR resembling:

..
code:: mlir

   scf.for %m = %c0 to %M step %c1 {
     scf.for %n = %c0 to %N step %c1 {
       scf.for %k = %c0 to %K step %c1 {
         %a = load %A[%m, %k] : memref<?x?xf32, stride_specification>
         %b = load %B[%k, %n] : memref<?x?xf32, stride_specification>
         %c = load %C[%m, %n] : memref<?x?xf32, stride_specification>
         %d = arith.mulf %a, %b: f32
         %e = arith.addf %c, %d: f32
         store %e, %C[%m, %n] : memref<?x?xf32, stride_specification>
       }
     }
   }

.. py:data:: generic .. py:function:: _create_matmul_like_op(op_type, *ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None, cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None) .. py:function:: matmul(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None, cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None) .. py:function:: batch_matmul(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None, cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None) ..
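The ``scf.for`` lowering shown above is the familiar matmul triple loop. A plain-Python analogue of the same accumulation (illustration only, not using the bindings):

```python
def matmul_loops(A, B, C):
    """C(m, n) += A(m, k) * B(k, n), mirroring the scf.for nest."""
    M, K, N = len(A), len(B), len(B[0])
    for m in range(M):          # parallel iterator
        for n in range(N):      # parallel iterator
            for k in range(K):  # reduction iterator
                C[m][n] += A[m][k] * B[k][n]
    return C
```

The loop order matches the iterator_types of the #matmul_trait: the two parallel dims index the output, while the reduction dim k only feeds the accumulation.

..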
py:function:: batch_reduce_matmul(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None, cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None) .. py:function:: contract(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], indexing_maps: opdsl.ops.core_named_ops.Sequence[AffineMapAttr], cast: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Union[opdsl.ops.core_named_ops.TypeFn, Attribute]] = None) .. py:class:: ElementwiseOp_(result_tensors, inputs, outputs, kind, *, indexing_maps=None, loc=None, ip=None) Bases: :py:obj:`mlir.dialects._linalg_ops_gen.ElementwiseOp` The attribute ``kind`` describes the arithmetic operation to perform. The operation kind can be unary (e.g. max), binary (e.g. add) or ternary (e.g. select). By default, all indexing maps are identities. In the case of default indexing maps, all input and output shapes must match. The number of dims in each of the identity maps is equal to the rank of the output type. Affine maps for operands and result are required to be provided by the user when a transpose and/or broadcast is needed on any operand. When a map is not provided, default identity maps are inferred for each operand. Iterator types are always all ``parallel`` and are needed for constructing the underlying structured op. The number of dims of the iterator types is inferred from the rank of the result type. Example: Defining a unary linalg.elementwise with default indexing-map: ..
code:: mlir

   %exp = linalg.elementwise
       kind=#linalg.elementwise_kind<exp>
       ins(%x : tensor<4x16x8xf32>)
       outs(%y: tensor<4x16x8xf32>) -> tensor<4x16x8xf32>

Defining a binary linalg.elementwise with user-defined indexing-map:

.. code:: mlir

   %add = linalg.elementwise
       kind=#linalg.elementwise_kind<add>
       indexing_maps = [#transpose, #broadcast, #identity]
       ins(%exp, %arg1 : tensor<4x16x8xf32>, tensor<4x16xf32>)
       outs(%arg2: tensor<4x8x16xf32>) -> tensor<4x8x16xf32>

.. py:data:: ElementwiseOp .. py:function:: elementwise(*ins: opdsl.ops.core_named_ops.Union[Operation, OpView, Value], outs: opdsl.ops.core_named_ops.Sequence[opdsl.ops.core_named_ops.Union[Operation, OpView, Value]], kind: opdsl.ops.core_named_ops.Union[mlir.dialects._linalg_enum_gen.ElementwiseKind, Attribute], indexing_maps: opdsl.ops.core_named_ops.Optional[opdsl.ops.core_named_ops.Sequence[AffineMapAttr]] = None) .. py:function:: pack(source, dest, inner_dims_pos, inner_tiles, *, padding_value=None, outer_dims_perm=None, loc=None, ip=None) -> opdsl.ops.core_named_ops.ir.Value .. py:function:: unpack(source, dest, inner_dims_pos, inner_tiles, *, outer_dims_perm=None, loc=None, ip=None) -> opdsl.ops.core_named_ops.ir.Value .. py:data:: reduce .. py:data:: map
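With default identity maps, every operand must have the same shape and the ``kind`` function is applied pointwise. A plain-Python sketch of that contract (a hypothetical ``elementwise_identity`` helper, not the ``elementwise`` builder above):

```python
def elementwise_identity(kind, *inputs):
    """Apply a unary/binary/ternary function pointwise; identity
    indexing maps mean every operand must have the same shape."""
    lengths = {len(x) for x in inputs}
    assert len(lengths) == 1, "identity maps require matching shapes"
    # All iterators are 'parallel': each output element depends only
    # on the corresponding input elements.
    return [kind(*vals) for vals in zip(*inputs)]

added = elementwise_identity(lambda a, b: a + b, [1, 2, 3], [10, 20, 30])
```

A transpose or broadcast on an operand corresponds to supplying a non-identity indexing map instead, as in the second MLIR example above.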