mlir.dialects._nvvm_ops_gen¶
Attributes¶
Classes¶
Functions¶
Module Contents¶
- mlir.dialects._nvvm_ops_gen._ods_ir¶
- class mlir.dialects._nvvm_ops_gen._Dialect(descriptor: object)¶
Bases: _ods_ir
- DIALECT_NAMESPACE = 'nvvm'¶
- class mlir.dialects._nvvm_ops_gen.Barrier0Op(*, loc=None, ip=None)¶
Bases: _ods_ir
The nvvm.barrier0 operation is a convenience operation that performs barrier synchronization and communication within a CTA (Cooperative Thread Array) using barrier ID 0. It is functionally equivalent to nvvm.barrier or nvvm.barrier id = 0.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.barrier0'¶
- _ODS_REGIONS = (0, True)¶
- mlir.dialects._nvvm_ops_gen.barrier0(*, loc=None, ip=None) Barrier0Op¶
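Example (illustrative IR sketch; the textual form is assumed from the op description):
nvvm.barrier0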
- class mlir.dialects._nvvm_ops_gen.BarrierArriveOp(numberOfThreads, *, barrierId=None, loc=None, ip=None)¶
Bases: _ods_ir
Threads that execute this op announce their arrival at the barrier with the given id and continue their execution.
When barrierId is not present, the default barrier id 0 is used, making the op behave similarly to the nvvm.barrier op.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.barrier.arrive'¶
- _ODS_REGIONS = (0, True)¶
- barrierId() _ods_ir | None¶
- numberOfThreads() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.barrier_arrive(number_of_threads, *, barrier_id=None, loc=None, ip=None) BarrierArriveOp¶
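Example (illustrative IR sketch; the id = and number_of_threads = spellings are assumed from the operand names above):
// Announce arrival at the default barrier (id 0) without waiting for other threads.
nvvm.barrier.arrive number_of_threads = %n : i32
// Announce arrival at barrier 3.
nvvm.barrier.arrive id = %id number_of_threads = %n : i32, i32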
- class mlir.dialects._nvvm_ops_gen.BarrierOp(res, *, barrierId=None, numberOfThreads=None, reductionOp=None, reductionPredicate=None, loc=None, ip=None)¶
Bases: _ods_ir
The nvvm.barrier operation performs barrier synchronization and communication within a CTA (Cooperative Thread Array). It causes executing threads to wait for all non-exited threads participating in the barrier to arrive.
The operation takes four optional operands:
- barrierId: Specifies a logical barrier resource with value 0 through 15. Each CTA instance has sixteen barriers numbered 0..15. Defaults to 0 if not specified.
- numberOfThreads: Specifies the number of threads participating in the barrier. When specified, the value must be a multiple of the warp size. If not specified, all threads in the CTA participate in the barrier.
- reductionOp: Specifies the reduction operation (popc, and, or).
- reductionPredicate: Specifies the predicate to be used with the reductionOp.
The barrier operation guarantees that when the barrier completes, prior memory accesses requested by participating threads are performed relative to all threads participating in the barrier. It also ensures that no new memory access is requested by participating threads before the barrier completes.
When a barrier completes, the waiting threads are restarted without delay, and the barrier is reinitialized so that it can be immediately reused.
This operation generates an aligned barrier, indicating that all threads in the CTA will execute the same barrier instruction. Behavior is undefined if all threads in the CTA do not reach this instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.barrier'¶
- _ODS_OPERAND_SEGMENTS = [0, 0, 0]¶
- _ODS_REGIONS = (0, True)¶
- barrierId() _ods_ir | None¶
- numberOfThreads() _ods_ir | None¶
- reductionPredicate() _ods_ir | None¶
- reductionOp() _ods_ir | None¶
- res() _ods_ir | None¶
- mlir.dialects._nvvm_ops_gen.barrier(res, *, barrier_id=None, number_of_threads=None, reduction_op=None, reduction_predicate=None, loc=None, ip=None) _ods_ir | _ods_ir | BarrierOp¶
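Example (illustrative IR sketch; the optional-operand spellings are assumed from the operand names above):
// CTA-wide barrier: all threads, barrier id 0.
nvvm.barrier
// Barrier 1 with 256 participating threads (must be a multiple of the warp size).
nvvm.barrier id = %id number_of_threads = %n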
- class mlir.dialects._nvvm_ops_gen.BlockDimXOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.ntid.x'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ntid_x(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.BlockDimYOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.ntid.y'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ntid_y(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.BlockDimZOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.ntid.z'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ntid_z(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.BlockIdXOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.ctaid.x'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ctaid_x(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.BlockIdYOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.ctaid.y'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ctaid_y(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.BlockIdZOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.ctaid.z'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ctaid_z(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.BlockInClusterIdXOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.ctaid.x'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_ctaid_x(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.BlockInClusterIdYOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.ctaid.y'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_ctaid_y(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.BlockInClusterIdZOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.ctaid.z'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_ctaid_z(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.Breakpoint(*, loc=None, ip=None)¶
Bases: _ods_ir
Breakpoint suspends execution of the program for debugging.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.breakpoint'¶
- _ODS_REGIONS = (0, True)¶
- mlir.dialects._nvvm_ops_gen.breakpoint(*, loc=None, ip=None) Breakpoint¶
- class mlir.dialects._nvvm_ops_gen.BulkStoreOp(addr, size, *, initVal=None, loc=None, ip=None)¶
Bases: _ods_ir
Initializes a region of shared memory at the address given by addr. The size operand specifies the number of bytes to initialize and must be a multiple of 8. The initVal operand specifies the value to initialize the memory to. The only supported value is 0.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.st.bulk'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- size() _ods_ir¶
- initVal() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.st_bulk(addr, size, *, init_val=None, loc=None, ip=None) BulkStoreOp¶
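Example (illustrative IR sketch; the size = spelling is assumed from the operand name above):
// Zero-initialize %size bytes (a multiple of 8) of shared memory starting at %addr.
nvvm.st.bulk %addr, size = %size : !llvm.ptr<3>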
- class mlir.dialects._nvvm_ops_gen.Clock64Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.clock64'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clock64(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClockOp(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.clock'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clock(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterArriveOp(*, aligned=None, loc=None, ip=None)¶
Bases: _ods_ir
The cluster.arrive op can be used by the threads within the cluster for synchronization and communication. The cluster.arrive instruction marks the warps’ arrival at the barrier without causing the executing thread to wait for other participating threads.
The aligned attribute, when provided, generates the .aligned version of the PTX instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.cluster.arrive'¶
- _ODS_REGIONS = (0, True)¶
- aligned() bool¶
- mlir.dialects._nvvm_ops_gen.cluster_arrive(*, aligned=None, loc=None, ip=None) ClusterArriveOp¶
- class mlir.dialects._nvvm_ops_gen.ClusterArriveRelaxedOp(*, aligned=None, loc=None, ip=None)¶
Bases: _ods_ir
The cluster.arrive op can be used by the threads within the cluster for synchronization and communication. The cluster.arrive instruction marks the warps’ arrival at the barrier without causing the executing thread to wait for other participating threads.
The aligned attribute, when provided, generates the .aligned version of the PTX instruction. The .relaxed qualifier on cluster.arrive specifies that there are no memory ordering and visibility guarantees provided for the memory accesses performed prior to cluster.arrive.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.cluster.arrive.relaxed'¶
- _ODS_REGIONS = (0, True)¶
- aligned() bool¶
- mlir.dialects._nvvm_ops_gen.cluster_arrive_relaxed(*, aligned=None, loc=None, ip=None) ClusterArriveRelaxedOp¶
- class mlir.dialects._nvvm_ops_gen.ClusterDim(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.nctarank'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_nctarank(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterDimBlocksXOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.nctaid.x'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_nctaid_x(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterDimBlocksYOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.nctaid.y'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_nctaid_y(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterDimBlocksZOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.nctaid.z'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_nctaid_z(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterDimXOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.nclusterid.x'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nclusterid_x(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterDimYOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.nclusterid.y'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nclusterid_y(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterDimZOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.nclusterid.z'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nclusterid_z(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterId(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.ctarank'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_ctarank(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterIdXOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.clusterid.x'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clusterid_x(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterIdYOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.clusterid.y'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clusterid_y(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterIdZOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.clusterid.z'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clusterid_z(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterLaunchControlQueryCancelOp(res, query_type, try_cancel_response, *, loc=None, ip=None)¶
Bases: _ods_ir
clusterlaunchcontrol.query.cancel queries the response of a clusterlaunchcontrol.try.cancel operation specified by operand try_cancel_response.
Operand query_type specifies the type of query to perform and can be one of the following:
- is_canceled: Returns true if the try cancel request succeeded, and false otherwise.
- get_first_cta_id_{x/y/z}: Returns the x, y, or z coordinate of the first CTA in the canceled cluster. Behaviour is defined only if the try cancel request succeeded.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.clusterlaunchcontrol.query.cancel'¶
- _ODS_REGIONS = (0, True)¶
- try_cancel_response() _ods_ir¶
- query_type() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.clusterlaunchcontrol_query_cancel(res, query_type, try_cancel_response, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ClusterLaunchControlTryCancelOp(smemAddress, mbarrier, *, multicast=None, loc=None, ip=None)¶
Bases: _ods_ir
clusterlaunchcontrol.try.cancel requests atomically canceling the launch of a cluster that has not started running yet. It asynchronously writes an opaque response to shared memory indicating whether the operation succeeded or failed.
Operand smemAddress specifies the naturally aligned address of the 16-byte wide shared memory location where the request’s response is written.
Operand mbarrier specifies the mbarrier object used to track the completion of the asynchronous operation.
If multicast is specified, the response is asynchronously written to the corresponding local shared memory location (specified by smemAddress) of each CTA in the requesting cluster.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.clusterlaunchcontrol.try.cancel'¶
- _ODS_REGIONS = (0, True)¶
- smemAddress() _ods_ir¶
- mbarrier() _ods_ir¶
- multicast() bool¶
- mlir.dialects._nvvm_ops_gen.clusterlaunchcontrol_try_cancel(smem_address, mbarrier, *, multicast=None, loc=None, ip=None) ClusterLaunchControlTryCancelOp¶
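Example (illustrative IR sketch showing how the two clusterlaunchcontrol ops are intended to pair; the textual forms are assumed):
// Request cancellation of a not-yet-started cluster; completion is tracked via %mbar.
nvvm.clusterlaunchcontrol.try.cancel %smem, %mbar
// Once the mbarrier phase completes, query whether the request succeeded.
%ok = nvvm.clusterlaunchcontrol.query.cancel is_canceled, %response : i1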
- class mlir.dialects._nvvm_ops_gen.ClusterWaitOp(*, aligned=None, loc=None, ip=None)¶
Bases: _ods_ir
The cluster.wait op causes the executing thread to wait for all non-exited threads of the cluster to perform cluster.arrive. The aligned attribute, when provided, generates the .aligned version of the PTX instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.cluster.wait'¶
- _ODS_REGIONS = (0, True)¶
- aligned() bool¶
- mlir.dialects._nvvm_ops_gen.cluster_wait(*, aligned=None, loc=None, ip=None) ClusterWaitOp¶
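Example (illustrative IR sketch of the split arrive/wait pattern described above; textual forms assumed):
// Mark arrival without blocking, overlap independent work, then wait.
nvvm.cluster.arrive aligned
// ... work that does not depend on other CTAs in the cluster ...
nvvm.cluster.wait aligned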
- class mlir.dialects._nvvm_ops_gen.ConvertBF16x2ToF8x2Op(dst, a, dstTy, *, rnd=None, sat=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts the given bf16 inputs in a bf16x2 vector to the specified f8 type. The result dst is represented as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values from a are packed such that the value converted from the first element of a is stored in the upper 8 bits of dst and the value converted from the second element of a is stored in the lower 8 bits of dst. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The rnd and sat attributes specify the rounding and saturation modes respectively.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.bf16x2.to.f8x2'¶
- _ODS_REGIONS = (0, True)¶
- a() _ods_ir¶
- rnd() _ods_ir¶
- sat() _ods_ir¶
- dstTy() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_bf16x2_to_f8x2(dst, a, dst_ty, *, rnd=None, sat=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF4x2ToF16x2Op(dst, src, srcType, *, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts the given f4 inputs in a packed i8 to f16. The result dst is represented as a vector of f16 elements. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f4x2.to.f16x2'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- relu() _ods_ir¶
- srcType() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f4x2_to_f16x2(dst, src, src_type, *, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF6x2ToF16x2Op(dst, src, srcType, *, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts the given f6 inputs in an i8x2 vector to f16. The result dst is represented as a vector of f16 elements. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f6x2.to.f16x2'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- relu() _ods_ir¶
- srcType() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f6x2_to_f16x2(dst, src, src_type, *, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF8x2ToBF16x2Op(dst, src, srcType, *, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts the given f8 inputs in an i8x2 vector to bf16. The result dst is represented as a vector of bf16 elements.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f8x2.to.bf16x2'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- srcType() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f8x2_to_bf16x2(dst, src, src_type, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF8x2ToF16x2Op(dst, src, srcType, *, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts the given f8 inputs in an i8x2 vector to f16. The result dst is represented as a vector of f16 elements. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f8x2.to.f16x2'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- relu() _ods_ir¶
- srcType() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f8x2_to_f16x2(dst, src, src_type, *, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF16x2ToF8x2Op(dst, a, dstTy, *, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts the given f16 inputs in an f16x2 vector to the specified f8 type. The result dst is represented as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values from a are packed such that the value converted from the first element of a is stored in the upper 8 bits of dst and the value converted from the second element of a is stored in the lower 8 bits of dst. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f16x2.to.f8x2'¶
- _ODS_REGIONS = (0, True)¶
- a() _ods_ir¶
- relu() _ods_ir¶
- dstTy() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f16x2_to_f8x2(dst, a, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToBF16x2Op(dst, src_hi, src_lo, rbits, *, rnd=None, sat=None, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
Converts two F32 values to packed bf16x2 format using stochastic rounding (.rs) mode with randomness provided by the rbits parameter. The relu attribute clamps negative results to 0. The sat attribute determines saturation behavior. The src_hi and src_lo parameters correspond to operands a and b in the PTX ISA, respectively.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f32x2.to.bf16x2'¶
- _ODS_REGIONS = (0, True)¶
- src_hi() _ods_ir¶
- src_lo() _ods_ir¶
- rbits() _ods_ir¶
- rnd() _ods_ir¶
- sat() _ods_ir¶
- relu() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f32x2_to_bf16x2(dst, src_hi, src_lo, rbits, *, rnd=None, sat=None, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToF4x2Op(dst, a, b, dstTy, *, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts each of the given float inputs to the specified fp4 type. The result dst is returned as an i8 type where the converted values are packed such that the value converted from a is stored in the upper 4 bits of dst and the value converted from b is stored in the lower 4 bits of dst. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f32x2.to.f4x2'¶
- _ODS_REGIONS = (0, True)¶
- a() _ods_ir¶
- b() _ods_ir¶
- relu() _ods_ir¶
- dstTy() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f32x2_to_f4x2(dst, a, b, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToF6x2Op(dst, a, b, dstTy, *, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts each of the given float inputs to the specified fp6 type. The result dst is represented either as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values are packed such that the value converted from a is stored in the upper 8 bits of dst with the 2 MSB bits padded with zeros, and the value converted from b is stored in the lower 8 bits of dst with the 2 MSB bits padded with zeros. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f32x2.to.f6x2'¶
- _ODS_REGIONS = (0, True)¶
- a() _ods_ir¶
- b() _ods_ir¶
- relu() _ods_ir¶
- dstTy() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f32x2_to_f6x2(dst, a, b, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToF8x2Op(dst, a, b, dstTy, *, rnd=None, sat=None, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts each of the given float inputs to the specified fp8 type. The result dst is represented as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values are packed such that the value converted from a is stored in the upper 8 bits of dst and the value converted from b is stored in the lower 8 bits of dst. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The rnd and sat attributes specify the rounding and saturation modes respectively. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f32x2.to.f8x2'¶
- _ODS_REGIONS = (0, True)¶
- a() _ods_ir¶
- b() _ods_ir¶
- rnd() _ods_ir¶
- sat() _ods_ir¶
- relu() _ods_ir¶
- dstTy() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f32x2_to_f8x2(dst, a, b, dst_ty, *, rnd=None, sat=None, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToF16x2Op(dst, src_hi, src_lo, rbits, *, rnd=None, sat=None, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
Converts two F32 values to packed f16x2 format using stochastic rounding (.rs) mode with randomness provided by the rbits parameter. The relu attribute clamps negative results to 0. The sat attribute determines saturation behavior. The src_hi and src_lo parameters correspond to operands a and b in the PTX ISA, respectively.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f32x2.to.f16x2'¶
- _ODS_REGIONS = (0, True)¶
- src_hi() _ods_ir¶
- src_lo() _ods_ir¶
- rbits() _ods_ir¶
- rnd() _ods_ir¶
- sat() _ods_ir¶
- relu() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f32x2_to_f16x2(dst, src_hi, src_lo, rbits, *, rnd=None, sat=None, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF32x4ToF4x4Op(dst, src, rbits, dstTy, *, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
Converts a vector<4xf32> to packed f4x4 format using stochastic rounding (.rs) mode with SATFINITE saturation. Randomness is provided by the rbits parameter. The dstTy attribute specifies the target floating-point format. The relu attribute clamps negative results to 0.
Note: These operations always use RS rounding mode and SATFINITE saturation mode.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f32x4.to.f4x4'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- rbits() _ods_ir¶
- relu() _ods_ir¶
- dstTy() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f32x4_to_f4x4(dst, src, rbits, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF32x4ToF6x4Op(dst, src, rbits, dstTy, *, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
Converts a vector<4xf32> to packed f6x4 format using stochastic rounding (.rs) mode with SATFINITE saturation. Randomness is provided by the rbits parameter. The dstTy attribute specifies the target floating-point format. The relu attribute clamps negative results to 0.
Note: These operations always use RS rounding mode and SATFINITE saturation mode.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f32x4.to.f6x4'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- rbits() _ods_ir¶
- relu() _ods_ir¶
- dstTy() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f32x4_to_f6x4(dst, src, rbits, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertF32x4ToF8x4Op(dst, src, rbits, dstTy, *, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
Converts a vector<4xf32> to packed f8x4 format using stochastic rounding (.rs) mode with SATFINITE saturation. Randomness is provided by the rbits parameter. The dstTy attribute specifies the target floating-point format. The relu attribute clamps negative results to 0.
Note: These operations always use RS rounding mode and SATFINITE saturation mode.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.f32x4.to.f8x4'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- rbits() _ods_ir¶
- relu() _ods_ir¶
- dstTy() _ods_ir¶
- dst() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_f32x4_to_f8x4(dst, src, rbits, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ConvertFloatToTF32Op(res, src, *, rnd=None, sat=None, relu=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op converts the given f32 input to tf32. The result res is represented as an i32 type. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction. The rnd and sat attributes specify the rounding and saturation modes respectively.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.convert.float.to.tf32'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- rnd() _ods_ir¶
- sat() _ods_ir¶
- relu() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.convert_float_to_tf32(res, src, *, rnd=None, sat=None, relu=None, loc=None, ip=None) _ods_ir¶
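Example (illustrative IR sketch; the attribute spelling is assumed):
// Convert an f32 value to tf32; the result bit pattern is returned as i32.
%r = nvvm.convert.float.to.tf32 %src {rnd = #nvvm.fp_rnd_mode<rn>} : i32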
- class mlir.dialects._nvvm_ops_gen.CpAsyncBulkCommitGroupOp(*, loc=None, ip=None)¶
Bases: _ods_ir
This Op commits all prior initiated but uncommitted cp.async.bulk instructions into a cp.async.bulk-group.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.cp.async.bulk.commit.group'¶
- _ODS_REGIONS = (0, True)¶
- mlir.dialects._nvvm_ops_gen.cp_async_bulk_commit_group(*, loc=None, ip=None) CpAsyncBulkCommitGroupOp¶
Bases: _ods_ir
Initiates an asynchronous copy operation from global memory to the cluster's shared memory.
The `multicastMask` operand is optional. When it is present, the Op copies data from global memory to the shared memory of multiple CTAs in the cluster. Operand `multicastMask` specifies the destination CTAs in the cluster such that each bit position in the 16-bit `multicastMask` operand corresponds to the `nvvm.read.ptx.sreg.ctaid` of the destination CTA.
The `l2CacheHint` operand is optional, and it is used to specify the cache eviction policy that may be used during the memory access.
For more information, see PTX ISA
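The bit-per-CTA encoding of `multicastMask` can be sketched in Python (a hypothetical `multicast_mask` helper, not part of the dialect), where each selected CTA id sets the corresponding bit of the 16-bit mask:

```python
def multicast_mask(cta_ids) -> int:
    """Sketch of the multicastMask encoding: bit i of the 16-bit mask selects
    the CTA whose ctaid is i as a copy destination."""
    mask = 0
    for cta in cta_ids:
        assert 0 <= cta < 16, "mask is 16 bits wide, one bit per CTA in the cluster"
        mask |= 1 << cta
    return mask

# select CTAs 0, 2 and 3 as destinations
assert multicast_mask([0, 2, 3]) == 0b1101
```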
- class mlir.dialects._nvvm_ops_gen.CpAsyncBulkPrefetchOp(srcMem, size, *, l2CacheHint=None, loc=None, ip=None)¶
Bases: _ods_ir
Initiates an asynchronous prefetch of data from the location specified by `srcMem` to the L2 cache.
The `l2CacheHint` operand is optional, and it is used to specify the cache eviction policy that may be used during the memory access.
Example:
```mlir
nvvm.cp.async.bulk.prefetch %src, %size : !llvm.ptr<1>

// with l2_cache_hint
nvvm.cp.async.bulk.prefetch %src, %size l2_cache_hint = %ch : !llvm.ptr<1>
```
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.cp.async.bulk.prefetch'¶
- _ODS_REGIONS = (0, True)¶
- srcMem() _ods_ir¶
- size() _ods_ir¶
- l2CacheHint() _ods_ir | None¶
- mlir.dialects._nvvm_ops_gen.cp_async_bulk_prefetch(src_mem, size, *, l2_cache_hint=None, loc=None, ip=None) CpAsyncBulkPrefetchOp¶
Bases: _ods_ir
Initiates an asynchronous copy operation from Shared CTA memory to global memory. The 32-bit operand `size` specifies the amount of memory to be copied, in terms of number of bytes. `size` must be a multiple of 16. The `l2CacheHint` operand is optional, and it is used to specify the cache eviction policy that may be used during the memory access. The `byteMask` operand is optional. The i-th bit in the 16-bit wide `byteMask` specifies whether the i-th byte of each 16-byte wide chunk of source data is copied to the destination. If the bit is set, the byte is copied.
Example:
```mlir
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size : !llvm.ptr<1>, !llvm.ptr<3>

// with l2_cache_hint
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size l2_cache_hint = %ch : !llvm.ptr<1>, !llvm.ptr<3>

// with byte_mask
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size byte_mask = %mask : !llvm.ptr<1>, !llvm.ptr<3>

// with both l2_cache_hint and byte_mask
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size l2_cache_hint = %ch byte_mask = %mask : !llvm.ptr<1>, !llvm.ptr<3>
```
For more information, see PTX ISA
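The `byteMask` semantics described above can be modeled per 16-byte chunk as follows (an illustrative sketch, not the hardware implementation): bytes whose mask bit is clear are simply not written, so the destination retains its prior contents there.

```python
def apply_byte_mask(dst: bytearray, src: bytes, byte_mask: int) -> None:
    """Reference sketch of byteMask for one 16-byte chunk: byte i of src is
    copied to dst only when bit i of the 16-bit mask is set."""
    assert len(dst) == len(src) == 16
    for i in range(16):
        if (byte_mask >> i) & 1:
            dst[i] = src[i]

dst = bytearray(16)                       # destination starts as all zeros
apply_byte_mask(dst, bytes(range(16)), 0b0101)
assert bytes(dst[:4]) == bytes([0, 0, 2, 0])  # only bytes 0 and 2 were copied
```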
Bases: _ods_ir
Initiates an asynchronous copy operation from Shared CTA memory to Shared cluster memory.
For more information, see PTX ISA
Bases: _ods_ir
Initiates an asynchronous copy operation on the tensor data from global memory to shared::cluster (or) shared::cta memory. This Op supports all the load modes specified in `TMALoadMode`.
The `multicastMask` operand is optional. When it is present, the Op copies data from global memory to the shared memory of multiple CTAs in the cluster. Operand `multicastMask` specifies the destination CTAs in the cluster such that each bit position in the 16-bit `multicastMask` operand corresponds to the `nvvm.read.ptx.sreg.ctaid` of the destination CTA.
The `l2CacheHint` operand is optional, and it is used to specify the cache eviction policy that may be used during the memory access.
When the `isCTAOnly` attribute is set to true, the destination is shared::cta only. Hence, `multicastMask` and `CTAGroup` are not applicable when `isCTAOnly` is true.
For more information, see PTX ISA
- class mlir.dialects._nvvm_ops_gen.CpAsyncBulkTensorPrefetchOp(tmaDescriptor, coordinates, im2colOffsets, *, mode=None, l2CacheHint=None, loc=None, ip=None)¶
Bases: _ods_ir
Initiates an asynchronous prefetch operation on the tensor data from global memory to the L2 cache. This Op supports all the load modes specified in `TMALoadMode`.
The `l2CacheHint` operand is optional, and it is used to specify the cache eviction policy that may be used during the memory access.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.cp.async.bulk.tensor.prefetch'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- tmaDescriptor() _ods_ir¶
- coordinates() _ods_ir¶
- im2colOffsets() _ods_ir¶
- l2CacheHint() _ods_ir | None¶
- mode() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.cp_async_bulk_tensor_prefetch(tma_descriptor, coordinates, im2col_offsets, *, mode=None, l2_cache_hint=None, loc=None, ip=None) CpAsyncBulkTensorPrefetchOp¶
- class mlir.dialects._nvvm_ops_gen.CpAsyncBulkTensorReduceOp(tmaDescriptor, srcMem, redKind, coordinates, *, mode=None, l2CacheHint=None, loc=None, ip=None)¶
Bases: _ods_ir
Initiates an asynchronous reduction operation of tensor data in global memory with tensor data in shared memory.
The `mode` attribute indicates whether the copy mode is tile or im2col. The `redKind` attribute specifies the reduction operation applied. The supported reduction operations are: {add, min, max, inc, dec, and, or, xor}.
The `l2CacheHint` operand is optional, and it is used to specify the cache eviction policy that may be used during the memory access.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.cp.async.bulk.tensor.reduce'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- tmaDescriptor() _ods_ir¶
- srcMem() _ods_ir¶
- coordinates() _ods_ir¶
- l2CacheHint() _ods_ir | None¶
- redKind() _ods_ir¶
- mode() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.cp_async_bulk_tensor_reduce(tma_descriptor, src_mem, red_kind, coordinates, *, mode=None, l2_cache_hint=None, loc=None, ip=None) CpAsyncBulkTensorReduceOp¶
Bases: _ods_ir
Initiates an asynchronous copy of the tensor data from shared::cta memory to global memory. This Op supports all the store modes specified in `TMAStoreMode`.
The `l2CacheHint` operand is optional, and it is used to specify the cache eviction policy that may be used during the memory access.
For more information, see PTX ISA
- class mlir.dialects._nvvm_ops_gen.CpAsyncBulkWaitGroupOp(group, *, read=None, loc=None, ip=None)¶
Bases: _ods_ir
This Op waits for completion of the most recent bulk async-groups.
The `$group` operand specifies that waiting is required until at most `$group` of the most recent bulk async-groups are still pending. If `$group` is 0, the op waits until all the most recent bulk async-groups have completed.
The `$read` attribute indicates that the waiting has to be done until all the bulk async operations in the specified bulk async-group have completed reading from their source locations.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.cp.async.bulk.wait_group'¶
- _ODS_REGIONS = (0, True)¶
- group() _ods_ir¶
- read() bool¶
- mlir.dialects._nvvm_ops_gen.cp_async_bulk_wait_group(group, *, read=None, loc=None, ip=None) CpAsyncBulkWaitGroupOp¶
- class mlir.dialects._nvvm_ops_gen.CpAsyncCommitGroupOp(*, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.cp.async.commit.group'¶
- _ODS_REGIONS = (0, True)¶
- mlir.dialects._nvvm_ops_gen.cp_async_commit_group(*, loc=None, ip=None) CpAsyncCommitGroupOp¶
- class mlir.dialects._nvvm_ops_gen.CpAsyncMBarrierArriveOp(addr, *, noinc=None, loc=None, ip=None)¶
Bases: _ods_ir
The `cp.async.mbarrier.arrive` Op makes the mbarrier object track all prior cp.async operations initiated by the executing thread. The `addr` operand specifies the address of the mbarrier object in the generic or shared::cta address space. When it is generic, the underlying memory should fall within the shared::cta space; otherwise the behavior is undefined. The `noinc` attr impacts how the mbarrier's state is updated.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.cp.async.mbarrier.arrive'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- noinc() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.cp_async_mbarrier_arrive(addr, *, noinc=None, loc=None, ip=None) CpAsyncMBarrierArriveOp¶
- class mlir.dialects._nvvm_ops_gen.CpAsyncOp(dst, src, size, modifier, *, cpSize=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.cp.async.shared.global'¶
- _ODS_REGIONS = (0, True)¶
- dst() _ods_ir¶
- src() _ods_ir¶
- cpSize() _ods_ir | None¶
- size() _ods_ir¶
- modifier() _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.CpAsyncWaitGroupOp(n, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.cp.async.wait.group'¶
- _ODS_REGIONS = (0, True)¶
- n() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.cp_async_wait_group(n, *, loc=None, ip=None) CpAsyncWaitGroupOp¶
- class mlir.dialects._nvvm_ops_gen.DotAccumulate2WayOp(res, a, a_type, b, b_type, c, b_hi, *, loc=None, ip=None)¶
Bases: _ods_ir
Performs a two-way 16-bit to 8-bit dot-product which is accumulated in a 32-bit result. Operand `a` is a vector of two 16-bit elements and operand `b` a vector of four 8-bit elements between which the dot product is computed.
The `a_type` and `b_type` attributes specify the type of the elements in `a` and `b` respectively. If `a_type` or `b_type` is `s`, then the elements in the corresponding vector are sign-extended to 32-bit before the dot product is computed. If `a_type` or `b_type` is `u`, then the elements in the corresponding vector are zero-extended to 32-bit instead.
The `b_hi` boolean attribute specifies which two bytes of `b` are used for the dot product. If `b_hi` is true, then the dot product is computed between `a` and the elements at indices 2 and 3 of `b`. If `b_hi` is false, then the dot product is computed between `a` and the elements at indices 0 and 1 of `b`.
Operand `c` is a 32-bit integer to which the result is accumulated. It is treated as holding a signed integer if either `a_type` or `b_type` is signed.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.dot.accumulate.2way'¶
- _ODS_REGIONS = (0, True)¶
- a() _ods_ir¶
- b() _ods_ir¶
- c() _ods_ir¶
- a_type() _ods_ir¶
- b_type() _ods_ir¶
- b_hi() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.dot_accumulate_2way(res, a, a_type, b, b_type, c, b_hi, *, loc=None, ip=None) _ods_ir¶
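The lane selection and sign/zero extension described above can be summarized with a small Python reference model (a sketch of the semantics, not the PTX lowering; the four-way variant below differs only in using all four bytes of both operands):

```python
def dot2way(a, b, c, *, a_signed, b_signed, b_hi):
    """Reference sketch: a has two 16-bit lanes, b has four 8-bit lanes,
    c is the 32-bit accumulator; b_hi picks bytes 2..3 instead of 0..1."""
    def ext(v, bits, signed):
        # sign- or zero-extend a `bits`-wide lane to a Python int
        v &= (1 << bits) - 1
        return v - (1 << bits) if signed and v >= 1 << (bits - 1) else v
    lo = 2 if b_hi else 0
    acc = c
    for i in range(2):
        acc += ext(a[i], 16, a_signed) * ext(b[lo + i], 8, b_signed)
    return acc & 0xFFFFFFFF  # result wraps to 32 bits

# (2*10 + 3*20) + 5 = 85, using the low two bytes of b
assert dot2way([2, 3], [10, 20, 30, 40], 5, a_signed=True, b_signed=True, b_hi=False) == 85
```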
- class mlir.dialects._nvvm_ops_gen.DotAccumulate4WayOp(res, a, a_type, b, b_type, c, *, loc=None, ip=None)¶
Bases: _ods_ir
Performs a four-way byte dot-product which is accumulated in a 32-bit result. Operands `a` and `b` are vectors of 4 bytes between which the dot product is computed.
The `a_type` and `b_type` attributes specify the type of the elements in `a` and `b` respectively. If `a_type` or `b_type` is `signed`, then the elements in the corresponding vector are sign-extended to 32-bit before the dot product is computed. If `a_type` or `b_type` is `unsigned`, then the elements in the corresponding vector are zero-extended to 32-bit instead.
Operand `c` is a 32-bit integer to which the result is accumulated. It is treated as holding a signed integer if either `a_type` or `b_type` is `s8`.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.dot.accumulate.4way'¶
- _ODS_REGIONS = (0, True)¶
- a() _ods_ir¶
- b() _ods_ir¶
- c() _ods_ir¶
- a_type() _ods_ir¶
- b_type() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.dot_accumulate_4way(res, a, a_type, b, b_type, c, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ElectSyncOp(pred, *, membermask=None, loc=None, ip=None)¶
Bases: _ods_ir
The `elect.sync` instruction elects one predicated active leader thread from among a set of threads specified in the `membermask`. When the `membermask` is not provided explicitly, a default value of `0xFFFFFFFF` is used. The predicate result is set to `True` for the leader thread, and `False` for all other threads.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.elect.sync'¶
- _ODS_REGIONS = (0, True)¶
- membermask() _ods_ir | None¶
- pred() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.elect_sync(pred, *, membermask=None, loc=None, ip=None) _ods_ir¶
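The predicate result can be modeled as follows. Note that which lane is elected is implementation-defined in PTX; this sketch (a hypothetical `elect_sync_model` helper) arbitrarily picks the lowest-numbered eligible lane, and only the "exactly one lane gets True" property should be relied upon:

```python
def elect_sync_model(active_lanes, membermask=0xFFFFFFFF):
    """Sketch of elect.sync's predicate: exactly one lane that is both active
    and selected by membermask gets True. The choice of leader here
    (lowest lane id) is an arbitrary modeling assumption."""
    eligible = [l for l in active_lanes if (membermask >> l) & 1]
    leader = min(eligible) if eligible else None
    return {l: (l == leader) for l in active_lanes}

pred = elect_sync_model([0, 3, 7], membermask=0b10001000)  # mask selects lanes 3 and 7
assert [l for l, p in pred.items() if p] == [3]
```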
- class mlir.dialects._nvvm_ops_gen.EnvReg0Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg0'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg0(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg1Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg1'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg1(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg2Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg2'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg2(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg3Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg3'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg3(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg4Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg4'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg4(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg5Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg5'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg5(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg6Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg6'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg6(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg7Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg7'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg7(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg8Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg8'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg8(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg9Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg9'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg9(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg10Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg10'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg10(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg11Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg11'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg11(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg12Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg12'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg12(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg13Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg13'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg13(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg14Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg14'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg14(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg15Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg15'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg15(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg16Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg16'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg16(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg17Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg17'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg17(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg18Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg18'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg18(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg19Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg19'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg19(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg20Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg20'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg20(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg21Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg21'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg21(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg22Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg22'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg22(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg23Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg23'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg23(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg24Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg24'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg24(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg25Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg25'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg25(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg26Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg26'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg26(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg27Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg27'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg27(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg28Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg28'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg28(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg29Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg29'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg29(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg30Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg30'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg30(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.EnvReg31Op(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg31'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg31(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.Exit(*, loc=None, ip=None)¶
Bases: _ods_ir
Ends execution of a thread. For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.exit'¶
- _ODS_REGIONS = (0, True)¶
- class mlir.dialects._nvvm_ops_gen.FenceMbarrierInitOp(*, loc=None, ip=None)¶
Bases: _ods_ir
Fence operation that applies to the prior nvvm.mbarrier.init operation.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.fence.mbarrier.init'¶
- _ODS_REGIONS = (0, True)¶
- mlir.dialects._nvvm_ops_gen.fence_mbarrier_init(*, loc=None, ip=None) FenceMbarrierInitOp¶
- class mlir.dialects._nvvm_ops_gen.FenceProxyAcquireOp(scope, addr, size, *, fromProxy=None, toProxy=None, loc=None, ip=None)¶
Bases: _ods_ir
`fence.proxy.acquire` is a uni-directional fence used to establish ordering between a prior memory access performed via the generic proxy and a subsequent memory access performed via the tensormap proxy.
The address operand `addr` and the operand `size` together specify the memory range `[addr, addr+size)` on which the ordering guarantees on the memory accesses across the proxies are to be provided. The only supported value for the `size` operand is 128, and it must be an immediate. Generic Addressing is used unconditionally, and the address specified by the operand `addr` must fall within the `.global` state space. Otherwise, the behavior is undefined.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.fence.proxy.acquire'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- size() _ods_ir¶
- scope() _ods_ir¶
- fromProxy() _ods_ir¶
- toProxy() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.fence_proxy_acquire(scope, addr, size, *, from_proxy=None, to_proxy=None, loc=None, ip=None) FenceProxyAcquireOp¶
- class mlir.dialects._nvvm_ops_gen.FenceProxyOp(kind, *, space=None, loc=None, ip=None)¶
Bases: _ods_ir
Fence operation with proxy to establish an ordering between memory accesses that may happen through different proxies.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.fence.proxy'¶
- _ODS_REGIONS = (0, True)¶
- kind() _ods_ir¶
- space() _ods_ir | None¶
- mlir.dialects._nvvm_ops_gen.fence_proxy(kind, *, space=None, loc=None, ip=None) FenceProxyOp¶
- class mlir.dialects._nvvm_ops_gen.FenceProxyReleaseOp(scope, *, fromProxy=None, toProxy=None, loc=None, ip=None)¶
Bases: _ods_ir
`fence.proxy.release` is a uni-directional fence used to establish ordering between a prior memory access performed via the generic proxy and a subsequent memory access performed via the tensormap proxy. The `fence.proxy.release` operation can form a release sequence that synchronizes with an acquire sequence that contains the `fence.proxy.acquire` proxy fence operation.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.fence.proxy.release'¶
- _ODS_REGIONS = (0, True)¶
- scope() _ods_ir¶
- fromProxy() _ods_ir¶
- toProxy() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.fence_proxy_release(scope, *, from_proxy=None, to_proxy=None, loc=None, ip=None) FenceProxyReleaseOp¶
- class mlir.dialects._nvvm_ops_gen.FenceScClusterOp(*, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.fence.sc.cluster'¶
- _ODS_REGIONS = (0, True)¶
- mlir.dialects._nvvm_ops_gen.fence_sc_cluster(*, loc=None, ip=None) FenceScClusterOp¶
- class mlir.dialects._nvvm_ops_gen.GlobalTimerLoOp(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.globaltimer.lo'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_globaltimer_lo(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.GlobalTimerOp(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.globaltimer'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_globaltimer(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.GridDimXOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.nctaid.x'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nctaid_x(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.GridDimYOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.nctaid.y'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nctaid_y(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.GridDimZOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.nctaid.z'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nctaid_z(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.GridIdOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.gridid'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_gridid(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.GriddepcontrolOp(kind, *, loc=None, ip=None)¶
Bases: _ods_ir
If the `$kind` attribute is set to `wait`, it causes the executing thread to wait until all prerequisite grids in flight have completed and all the memory operations from the prerequisite grids are performed and made visible to the current grid.
When the `$kind` is `launch_dependents`, it signals that specific dependents the runtime system designated to react to this instruction can be scheduled as soon as all other CTAs in the grid issue the same instruction or have completed.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.griddepcontrol'¶
- _ODS_REGIONS = (0, True)¶
- kind() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.griddepcontrol(kind, *, loc=None, ip=None) GriddepcontrolOp¶
- class mlir.dialects._nvvm_ops_gen.InlinePtxOp(writeOnlyArgs, readOnlyArgs, readWriteArgs, ptxCode, *, predicate=None, loc=None, ip=None)¶
Bases: _ods_ir
This op allows using PTX directly within the NVVM dialect, while greatly simplifying llvm.inline_asm generation. It automatically handles register size selection and sets the correct read/write access for each operand. The operation leverages the `BasicPtxBuilderInterface` to abstract away low-level details of PTX assembly formatting.
The `predicate` attribute is used to specify a predicate for the PTX instruction.
Example 1: Read-only Parameters
```mlir
nvvm.inline_ptx "mbarrier.init.b64 [$0], $1;" (%barrier_gen, %count) : !llvm.ptr, i32

// Lowers to:
llvm.inline_asm has_side_effects asm_dialect = att "mbarrier.init.b64 [$0], $1;", "l,r" %arg0, %arg2 : (!llvm.ptr, i32) -> ()
```
Example 2: Read-only and Write-only Parameters
```mlir
%0 = nvvm.inline_ptx "ex2.approx.ftz.f32 $0, $1;" (%input) : f32 -> f32

// Lowers to:
%0 = llvm.inline_asm has_side_effects asm_dialect = att "ex2.approx.ftz.f32 $0, $1;", "=f,f" %arg0 : (f32) -> f32
```
Example 3: Predicate Usage
```mlir
nvvm.inline_ptx "mbarrier.init.b64 [$0], $1;" (%barrier_gen, %count), predicate = %pred : !llvm.ptr, i32, i1

// Lowers to:
llvm.inline_asm has_side_effects asm_dialect = att "@$2 mbarrier.init.b64 [$0], $1;", "l,r,b" %arg0, %arg2, %arg3 : (!llvm.ptr, i32, i1) -> ()
```
- OPERATION_NAME = 'nvvm.inline_ptx'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- readOnlyArgs() _ods_ir¶
- readWriteArgs() _ods_ir¶
- predicate() _ods_ir | None¶
- ptxCode() _ods_ir¶
- writeOnlyArgs() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.inline_ptx(write_only_args, read_only_args, read_write_args, ptx_code, *, predicate=None, loc=None, ip=None) _ods_ir | _ods_ir | InlinePtxOp¶
- class mlir.dialects._nvvm_ops_gen.LaneIdOp(res, *, range=None, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.laneid'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_laneid(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.LaneMaskEqOp(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.eq'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_eq(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.LaneMaskGeOp(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.ge'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_ge(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.LaneMaskGtOp(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.gt'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_gt(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.LaneMaskLeOp(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.le'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_le(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.LaneMaskLtOp(res, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.lt'¶
- _ODS_REGIONS = (0, True)¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_lt(res, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.LdMatrixOp(res, ptr, num, layout, shape, eltType, *, loc=None, ip=None)¶
Bases: _ods_ir
- OPERATION_NAME = 'nvvm.ldmatrix'¶
- _ODS_REGIONS = (0, True)¶
- ptr() _ods_ir¶
- num() _ods_ir¶
- layout() _ods_ir¶
- shape() _ods_ir¶
- eltType() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.ldmatrix(res, ptr, num, layout, shape, elt_type, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.MBarrierArriveExpectTxOp(addr, txcount, *, predicate=None, loc=None, ip=None)¶
Bases: _ods_ir
The `nvvm.mbarrier.arrive.expect_tx` operation performs an expect-tx operation followed by an arrive-on operation on the mbarrier object. It uses the default `.release.cta` semantics. This release pattern establishes memory ordering for operations occurring in program order before this arrive instruction by making operations from the current thread visible to subsequent operations in other threads within the CTA. When other threads perform corresponding acquire operations (like `mbarrier.test.wait`), they synchronize with this release pattern.
This operation first performs an expect-tx operation with the specified transaction count, then performs an arrive-on operation with an implicit count of 1. The expect-tx operation increases the tx-count of the mbarrier object by the specified `txcount` value, setting the current phase to expect and track the completion of additional asynchronous transactions.
The operation takes the following operands:
- `addr`: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
- `txcount`: An unsigned integer specifying the expected transaction count for the expect-tx operation. This represents the number of asynchronous transactions expected to complete before the barrier phase completes.
- `predicate`: Optional predicate for conditional execution.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.mbarrier.arrive.expect_tx'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- txcount() _ods_ir¶
- predicate() _ods_ir | None¶
- mlir.dialects._nvvm_ops_gen.mbarrier_arrive_expect_tx(addr, txcount, *, predicate=None, loc=None, ip=None) MBarrierArriveExpectTxOp¶
- class mlir.dialects._nvvm_ops_gen.MBarrierArriveNocompleteOp(res, addr, count, *, loc=None, ip=None)¶
Bases:
_ods_irThe
nvvm.mbarrier.arrive.nocompleteoperation performs an arrive-on operation on the mbarrier object with the guarantee that it will not cause the barrier to complete its current phase. Uses the default.release.ctasemantics. This release pattern establishes memory ordering for operations occurring in program order before this arrive instruction by making operations from the current thread visible to subsequent operations in other threads within the CTA. When other threads perform corresponding acquire operations (like ‘mbarrier.test.wait’), they synchronize with this release pattern.This operation causes the executing thread to signal its arrival at the barrier with a specified count, but ensures that the barrier phase will not complete as a result of this operation. The operation returns an opaque value that captures the phase of the mbarrier object prior to the arrive-on operation.
The operation takes the following operands:
- addr: A pointer to the memory location of the mbarrier object. The addr must be a pointer to generic or shared::cta memory. When it is generic, the underlying address must be within the shared::cta memory space; otherwise the behavior is undefined.
- count: Integer specifying the count argument to the arrive-on operation. Must be in the valid range as specified in the mbarrier object contents.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.mbarrier.arrive.nocomplete'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- count() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.mbarrier_arrive_nocomplete(res, addr, count, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.MBarrierArriveOp(res, addr, *, loc=None, ip=None)¶
Bases:
_ods_ir

The nvvm.mbarrier.arrive operation performs an arrive-on operation on the mbarrier object at the specified address. It uses the default .release.cta semantics. This release pattern establishes memory ordering for operations occurring in program order before this arrive instruction by making operations from the current thread visible to subsequent operations in other threads within the CTA. When other threads perform corresponding acquire operations (such as mbarrier.test.wait), they synchronize with this release pattern.

This operation causes the executing thread to signal its arrival at the barrier. The operation returns an opaque value that captures the phase of the mbarrier object prior to the arrive-on operation. The contents of this state value are implementation-specific.
The operation takes the following operand:
- addr: A pointer to the memory location of the mbarrier object. The addr must be a pointer to generic or shared::cta memory. When it is generic, the underlying address must be within the shared::cta memory space; otherwise the behavior is undefined.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.mbarrier.arrive'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.mbarrier_arrive(res, addr, *, loc=None, ip=None) _ods_ir¶
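As an illustrative sketch (SSA names and operand types are assumptions, not part of this reference), the op yields the opaque state token that a later wait consumes:

```mlir
// Signal arrival; %state captures the barrier phase before the arrive-on.
%state = nvvm.mbarrier.arrive %barrier : !llvm.ptr<3> -> i64
```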
- class mlir.dialects._nvvm_ops_gen.MBarrierInitOp(addr, count, *, predicate=None, loc=None, ip=None)¶
Bases:
_ods_ir

The nvvm.mbarrier.init operation initializes an mbarrier object at the specified memory location.

This operation initializes the mbarrier object with the following state:
- Current phase: 0
- Expected arrival count: count
- Pending arrival count: count
- Transaction count (tx-count): 0
The operation takes the following operands:
- addr: A pointer to the memory location of the mbarrier object. The addr must be a pointer to generic or shared::cta memory. When it is generic, the underlying address must be within the shared::cta memory space; otherwise the behavior is undefined.
- count: Integer specifying the number of threads that will participate in barrier synchronization. Must be in the range [1, 2²⁰ - 1].
- predicate: Optional predicate for conditional execution.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.mbarrier.init'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- count() _ods_ir¶
- predicate() _ods_ir | None¶
- mlir.dialects._nvvm_ops_gen.mbarrier_init(addr, count, *, predicate=None, loc=None, ip=None) MBarrierInitOp¶
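A hedged sketch of barrier setup in NVVM dialect assembly (SSA names and the `!llvm.ptr<3>` shared-memory pointer type are illustrative):

```mlir
// Initialize an mbarrier in shared memory for %count participating threads.
nvvm.mbarrier.init %barrier, %count : !llvm.ptr<3>, i32
```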
- class mlir.dialects._nvvm_ops_gen.MBarrierInvalOp(addr, *, loc=None, ip=None)¶
Bases:
_ods_ir

The nvvm.mbarrier.inval operation invalidates an mbarrier object at the specified memory location.

This operation marks the mbarrier object as invalid, making it safe to repurpose the memory location for other uses or to reinitialize it as a new mbarrier object. It is undefined behavior if the mbarrier object is already invalid.
The operation takes the following operand:
- addr: A pointer to the memory location of the mbarrier object. The addr must be a pointer to generic or shared::cta memory. When it is generic, the underlying address must be within the shared::cta memory space; otherwise the behavior is undefined.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.mbarrier.inval'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.mbarrier_inval(addr, *, loc=None, ip=None) MBarrierInvalOp¶
- class mlir.dialects._nvvm_ops_gen.MBarrierTestWaitOp(res, addr, state, *, loc=None, ip=None)¶
Bases:
_ods_ir

The nvvm.mbarrier.test.wait operation performs a non-blocking test for the completion of a specific phase of an mbarrier object. It uses the default .acquire.cta semantics. This acquire pattern establishes memory ordering for operations occurring in program order after this wait instruction by making operations from other threads in the CTA visible to subsequent operations in the current thread. When this wait completes, it synchronizes with the corresponding release pattern from the mbarrier.arrive operation, establishing memory ordering within the CTA.

This operation tests whether the mbarrier phase specified by the state operand has completed. It is a non-blocking instruction that immediately returns the completion status without suspending the executing thread.
The operation takes the following operands:
- addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
- state: An opaque value returned by a previous mbarrier.arrive operation on the same mbarrier object during the current or immediately preceding phase.

The operation returns a boolean value indicating whether the specified phase has completed:
- true: The immediately preceding phase has completed.
- false: The phase is still incomplete (current phase).
Memory ordering guarantees: When this wait returns true, the following ordering guarantees hold:
1. All memory accesses (except async operations) requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.
2. All cp.async operations requested prior to cp.async.mbarrier.arrive by participating CTA threads are visible to the executing thread.
3. All cp.async.bulk operations using the same mbarrier object requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.
4. Memory accesses requested after this wait are not visible to memory accesses performed prior to mbarrier.arrive by other participating threads.
5. No ordering guarantee exists for memory accesses by the same thread between mbarrier.arrive and this wait.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.mbarrier.test.wait'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- state() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.mbarrier_test_wait(res, addr, state, *, loc=None, ip=None) _ods_ir¶
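An illustrative polling sketch (SSA names and operand types are assumptions, not taken from this reference):

```mlir
// Non-blocking test: %done becomes true once the phase captured
// in %state has completed.
%done = nvvm.mbarrier.test.wait %barrier, %state : !llvm.ptr<3>, i64 -> i1
```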
- class mlir.dialects._nvvm_ops_gen.MBarrierTryWaitParityOp(addr, phase, ticks, *, loc=None, ip=None)¶
Bases:
_ods_ir

The nvvm.mbarrier.try_wait.parity operation performs a potentially-blocking test for the completion of a specific phase of an mbarrier object using phase parity. It uses the default .acquire.cta semantics. This acquire pattern establishes memory ordering for operations occurring in program order after this wait instruction by making operations from other threads in the CTA visible to subsequent operations in the current thread. When this wait completes, it synchronizes with the corresponding release pattern from the mbarrier.arrive operation, establishing memory ordering within the CTA.

This operation waits for the completion of the mbarrier phase indicated by the phase parity. While it uses the underlying PTX mbarrier.try_wait.parity instruction, this MLIR operation generates a loop that enforces the test to complete before continuing execution, ensuring the barrier phase is actually completed rather than potentially timing out.

The operation takes the following operands:
- addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
- phase: An integer specifying the phase parity (0 or 1). Even phases have parity 0, odd phases have parity 1.
- ticks: An unsigned integer specifying the suspend time hint in nanoseconds. This may be used instead of the system-dependent time limit.

Memory ordering guarantees: When this wait completes, the following ordering guarantees hold:
1. All memory accesses (except async operations) requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.
2. All cp.async operations requested prior to cp.async.mbarrier.arrive by participating CTA threads are visible to the executing thread.
3. All cp.async.bulk operations using the same mbarrier object requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.
4. Memory accesses requested after this wait are not visible to memory accesses performed prior to mbarrier.arrive by other participating threads.
5. No ordering guarantee exists for memory accesses by the same thread between mbarrier.arrive and this wait.

Implementation behavior: This operation generates a PTX loop that repeatedly calls the underlying mbarrier.try_wait.parity instruction until the barrier phase completes. Unlike the raw PTX instruction, which may return without completion after a timeout, this MLIR operation guarantees completion by continuing to loop until the specified phase is reached.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.mbarrier.try_wait.parity'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- phase() _ods_ir¶
- ticks() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.mbarrier_try_wait_parity(addr, phase, ticks, *, loc=None, ip=None) MBarrierTryWaitParityOp¶
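A hedged usage sketch (SSA names and operand types are illustrative):

```mlir
// Block until the phase with parity %phase completes;
// %ticks is a suspend-time hint in nanoseconds.
nvvm.mbarrier.try_wait.parity %barrier, %phase, %ticks : !llvm.ptr<3>, i32, i32
```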
- class mlir.dialects._nvvm_ops_gen.MapaOp(res, a, b, *, loc=None, ip=None)¶
Bases:
_ods_ir- OPERATION_NAME = 'nvvm.mapa'¶
- _ODS_REGIONS = (0, True)¶
- a() _ods_ir¶
- b() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.mapa(res, a, b, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.MatchSyncOp(res, thread_mask, val, kind, *, loc=None, ip=None)¶
Bases:
_ods_ir

The match.sync op performs a broadcast and compare of operand val across all non-exited threads in thread_mask and returns a mask depending on the kind, and an optional predicate.

The matching operation kinds are:
- any: Returns a mask corresponding to the non-exited threads in the thread_mask that have the same value of operand val.
- all: Returns a mask and a predicate. If all non-exited threads in the thread_mask have the same value of operand val, the predicate is set to true and the mask corresponds to the non-exited threads in the thread_mask. Otherwise, the predicate is set to false and the mask is 0.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.match.sync'¶
- _ODS_REGIONS = (0, True)¶
- thread_mask() _ods_ir¶
- val() _ods_ir¶
- kind() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.match_sync(res, thread_mask, val, kind, *, loc=None, ip=None) _ods_ir¶
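A hedged sketch of both kinds (the exact assembly format and result types should be checked against the dialect definition; names are illustrative):

```mlir
// 'any': mask of threads whose %val equals this thread's %val.
%mask0 = nvvm.match.sync any %thread_mask, %val : i32 -> i32
// 'all': mask plus a predicate that is true only if every thread matches.
%mask1 = nvvm.match.sync all %thread_mask, %val : i32 -> !llvm.struct<(i32, i1)>
```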
- class mlir.dialects._nvvm_ops_gen.MembarOp(scope, *, loc=None, ip=None)¶
Bases:
_ods_ir

The membar operation guarantees that prior memory accesses requested by this thread are performed at the specified scope before later memory operations requested by this thread following the membar instruction.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.memory.barrier'¶
- _ODS_REGIONS = (0, True)¶
- scope() _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.MmaOp(res, shape, layoutA, layoutB, operandA, operandB, operandC, *, b1Op=None, intOverflowBehavior=None, multiplicandAPtxType=None, multiplicandBPtxType=None, loc=None, ip=None)¶
Bases:
_ods_ir

The nvvm.mma.sync operation collectively performs the operation D = matmul(A, B) + C using all threads in a warp.

All the threads in the warp must execute the same mma.sync operation.

For each possible multiplicand PTX data type, there are one or more possible instruction shapes given as "mMnNkK". The table below describes the possibilities as well as the types required for the operands. Note that the data type for C (the accumulator) and D (the result) can vary independently when there are multiple possibilities in the "C/D Type" column.

When an optional attribute cannot be immediately inferred from the types of the operands and the result during parsing or validation, an error will be raised.

- b1Op is only relevant when the binary (b1) type is given to multiplicandDataType. It specifies how the multiply-and-accumulate is performed and is either xor_popc or and_popc. The default is xor_popc.
- intOverflowBehavior is only relevant when the multiplicandType attribute is one of u8, s8, u4, s4; this attribute describes how overflow is handled in the accumulator. When the attribute is satfinite, the accumulator values are clamped to the int32 range on overflow. This is the default behavior. Alternatively, accumulator behavior wrapped can also be specified, in which case overflow wraps from one end of the range to the other.
- layoutA and layoutB are required and should generally be set to #nvvm.mma_layout<row> and #nvvm.mma_layout<col> respectively, but other combinations are possible for certain layouts according to the table below.

| A/B Type | Shape | ALayout | BLayout | A Type | B Type | C/D Type |
|----------|-----------|---------|---------|----------|----------|-------------------|
| f64 | .m8n8k4 | row | col | 1x f64 | 1x f64 | 2x f64 |
| f16 | .m8n8k4 | row/col | row/col | 2x f16x2 | 2x f16x2 | 4x f16x2 or 8x f32 |
| | .m16n8k8 | row | col | 2x f16x2 | 1x f16x2 | 2x f16x2 or 4x f32 |
| | .m16n8k16 | row | col | 4x f16x2 | 2x f16x2 | 2x f16x2 or 4x f32 |
| bf16 | .m16n8k8 | row | col | 2x i32 | 1x i32 | 4x f32 |
| | .m16n8k16 | row | col | 4x i32 | 2x i32 | 4x f32 |
| tf32 | .m16n8k4 | row | col | 2x i32 | 1x i32 | 4x f32 |
| | .m16n8k8 | row | col | 4x i32 | 2x i32 | 2x f16x2 or 4x f32 |
| u8/s8 | .m8n8k16 | row | col | 1x i32 | 1x i32 | 2x i32 |
| | .m16n8k16 | row | col | 2x i32 | 1x i32 | 4x i32 |
| | .m16n8k32 | row | col | 4x i32 | 2x i32 | 4x i32 |
| u4/s4 | .m8n8k32 | row | col | 1x i32 | 1x i32 | 2x i32 |
| | .m16n8k32 | row | col | 2x i32 | 1x i32 | 4x i32 |
| | .m16n8k64 | row | col | 4x i32 | 2x i32 | 4x i32 |
| b1 | .m8n8k128 | row | col | 1x i32 | 1x i32 | 2x i32 |
| | .m16n8k128 | row | col | 2x i32 | 1x i32 | 4x i32 |
Example:
%128 = nvvm.mma.sync A[%120, %121, %122, %123] B[%124, %125] C[%126, %127] {layoutA = #nvvm.mma_layout<row>, layoutB = #nvvm.mma_layout<col>, shape = {k = 16 : i32, m = 16 : i32, n = 8 : i32}} : (vector<2xf16>, vector<2xf16>, vector<2xf16>) -> !llvm.struct<(vector<2xf16>, vector<2xf16>)>
- OPERATION_NAME = 'nvvm.mma.sync'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- operandA() _ods_ir¶
- operandB() _ods_ir¶
- operandC() _ods_ir¶
- shape() _ods_ir¶
- b1Op() _ods_ir | None¶
- intOverflowBehavior() _ods_ir | None¶
- layoutA() _ods_ir¶
- layoutB() _ods_ir¶
- multiplicandAPtxType() _ods_ir | None¶
- multiplicandBPtxType() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.mma_sync(res, shape, layout_a, layout_b, operand_a, operand_b, operand_c, *, b1_op=None, int_overflow_behavior=None, multiplicand_a_ptx_type=None, multiplicand_b_ptx_type=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.NanosleepOp(duration, *, loc=None, ip=None)¶
Bases:
_ods_ir

The op suspends the thread for a sleep duration approximately equal to the delay given by the duration operand, specified in nanoseconds.

The sleep duration is approximated, but guaranteed to be in the interval [0, 2*t], where t is the requested duration. The maximum sleep duration is 1 millisecond. The implementation may reduce the sleep duration for individual threads within a warp such that all sleeping threads in the warp wake up together.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.nanosleep'¶
- _ODS_REGIONS = (0, True)¶
- duration() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.nanosleep(duration, *, loc=None, ip=None) NanosleepOp¶
- class mlir.dialects._nvvm_ops_gen.PMEventOp(*, maskedEventId=None, eventId=None, loc=None, ip=None)¶
Bases:
_ods_ir

Triggers one or more of a fixed number of performance monitor events, with the event index or mask specified by an immediate operand.

Without a mask (eventId), it triggers a single performance monitor event indexed by the immediate operand, in the range 0..15.

With a mask (maskedEventId), it triggers one or more of the performance monitor events. Each bit in the 16-bit immediate operand controls an event.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.pmevent'¶
- _ODS_REGIONS = (0, True)¶
- maskedEventId() _ods_ir | None¶
- eventId() _ods_ir | None¶
- mlir.dialects._nvvm_ops_gen.pmevent(*, masked_event_id=None, event_id=None, loc=None, ip=None) PMEventOp¶
- class mlir.dialects._nvvm_ops_gen.PrefetchOp(addr, *, cacheLevel=None, evictPriority=None, predicate=None, tensormap=None, uniform=None, in_param_space=None, loc=None, ip=None)¶
Bases:
_ods_ir

Prefetches the cache line containing the address given by addr. The operand may be a global, local, or generic pointer. When tensormap is specified, the operand may instead be a constant or generic pointer. If the address maps to shared memory, the operation has no effect.

At most one of cacheLevel or tensormap may be present. The cacheLevel attribute selects the target cache level. When combined with uniform, the prefetch is performed to the uniform cache, in which case addr must be a generic pointer.

When tensormap is used, the line containing addr is brought from the constant or parameter state space for later use by cp.async.bulk.tensor. If in_param_space is specified, the generic pointer is interpreted as referring to the parameter state space.

uniform can be specified after the cacheLevel to indicate that the prefetch is performed to the specified uniform cache level. If uniform is specified, addr must be a generic address pointer and no operation is performed if addr maps to a const, local, or shared memory location.

The evictPriority attribute is optional and specifies the cache eviction priority when cacheLevel is L2.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.prefetch'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- predicate() _ods_ir | None¶
- cacheLevel() _ods_ir | None¶
- evictPriority() _ods_ir | None¶
- tensormap() bool¶
- uniform() bool¶
- in_param_space() bool¶
- mlir.dialects._nvvm_ops_gen.prefetch(addr, *, cache_level=None, evict_priority=None, predicate=None, tensormap=None, uniform=None, in_param_space=None, loc=None, ip=None) PrefetchOp¶
- class mlir.dialects._nvvm_ops_gen.RcpApproxFtzF32Op(res, arg, *, loc=None, ip=None)¶
Bases:
_ods_ir- OPERATION_NAME = 'nvvm.rcp.approx.ftz.f'¶
- _ODS_REGIONS = (0, True)¶
- arg() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.rcp_approx_ftz_f(res, arg, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ReduxOp(res, val, kind, mask_and_clamp, *, abs=None, nan=None, loc=None, ip=None)¶
Bases:
_ods_ir

redux.sync performs a reduction operation kind of the 32-bit source register across all non-exited threads in the member mask.

The abs and nan attributes can be used in the case of an f32 input type, where the abs attribute causes the absolute value of the input to be used in the reduction operation, and the nan attribute causes the reduction operation to return NaN if any of the inputs to participating threads are NaN.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.redux.sync'¶
- _ODS_REGIONS = (0, True)¶
- val() _ods_ir¶
- mask_and_clamp() _ods_ir¶
- kind() _ods_ir¶
- abs() _ods_ir¶
- nan() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.redux_sync(res, val, kind, mask_and_clamp, *, abs=None, nan=None, loc=None, ip=None) _ods_ir¶
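A hedged sketch of a warp-wide reduction (the kind keyword, names, and types are illustrative and should be checked against the dialect definition):

```mlir
// Sum %val across the non-exited threads selected by %mask_and_clamp.
%sum = nvvm.redux.sync add %val, %mask_and_clamp : i32 -> i32
```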
- class mlir.dialects._nvvm_ops_gen.SetMaxRegisterOp(regCount, action, *, loc=None, ip=None)¶
Bases:
_ods_ir- OPERATION_NAME = 'nvvm.setmaxregister'¶
- _ODS_REGIONS = (0, True)¶
- regCount() _ods_ir¶
- action() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.setmaxregister(reg_count, action, *, loc=None, ip=None) SetMaxRegisterOp¶
- class mlir.dialects._nvvm_ops_gen.ShflOp(res, thread_mask, val, offset, mask_and_clamp, kind, *, return_value_and_is_valid=None, loc=None, ip=None)¶
Bases:
_ods_ir

The shfl.sync Op implements data shuffle within threads of a warp. The thread_mask denotes the threads participating in the Op, where the bit position corresponds to a particular thread's laneid. The offset specifies a source lane or source lane offset (depending on kind). The val is the input value to be copied from the source. The mask_and_clamp contains two packed values specifying a mask for logically splitting warps into sub-segments and an upper bound for clamping the source lane index.

The return_value_and_is_valid unit attribute can be specified to indicate that the return value is a two-element struct, where the first element is the result value and the second element is a predicate indicating if the computed source lane index is valid.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.shfl.sync'¶
- _ODS_REGIONS = (0, True)¶
- thread_mask() _ods_ir¶
- val() _ods_ir¶
- offset() _ods_ir¶
- mask_and_clamp() _ods_ir¶
- kind() _ods_ir¶
- return_value_and_is_valid() bool¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.shfl_sync(res, thread_mask, val, offset, mask_and_clamp, kind, *, return_value_and_is_valid=None, loc=None, ip=None) _ods_ir¶
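A hedged butterfly-shuffle sketch (the kind keyword and types are illustrative, not taken from this reference):

```mlir
// XOR-based shuffle: each lane reads %val from lane (laneid ^ %offset).
%r = nvvm.shfl.sync bfly %thread_mask, %val, %offset, %mask_and_clamp : i32 -> i32
```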
- class mlir.dialects._nvvm_ops_gen.SmDimOp(res, *, range=None, loc=None, ip=None)¶
Bases:
_ods_ir- OPERATION_NAME = 'nvvm.read.ptx.sreg.nsmid'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nsmid(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.SmIdOp(res, *, range=None, loc=None, ip=None)¶
Bases:
_ods_ir- OPERATION_NAME = 'nvvm.read.ptx.sreg.smid'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_smid(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.StMatrixOp(ptr, sources, layout, shape, eltType, *, loc=None, ip=None)¶
Bases:
_ods_ir

Collectively stores one or more matrices across all threads in a warp to the location indicated by the address operand ptr in shared memory.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.stmatrix'¶
- _ODS_REGIONS = (0, True)¶
- ptr() _ods_ir¶
- sources() _ods_ir¶
- layout() _ods_ir¶
- shape() _ods_ir¶
- eltType() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.stmatrix(ptr, sources, layout, shape, elt_type, *, loc=None, ip=None) StMatrixOp¶
- class mlir.dialects._nvvm_ops_gen.SyncWarpOp(mask, *, loc=None, ip=None)¶
Bases:
_ods_ir

The nvvm.bar.warp.sync operation performs barrier synchronization for threads within a warp.

This operation causes the executing thread to wait until all threads corresponding to the mask operand have executed a bar.warp.sync with the same mask value before resuming execution.

The mask operand specifies the threads participating in the barrier, where each bit position corresponds to the thread's lane ID within the warp. Only threads with their corresponding bit set in the mask participate in the barrier synchronization.

Important constraints:
- The behavior is undefined if the executing thread is not included in the mask (i.e., the bit corresponding to the thread's lane ID is not set).
- For compute capability sm_6x or below, all threads in the mask must execute the same bar.warp.sync instruction in convergence.

This operation also guarantees memory ordering among participating threads. Threads within the warp that wish to communicate via memory can store to memory, execute bar.warp.sync, and then safely read values stored by other threads in the warp.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.bar.warp.sync'¶
- _ODS_REGIONS = (0, True)¶
- mask() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.bar_warp_sync(mask, *, loc=None, ip=None) SyncWarpOp¶
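A minimal sketch (the mask value and type are illustrative):

```mlir
// Synchronize all 32 lanes of the warp (full mask 0xffffffff in %mask).
nvvm.bar.warp.sync %mask : i32
```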
- class mlir.dialects._nvvm_ops_gen.Tcgen05AllocOp(addr, nCols, *, group=None, loc=None, ip=None)¶
Bases:
_ods_ir

The tcgen05.alloc Op allocates tensor core memory for the amount specified by nCols and writes the destination address to the addr argument. The nCols operand specifies the number of columns to be allocated, and it must be a power of two.

For more information, see PTX ISA

- OPERATION_NAME = 'nvvm.tcgen05.alloc'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- nCols() _ods_ir¶
- group() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_alloc(addr, n_cols, *, group=None, loc=None, ip=None) Tcgen05AllocOp¶
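A hedged sketch (the shared-memory pointer type and SSA names are illustrative, and the assembly format should be checked against the dialect definition):

```mlir
// Allocate %ncols columns of tensor core memory; the resulting
// Tensor Memory address is written to the location %addr.
nvvm.tcgen05.alloc %addr, %ncols : !llvm.ptr<3>, i32
```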
- class mlir.dialects._nvvm_ops_gen.Tcgen05CommitOp(addr, *, multicastMask=None, group=None, loc=None, ip=None)¶
Bases:
_ods_ir

The tcgen05.commit Op makes the mbarrier object, specified by the operand addr, track the completion of all the prior async-tcgen05 operations initiated by the executing thread. The multicast variants allow signaling on the mbarrier objects of multiple CTAs within the cluster. The operand multicastMask, when present, specifies the destination CTAs in the cluster such that each bit position in the 16-bit multicastMask operand corresponds to the nvvm.read.ptx.sreg.ctaid of the destination CTA.

For more information, see PTX ISA

- OPERATION_NAME = 'nvvm.tcgen05.commit'¶
- _ODS_REGIONS = (0, True)¶
- addr() _ods_ir¶
- multicastMask() _ods_ir | None¶
- group() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_commit(addr, *, multicast_mask=None, group=None, loc=None, ip=None) Tcgen05CommitOp¶
- class mlir.dialects._nvvm_ops_gen.Tcgen05CpOp(shape, taddr, smem_desc, *, group=None, multicast=None, srcFormat=None, loc=None, ip=None)¶
Bases:
_ods_ir

The tcgen05.cp instruction initiates an asynchronous copy operation from shared memory to the location specified by the address operand taddr in the Tensor Memory. The 64-bit register operand smem_desc specifies the matrix descriptor representing the source matrix in the shared memory that needs to be copied.

Example:
nvvm.tcgen05.cp %taddr, %smem_desc { group = #nvvm.tcgen05_group<cta_2>, shape = #nvvm.tcgen05_cp_shape<shape_64x128b>, multicast = #nvvm.tcgen05_cp_multicast<warpx2_01_23>, srcFormat = #nvvm.tcgen05_cp_src_fmt<b6x16_p32> }
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.tcgen05.cp'¶
- _ODS_REGIONS = (0, True)¶
- taddr() _ods_ir¶
- smem_desc() _ods_ir¶
- shape() _ods_ir¶
- group() _ods_ir¶
- multicast() _ods_ir¶
- srcFormat() _ods_ir | None¶
- mlir.dialects._nvvm_ops_gen.tcgen05_cp(shape, taddr, smem_desc, *, group=None, multicast=None, src_format=None, loc=None, ip=None) Tcgen05CpOp¶
- class mlir.dialects._nvvm_ops_gen.Tcgen05DeallocOp(taddr, nCols, *, group=None, loc=None, ip=None)¶
Bases:
_ods_ir

The tcgen05.dealloc Op de-allocates the tensor core memory specified by taddr, which must come from a previous tensor memory allocation. The nCols operand specifies the number of columns to be de-allocated, and it must be a power of two.

For more information, see PTX ISA

- OPERATION_NAME = 'nvvm.tcgen05.dealloc'¶
- _ODS_REGIONS = (0, True)¶
- taddr() _ods_ir¶
- nCols() _ods_ir¶
- group() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_dealloc(taddr, n_cols, *, group=None, loc=None, ip=None) Tcgen05DeallocOp¶
- class mlir.dialects._nvvm_ops_gen.Tcgen05FenceOp(kind, *, loc=None, ip=None)¶
Bases:
_ods_ir

The tcgen05.fence<before> Op orders all prior async tcgen05 operations with respect to the subsequent tcgen05 and execution ordering operations. The tcgen05.fence<after> Op orders all subsequent async tcgen05 operations with respect to the prior tcgen05 and execution ordering operations.

For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.tcgen05.fence'¶
- _ODS_REGIONS = (0, True)¶
- kind() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_fence(kind, *, loc=None, ip=None) Tcgen05FenceOp¶
- class mlir.dialects._nvvm_ops_gen.Tcgen05LdOp(res, shape, tmemAddr, *, pack=None, offset=None, loc=None, ip=None)¶
Bases:
_ods_ir

The tcgen05.ld instruction asynchronously loads data from the Tensor Memory at the location specified by the 32-bit address operand tmemAddr into the destination register res, collectively across all threads of the warps.

The shape and the num attribute together determine the total dimension of the data which is loaded from the Tensor Memory. The shape attribute indicates the base dimension of data to be accessed as described in the Data Movement Shape. The num attribute indicates the repeat factor on the base dimension, resulting in the total dimension of the data that is accessed.

The shape 16x32bx2 performs two accesses into Tensor Memory of the shape 16x32b. The base address of the first access is specified by tmemAddr and the base address of the second access is specified by tmemAddr + offset, where offset is an immediate argument.

The unit attribute pack can be used to pack two 16-bit elements from adjacent columns into a single 32-bit element during the load.

The following table describes the size of the vector for various combinations of the num and shape attributes:

| num/shape | 16x32bx2/16x64b/32x32b | 16x128b | 16x256b |
|-----------|------------------------|---------|---------|
| x1 | 1 | 2 | 4 |
| x2 | 2 | 4 | 8 |
| x4 | 4 | 8 | 16 |
| x8 | 8 | 16 | 32 |
| x16 | 16 | 32 | 64 |
| x32 | 32 | 64 | 128 |
| x64 | 64 | 128 | NA |
| x128 | 128 | NA | NA |
Example:
nvvm.tcgen05.ld %tmemAddr, %offset pack { shape = #nvvm.tcgen05_ldst_shape<shape_16x32bx2>, } : <2xi32>
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.tcgen05.ld'¶
- _ODS_REGIONS = (0, True)¶
- tmemAddr() _ods_ir¶
- offset() _ods_ir | None¶
- pack() bool¶
- shape() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_ld(res, shape, tmem_addr, *, pack=None, offset=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.Tcgen05MmaSmemDescOp(res, startAddr, leadingDimOffset, strideDimOffset, baseOffset, leadingDimMode, swizzleMode, *, loc=None, ip=None)¶
Bases:
_ods_ir

The nvvm.tcgen05.mma_smem_desc Op constructs a Shared Memory descriptor for tcgen05.mma. This descriptor is a 64-bit value which describes the properties of the multiplicand matrix in shared memory, including its location in the shared memory of the current CTA.

| Bit-field | Size | Description |
|-----------|------|-------------|
| 0-13 | 14 | Matrix start address |
| 14-15 | 2 | Reserved |
| 16-29 | 14 | Leading dim relative-offset (or) absolute-address |
| 30-31 | 2 | Reserved |
| 32-45 | 14 | Stride dimension byte offset |
| 46-48 | 3 | Fixed constant value of 0b001 |
| 49-51 | 3 | Matrix base offset |
| 52 | 1 | Leading dimension stride mode: 0: byte offset relative; 1: byte address absolute |
| 53-60 | 8 | Fixed constant value of 0b00000000 |
| 61-63 | 3 | Swizzling mode: 0: no swizzling; 1: 128-Byte with 32B atomic swizzling; 2: 128-Byte swizzling; 4: 64-Byte swizzling; 6: 32-Byte swizzling (values 3, 5 and 7 are invalid) |
Example:
%desc = nvvm.tcgen05.mma_smem_desc (%startAddr, %leadingDimOffset, %strideDimOffset, %baseOffset, %leadingDimMode, %swizzleMode) : (i32, i32, i32, i8, i1, i8) -> i64
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.tcgen05.mma_smem_desc'¶
- _ODS_REGIONS = (0, True)¶
- startAddr() _ods_ir¶
- leadingDimOffset() _ods_ir¶
- strideDimOffset() _ods_ir¶
- baseOffset() _ods_ir¶
- leadingDimMode() _ods_ir¶
- swizzleMode() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_mma_smem_desc(res, start_addr, leading_dim_offset, stride_dim_offset, base_offset, leading_dim_mode, swizzle_mode, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.Tcgen05RelinquishAllocPermitOp(*, group=None, loc=None, ip=None)¶
Bases:
_ods_ir

The tcgen05.relinquish_alloc_permit Op specifies that the CTA of the executing thread is relinquishing the right to allocate Tensor Memory. It is therefore illegal for a CTA to perform tcgen05.alloc after any of its constituent threads execute tcgen05.relinquish_alloc_permit.

For more information, see PTX ISA

- OPERATION_NAME = 'nvvm.tcgen05.relinquish_alloc_permit'¶
- _ODS_REGIONS = (0, True)¶
- group() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_relinquish_alloc_permit(*, group=None, loc=None, ip=None) Tcgen05RelinquishAllocPermitOp¶
- class mlir.dialects._nvvm_ops_gen.Tcgen05ShiftOp(taddr, *, group=None, loc=None, ip=None)¶
Bases:
_ods_ir
The tcgen05.shift is an asynchronous instruction which initiates the shifting of 32-byte elements downwards across all the rows, except the last, by one row. The operand taddr specifies the base address of the matrix in Tensor Memory whose rows must be down-shifted.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.tcgen05.shift'¶
- _ODS_REGIONS = (0, True)¶
- taddr() _ods_ir¶
- group() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_shift(taddr, *, group=None, loc=None, ip=None) Tcgen05ShiftOp¶
- class mlir.dialects._nvvm_ops_gen.Tcgen05StOp(shape, tmemAddr, val, *, unpack=None, offset=None, loc=None, ip=None)¶
Bases:
_ods_ir
The instruction tcgen05.st asynchronously stores data from the source register r into the Tensor Memory at the location specified by the 32-bit address operand tmemAddr, collectively across all threads of the warps.
The shape and the num attribute together determine the total dimension of the data which is stored to the Tensor Memory. The shape indicates the base dimension of data to be accessed. The num attribute indicates the repeat factor on the base dimension, resulting in the total dimension of the data that is accessed.
The shape 16x32bx2 performs two accesses into Tensor Memory of the shape 16x32b. The base address of the first access is specified by tmemAddr and the base address of the second access is specified by tmemAddr + offset, where offset is an immediate argument.
The unit attribute unpack can be used to unpack a 32-bit element in the register into two 16-bit elements and store them in adjacent columns.
The following table describes the size of the vector for various combinations of num and shape attributes:

|=====================================================================|
| num/shape | 16x32bx2/16x64b/32x32b | 16x128b | 16x256b |
|=====================================================================|
| x1        | 1                      | 2       | 4       |
| x2        | 2                      | 4       | 8       |
| x4        | 4                      | 8       | 16      |
| x8        | 8                      | 16      | 32      |
| x16       | 16                     | 32      | 64      |
| x32       | 32                     | 64      | 128     |
| x64       | 64                     | 128     | NA      |
| x128      | 128                    | NA      | NA      |
|=====================================================================|
Example:
nvvm.tcgen05.st %tmemAddr, %val, %offset unpack { shape = #nvvm.tcgen05_ldst_shape<shape_16x32bx2>, } : <2xi32>
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.tcgen05.st'¶
- _ODS_REGIONS = (0, True)¶
- tmemAddr() _ods_ir¶
- val() _ods_ir¶
- offset() _ods_ir | None¶
- unpack() bool¶
- shape() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_st(shape, tmem_addr, val, *, unpack=None, offset=None, loc=None, ip=None) Tcgen05StOp¶
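The num/shape table above can be expressed as a small lookup. This is an illustrative pure-Python sketch, not part of the generated bindings:

```python
def tcgen05_st_vector_len(shape, num):
    """Vector length (number of 32-bit elements) for tcgen05.st, per the
    num/shape table above. NA entries are unsupported combinations."""
    # Base vector length for a single (x1) access of each shape.
    base = {"16x32bx2": 1, "16x64b": 1, "32x32b": 1,
            "16x128b": 2, "16x256b": 4}[shape]
    length = base * num  # num is the x1..x128 repeat factor
    if length > 128:
        raise ValueError("unsupported num/shape combination (NA in the table)")
    return length
```

For example, shape 16x256b with num x16 yields a 64-element vector, matching the table.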
- class mlir.dialects._nvvm_ops_gen.Tcgen05WaitOp(kind, *, loc=None, ip=None)¶
Bases:
_ods_ir
The tcgen05.wait<load> causes the executing thread to block until all prior tcgen05.ld operations issued by the executing thread have completed. Similarly, the tcgen05.wait<store> causes the executing thread to block until all prior tcgen05.st operations issued by the executing thread have completed. For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.tcgen05.wait'¶
- _ODS_REGIONS = (0, True)¶
- kind() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.tcgen05_wait(kind, *, loc=None, ip=None) Tcgen05WaitOp¶
- class mlir.dialects._nvvm_ops_gen.ThreadIdXOp(res, *, range=None, loc=None, ip=None)¶
Bases:
_ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.tid.x'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_tid_x(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ThreadIdYOp(res, *, range=None, loc=None, ip=None)¶
Bases:
_ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.tid.y'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_tid_y(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.ThreadIdZOp(res, *, range=None, loc=None, ip=None)¶
Bases:
_ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.tid.z'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_tid_z(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.VoteSyncOp(res, mask, pred, kind, *, loc=None, ip=None)¶
Bases:
_ods_ir
The vote.sync op will cause the executing thread to wait until all non-exited threads corresponding to membermask have executed vote.sync with the same qualifiers and the same membermask value before resuming execution.
The vote operation kinds are:
- any: True if the source predicate is True for some thread in membermask.
- all: True if the source predicate is True for all non-exited threads in membermask.
- uni: True if the source predicate has the same value in all non-exited threads in membermask.
- ballot: In the ballot form, the destination result is a 32-bit integer. In this form, the predicate from each thread in membermask is copied into the corresponding bit position of the result, where the bit position corresponds to the thread's lane id.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.vote.sync'¶
- _ODS_REGIONS = (0, True)¶
- mask() _ods_ir¶
- pred() _ods_ir¶
- kind() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.vote_sync(res, mask, pred, kind, *, loc=None, ip=None) _ods_ir¶
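The four vote kinds above can be simulated on the host. The following is an illustrative pure-Python model of a 32-lane warp (not part of the generated bindings), assuming preds is indexed by lane id:

```python
def vote_sync(kind, membermask, preds):
    """Model the vote.sync kinds over a 32-lane warp.

    membermask selects the participating lanes; preds[lane] is that
    lane's source predicate.
    """
    lanes = [l for l in range(32) if (membermask >> l) & 1]
    if kind == "any":      # True if some participating lane's predicate is True
        return any(preds[l] for l in lanes)
    if kind == "all":      # True if every participating lane's predicate is True
        return all(preds[l] for l in lanes)
    if kind == "uni":      # True if the predicate is uniform across lanes
        return len({preds[l] for l in lanes}) <= 1
    if kind == "ballot":   # 32-bit integer: each lane's predicate at its bit position
        return sum((preds[l] & 1) << l for l in lanes)
    raise ValueError(kind)
```

For instance, with odd-numbered lanes holding a True predicate, ballot over mask 0xF yields 0b1010.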
- class mlir.dialects._nvvm_ops_gen.WMMALoadOp(res, ptr, stride, m, n, k, layout, eltype, frag, *, loc=None, ip=None)¶
Bases:
_ods_ir
- OPERATION_NAME = 'nvvm.wmma.load'¶
- _ODS_REGIONS = (0, True)¶
- ptr() _ods_ir¶
- stride() _ods_ir¶
- m() _ods_ir¶
- n() _ods_ir¶
- k() _ods_ir¶
- layout() _ods_ir¶
- eltype() _ods_ir¶
- frag() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.wmma_load(res, ptr, stride, m, n, k, layout, eltype, frag, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.WMMAMmaOp(res, m, n, k, layoutA, layoutB, eltypeA, eltypeB, args, *, loc=None, ip=None)¶
Bases:
_ods_ir
- OPERATION_NAME = 'nvvm.wmma.mma'¶
- _ODS_REGIONS = (0, True)¶
- args() _ods_ir¶
- m() _ods_ir¶
- n() _ods_ir¶
- k() _ods_ir¶
- layoutA() _ods_ir¶
- layoutB() _ods_ir¶
- eltypeA() _ods_ir¶
- eltypeB() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.wmma_mma(res, m, n, k, layout_a, layout_b, eltype_a, eltype_b, args, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.WMMAStoreOp(ptr, m, n, k, layout, eltype, args, stride, *, loc=None, ip=None)¶
Bases:
_ods_ir
- OPERATION_NAME = 'nvvm.wmma.store'¶
- _ODS_REGIONS = (0, True)¶
- ptr() _ods_ir¶
- args() _ods_ir¶
- stride() _ods_ir¶
- m() _ods_ir¶
- n() _ods_ir¶
- k() _ods_ir¶
- layout() _ods_ir¶
- eltype() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.wmma_store(ptr, m, n, k, layout, eltype, args, stride, *, loc=None, ip=None) WMMAStoreOp¶
- class mlir.dialects._nvvm_ops_gen.WarpDimOp(res, *, range=None, loc=None, ip=None)¶
Bases:
_ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.nwarpid'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nwarpid(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.WarpIdOp(res, *, range=None, loc=None, ip=None)¶
Bases:
_ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.warpid'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_warpid(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.WarpSizeOp(res, *, range=None, loc=None, ip=None)¶
Bases:
_ods_ir
- OPERATION_NAME = 'nvvm.read.ptx.sreg.warpsize'¶
- _ODS_REGIONS = (0, True)¶
- range() _ods_ir | None¶
- res() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.read_ptx_sreg_warpsize(res, *, range=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._nvvm_ops_gen.WgmmaFenceAlignedOp(*, loc=None, ip=None)¶
Bases:
_ods_ir
Enforce an ordering of register accesses between warpgroup level matrix multiplication and other operations.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.wgmma.fence.aligned'¶
- _ODS_REGIONS = (0, True)¶
- mlir.dialects._nvvm_ops_gen.wgmma_fence_aligned(*, loc=None, ip=None) WgmmaFenceAlignedOp¶
- class mlir.dialects._nvvm_ops_gen.WgmmaGroupSyncAlignedOp(*, loc=None, ip=None)¶
Bases:
_ods_ir
Commits all prior uncommitted warpgroup level matrix multiplication operations.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.wgmma.commit.group.sync.aligned'¶
- _ODS_REGIONS = (0, True)¶
- mlir.dialects._nvvm_ops_gen.wgmma_commit_group_sync_aligned(*, loc=None, ip=None) WgmmaGroupSyncAlignedOp¶
- class mlir.dialects._nvvm_ops_gen.WgmmaMmaAsyncOp(results_, inouts, descriptorA, descriptorB, shape, typeA, typeB, typeD, scaleD, scaleA, scaleB, layoutA, layoutB, *, satfinite=None, loc=None, ip=None)¶
Bases:
_ods_ir
The warpgroup (128 threads) level matrix multiply and accumulate operation has either of the following forms, where matrix D is called the accumulator:

D = A * B + D
D = A * B, where the input from accumulator D is disabled.
Supported shapes:
|--------------|--------------|-------------|--------------|---------------|
|              |              |             |              |f16+=e4m3*e4m3 |
|              |              |             |              |f16+=e5m2*e5m2 |
|f32+=tf32*tf32|f16+=f16 *f16 | s32+=s8*s8  |s32 += b1 * b1|f16+=e5m2*e4m3 |
|              |f32+=f16 *f16 | s32+=u8*u8  |              |f16+=e4m3*e5m2 |
|              |f32+=bf16*bf16| s32+=u8*u8  |              |f16+=e4m3*e5m2 |
|              |f32+=bf16*bf16| s32+=s8*u8  |              |f32+=e4m3*e4m3 |
|              |              | s32+=u8*s8  |              |f32+=e5m2*e5m2 |
|              |              |             |              |f32+=e4m3*e5m2 |
|              |              |             |              |f32+=e4m3*e5m2 |
|--------------|--------------|-------------|--------------|---------------|
| .m64n8k8     | .m64n8k16    | .m64n8k32   | .m64n8k256   | .m64n8k32     |
| .m64n16k8    | .m64n16k16   | .m64n16k32  | .m64n16k256  | .m64n16k32    |
| .m64n24k8    | .m64n24k16   | .m64n24k32  | .m64n24k256  | .m64n24k32    |
| .m64n32k8    | .m64n32k16   | .m64n32k32  | .m64n32k256  | .m64n32k32    |
| .m64n40k8    | .m64n40k16   | .m64n48k32  | .m64n48k256  | .m64n40k32    |
| .m64n48k8    | .m64n48k16   | .m64n64k32  | .m64n64k256  | .m64n48k32    |
| .m64n56k8    | .m64n56k16   | .m64n80k32  | .m64n80k256  | .m64n56k32    |
| .m64n64k8    | .m64n64k16   | .m64n96k32  | .m64n96k256  | .m64n64k32    |
| .m64n72k8    | .m64n72k16   | .m64n112k32 | .m64n112k256 | .m64n72k32    |
| .m64n80k8    | .m64n80k16   | .m64n128k32 | .m64n128k256 | .m64n80k32    |
| .m64n88k8    | .m64n88k16   | .m64n144k32 | .m64n144k256 | .m64n88k32    |
| .m64n96k8    | .m64n96k16   | .m64n160k32 | .m64n160k256 | .m64n96k32    |
| .m64n104k8   | .m64n104k16  | .m64n176k32 | .m64n176k256 | .m64n104k32   |
| .m64n112k8   | .m64n112k16  | .m64n192k32 | .m64n192k256 | .m64n112k32   |
| .m64n120k8   | .m64n120k16  | .m64n208k32 | .m64n208k256 | .m64n120k32   |
| .m64n128k8   | .m64n128k16  | .m64n224k32 | .m64n224k256 | .m64n128k32   |
| .m64n136k8   | .m64n136k16  | .m64n240k32 | .m64n240k256 | .m64n136k32   |
| .m64n144k8   | .m64n144k16  | .m64n256k32 | .m64n256k256 | .m64n144k32   |
| .m64n152k8   | .m64n152k16  |             |              | .m64n152k32   |
| .m64n160k8   | .m64n160k16  |             |              | .m64n160k32   |
| .m64n168k8   | .m64n168k16  |             |              | .m64n168k32   |
| .m64n176k8   | .m64n176k16  |             |              | .m64n176k32   |
| .m64n184k8   | .m64n184k16  |             |              | .m64n184k32   |
| .m64n192k8   | .m64n192k16  |             |              | .m64n192k32   |
| .m64n200k8   | .m64n200k16  |             |              | .m64n200k32   |
| .m64n208k8   | .m64n208k16  |             |              | .m64n208k32   |
| .m64n216k8   | .m64n216k16  |             |              | .m64n216k32   |
| .m64n224k8   | .m64n224k16  |             |              | .m64n224k32   |
| .m64n232k8   | .m64n232k16  |             |              | .m64n232k32   |
| .m64n240k8   | .m64n240k16  |             |              | .m64n240k32   |
| .m64n248k8   | .m64n248k16  |             |              | .m64n248k32   |
| .m64n256k8   | .m64n256k16  |             |              | .m64n256k32   |
|--------------|--------------|-------------|--------------|---------------|
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.wgmma.mma_async'¶
- _ODS_REGIONS = (0, True)¶
- inouts() _ods_ir¶
- descriptorA() _ods_ir¶
- descriptorB() _ods_ir¶
- shape() _ods_ir¶
- typeA() _ods_ir¶
- typeB() _ods_ir¶
- typeD() _ods_ir¶
- scaleD() _ods_ir¶
- scaleA() _ods_ir¶
- scaleB() _ods_ir¶
- layoutA() _ods_ir¶
- layoutB() _ods_ir¶
- satfinite() _ods_ir | None¶
- results_() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.wgmma_mma_async(results_, inouts, descriptor_a, descriptor_b, shape, type_a, type_b, type_d, scale_d, scale_a, scale_b, layout_a, layout_b, *, satfinite=None, loc=None, ip=None) _ods_ir¶
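The geometry strings in the shape table follow a common pattern: m is always 64, n is a multiple of 8 up to 256, and k is fixed by the input element type. The sketch below is a rough plausibility check derived from that pattern; it is illustrative only (the k32 and k256 columns admit fewer n values than this check allows — the table and the PTX ISA are authoritative):

```python
import re

# k value implied by the multiplicand element type, per the table above.
K_FOR_TYPE = {"tf32": 8, "f16": 16, "bf16": 16,
              "s8": 32, "u8": 32, "e4m3": 32, "e5m2": 32, "b1": 256}

def is_plausible_wgmma_shape(geom, type_a):
    """Loosely check a wgmma.mma_async geometry string like '.m64n64k16'."""
    m, n, k = map(int, re.match(r"\.m(\d+)n(\d+)k(\d+)", geom).groups())
    return (m == 64 and 8 <= n <= 256 and n % 8 == 0
            and k == K_FOR_TYPE[type_a])
```

For example, '.m64n64k16' is plausible for f16 inputs, while '.m64n64k8' is not (f16 requires k = 16).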
- class mlir.dialects._nvvm_ops_gen.WgmmaWaitGroupSyncOp(group, *, loc=None, ip=None)¶
Bases:
_ods_ir
Signal the completion of a preceding warpgroup operation.
For more information, see PTX ISA
- OPERATION_NAME = 'nvvm.wgmma.wait.group.sync.aligned'¶
- _ODS_REGIONS = (0, True)¶
- group() _ods_ir¶
- mlir.dialects._nvvm_ops_gen.wgmma_wait_group_sync_aligned(group, *, loc=None, ip=None) WgmmaWaitGroupSyncOp¶