mlir.dialects._nvvm_ops_gen

Attributes

Classes

_Dialect

Barrier0Op

The nvvm.barrier0 operation is a convenience operation that performs barrier

BarrierArriveOp

Threads that execute this op announce their arrival at the barrier with

BarrierOp

The nvvm.barrier operation performs barrier synchronization and communication

BlockDimXOp

BlockDimYOp

BlockDimZOp

BlockIdXOp

BlockIdYOp

BlockIdZOp

BlockInClusterIdXOp

BlockInClusterIdYOp

BlockInClusterIdZOp

Breakpoint

Breakpoint suspends execution of the program for debugging.

BulkStoreOp

Initializes a region of shared memory at the address given by addr.

Clock64Op

ClockOp

ClusterArriveOp

The cluster.arrive can be used by the threads within the cluster for synchronization and

ClusterArriveRelaxedOp

The cluster.arrive can be used by the threads within the cluster for synchronization and

ClusterDim

ClusterDimBlocksXOp

ClusterDimBlocksYOp

ClusterDimBlocksZOp

ClusterDimXOp

ClusterDimYOp

ClusterDimZOp

ClusterId

ClusterIdXOp

ClusterIdYOp

ClusterIdZOp

ClusterLaunchControlQueryCancelOp

clusterlaunchcontrol.query.cancel queries the response of a

ClusterLaunchControlTryCancelOp

clusterlaunchcontrol.try.cancel requests atomically canceling the launch

ClusterWaitOp

The cluster.wait causes the executing thread to wait for all non-exited threads

ConvertBF16x2ToF8x2Op

This Op converts the given bf16 inputs in a bf16x2 vector to the specified

ConvertF4x2ToF16x2Op

This Op converts the given f4 inputs in a packed i8 to f16.

ConvertF6x2ToF16x2Op

This Op converts the given f6 inputs in a i8x2 vector to f16.

ConvertF8x2ToBF16x2Op

This Op converts the given f8 inputs in a i8x2 vector to bf16.

ConvertF8x2ToF16x2Op

This Op converts the given f8 inputs in a i8x2 vector to f16.

ConvertF16x2ToF8x2Op

This Op converts the given f16 inputs in an f16x2 vector to the specified

ConvertF32x2ToBF16x2Op

Converts two F32 values to packed bf16x2 format using stochastic

ConvertF32x2ToF4x2Op

This Op converts each of the given float inputs to the specified fp4 type.

ConvertF32x2ToF6x2Op

This Op converts each of the given float inputs to the specified fp6 type.

ConvertF32x2ToF8x2Op

This Op converts each of the given float inputs to the specified fp8 type.

ConvertF32x2ToF16x2Op

Converts two F32 values to packed f16x2 format using stochastic

ConvertF32x4ToF4x4Op

Converts a vector<4xf32> to packed f4x4 format using

ConvertF32x4ToF6x4Op

Converts a vector<4xf32> to packed f6x4 format using

ConvertF32x4ToF8x4Op

Converts a vector<4xf32> to packed f8x4 format using

ConvertFloatToTF32Op

This Op converts the given f32 input to tf32.

CpAsyncBulkCommitGroupOp

This Op commits all prior initiated but uncommitted cp.async.bulk

CpAsyncBulkGlobalToSharedClusterOp

Initiates an asynchronous copy operation from global memory to cluster's

CpAsyncBulkPrefetchOp

Initiates an asynchronous prefetch of data from the location

CpAsyncBulkSharedCTAToGlobalOp

Initiates an asynchronous copy operation from Shared CTA memory to

CpAsyncBulkSharedCTAToSharedClusterOp

Initiates an asynchronous copy operation from Shared CTA memory to Shared

CpAsyncBulkTensorGlobalToSharedClusterOp

Initiates an asynchronous copy operation on the tensor data from global

CpAsyncBulkTensorPrefetchOp

Initiates an asynchronous prefetch operation on the tensor data from global

CpAsyncBulkTensorReduceOp

Initiates an asynchronous reduction operation of tensor data in

CpAsyncBulkTensorSharedCTAToGlobalOp

Initiates an asynchronous copy of the tensor data from shared::cta

CpAsyncBulkWaitGroupOp

Op waits for completion of the most recent bulk async-groups.

CpAsyncCommitGroupOp

CpAsyncMBarrierArriveOp

The cp.async.mbarrier.arrive Op makes the mbarrier object track

CpAsyncOp

CpAsyncWaitGroupOp

DotAccumulate2WayOp

Performs a two-way 16-bit to 8-bit dot-product which is accumulated in a

DotAccumulate4WayOp

Performs a four-way byte dot-product which is accumulated in a 32-bit

ElectSyncOp

The elect.sync instruction elects one predicated active leader

EnvReg0Op

EnvReg1Op

EnvReg2Op

EnvReg3Op

EnvReg4Op

EnvReg5Op

EnvReg6Op

EnvReg7Op

EnvReg8Op

EnvReg9Op

EnvReg10Op

EnvReg11Op

EnvReg12Op

EnvReg13Op

EnvReg14Op

EnvReg15Op

EnvReg16Op

EnvReg17Op

EnvReg18Op

EnvReg19Op

EnvReg20Op

EnvReg21Op

EnvReg22Op

EnvReg23Op

EnvReg24Op

EnvReg25Op

EnvReg26Op

EnvReg27Op

EnvReg28Op

EnvReg29Op

EnvReg30Op

EnvReg31Op

Exit

Ends execution of a thread.

FenceMbarrierInitOp

Fence operation that applies on the prior nvvm.mbarrier.init

FenceProxyAcquireOp

fence.proxy.acquire is a uni-directional fence used to establish ordering

FenceProxyOp

Fence operation with proxy to establish an ordering between memory accesses

FenceProxyReleaseOp

fence.proxy.release is a uni-directional fence used to establish ordering

FenceScClusterOp

GlobalTimerLoOp

GlobalTimerOp

GridDimXOp

GridDimYOp

GridDimZOp

GridIdOp

GriddepcontrolOp

If the $kind attribute is set to wait, it causes the

InlinePtxOp

This op allows using PTX directly within the NVVM

LaneIdOp

LaneMaskEqOp

LaneMaskGeOp

LaneMaskGtOp

LaneMaskLeOp

LaneMaskLtOp

LdMatrixOp

MBarrierArriveExpectTxOp

The nvvm.mbarrier.arrive.expect_tx operation performs an expect-tx operation

MBarrierArriveNocompleteOp

The nvvm.mbarrier.arrive.nocomplete operation performs an arrive-on operation

MBarrierArriveOp

The nvvm.mbarrier.arrive operation performs an arrive-on operation on the

MBarrierInitOp

The nvvm.mbarrier.init operation initializes an mbarrier object at the specified

MBarrierInvalOp

The nvvm.mbarrier.inval operation invalidates an mbarrier object at the

MBarrierTestWaitOp

The nvvm.mbarrier.test.wait operation performs a non-blocking test for the

MBarrierTryWaitParityOp

The nvvm.mbarrier.try_wait.parity operation performs a potentially-blocking

MapaOp

MatchSyncOp

The match.sync op performs broadcast and compare of operand val across

MembarOp

membar operation guarantees that prior memory accesses requested by this

MmaOp

The nvvm.mma.sync operation collectively performs the operation

NanosleepOp

The op suspends the thread for a sleep duration approximately close to the

PMEventOp

Triggers one or more of a fixed number of performance monitor events, with

PrefetchOp

Prefetches the cache line containing the address given by addr. The

RcpApproxFtzF32Op

ReduxOp

redux.sync performs a reduction operation kind of the 32 bit source

SetMaxRegisterOp

ShflOp

The shfl.sync Op implements data shuffle within threads of a warp.

SmDimOp

SmIdOp

StMatrixOp

Collectively store one or more matrices across all threads in a warp to the

SyncWarpOp

The nvvm.bar.warp.sync operation performs barrier synchronization for threads

Tcgen05AllocOp

The tcgen05.alloc Op allocates tensor core memory for

Tcgen05CommitOp

The tcgen05.commit makes the mbarrier object, specified by

Tcgen05CpOp

Instruction tcgen05.cp initiates an asynchronous copy operation from

Tcgen05DeallocOp

The tcgen05.dealloc Op de-allocates the tensor core memory

Tcgen05FenceOp

The tcgen05.fence<before> orders all prior async tcgen05 operations

Tcgen05LdOp

Instruction tcgen05.ld asynchronously loads data from the Tensor Memory at

Tcgen05MmaSmemDescOp

The nvvm.tcgen05_mma_smem_desc constructs a Shared Memory descriptor

Tcgen05RelinquishAllocPermitOp

The tcgen05.relinquish_alloc_permit Op specifies that the CTA

Tcgen05ShiftOp

The tcgen05.shift is an asynchronous instruction which initiates

Tcgen05StOp

Instruction tcgen05.st asynchronously stores data from the source register r

Tcgen05WaitOp

The tcgen05.wait<load> causes the executing thread to block until

ThreadIdXOp

ThreadIdYOp

ThreadIdZOp

VoteSyncOp

The vote.sync op will cause the executing thread to wait until all non-exited

WMMALoadOp

WMMAMmaOp

WMMAStoreOp

WarpDimOp

WarpIdOp

WarpSizeOp

WgmmaFenceAlignedOp

Enforce an ordering of register accesses between warpgroup level matrix

WgmmaGroupSyncAlignedOp

Commits all prior uncommitted warpgroup level matrix multiplication operations.

WgmmaMmaAsyncOp

The warpgroup (128 threads) level matrix multiply and accumulate operation

WgmmaWaitGroupSyncOp

Signal the completion of a preceding warpgroup operation.

Functions

barrier0(→ Barrier0Op)

barrier_arrive(→ BarrierArriveOp)

barrier(→ Union[_ods_ir, _ods_ir, BarrierOp])

read_ptx_sreg_ntid_x(→ _ods_ir)

read_ptx_sreg_ntid_y(→ _ods_ir)

read_ptx_sreg_ntid_z(→ _ods_ir)

read_ptx_sreg_ctaid_x(→ _ods_ir)

read_ptx_sreg_ctaid_y(→ _ods_ir)

read_ptx_sreg_ctaid_z(→ _ods_ir)

read_ptx_sreg_cluster_ctaid_x(→ _ods_ir)

read_ptx_sreg_cluster_ctaid_y(→ _ods_ir)

read_ptx_sreg_cluster_ctaid_z(→ _ods_ir)

breakpoint(→ Breakpoint)

st_bulk(→ BulkStoreOp)

read_ptx_sreg_clock64(→ _ods_ir)

read_ptx_sreg_clock(→ _ods_ir)

cluster_arrive(→ ClusterArriveOp)

cluster_arrive_relaxed(→ ClusterArriveRelaxedOp)

read_ptx_sreg_cluster_nctarank(→ _ods_ir)

read_ptx_sreg_cluster_nctaid_x(→ _ods_ir)

read_ptx_sreg_cluster_nctaid_y(→ _ods_ir)

read_ptx_sreg_cluster_nctaid_z(→ _ods_ir)

read_ptx_sreg_nclusterid_x(→ _ods_ir)

read_ptx_sreg_nclusterid_y(→ _ods_ir)

read_ptx_sreg_nclusterid_z(→ _ods_ir)

read_ptx_sreg_cluster_ctarank(→ _ods_ir)

read_ptx_sreg_clusterid_x(→ _ods_ir)

read_ptx_sreg_clusterid_y(→ _ods_ir)

read_ptx_sreg_clusterid_z(→ _ods_ir)

clusterlaunchcontrol_query_cancel(→ _ods_ir)

clusterlaunchcontrol_try_cancel(...)

cluster_wait(→ ClusterWaitOp)

convert_bf16x2_to_f8x2(→ _ods_ir)

convert_f4x2_to_f16x2(→ _ods_ir)

convert_f6x2_to_f16x2(→ _ods_ir)

convert_f8x2_to_bf16x2(→ _ods_ir)

convert_f8x2_to_f16x2(→ _ods_ir)

convert_f16x2_to_f8x2(→ _ods_ir)

convert_f32x2_to_bf16x2(→ _ods_ir)

convert_f32x2_to_f4x2(→ _ods_ir)

convert_f32x2_to_f6x2(→ _ods_ir)

convert_f32x2_to_f8x2(→ _ods_ir)

convert_f32x2_to_f16x2(→ _ods_ir)

convert_f32x4_to_f4x4(→ _ods_ir)

convert_f32x4_to_f6x4(→ _ods_ir)

convert_f32x4_to_f8x4(→ _ods_ir)

convert_float_to_tf32(→ _ods_ir)

cp_async_bulk_commit_group(→ CpAsyncBulkCommitGroupOp)

cp_async_bulk_shared_cluster_global(...)

cp_async_bulk_prefetch(→ CpAsyncBulkPrefetchOp)

cp_async_bulk_global_shared_cta(...)

cp_async_bulk_shared_cluster_shared_cta(...)

cp_async_bulk_tensor_shared_cluster_global(...)

cp_async_bulk_tensor_prefetch(...)

cp_async_bulk_tensor_reduce(→ CpAsyncBulkTensorReduceOp)

cp_async_bulk_tensor_global_shared_cta(...)

cp_async_bulk_wait_group(→ CpAsyncBulkWaitGroupOp)

cp_async_commit_group(→ CpAsyncCommitGroupOp)

cp_async_mbarrier_arrive(→ CpAsyncMBarrierArriveOp)

cp_async_shared_global(→ CpAsyncOp)

cp_async_wait_group(→ CpAsyncWaitGroupOp)

dot_accumulate_2way(→ _ods_ir)

dot_accumulate_4way(→ _ods_ir)

elect_sync(→ _ods_ir)

read_ptx_sreg_envreg0(→ _ods_ir)

read_ptx_sreg_envreg1(→ _ods_ir)

read_ptx_sreg_envreg2(→ _ods_ir)

read_ptx_sreg_envreg3(→ _ods_ir)

read_ptx_sreg_envreg4(→ _ods_ir)

read_ptx_sreg_envreg5(→ _ods_ir)

read_ptx_sreg_envreg6(→ _ods_ir)

read_ptx_sreg_envreg7(→ _ods_ir)

read_ptx_sreg_envreg8(→ _ods_ir)

read_ptx_sreg_envreg9(→ _ods_ir)

read_ptx_sreg_envreg10(→ _ods_ir)

read_ptx_sreg_envreg11(→ _ods_ir)

read_ptx_sreg_envreg12(→ _ods_ir)

read_ptx_sreg_envreg13(→ _ods_ir)

read_ptx_sreg_envreg14(→ _ods_ir)

read_ptx_sreg_envreg15(→ _ods_ir)

read_ptx_sreg_envreg16(→ _ods_ir)

read_ptx_sreg_envreg17(→ _ods_ir)

read_ptx_sreg_envreg18(→ _ods_ir)

read_ptx_sreg_envreg19(→ _ods_ir)

read_ptx_sreg_envreg20(→ _ods_ir)

read_ptx_sreg_envreg21(→ _ods_ir)

read_ptx_sreg_envreg22(→ _ods_ir)

read_ptx_sreg_envreg23(→ _ods_ir)

read_ptx_sreg_envreg24(→ _ods_ir)

read_ptx_sreg_envreg25(→ _ods_ir)

read_ptx_sreg_envreg26(→ _ods_ir)

read_ptx_sreg_envreg27(→ _ods_ir)

read_ptx_sreg_envreg28(→ _ods_ir)

read_ptx_sreg_envreg29(→ _ods_ir)

read_ptx_sreg_envreg30(→ _ods_ir)

read_ptx_sreg_envreg31(→ _ods_ir)

exit(→ Exit)

fence_mbarrier_init(→ FenceMbarrierInitOp)

fence_proxy_acquire(→ FenceProxyAcquireOp)

fence_proxy(→ FenceProxyOp)

fence_proxy_release(→ FenceProxyReleaseOp)

fence_sc_cluster(→ FenceScClusterOp)

read_ptx_sreg_globaltimer_lo(→ _ods_ir)

read_ptx_sreg_globaltimer(→ _ods_ir)

read_ptx_sreg_nctaid_x(→ _ods_ir)

read_ptx_sreg_nctaid_y(→ _ods_ir)

read_ptx_sreg_nctaid_z(→ _ods_ir)

read_ptx_sreg_gridid(→ _ods_ir)

griddepcontrol(→ GriddepcontrolOp)

inline_ptx(→ Union[_ods_ir, _ods_ir, InlinePtxOp])

read_ptx_sreg_laneid(→ _ods_ir)

read_ptx_sreg_lanemask_eq(→ _ods_ir)

read_ptx_sreg_lanemask_ge(→ _ods_ir)

read_ptx_sreg_lanemask_gt(→ _ods_ir)

read_ptx_sreg_lanemask_le(→ _ods_ir)

read_ptx_sreg_lanemask_lt(→ _ods_ir)

ldmatrix(→ _ods_ir)

mbarrier_arrive_expect_tx(→ MBarrierArriveExpectTxOp)

mbarrier_arrive_nocomplete(→ _ods_ir)

mbarrier_arrive(→ _ods_ir)

mbarrier_init(→ MBarrierInitOp)

mbarrier_inval(→ MBarrierInvalOp)

mbarrier_test_wait(→ _ods_ir)

mbarrier_try_wait_parity(→ MBarrierTryWaitParityOp)

mapa(→ _ods_ir)

match_sync(→ _ods_ir)

memory_barrier(→ MembarOp)

mma_sync(→ _ods_ir)

nanosleep(→ NanosleepOp)

pmevent(→ PMEventOp)

prefetch(→ PrefetchOp)

rcp_approx_ftz_f(→ _ods_ir)

redux_sync(→ _ods_ir)

setmaxregister(→ SetMaxRegisterOp)

shfl_sync(→ _ods_ir)

read_ptx_sreg_nsmid(→ _ods_ir)

read_ptx_sreg_smid(→ _ods_ir)

stmatrix(→ StMatrixOp)

bar_warp_sync(→ SyncWarpOp)

tcgen05_alloc(→ Tcgen05AllocOp)

tcgen05_commit(→ Tcgen05CommitOp)

tcgen05_cp(→ Tcgen05CpOp)

tcgen05_dealloc(→ Tcgen05DeallocOp)

tcgen05_fence(→ Tcgen05FenceOp)

tcgen05_ld(→ _ods_ir)

tcgen05_mma_smem_desc(→ _ods_ir)

tcgen05_relinquish_alloc_permit(...)

tcgen05_shift(→ Tcgen05ShiftOp)

tcgen05_st(→ Tcgen05StOp)

tcgen05_wait(→ Tcgen05WaitOp)

read_ptx_sreg_tid_x(→ _ods_ir)

read_ptx_sreg_tid_y(→ _ods_ir)

read_ptx_sreg_tid_z(→ _ods_ir)

vote_sync(→ _ods_ir)

wmma_load(→ _ods_ir)

wmma_mma(→ _ods_ir)

wmma_store(→ WMMAStoreOp)

read_ptx_sreg_nwarpid(→ _ods_ir)

read_ptx_sreg_warpid(→ _ods_ir)

read_ptx_sreg_warpsize(→ _ods_ir)

wgmma_fence_aligned(→ WgmmaFenceAlignedOp)

wgmma_commit_group_sync_aligned(→ WgmmaGroupSyncAlignedOp)

wgmma_mma_async(→ _ods_ir)

wgmma_wait_group_sync_aligned(→ WgmmaWaitGroupSyncOp)

Module Contents

mlir.dialects._nvvm_ops_gen._ods_ir
class mlir.dialects._nvvm_ops_gen._Dialect(descriptor: object)

Bases: _ods_ir

DIALECT_NAMESPACE = 'nvvm'
class mlir.dialects._nvvm_ops_gen.Barrier0Op(*, loc=None, ip=None)

Bases: _ods_ir

The nvvm.barrier0 operation is a convenience operation that performs barrier synchronization and communication within a CTA (Cooperative Thread Array) using barrier ID 0. It is functionally equivalent to nvvm.barrier or nvvm.barrier id=0.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.barrier0'
_ODS_REGIONS = (0, True)
mlir.dialects._nvvm_ops_gen.barrier0(*, loc=None, ip=None) Barrier0Op
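Example (a minimal usage sketch; it assumes the NVVM dialect is loaded in the context and that, in real use, the op is emitted inside a GPU kernel body rather than at module scope as shown here for brevity):

from mlir.ir import Context, InsertionPoint, Location, Module
from mlir.dialects import nvvm

with Context(), Location.unknown():
    module = Module.create()
    with InsertionPoint(module.body):
        # Synchronize all threads of the CTA at barrier 0.
        nvvm.barrier0()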
class mlir.dialects._nvvm_ops_gen.BarrierArriveOp(numberOfThreads, *, barrierId=None, loc=None, ip=None)

Bases: _ods_ir

Threads that execute this op announce their arrival at the barrier with the given id and continue their execution.

The default barrier id is 0, which matches the behavior of the nvvm.barrier Op. When barrierId is not present, the default barrier id is used.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.barrier.arrive'
_ODS_REGIONS = (0, True)
barrierId() _ods_ir | None
numberOfThreads() _ods_ir
mlir.dialects._nvvm_ops_gen.barrier_arrive(number_of_threads, *, barrier_id=None, loc=None, ip=None) BarrierArriveOp
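Example (a sketch reusing the context/insertion-point setup from the barrier0 sketch above; it assumes the arith constant builder accepts a Python int for the value):

from mlir.ir import IntegerType
from mlir.dialects import arith, nvvm

i32 = IntegerType.get_signless(32)
# 128 participating threads; must be a multiple of the warp size.
n = arith.constant(i32, 128)
nvvm.barrier_arrive(n)  # arrive at the default barrier (id 0)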
class mlir.dialects._nvvm_ops_gen.BarrierOp(res, *, barrierId=None, numberOfThreads=None, reductionOp=None, reductionPredicate=None, loc=None, ip=None)

Bases: _ods_ir

The nvvm.barrier operation performs barrier synchronization and communication within a CTA (Cooperative Thread Array). It causes executing threads to wait for all non-exited threads participating in the barrier to arrive.

The operation takes the following optional operands and attributes:

  • barrierId: Specifies a logical barrier resource with value 0 through 15. Each CTA instance has sixteen barriers numbered 0..15. Defaults to 0 if not specified.

  • numberOfThreads: Specifies the number of threads participating in the barrier. When specified, the value must be a multiple of the warp size. If not specified, all threads in the CTA participate in the barrier.

  • reductionOp: Specifies the reduction operation (popc, and, or).

  • reductionPredicate: Specifies the predicate to be used with the reductionOp.

The barrier operation guarantees that when the barrier completes, prior memory accesses requested by participating threads are performed relative to all threads participating in the barrier. It also ensures that no new memory access is requested by participating threads before the barrier completes.

When a barrier completes, the waiting threads are restarted without delay, and the barrier is reinitialized so that it can be immediately reused.

This operation generates an aligned barrier, indicating that all threads in the CTA will execute the same barrier instruction. Behavior is undefined if all threads in the CTA do not reach this instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.barrier'
_ODS_OPERAND_SEGMENTS = [0, 0, 0]
_ODS_REGIONS = (0, True)
barrierId() _ods_ir | None
numberOfThreads() _ods_ir | None
reductionPredicate() _ods_ir | None
reductionOp() _ods_ir | None
res() _ods_ir | None
mlir.dialects._nvvm_ops_gen.barrier(res, *, barrier_id=None, number_of_threads=None, reduction_op=None, reduction_predicate=None, loc=None, ip=None) _ods_ir | _ods_ir | BarrierOp
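Example (a sketch with the same setup as above; passing None for the optional res to request a plain barrier with no reduction result is an assumption about the generated builder):

from mlir.ir import IntegerType
from mlir.dialects import arith, nvvm

i32 = IntegerType.get_signless(32)
# Plain CTA-wide barrier, equivalent to nvvm.barrier0.
nvvm.barrier(None)
# Barrier 1 with 256 participating threads.
nvvm.barrier(None, barrier_id=arith.constant(i32, 1),
             number_of_threads=arith.constant(i32, 256))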
class mlir.dialects._nvvm_ops_gen.BlockDimXOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.ntid.x'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ntid_x(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.BlockDimYOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.ntid.y'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ntid_y(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.BlockDimZOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.ntid.z'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ntid_z(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.BlockIdXOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.ctaid.x'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ctaid_x(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.BlockIdYOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.ctaid.y'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ctaid_y(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.BlockIdZOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.ctaid.z'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_ctaid_z(res, *, range=None, loc=None, ip=None) _ods_ir
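The ntid/ctaid/tid special-register reads are commonly combined to compute a global thread index. A sketch (same setup as above; read_ptx_sreg_tid_x also comes from this module, and the arith builders are assumed to infer their result types):

from mlir.ir import IntegerType
from mlir.dialects import arith, nvvm

i32 = IntegerType.get_signless(32)
ntid_x = nvvm.read_ptx_sreg_ntid_x(i32)    # CTA size in x
ctaid_x = nvvm.read_ptx_sreg_ctaid_x(i32)  # CTA index in x
tid_x = nvvm.read_ptx_sreg_tid_x(i32)      # thread index within the CTA
# global_x = ctaid.x * ntid.x + tid.x
global_x = arith.addi(arith.muli(ctaid_x, ntid_x), tid_x)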
class mlir.dialects._nvvm_ops_gen.BlockInClusterIdXOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.ctaid.x'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_ctaid_x(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.BlockInClusterIdYOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.ctaid.y'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_ctaid_y(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.BlockInClusterIdZOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.ctaid.z'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_ctaid_z(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.Breakpoint(*, loc=None, ip=None)

Bases: _ods_ir

Breakpoint suspends execution of the program for debugging. For more information, see PTX ISA

OPERATION_NAME = 'nvvm.breakpoint'
_ODS_REGIONS = (0, True)
mlir.dialects._nvvm_ops_gen.breakpoint(*, loc=None, ip=None) Breakpoint
class mlir.dialects._nvvm_ops_gen.BulkStoreOp(addr, size, *, initVal=None, loc=None, ip=None)

Bases: _ods_ir

Initializes a region of shared memory at the address given by addr. The size operand specifies the number of bytes to initialize and must be a multiple of 8. The initVal operand specifies the value to initialize the memory to. The only supported value is 0.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.st.bulk'
_ODS_REGIONS = (0, True)
addr() _ods_ir
size() _ods_ir
initVal() _ods_ir
mlir.dialects._nvvm_ops_gen.st_bulk(addr, size, *, init_val=None, loc=None, ip=None) BulkStoreOp
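Example (a sketch; smem_ptr stands for a pre-existing shared-memory pointer value and is hypothetical, and the i64 size type is an assumption since the docstring only requires a multiple of 8):

from mlir.ir import IntegerType
from mlir.dialects import arith, nvvm

i64 = IntegerType.get_signless(64)
size = arith.constant(i64, 1024)  # bytes to initialize; multiple of 8
nvvm.st_bulk(smem_ptr, size)      # init_val defaults to 0, the only supported value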
class mlir.dialects._nvvm_ops_gen.Clock64Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.clock64'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clock64(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClockOp(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.clock'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clock(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterArriveOp(*, aligned=None, loc=None, ip=None)

Bases: _ods_ir

The cluster.arrive can be used by the threads within the cluster for synchronization and communication. The cluster.arrive instruction marks the warps’ arrival at the barrier without causing the executing thread to wait for other participating threads.

The aligned attribute, when provided, generates the .aligned version of the PTX instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cluster.arrive'
_ODS_REGIONS = (0, True)
aligned() bool
mlir.dialects._nvvm_ops_gen.cluster_arrive(*, aligned=None, loc=None, ip=None) ClusterArriveOp
class mlir.dialects._nvvm_ops_gen.ClusterArriveRelaxedOp(*, aligned=None, loc=None, ip=None)

Bases: _ods_ir

The cluster.arrive can be used by the threads within the cluster for synchronization and communication. The cluster.arrive instruction marks the warps’ arrival at the barrier without causing the executing thread to wait for other participating threads.

The aligned attribute, when provided, generates the .aligned version of the PTX instruction. The .relaxed qualifier on cluster.arrive specifies that there are no memory ordering and visibility guarantees provided for the memory accesses performed prior to cluster.arrive.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cluster.arrive.relaxed'
_ODS_REGIONS = (0, True)
aligned() bool
mlir.dialects._nvvm_ops_gen.cluster_arrive_relaxed(*, aligned=None, loc=None, ip=None) ClusterArriveRelaxedOp
class mlir.dialects._nvvm_ops_gen.ClusterDim(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.nctarank'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_nctarank(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterDimBlocksXOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.nctaid.x'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_nctaid_x(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterDimBlocksYOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.nctaid.y'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_nctaid_y(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterDimBlocksZOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.nctaid.z'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_nctaid_z(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterDimXOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.nclusterid.x'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nclusterid_x(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterDimYOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.nclusterid.y'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nclusterid_y(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterDimZOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.nclusterid.z'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nclusterid_z(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterId(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.cluster.ctarank'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_cluster_ctarank(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterIdXOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.clusterid.x'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clusterid_x(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterIdYOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.clusterid.y'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clusterid_y(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterIdZOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.clusterid.z'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_clusterid_z(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterLaunchControlQueryCancelOp(res, query_type, try_cancel_response, *, loc=None, ip=None)

Bases: _ods_ir

clusterlaunchcontrol.query.cancel queries the response of a clusterlaunchcontrol.try.cancel operation specified by operand try_cancel_response.

Operand query_type specifies the type of query to perform and can be one of the following:

  • is_canceled: Returns true if the try cancel request succeeded, and false otherwise.

  • get_first_cta_id_{x/y/z}: Returns the x, y, or z coordinate of the first CTA in the canceled cluster. Behaviour is defined only if the try cancel request succeeded.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.clusterlaunchcontrol.query.cancel'
_ODS_REGIONS = (0, True)
try_cancel_response() _ods_ir
query_type() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.clusterlaunchcontrol_query_cancel(res, query_type, try_cancel_response, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ClusterLaunchControlTryCancelOp(smemAddress, mbarrier, *, multicast=None, loc=None, ip=None)

Bases: _ods_ir

clusterlaunchcontrol.try.cancel requests atomically canceling the launch of a cluster that has not started running yet. It asynchronously writes an opaque response to shared memory indicating whether the operation succeeded or failed.

Operand smemAddress specifies the naturally aligned address of the 16-byte wide shared memory location where the request’s response is written.

Operand mbarrier specifies the mbarrier object used to track the completion of the asynchronous operation.

If multicast is specified, the response is asynchronously written to the corresponding local shared memory location (specified by addr) of each CTA in the requesting cluster.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.clusterlaunchcontrol.try.cancel'
_ODS_REGIONS = (0, True)
smemAddress() _ods_ir
mbarrier() _ods_ir
multicast() bool
mlir.dialects._nvvm_ops_gen.clusterlaunchcontrol_try_cancel(smem_address, mbarrier, *, multicast=None, loc=None, ip=None) ClusterLaunchControlTryCancelOp
class mlir.dialects._nvvm_ops_gen.ClusterWaitOp(*, aligned=None, loc=None, ip=None)

Bases: _ods_ir

The cluster.wait causes the executing thread to wait for all non-exited threads of the cluster to perform cluster.arrive. The aligned attribute, when provided, generates the .aligned version of the PTX instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cluster.wait'
_ODS_REGIONS = (0, True)
aligned() bool
mlir.dialects._nvvm_ops_gen.cluster_wait(*, aligned=None, loc=None, ip=None) ClusterWaitOp
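The arrive/wait pair enables split-phase cluster synchronization. A sketch (same setup as above; the generated builders are assumed to accept a Python bool for the aligned unit attribute):

from mlir.dialects import nvvm

nvvm.cluster_arrive(aligned=True)  # announce arrival, do not block
# ... independent work that needs no cross-CTA ordering ...
nvvm.cluster_wait(aligned=True)    # block until all cluster threads have arrived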
class mlir.dialects._nvvm_ops_gen.ConvertBF16x2ToF8x2Op(dst, a, dstTy, *, rnd=None, sat=None, loc=None, ip=None)

Bases: _ods_ir

This Op converts the given bf16 inputs in a bf16x2 vector to the specified f8 type. The result dst is represented as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values from a are packed such that the value converted from the first element of a is stored in the upper 8 bits of dst and the value converted from the second element of a is stored in the lower 8 bits of dst. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The rnd and sat attributes specify the rounding and saturation modes respectively.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.bf16x2.to.f8x2'
_ODS_REGIONS = (0, True)
a() _ods_ir
rnd() _ods_ir
sat() _ods_ir
dstTy() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_bf16x2_to_f8x2(dst, a, dst_ty, *, rnd=None, sat=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF4x2ToF16x2Op(dst, src, srcType, *, relu=None, loc=None, ip=None)

Bases: _ods_ir

This Op converts the given f4 inputs in a packed i8 to f16.

The result dst is represented as a vector of f16 elements. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f4x2.to.f16x2'
_ODS_REGIONS = (0, True)
src() _ods_ir
relu() _ods_ir
srcType() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f4x2_to_f16x2(dst, src, src_type, *, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF6x2ToF16x2Op(dst, src, srcType, *, relu=None, loc=None, ip=None)

Bases: _ods_ir

This Op converts the given f6 inputs in a i8x2 vector to f16.

The result dst is represented as a vector of f16 elements. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f6x2.to.f16x2'
_ODS_REGIONS = (0, True)
src() _ods_ir
relu() _ods_ir
srcType() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f6x2_to_f16x2(dst, src, src_type, *, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF8x2ToBF16x2Op(dst, src, srcType, *, loc=None, ip=None)

Bases: _ods_ir

This Op converts the given f8 inputs in a i8x2 vector to bf16.

The result dst is represented as a vector of bf16 elements.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f8x2.to.bf16x2'
_ODS_REGIONS = (0, True)
src() _ods_ir
srcType() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f8x2_to_bf16x2(dst, src, src_type, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF8x2ToF16x2Op(dst, src, srcType, *, relu=None, loc=None, ip=None)

Bases: _ods_ir

This Op converts the given f8 inputs in a i8x2 vector to f16.

The result dst is represented as a vector of f16 elements. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f8x2.to.f16x2'
_ODS_REGIONS = (0, True)
src() _ods_ir
relu() _ods_ir
srcType() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f8x2_to_f16x2(dst, src, src_type, *, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF16x2ToF8x2Op(dst, a, dstTy, *, relu=None, loc=None, ip=None)

Bases: _ods_ir

This Op converts the given f16 inputs in an f16x2 vector to the specified f8 type. The result dst is represented as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values from a are packed such that the value converted from the first element of a is stored in the upper 8 bits of dst and the value converted from the second element of a is stored in the lower 8 bits of dst. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f16x2.to.f8x2'
_ODS_REGIONS = (0, True)
a() _ods_ir
relu() _ods_ir
dstTy() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f16x2_to_f8x2(dst, a, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToBF16x2Op(dst, src_hi, src_lo, rbits, *, rnd=None, sat=None, relu=None, loc=None, ip=None)

Bases: _ods_ir

Converts two F32 values to packed bf16x2 format using stochastic rounding (.rs) mode with randomness provided by the rbits parameter. The relu attribute clamps negative results to 0. The sat attribute determines saturation behavior. The src_hi and src_lo parameters correspond to operands a and b in the PTX ISA, respectively.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f32x2.to.bf16x2'
_ODS_REGIONS = (0, True)
src_hi() _ods_ir
src_lo() _ods_ir
rbits() _ods_ir
rnd() _ods_ir
sat() _ods_ir
relu() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f32x2_to_bf16x2(dst, src_hi, src_lo, rbits, *, rnd=None, sat=None, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToF4x2Op(dst, a, b, dstTy, *, relu=None, loc=None, ip=None)

Bases: _ods_ir

This Op converts each of the given float inputs to the specified fp4 type. The result dst is returned as an i8 type where the converted values are packed such that the value converted from a is stored in the upper 4 bits of dst and the value converted from b is stored in the lower 4 bits of dst. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f32x2.to.f4x2'
_ODS_REGIONS = (0, True)
a() _ods_ir
b() _ods_ir
relu() _ods_ir
dstTy() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f32x2_to_f4x2(dst, a, b, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToF6x2Op(dst, a, b, dstTy, *, relu=None, loc=None, ip=None)

Bases: _ods_ir

This Op converts each of the given float inputs to the specified fp6 type. The result dst is represented either as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values are packed such that the value converted from a is stored in the upper 8 bits of dst with 2 MSB bits padded with zeros and the value converted from b is stored in the lower 8 bits of dst with 2 MSB bits padded with zeros. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f32x2.to.f6x2'
_ODS_REGIONS = (0, True)
a() _ods_ir
b() _ods_ir
relu() _ods_ir
dstTy() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f32x2_to_f6x2(dst, a, b, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToF8x2Op(dst, a, b, dstTy, *, rnd=None, sat=None, relu=None, loc=None, ip=None)

Bases: _ods_ir

This Op converts each of the given float inputs to the specified fp8 type. The result dst is represented as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values are packed such that the value converted from a is stored in the upper 8 bits of dst and the value converted from b is stored in the lower 8 bits of dst. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The rnd and sat attributes specify the rounding and saturation modes respectively. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f32x2.to.f8x2'
_ODS_REGIONS = (0, True)
a() _ods_ir
b() _ods_ir
rnd() _ods_ir
sat() _ods_ir
relu() _ods_ir
dstTy() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f32x2_to_f8x2(dst, a, b, dst_ty, *, rnd=None, sat=None, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF32x2ToF16x2Op(dst, src_hi, src_lo, rbits, *, rnd=None, sat=None, relu=None, loc=None, ip=None)

Bases: _ods_ir

Converts two F32 values to packed f16x2 format using stochastic rounding (.rs) mode with randomness provided by the rbits parameter. The relu attribute clamps negative results to 0. The sat attribute determines saturation behavior. The src_hi and src_lo parameters correspond to operands a and b in the PTX ISA, respectively.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f32x2.to.f16x2'
_ODS_REGIONS = (0, True)
src_hi() _ods_ir
src_lo() _ods_ir
rbits() _ods_ir
rnd() _ods_ir
sat() _ods_ir
relu() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f32x2_to_f16x2(dst, src_hi, src_lo, rbits, *, rnd=None, sat=None, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF32x4ToF4x4Op(dst, src, rbits, dstTy, *, relu=None, loc=None, ip=None)

Bases: _ods_ir

Converts a vector<4xf32> to packed f4x4 format using stochastic rounding (.rs) mode with SATFINITE saturation. Randomness is provided by the rbits parameter. The dstTy attribute specifies the target floating-point format. The relu attribute clamps negative results to 0.

Note: These operations always use RS rounding mode and SATFINITE saturation mode.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f32x4.to.f4x4'
_ODS_REGIONS = (0, True)
src() _ods_ir
rbits() _ods_ir
relu() _ods_ir
dstTy() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f32x4_to_f4x4(dst, src, rbits, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF32x4ToF6x4Op(dst, src, rbits, dstTy, *, relu=None, loc=None, ip=None)

Bases: _ods_ir

Converts a vector<4xf32> to packed f6x4 format using stochastic rounding (.rs) mode with SATFINITE saturation. Randomness is provided by the rbits parameter. The dstTy attribute specifies the target floating-point format. The relu attribute clamps negative results to 0.

Note: These operations always use RS rounding mode and SATFINITE saturation mode.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f32x4.to.f6x4'
_ODS_REGIONS = (0, True)
src() _ods_ir
rbits() _ods_ir
relu() _ods_ir
dstTy() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f32x4_to_f6x4(dst, src, rbits, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertF32x4ToF8x4Op(dst, src, rbits, dstTy, *, relu=None, loc=None, ip=None)

Bases: _ods_ir

Converts a vector<4xf32> to packed f8x4 format using stochastic rounding (.rs) mode with SATFINITE saturation. Randomness is provided by the rbits parameter. The dstTy attribute specifies the target floating-point format. The relu attribute clamps negative results to 0.

Note: These operations always use RS rounding mode and SATFINITE saturation mode.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.f32x4.to.f8x4'
_ODS_REGIONS = (0, True)
src() _ods_ir
rbits() _ods_ir
relu() _ods_ir
dstTy() _ods_ir
dst() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_f32x4_to_f8x4(dst, src, rbits, dst_ty, *, relu=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ConvertFloatToTF32Op(res, src, *, rnd=None, sat=None, relu=None, loc=None, ip=None)

Bases: _ods_ir

This Op converts the given f32 input to tf32. The result res is represented as an i32 type. The relu attribute, when set, lowers to the ‘.relu’ variant of the cvt instruction. The rnd and sat attributes specify the rounding and saturation modes respectively.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.convert.float.to.tf32'
_ODS_REGIONS = (0, True)
src() _ods_ir
rnd() _ods_ir
sat() _ods_ir
relu() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.convert_float_to_tf32(res, src, *, rnd=None, sat=None, relu=None, loc=None, ip=None) _ods_ir
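Example (a sketch with the same setup as above; per the description, the tf32 result is carried in an i32, and arith.constant is assumed to accept a Python float):

from mlir.ir import F32Type, IntegerType
from mlir.dialects import arith, nvvm

f32 = F32Type.get()
i32 = IntegerType.get_signless(32)
x = arith.constant(f32, 1.5)
# Default rounding/saturation; rnd, sat, and relu may be passed as attributes.
t = nvvm.convert_float_to_tf32(i32, x)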
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkCommitGroupOp(*, loc=None, ip=None)

Bases: _ods_ir

This Op commits all prior initiated but uncommitted cp.async.bulk instructions into a cp.async.bulk-group.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.commit.group'
_ODS_REGIONS = (0, True)
mlir.dialects._nvvm_ops_gen.cp_async_bulk_commit_group(*, loc=None, ip=None) CpAsyncBulkCommitGroupOp
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkGlobalToSharedClusterOp(dstMem, srcMem, mbar, size, *, multicastMask=None, l2CacheHint=None, loc=None, ip=None)

Bases: _ods_ir

Initiates an asynchronous copy operation from global memory to cluster’s shared memory.

The multicastMask operand is optional. When it is present, the Op copies data from global memory to shared memory of multiple CTAs in the cluster. Operand multicastMask specifies the destination CTAs in the cluster such that each bit position in the 16-bit multicastMask operand corresponds to the nvvm.read.ptx.sreg.ctaid of the destination CTA.

The l2CacheHint operand is optional, and it is used to specify cache eviction policy that may be used during the memory access.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.shared.cluster.global'
_ODS_OPERAND_SEGMENTS = [1, 1, 1, 1, 0, 0]
_ODS_REGIONS = (0, True)
dstMem() _ods_ir
srcMem() _ods_ir
mbar() _ods_ir
size() _ods_ir
multicastMask() _ods_ir | None
l2CacheHint() _ods_ir | None
mlir.dialects._nvvm_ops_gen.cp_async_bulk_shared_cluster_global(dst_mem, src_mem, mbar, size, *, multicast_mask=None, l2_cache_hint=None, loc=None, ip=None) CpAsyncBulkGlobalToSharedClusterOp
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkPrefetchOp(srcMem, size, *, l2CacheHint=None, loc=None, ip=None)

Bases: _ods_ir

Initiates an asynchronous prefetch of data from the location specified by srcMem to the L2 cache.

The l2CacheHint operand is optional, and it is used to specify cache eviction policy that may be used during the memory access.

Example:

nvvm.cp.async.bulk.prefetch %src, %size : !llvm.ptr<1>

// with l2_cache_hint
nvvm.cp.async.bulk.prefetch %src, %size l2_cache_hint = %ch : !llvm.ptr<1>

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.prefetch'
_ODS_REGIONS = (0, True)
srcMem() _ods_ir
size() _ods_ir
l2CacheHint() _ods_ir | None
mlir.dialects._nvvm_ops_gen.cp_async_bulk_prefetch(src_mem, size, *, l2_cache_hint=None, loc=None, ip=None) CpAsyncBulkPrefetchOp
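The Python builder mirrors the MLIR example above. A sketch (src, size, and ch stand for pre-existing SSA values and are hypothetical):

from mlir.dialects import nvvm

nvvm.cp_async_bulk_prefetch(src, size)                    # no cache hint
nvvm.cp_async_bulk_prefetch(src, size, l2_cache_hint=ch)  # with cache hint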
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkSharedCTAToGlobalOp(dstMem, srcMem, size, *, l2CacheHint=None, byteMask=None, loc=None, ip=None)

Bases: _ods_ir

Initiates an asynchronous copy operation from Shared CTA memory to global memory. The 32-bit operand size specifies the amount of memory to be copied, in terms of number of bytes. size must be a multiple of 16. The l2CacheHint operand is optional, and it is used to specify cache eviction policy that may be used during the memory access. The byteMask operand is optional. The i-th bit in the 16-bit wide byteMask specifies whether the i-th byte of each 16-byte wide chunk of source data is copied to the destination. If the bit is set, the byte is copied.

Example:

nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size
    : !llvm.ptr<1>, !llvm.ptr<3>

// with l2_cache_hint
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size l2_cache_hint = %ch
    : !llvm.ptr<1>, !llvm.ptr<3>

// with byte_mask
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size byte_mask = %mask
    : !llvm.ptr<1>, !llvm.ptr<3>

// with both l2_cache_hint and byte_mask
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size l2_cache_hint = %ch byte_mask = %mask
    : !llvm.ptr<1>, !llvm.ptr<3>

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.global.shared.cta'
_ODS_OPERAND_SEGMENTS = [1, 1, 1, 0, 0]
_ODS_REGIONS = (0, True)
dstMem() _ods_ir
srcMem() _ods_ir
size() _ods_ir
l2CacheHint() _ods_ir | None
byteMask() _ods_ir | None
mlir.dialects._nvvm_ops_gen.cp_async_bulk_global_shared_cta(dst_mem, src_mem, size, *, l2_cache_hint=None, byte_mask=None, loc=None, ip=None) CpAsyncBulkSharedCTAToGlobalOp
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkSharedCTAToSharedClusterOp(dstMem, srcMem, mbar, size, *, loc=None, ip=None)

Bases: _ods_ir

Initiates an asynchronous copy operation from Shared CTA memory to Shared cluster memory.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.shared.cluster.shared.cta'
_ODS_REGIONS = (0, True)
dstMem() _ods_ir
srcMem() _ods_ir
mbar() _ods_ir
size() _ods_ir
mlir.dialects._nvvm_ops_gen.cp_async_bulk_shared_cluster_shared_cta(dst_mem, src_mem, mbar, size, *, loc=None, ip=None) CpAsyncBulkSharedCTAToSharedClusterOp
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkTensorGlobalToSharedClusterOp(dstMem, tmaDescriptor, coordinates, mbar, im2colOffsets, *, multicastMask=None, l2CacheHint=None, mode=None, isCTAOnly=None, group=None, predicate=None, loc=None, ip=None)

Bases: _ods_ir

Initiates an asynchronous copy operation on the tensor data from global memory to shared::cluster (or) shared::cta memory. This Op supports all the load modes specified in TMALoadMode.

The multicastMask operand is optional. When it is present, the Op copies data from global memory to shared memory of multiple CTAs in the cluster. Operand multicastMask specifies the destination CTAs in the cluster such that each bit position in the 16-bit multicastMask operand corresponds to the nvvm.read.ptx.sreg.ctaid of the destination CTA.

The l2CacheHint operand is optional, and it is used to specify cache eviction policy that may be used during the memory access.

When the isCTAOnly attribute is set to true, the destination is shared::cta only. Hence, multicastMask and CTAGroup are not applicable when isCTAOnly is true.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.tensor.shared.cluster.global'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (0, True)
dstMem() _ods_ir
tmaDescriptor() _ods_ir
coordinates() _ods_ir
mbar() _ods_ir
im2colOffsets() _ods_ir
multicastMask() _ods_ir | None
l2CacheHint() _ods_ir | None
predicate() _ods_ir | None
mode() _ods_ir
isCTAOnly() _ods_ir
group() _ods_ir | None
mlir.dialects._nvvm_ops_gen.cp_async_bulk_tensor_shared_cluster_global(dst_mem, tma_descriptor, coordinates, mbar, im2col_offsets, *, multicast_mask=None, l2_cache_hint=None, mode=None, is_cta_only=None, group=None, predicate=None, loc=None, ip=None) CpAsyncBulkTensorGlobalToSharedClusterOp
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkTensorPrefetchOp(tmaDescriptor, coordinates, im2colOffsets, *, mode=None, l2CacheHint=None, loc=None, ip=None)

Bases: _ods_ir

Initiates an asynchronous prefetch operation on the tensor data from global memory to L2 cache. This Op supports all the load modes specified in TMALoadMode.

The l2CacheHint operand is optional, and it is used to specify cache eviction policy that may be used during the memory access.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.tensor.prefetch'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (0, True)
tmaDescriptor() _ods_ir
coordinates() _ods_ir
im2colOffsets() _ods_ir
l2CacheHint() _ods_ir | None
mode() _ods_ir
mlir.dialects._nvvm_ops_gen.cp_async_bulk_tensor_prefetch(tma_descriptor, coordinates, im2col_offsets, *, mode=None, l2_cache_hint=None, loc=None, ip=None) CpAsyncBulkTensorPrefetchOp
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkTensorReduceOp(tmaDescriptor, srcMem, redKind, coordinates, *, mode=None, l2CacheHint=None, loc=None, ip=None)

Bases: _ods_ir

Initiates an asynchronous reduction operation of tensor data in global memory with tensor data in shared memory.

The mode attribute indicates whether the copy mode is tile or im2col. The redKind attribute specifies the reduction operation applied. The supported reduction operations are: {add, min, max, inc, dec, and, or, xor}

The l2CacheHint operand is optional, and it is used to specify cache eviction policy that may be used during the memory access.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.tensor.reduce'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (0, True)
tmaDescriptor() _ods_ir
srcMem() _ods_ir
coordinates() _ods_ir
l2CacheHint() _ods_ir | None
redKind() _ods_ir
mode() _ods_ir
mlir.dialects._nvvm_ops_gen.cp_async_bulk_tensor_reduce(tma_descriptor, src_mem, red_kind, coordinates, *, mode=None, l2_cache_hint=None, loc=None, ip=None) CpAsyncBulkTensorReduceOp
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkTensorSharedCTAToGlobalOp(tmaDescriptor, srcMem, coordinates, *, l2CacheHint=None, mode=None, predicate=None, loc=None, ip=None)

Bases: _ods_ir

Initiates an asynchronous copy of the tensor data from shared::cta memory to global memory. This Op supports all the store modes specified in TMAStoreMode.

The l2CacheHint operand is optional, and it is used to specify cache eviction policy that may be used during the memory access.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.tensor.global.shared.cta'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (0, True)
tmaDescriptor() _ods_ir
srcMem() _ods_ir
coordinates() _ods_ir
l2CacheHint() _ods_ir | None
predicate() _ods_ir | None
mode() _ods_ir
mlir.dialects._nvvm_ops_gen.cp_async_bulk_tensor_global_shared_cta(tma_descriptor, src_mem, coordinates, *, l2_cache_hint=None, mode=None, predicate=None, loc=None, ip=None) CpAsyncBulkTensorSharedCTAToGlobalOp
class mlir.dialects._nvvm_ops_gen.CpAsyncBulkWaitGroupOp(group, *, read=None, loc=None, ip=None)

Bases: _ods_ir

Op waits for completion of the most recent bulk async-groups.

The $group operand specifies that the op must wait until at most $group of the most recent bulk async-groups are pending. If $group is 0, the op waits until all the most recent bulk async-groups have completed.

The $read attribute indicates that the waiting has to be done until all the bulk async operations in the specified bulk async-group have completed reading from their source locations.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.bulk.wait_group'
_ODS_REGIONS = (0, True)
group() _ods_ir
read() bool
mlir.dialects._nvvm_ops_gen.cp_async_bulk_wait_group(group, *, read=None, loc=None, ip=None) CpAsyncBulkWaitGroupOp
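A typical pipeline commits outstanding bulk copies as one group and then drains it. Sketch (same setup as above; passing a Python int for the group attribute is an assumption about the generated builder):

from mlir.dialects import nvvm

nvvm.cp_async_bulk_commit_group()
# Wait until 0 bulk async-groups are pending, i.e. all have completed.
nvvm.cp_async_bulk_wait_group(0)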
class mlir.dialects._nvvm_ops_gen.CpAsyncCommitGroupOp(*, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.cp.async.commit.group'
_ODS_REGIONS = (0, True)
mlir.dialects._nvvm_ops_gen.cp_async_commit_group(*, loc=None, ip=None) CpAsyncCommitGroupOp
class mlir.dialects._nvvm_ops_gen.CpAsyncMBarrierArriveOp(addr, *, noinc=None, loc=None, ip=None)

Bases: _ods_ir

The cp.async.mbarrier.arrive Op makes the mbarrier object track all prior cp.async operations initiated by the executing thread. The addr operand specifies the address of the mbarrier object in generic or shared::cta address space. When it is generic, the underlying memory should fall within the shared::cta space; otherwise the behavior is undefined. The noinc attr impacts how the mbarrier’s state is updated.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.cp.async.mbarrier.arrive'
_ODS_REGIONS = (0, True)
addr() _ods_ir
noinc() _ods_ir
mlir.dialects._nvvm_ops_gen.cp_async_mbarrier_arrive(addr, *, noinc=None, loc=None, ip=None) CpAsyncMBarrierArriveOp
class mlir.dialects._nvvm_ops_gen.CpAsyncOp(dst, src, size, modifier, *, cpSize=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.cp.async.shared.global'
_ODS_REGIONS = (0, True)
dst() _ods_ir
src() _ods_ir
cpSize() _ods_ir | None
size() _ods_ir
modifier() _ods_ir
mlir.dialects._nvvm_ops_gen.cp_async_shared_global(dst, src, size, modifier, *, cp_size=None, loc=None, ip=None) CpAsyncOp
class mlir.dialects._nvvm_ops_gen.CpAsyncWaitGroupOp(n, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.cp.async.wait.group'
_ODS_REGIONS = (0, True)
n() _ods_ir
mlir.dialects._nvvm_ops_gen.cp_async_wait_group(n, *, loc=None, ip=None) CpAsyncWaitGroupOp
class mlir.dialects._nvvm_ops_gen.DotAccumulate2WayOp(res, a, a_type, b, b_type, c, b_hi, *, loc=None, ip=None)

Bases: _ods_ir

Performs a two-way 16-bit to 8-bit dot-product which is accumulated in a 32-bit result. Operand a is a vector of two 16-bit elements and operand b is a vector of four 8-bit elements, between which the dot product is computed.

The a_type and b_type attributes specify the type of the elements in a and b respectively. If a_type or b_type is s, then the elements in the corresponding vector are sign-extended to 32-bit before the dot product is computed. If a_type or b_type is u, then the elements in the corresponding vector are zero-extended to 32-bit instead.

The b_hi boolean attribute specifies which two bytes of b are used for the dot product. If b_hi is true, then the dot product is computed between a and elements at indices 2 and 3 of b. If b_hi is false, then the dot product is computed between a and elements at indices 0 and 1 of b.

Operand c is a 32-bit integer to which the result is accumulated. It is treated as holding a signed integer if any of a_type or b_type is signed.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.dot.accumulate.2way'
_ODS_REGIONS = (0, True)
a() _ods_ir
b() _ods_ir
c() _ods_ir
a_type() _ods_ir
b_type() _ods_ir
b_hi() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.dot_accumulate_2way(res, a, a_type, b, b_type, c, b_hi, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.DotAccumulate4WayOp(res, a, a_type, b, b_type, c, *, loc=None, ip=None)

Bases: _ods_ir

Performs a four-way byte dot-product which is accumulated in a 32-bit result. Operand a and b are vectors of 4 bytes between which the dot product is computed.

The a_type and b_type attributes specify the type of the elements in a and b respectively. If a_type or b_type is signed, then the elements in the corresponding vector are sign-extended to 32-bit before the dot product is computed. If a_type or b_type is unsigned, then the elements in the corresponding vector are zero-extended to 32-bit instead.

Operand c is a 32-bit integer to which the result is accumulated. It is treated as holding a signed integer if any of a_type or b_type is s8.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.dot.accumulate.4way'
_ODS_REGIONS = (0, True)
a() _ods_ir
b() _ods_ir
c() _ods_ir
a_type() _ods_ir
b_type() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.dot_accumulate_4way(res, a, a_type, b, b_type, c, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ElectSyncOp(pred, *, membermask=None, loc=None, ip=None)

Bases: _ods_ir

The elect.sync instruction elects one predicated active leader thread from among a set of threads specified in the membermask. When the membermask is not provided explicitly, a default value of 0xFFFFFFFF is used. The predicate result is set to True for the leader thread, and False for all other threads.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.elect.sync'
_ODS_REGIONS = (0, True)
membermask() _ods_ir | None
pred() _ods_ir
mlir.dialects._nvvm_ops_gen.elect_sync(pred, *, membermask=None, loc=None, ip=None) _ods_ir
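Example (a sketch with the same setup as above; the i1 predicate result type follows the description above):

from mlir.ir import IntegerType
from mlir.dialects import nvvm

i1 = IntegerType.get_signless(1)
# True for exactly one active thread of the warp (default mask 0xFFFFFFFF).
is_leader = nvvm.elect_sync(i1)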
class mlir.dialects._nvvm_ops_gen.EnvReg0Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg0'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg0(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg1Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg1'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg1(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg2Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg2'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg2(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg3Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg3'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg3(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg4Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg4'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg4(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg5Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg5'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg5(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg6Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg6'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg6(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg7Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg7'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg7(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg8Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg8'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg8(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg9Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg9'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg9(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg10Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg10'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg10(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg11Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg11'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg11(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg12Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg12'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg12(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg13Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg13'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg13(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg14Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg14'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg14(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg15Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg15'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg15(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg16Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg16'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg16(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg17Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg17'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg17(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg18Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg18'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg18(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg19Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg19'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg19(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg20Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg20'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg20(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg21Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg21'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg21(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg22Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg22'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg22(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg23Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg23'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg23(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg24Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg24'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg24(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg25Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg25'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg25(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg26Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg26'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg26(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg27Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg27'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg27(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg28Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg28'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg28(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg29Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg29'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg29(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg30Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg30'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg30(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.EnvReg31Op(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.envreg31'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_envreg31(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.Exit(*, loc=None, ip=None)

Bases: _ods_ir

Ends execution of a thread. For more information, see PTX ISA

OPERATION_NAME = 'nvvm.exit'
_ODS_REGIONS = (0, True)
mlir.dialects._nvvm_ops_gen.exit(*, loc=None, ip=None) Exit
class mlir.dialects._nvvm_ops_gen.FenceMbarrierInitOp(*, loc=None, ip=None)

Bases: _ods_ir

Fence operation that applies on the prior nvvm.mbarrier.init

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.fence.mbarrier.init'
_ODS_REGIONS = (0, True)
mlir.dialects._nvvm_ops_gen.fence_mbarrier_init(*, loc=None, ip=None) FenceMbarrierInitOp
class mlir.dialects._nvvm_ops_gen.FenceProxyAcquireOp(scope, addr, size, *, fromProxy=None, toProxy=None, loc=None, ip=None)

Bases: _ods_ir

fence.proxy.acquire is a uni-directional fence used to establish ordering between a prior memory access performed via the generic proxy and a subsequent memory access performed via the tensormap proxy.

The address operand addr and the operand size together specify the memory range [addr, addr+size) on which the ordering guarantees for the memory accesses across the proxies are provided. The only supported value for the size operand is 128, and it must be an immediate. Generic addressing is used unconditionally, and the address specified by the operand addr must fall within the .global state space; otherwise, the behavior is undefined.

For more information, see PTX ISA
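
A hedged sketch with the Python builder, assuming `addr` is an existing generic pointer into the .global state space and that the scope attribute spelling parses in your build (it is an assumption here):

```python
from mlir import ir
from mlir.dialects import arith
from mlir.dialects import _nvvm_ops_gen as nvvm

# The size operand must be the immediate value 128.
i32 = ir.IntegerType.get_signless(32)
size = arith.constant(i32, 128)
scope = ir.Attribute.parse("#nvvm.mem_scope<cta>")  # illustrative spelling
nvvm.fence_proxy_acquire(scope, addr, size)
```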

OPERATION_NAME = 'nvvm.fence.proxy.acquire'
_ODS_REGIONS = (0, True)
addr() _ods_ir
size() _ods_ir
scope() _ods_ir
fromProxy() _ods_ir
toProxy() _ods_ir
mlir.dialects._nvvm_ops_gen.fence_proxy_acquire(scope, addr, size, *, from_proxy=None, to_proxy=None, loc=None, ip=None) FenceProxyAcquireOp
class mlir.dialects._nvvm_ops_gen.FenceProxyOp(kind, *, space=None, loc=None, ip=None)

Bases: _ods_ir

Fence operation with proxy to establish an ordering between memory accesses that may happen through different proxies.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.fence.proxy'
_ODS_REGIONS = (0, True)
kind() _ods_ir
space() _ods_ir | None
mlir.dialects._nvvm_ops_gen.fence_proxy(kind, *, space=None, loc=None, ip=None) FenceProxyOp
class mlir.dialects._nvvm_ops_gen.FenceProxyReleaseOp(scope, *, fromProxy=None, toProxy=None, loc=None, ip=None)

Bases: _ods_ir

fence.proxy.release is a uni-directional fence used to establish ordering between a prior memory access performed via the generic proxy and a subsequent memory access performed via the tensormap proxy. The fence.proxy.release operation can form a release sequence that synchronizes with an acquire sequence containing the fence.proxy.acquire proxy fence operation.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.fence.proxy.release'
_ODS_REGIONS = (0, True)
scope() _ods_ir
fromProxy() _ods_ir
toProxy() _ods_ir
mlir.dialects._nvvm_ops_gen.fence_proxy_release(scope, *, from_proxy=None, to_proxy=None, loc=None, ip=None) FenceProxyReleaseOp
class mlir.dialects._nvvm_ops_gen.FenceScClusterOp(*, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.fence.sc.cluster'
_ODS_REGIONS = (0, True)
mlir.dialects._nvvm_ops_gen.fence_sc_cluster(*, loc=None, ip=None) FenceScClusterOp
class mlir.dialects._nvvm_ops_gen.GlobalTimerLoOp(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.globaltimer.lo'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_globaltimer_lo(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.GlobalTimerOp(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.globaltimer'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_globaltimer(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.GridDimXOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.nctaid.x'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nctaid_x(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.GridDimYOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.nctaid.y'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nctaid_y(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.GridDimZOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.nctaid.z'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nctaid_z(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.GridIdOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.gridid'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_gridid(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.GriddepcontrolOp(kind, *, loc=None, ip=None)

Bases: _ods_ir

If the $kind attribute is set to wait, it causes the executing thread to wait until all prerequisite grids in flight have completed and all the memory operations from the prerequisite grids are performed and made visible to the current grid.

When the $kind is launch_dependents, it signals that the dependents designated by the runtime system to react to this instruction can be scheduled as soon as all other CTAs in the grid have issued the same instruction or have completed.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.griddepcontrol'
_ODS_REGIONS = (0, True)
kind() _ods_ir
mlir.dialects._nvvm_ops_gen.griddepcontrol(kind, *, loc=None, ip=None) GriddepcontrolOp
class mlir.dialects._nvvm_ops_gen.InlinePtxOp(writeOnlyArgs, readOnlyArgs, readWriteArgs, ptxCode, *, predicate=None, loc=None, ip=None)

Bases: _ods_ir

This op allows using PTX directly within the NVVM dialect, while greatly simplifying llvm.inline_asm generation. It automatically handles register size selection and sets the correct read/write access for each operand. The operation leverages the BasicPtxBuilderInterface to abstract away low-level details of PTX assembly formatting.

The `predicate` attribute is used to specify a predicate for the PTX instruction.

Example 1: Read-only Parameters
```mlir
nvvm.inline_ptx "mbarrier.init.b64 [$0], $1;" (%barrier_gen, %count) : !llvm.ptr, i32

// Lowers to:
llvm.inline_asm has_side_effects asm_dialect = att
  "mbarrier.init.b64 [$0], $1;", "l,r" %arg0, %arg2 : (!llvm.ptr, i32) -> ()
```

Example 2: Read-only and Write-only Parameters
```mlir
%0 = nvvm.inline_ptx "ex2.approx.ftz.f32 $0, $1;" (%input) : f32 -> f32

// Lowers to:
%0 = llvm.inline_asm has_side_effects asm_dialect = att
  "ex2.approx.ftz.f32 $0, $1;", "=f,f" %arg0 : (f32) -> f32
```

Example 3: Predicate Usage
```mlir
nvvm.inline_ptx "mbarrier.init.b64 [$0], $1;" (%barrier_gen, %count),
  predicate = %pred : !llvm.ptr, i32, i1

// Lowers to:
llvm.inline_asm has_side_effects asm_dialect = att
  "@$2 mbarrier.init.b64 [$0], $1;", "l,r,b" %arg0, %arg2, %arg3
  : (!llvm.ptr, i32, i1) -> ()
```
OPERATION_NAME = 'nvvm.inline_ptx'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (0, True)
readOnlyArgs() _ods_ir
readWriteArgs() _ods_ir
predicate() _ods_ir | None
ptxCode() _ods_ir
writeOnlyArgs() _ods_ir
mlir.dialects._nvvm_ops_gen.inline_ptx(write_only_args, read_only_args, read_write_args, ptx_code, *, predicate=None, loc=None, ip=None) _ods_ir | _ods_ir | InlinePtxOp
class mlir.dialects._nvvm_ops_gen.LaneIdOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.laneid'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_laneid(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.LaneMaskEqOp(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.eq'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_eq(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.LaneMaskGeOp(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.ge'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_ge(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.LaneMaskGtOp(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.gt'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_gt(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.LaneMaskLeOp(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.le'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_le(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.LaneMaskLtOp(res, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.lanemask.lt'
_ODS_REGIONS = (0, True)
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_lanemask_lt(res, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.LdMatrixOp(res, ptr, num, layout, shape, eltType, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.ldmatrix'
_ODS_REGIONS = (0, True)
ptr() _ods_ir
num() _ods_ir
layout() _ods_ir
shape() _ods_ir
eltType() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.ldmatrix(res, ptr, num, layout, shape, elt_type, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.MBarrierArriveExpectTxOp(addr, txcount, *, predicate=None, loc=None, ip=None)

Bases: _ods_ir

The nvvm.mbarrier.arrive.expect_tx operation performs an expect-tx operation followed by an arrive-on operation on the mbarrier object. Uses the default .release.cta semantics. This release pattern establishes memory ordering for operations occurring in program order before this arrive instruction by making operations from the current thread visible to subsequent operations in other threads within the CTA. When other threads perform corresponding acquire operations (like ‘mbarrier.test.wait’), they synchronize with this release pattern.

This operation first performs an expect-tx operation with the specified transaction count, then performs an arrive-on operation with an implicit count of 1. The expect-tx operation increases the tx-count of the mbarrier object by the specified txcount value, so that the current phase additionally tracks the completion of the expected asynchronous transactions.

The operation takes the following operands:

  • addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.

  • txcount: An unsigned integer specifying the expected transaction count for the expect-tx operation. This represents the number of asynchronous transactions expected to complete before the barrier phase completes.

  • predicate: Optional predicate for conditional execution.

For more information, see PTX ISA
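
A hedged sketch with the Python builder, assuming `mbar` is an existing pointer to an initialized mbarrier object in shared memory:

```python
from mlir import ir
from mlir.dialects import arith
from mlir.dialects import _nvvm_ops_gen as nvvm

# Announce arrival and expect 4096 bytes of asynchronous transactions
# before the current phase can complete.
i32 = ir.IntegerType.get_signless(32)
txcount = arith.constant(i32, 4096)
nvvm.mbarrier_arrive_expect_tx(mbar, txcount)
```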

OPERATION_NAME = 'nvvm.mbarrier.arrive.expect_tx'
_ODS_REGIONS = (0, True)
addr() _ods_ir
txcount() _ods_ir
predicate() _ods_ir | None
mlir.dialects._nvvm_ops_gen.mbarrier_arrive_expect_tx(addr, txcount, *, predicate=None, loc=None, ip=None) MBarrierArriveExpectTxOp
class mlir.dialects._nvvm_ops_gen.MBarrierArriveNocompleteOp(res, addr, count, *, loc=None, ip=None)

Bases: _ods_ir

The nvvm.mbarrier.arrive.nocomplete operation performs an arrive-on operation on the mbarrier object with the guarantee that it will not cause the barrier to complete its current phase. Uses the default .release.cta semantics. This release pattern establishes memory ordering for operations occurring in program order before this arrive instruction by making operations from the current thread visible to subsequent operations in other threads within the CTA. When other threads perform corresponding acquire operations (like ‘mbarrier.test.wait’), they synchronize with this release pattern.

This operation causes the executing thread to signal its arrival at the barrier with a specified count, but ensures that the barrier phase will not complete as a result of this operation. The operation returns an opaque value that captures the phase of the mbarrier object prior to the arrive-on operation.

The operation takes the following operands:

  • addr: A pointer to the memory location of the mbarrier object. The addr must be a pointer to generic or shared::cta memory. When it is generic, the underlying address must be within the shared::cta memory space; otherwise the behavior is undefined.

  • count: Integer specifying the count argument to the arrive-on operation. Must be in the valid range as specified in the mbarrier object contents.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.mbarrier.arrive.nocomplete'
_ODS_REGIONS = (0, True)
addr() _ods_ir
count() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.mbarrier_arrive_nocomplete(res, addr, count, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.MBarrierArriveOp(res, addr, *, loc=None, ip=None)

Bases: _ods_ir

The nvvm.mbarrier.arrive operation performs an arrive-on operation on the mbarrier object at the specified address. Uses the default .release.cta semantics. This release pattern establishes memory ordering for operations occurring in program order before this arrive instruction by making operations from the current thread visible to subsequent operations in other threads within the CTA. When other threads perform corresponding acquire operations (like ‘mbarrier.test.wait’), they synchronize with this release pattern.

This operation causes the executing thread to signal its arrival at the barrier. The operation returns an opaque value that captures the phase of the mbarrier object prior to the arrive-on operation. The contents of this state value are implementation-specific.

The operation takes the following operand:

  • addr: A pointer to the memory location of the mbarrier object. The addr must be a pointer to generic or shared::cta memory. When it is generic, the underlying address must be within the shared::cta memory space; otherwise the behavior is undefined.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.mbarrier.arrive'
_ODS_REGIONS = (0, True)
addr() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.mbarrier_arrive(res, addr, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.MBarrierInitOp(addr, count, *, predicate=None, loc=None, ip=None)

Bases: _ods_ir

The nvvm.mbarrier.init operation initializes an mbarrier object at the specified memory location.

This operation initializes the mbarrier object with the following state:

  • Current phase: 0

  • Expected arrival count: count

  • Pending arrival count: count

  • Transaction count (tx-count): 0

The operation takes the following operands:

  • addr: A pointer to the memory location of the mbarrier object. The addr must be a pointer to generic or shared::cta memory. When it is generic, the underlying address must be within the shared::cta memory space; otherwise the behavior is undefined.

  • count: Integer specifying the number of threads that will participate in barrier synchronization. Must be in the range [1, 2²⁰ - 1].

  • predicate: Optional predicate for conditional execution.

For more information, see PTX ISA
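
A hedged end-to-end sketch of the init/arrive/test-wait flow with the Python builders. It assumes `mbar` is an existing pointer (generic or shared::cta) to storage reserved for the mbarrier object; the i64 state and i1 result types shown are assumptions about the dialect's conventions:

```python
from mlir import ir
from mlir.dialects import arith
from mlir.dialects import _nvvm_ops_gen as nvvm

i32 = ir.IntegerType.get_signless(32)
i64 = ir.IntegerType.get_signless(64)
i1 = ir.IntegerType.get_signless(1)
count = arith.constant(i32, 128)                 # 128 participating threads
nvvm.mbarrier_init(mbar, count)
state = nvvm.mbarrier_arrive(i64, mbar)          # signal arrival, capture phase
done = nvvm.mbarrier_test_wait(i1, mbar, state)  # non-blocking phase test
```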

OPERATION_NAME = 'nvvm.mbarrier.init'
_ODS_REGIONS = (0, True)
addr() _ods_ir
count() _ods_ir
predicate() _ods_ir | None
mlir.dialects._nvvm_ops_gen.mbarrier_init(addr, count, *, predicate=None, loc=None, ip=None) MBarrierInitOp
class mlir.dialects._nvvm_ops_gen.MBarrierInvalOp(addr, *, loc=None, ip=None)

Bases: _ods_ir

The nvvm.mbarrier.inval operation invalidates an mbarrier object at the specified memory location.

This operation marks the mbarrier object as invalid, making it safe to repurpose the memory location for other uses or to reinitialize it as a new mbarrier object. It is undefined behavior if the mbarrier object is already invalid.

The operation takes the following operand:

  • addr: A pointer to the memory location of the mbarrier object. The addr must be a pointer to generic or shared::cta memory. When it is generic, the underlying address must be within the shared::cta memory space; otherwise the behavior is undefined.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.mbarrier.inval'
_ODS_REGIONS = (0, True)
addr() _ods_ir
mlir.dialects._nvvm_ops_gen.mbarrier_inval(addr, *, loc=None, ip=None) MBarrierInvalOp
class mlir.dialects._nvvm_ops_gen.MBarrierTestWaitOp(res, addr, state, *, loc=None, ip=None)

Bases: _ods_ir

The nvvm.mbarrier.test.wait operation performs a non-blocking test for the completion of a specific phase of an mbarrier object. It uses the default .acquire.cta semantics. This acquire pattern establishes memory ordering for operations occurring in program order after this wait instruction by making operations from other threads in the CTA visible to subsequent operations in the current thread. When this wait completes, it synchronizes with the corresponding release pattern from the mbarrier.arrive operation, establishing memory ordering within the CTA.

This operation tests whether the mbarrier phase specified by the state operand has completed. It is a non-blocking instruction that immediately returns the completion status without suspending the executing thread.

The operation takes the following operands:

  • addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.

  • state: An opaque value returned by a previous mbarrier.arrive operation on the same mbarrier object during the current or immediately preceding phase.

The operation returns a boolean value indicating whether the specified phase has completed:

  • true: The immediately preceding phase has completed

  • false: The phase is still incomplete (current phase)

Memory ordering guarantees: When this wait returns true, the following ordering guarantees hold:

1. All memory accesses (except async operations) requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.

2. All cp.async operations requested prior to cp.async.mbarrier.arrive by participating CTA threads are visible to the executing thread.

3. All cp.async.bulk operations using the same mbarrier object requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.

4. Memory accesses requested after this wait are not visible to memory accesses performed prior to mbarrier.arrive by other participating threads.

5. No ordering guarantee exists for memory accesses by the same thread between mbarrier.arrive and this wait.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.mbarrier.test.wait'
_ODS_REGIONS = (0, True)
addr() _ods_ir
state() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.mbarrier_test_wait(res, addr, state, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.MBarrierTryWaitParityOp(addr, phase, ticks, *, loc=None, ip=None)

Bases: _ods_ir

The nvvm.mbarrier.try_wait.parity operation performs a potentially-blocking test for the completion of a specific phase of an mbarrier object using phase parity. It uses the default .acquire.cta semantics. This acquire pattern establishes memory ordering for operations occurring in program order after this wait instruction by making operations from other threads in the CTA visible to subsequent operations in the current thread. When this wait completes, it synchronizes with the corresponding release pattern from the mbarrier.arrive operation, establishing memory ordering within the CTA.

This operation waits for the completion of the mbarrier phase indicated by the phase parity. While it uses the underlying PTX mbarrier.try_wait.parity instruction, this MLIR operation generates a loop that enforces the test to complete before continuing execution, ensuring the barrier phase is actually completed rather than potentially timing out.

The operation takes the following operands:

  • addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.

  • phase: An integer specifying the phase parity (0 or 1). Even phases have parity 0; odd phases have parity 1.

  • ticks: An unsigned integer specifying the suspend time hint in nanoseconds. It may be used instead of the system-dependent time limit.

Memory ordering guarantees: When this wait returns true, the following ordering guarantees hold:

1. All memory accesses (except async operations) requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.

2. All cp.async operations requested prior to cp.async.mbarrier.arrive by participating CTA threads are visible to the executing thread.

3. All cp.async.bulk operations using the same mbarrier object requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.

4. Memory accesses requested after this wait are not visible to memory accesses performed prior to mbarrier.arrive by other participating threads.

5. No ordering guarantee exists for memory accesses by the same thread between mbarrier.arrive and this wait.

Implementation behavior: This operation generates a PTX loop that repeatedly calls the underlying mbarrier.try_wait.parity instruction until the barrier phase completes. Unlike the raw PTX instruction which may return without completion after a timeout, this MLIR operation guarantees completion by continuing to loop until the specified phase is reached.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.mbarrier.try_wait.parity'
_ODS_REGIONS = (0, True)
addr() _ods_ir
phase() _ods_ir
ticks() _ods_ir
mlir.dialects._nvvm_ops_gen.mbarrier_try_wait_parity(addr, phase, ticks, *, loc=None, ip=None) MBarrierTryWaitParityOp
class mlir.dialects._nvvm_ops_gen.MapaOp(res, a, b, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.mapa'
_ODS_REGIONS = (0, True)
a() _ods_ir
b() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.mapa(res, a, b, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.MatchSyncOp(res, thread_mask, val, kind, *, loc=None, ip=None)

Bases: _ods_ir

The match.sync op performs broadcast and compare of operand val across all non-exited threads in thread_mask and returns a mask depending on the kind and an optional predicate.

The matching operation kinds are:

  • any: Returns a mask corresponding to the non-exited threads in the thread_mask that have the same value of operand val.

  • all: Returns a mask and a predicate. If all non-exited threads in the thread_mask have the same value of operand val, the predicate is set to true and the mask corresponds to the non-exited threads in the thread_mask. Otherwise, the predicate is set to false and the mask is 0.

For more information, see PTX ISA
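
A hedged sketch with the Python builder, assuming `val` is an existing i32 value; the kind attribute spelling is illustrative:

```python
from mlir import ir
from mlir.dialects import arith
from mlir.dialects import _nvvm_ops_gen as nvvm

# With kind `any`, the result is the mask of non-exited threads in
# thread_mask that hold the same `val` as the executing thread.
i32 = ir.IntegerType.get_signless(32)
full_warp = arith.constant(i32, -1)  # 0xFFFFFFFF: all 32 lanes
kind = ir.Attribute.parse("#nvvm.match_sync_kind<any>")
peers = nvvm.match_sync(i32, full_warp, val, kind)
```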

OPERATION_NAME = 'nvvm.match.sync'
_ODS_REGIONS = (0, True)
thread_mask() _ods_ir
val() _ods_ir
kind() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.match_sync(res, thread_mask, val, kind, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.MembarOp(scope, *, loc=None, ip=None)

Bases: _ods_ir

membar operation guarantees that prior memory accesses requested by this thread are performed at the specified scope, before later memory operations requested by this thread following the membar instruction.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.memory.barrier'
_ODS_REGIONS = (0, True)
scope() _ods_ir
mlir.dialects._nvvm_ops_gen.memory_barrier(scope, *, loc=None, ip=None) MembarOp
class mlir.dialects._nvvm_ops_gen.MmaOp(res, shape, layoutA, layoutB, operandA, operandB, operandC, *, b1Op=None, intOverflowBehavior=None, multiplicandAPtxType=None, multiplicandBPtxType=None, loc=None, ip=None)

Bases: _ods_ir

The nvvm.mma.sync operation collectively performs the operation D = matmul(A, B) + C using all threads in a warp.

All the threads in the warp must execute the same mma.sync operation.

For each possible multiplicand PTX data type, there are one or more possible instruction shapes given as “mMnNkK”. The table below describes the possibilities as well as the types required for the operands. Note that the data type for C (the accumulator) and D (the result) can vary independently when there are multiple possibilities in the “C/D Type” column.

When an optional attribute cannot be immediately inferred from the types of the operands and the result during parsing or validation, an error will be raised.

b1Op is only relevant when the binary (b1) type is given to multiplicandDataType. It specifies how the multiply-and-accumulate is performed and is either xor_popc or and_popc. The default is xor_popc.

intOverflowBehavior is only relevant when the multiplicandType attribute is one of u8, s8, u4, s4; it describes how overflow is handled in the accumulator. When the attribute is satfinite, the accumulator values are clamped to the int32 range on overflow. This is the default behavior. Alternatively, the wrapped behavior can be specified, in which case overflow wraps from one end of the range to the other.

layoutA and layoutB are required and should generally be set to #nvvm.mma_layout<row> and #nvvm.mma_layout<col> respectively, but other combinations are possible for certain layouts according to the table below.

| A/B Type | Shape      | ALayout | BLayout | A Type   | B Type   | C/D Type           |
|----------|------------|---------|---------|----------|----------|--------------------|
| f64      | .m8n8k4    | row     | col     | 1x f64   | 1x f64   | 2x f64             |
| f16      | .m8n8k4    | row/col | row/col | 2x f16x2 | 2x f16x2 | 4x f16x2 or 8x f32 |
|          | .m16n8k8   | row     | col     | 2x f16x2 | 1x f16x2 | 2x f16x2 or 4x f32 |
|          | .m16n8k16  | row     | col     | 4x f16x2 | 2x f16x2 | 2x f16x2 or 4x f32 |
| bf16     | .m16n8k8   | row     | col     | 2x i32   | 1x i32   | 4x f32             |
|          | .m16n8k16  | row     | col     | 4x i32   | 2x i32   | 4x f32             |
| tf32     | .m16n8k4   | row     | col     | 2x i32   | 1x i32   | 4x f32             |
|          | .m16n8k8   | row     | col     | 4x i32   | 2x i32   | 2x f16x2 or 4x f32 |
| u8/s8    | .m8n8k16   | row     | col     | 1x i32   | 1x i32   | 2x i32             |
|          | .m16n8k16  | row     | col     | 2x i32   | 1x i32   | 4x i32             |
|          | .m16n8k32  | row     | col     | 4x i32   | 2x i32   | 4x i32             |
| u4/s4    | .m8n8k32   | row     | col     | 1x i32   | 1x i32   | 2x i32             |
|          | .m16n8k32  | row     | col     | 2x i32   | 1x i32   | 4x i32             |
|          | .m16n8k64  | row     | col     | 4x i32   | 2x i32   | 4x i32             |
| b1       | .m8n8k128  | row     | col     | 1x i32   | 1x i32   | 2x i32             |
|          | .m16n8k128 | row     | col     | 2x i32   | 1x i32   | 4x i32             |

Example:

%128 = nvvm.mma.sync A[%120, %121, %122, %123]
                     B[%124, %125]
                     C[%126, %127]
                     {layoutA = #nvvm.mma_layout<row>,
                      layoutB = #nvvm.mma_layout<col>,
                      shape = {k = 16 : i32, m = 16 : i32, n = 8 : i32}}
    : (vector<2xf16>, vector<2xf16>, vector<2xf16>)
       -> !llvm.struct<(vector<2xf16>, vector<2xf16>)>
OPERATION_NAME = 'nvvm.mma.sync'
_ODS_OPERAND_SEGMENTS
_ODS_REGIONS = (0, True)
operandA() _ods_ir
operandB() _ods_ir
operandC() _ods_ir
shape() _ods_ir
b1Op() _ods_ir | None
intOverflowBehavior() _ods_ir | None
layoutA() _ods_ir
layoutB() _ods_ir
multiplicandAPtxType() _ods_ir | None
multiplicandBPtxType() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.mma_sync(res, shape, layout_a, layout_b, operand_a, operand_b, operand_c, *, b1_op=None, int_overflow_behavior=None, multiplicand_a_ptx_type=None, multiplicand_b_ptx_type=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.NanosleepOp(duration, *, loc=None, ip=None)

Bases: _ods_ir

The op suspends the thread for a sleep duration approximately equal to the delay $duration, specified in nanoseconds.

The sleep duration is approximate but guaranteed to be in the interval [0, 2*t], where t is the requested duration. The maximum sleep duration is 1 millisecond. The implementation may reduce the sleep duration for individual threads within a warp such that all sleeping threads in the warp wake up together.

For more information, see PTX ISA
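
A minimal sketch with the Python builder, assuming an active Context, Location, and insertion point:

```python
from mlir import ir
from mlir.dialects import arith
from mlir.dialects import _nvvm_ops_gen as nvvm

# Request a sleep of roughly 1000 ns; the actual duration lies in
# [0, 2 * 1000] ns.
i32 = ir.IntegerType.get_signless(32)
t = arith.constant(i32, 1000)
nvvm.nanosleep(t)
```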

OPERATION_NAME = 'nvvm.nanosleep'
_ODS_REGIONS = (0, True)
duration() _ods_ir
mlir.dialects._nvvm_ops_gen.nanosleep(duration, *, loc=None, ip=None) NanosleepOp
class mlir.dialects._nvvm_ops_gen.PMEventOp(*, maskedEventId=None, eventId=None, loc=None, ip=None)

Bases: _ods_ir

Triggers one or more of a fixed number of performance monitor events, with event index or mask specified by immediate operand.

Without mask it triggers a single performance monitor event indexed by immediate operand a, in the range 0..15.

With mask it triggers one or more of the performance monitor events. Each bit in the 16-bit immediate operand controls an event.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.pmevent'
_ODS_REGIONS = (0, True)
maskedEventId() _ods_ir | None
eventId() _ods_ir | None
mlir.dialects._nvvm_ops_gen.pmevent(*, masked_event_id=None, event_id=None, loc=None, ip=None) PMEventOp
class mlir.dialects._nvvm_ops_gen.PrefetchOp(addr, *, cacheLevel=None, evictPriority=None, predicate=None, tensormap=None, uniform=None, in_param_space=None, loc=None, ip=None)

Bases: _ods_ir

Prefetches the cache line containing the address given by addr. The operand may be a global, local, or generic pointer. When tensormap is specified, the operand may instead be a constant or generic pointer. If the address maps to shared memory, the operation has no effect.

At most one of cacheLevel or tensormap may be present. The cacheLevel attribute selects the target cache level. When combined with uniform, the prefetch is performed to the uniform cache, in which case addr must be a generic pointer.

When tensormap is used, the line containing addr is brought from the constant or parameter state space for later use by cp.async.bulk.tensor. If in_param_space is specified, the generic pointer is interpreted as referring to the parameter state space.

uniform can be specified after the cacheLevel to indicate that the prefetch is performed to the specified uniform cache level. If uniform is specified, addr must be a generic address pointer and no operation is performed if addr maps to a const, local, or shared memory location.

The evictPriority attribute is optional and specifies the cache eviction priority when cacheLevel is L2.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.prefetch'
_ODS_REGIONS = (0, True)
addr() _ods_ir
predicate() _ods_ir | None
cacheLevel() _ods_ir | None
evictPriority() _ods_ir | None
tensormap() bool
uniform() bool
in_param_space() bool
mlir.dialects._nvvm_ops_gen.prefetch(addr, *, cache_level=None, evict_priority=None, predicate=None, tensormap=None, uniform=None, in_param_space=None, loc=None, ip=None) PrefetchOp
class mlir.dialects._nvvm_ops_gen.RcpApproxFtzF32Op(res, arg, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.rcp.approx.ftz.f'
_ODS_REGIONS = (0, True)
arg() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.rcp_approx_ftz_f(res, arg, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ReduxOp(res, val, kind, mask_and_clamp, *, abs=None, nan=None, loc=None, ip=None)

Bases: _ods_ir

redux.sync performs a reduction operation of the specified kind on the 32-bit source register across all non-exited threads in the membermask.

The abs and nan attributes can be used when the input type is f32: the abs attribute causes the absolute value of the input to be used in the reduction operation, and the nan attribute causes the reduction to return NaN if any of the inputs from participating threads is NaN.

For more information, see PTX ISA
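
A hedged sketch with the Python builder, assuming `val` is an existing i32 value; the kind attribute spelling is illustrative:

```python
from mlir import ir
from mlir.dialects import arith
from mlir.dialects import _nvvm_ops_gen as nvvm

# Sum `val` across all non-exited lanes of the warp.
i32 = ir.IntegerType.get_signless(32)
membermask = arith.constant(i32, -1)  # 0xFFFFFFFF: all lanes
kind = ir.Attribute.parse("#nvvm.redux_kind<add>")
total = nvvm.redux_sync(i32, val, kind, membermask)
```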

OPERATION_NAME = 'nvvm.redux.sync'
_ODS_REGIONS = (0, True)
val() _ods_ir
mask_and_clamp() _ods_ir
kind() _ods_ir
abs() _ods_ir
nan() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.redux_sync(res, val, kind, mask_and_clamp, *, abs=None, nan=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.SetMaxRegisterOp(regCount, action, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.setmaxregister'
_ODS_REGIONS = (0, True)
regCount() _ods_ir
action() _ods_ir
mlir.dialects._nvvm_ops_gen.setmaxregister(reg_count, action, *, loc=None, ip=None) SetMaxRegisterOp
class mlir.dialects._nvvm_ops_gen.ShflOp(res, thread_mask, val, offset, mask_and_clamp, kind, *, return_value_and_is_valid=None, loc=None, ip=None)

Bases: _ods_ir

The shfl.sync Op implements data shuffle within threads of a warp. The thread_mask denotes the threads participating in the Op where the bit position corresponds to a particular thread’s laneid. The offset specifies a source lane or source lane offset (depending on kind). The val is the input value to be copied from the source. The mask_and_clamp contains two packed values specifying a mask for logically splitting warps into sub-segments and an upper bound for clamping the source lane index.

The return_value_and_is_valid unit attribute can be specified to indicate that the return value is a two-element struct, where the first element is the result value and the second element is a predicate indicating if the computed source lane index is valid.

For more information, see PTX ISA
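
A hedged sketch with the Python builder, assuming `val` is an existing f32 value; the kind attribute spelling is illustrative:

```python
from mlir import ir
from mlir.dialects import arith
from mlir.dialects import _nvvm_ops_gen as nvvm

# Full-warp butterfly (XOR) shuffle with offset 16. For a whole,
# unsegmented warp, a mask_and_clamp of 31 (warpSize - 1) is typical.
i32 = ir.IntegerType.get_signless(32)
f32 = ir.F32Type.get()
thread_mask = arith.constant(i32, -1)  # 0xFFFFFFFF: all lanes participate
offset = arith.constant(i32, 16)       # XOR lane offset
mask_and_clamp = arith.constant(i32, 31)
kind = ir.Attribute.parse("#nvvm.shfl_kind<bfly>")
res = nvvm.shfl_sync(f32, thread_mask, val, offset, mask_and_clamp, kind)
```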

OPERATION_NAME = 'nvvm.shfl.sync'
_ODS_REGIONS = (0, True)
thread_mask() _ods_ir
val() _ods_ir
offset() _ods_ir
mask_and_clamp() _ods_ir
kind() _ods_ir
return_value_and_is_valid() bool
res() _ods_ir
mlir.dialects._nvvm_ops_gen.shfl_sync(res, thread_mask, val, offset, mask_and_clamp, kind, *, return_value_and_is_valid=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.SmDimOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.nsmid'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nsmid(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.SmIdOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.smid'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_smid(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.StMatrixOp(ptr, sources, layout, shape, eltType, *, loc=None, ip=None)

Bases: _ods_ir

Collectively store one or more matrices across all threads in a warp to the location indicated by the address operand $ptr in shared memory.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.stmatrix'
_ODS_REGIONS = (0, True)
ptr() _ods_ir
sources() _ods_ir
layout() _ods_ir
shape() _ods_ir
eltType() _ods_ir
mlir.dialects._nvvm_ops_gen.stmatrix(ptr, sources, layout, shape, elt_type, *, loc=None, ip=None) StMatrixOp
class mlir.dialects._nvvm_ops_gen.SyncWarpOp(mask, *, loc=None, ip=None)

Bases: _ods_ir

The nvvm.bar.warp.sync operation performs barrier synchronization for threads within a warp.

This operation causes the executing thread to wait until all threads corresponding to the mask operand have executed a bar.warp.sync with the same mask value before resuming execution.

The mask operand specifies the threads participating in the barrier, where each bit position corresponds to the thread’s lane ID within the warp. Only threads with their corresponding bit set in the mask participate in the barrier synchronization.

Important constraints:

  • The behavior is undefined if the executing thread is not included in the mask (i.e., the bit corresponding to the thread’s lane ID is not set).

  • For compute capability sm_6x or below, all threads in the mask must execute the same bar.warp.sync instruction in convergence.

This operation also guarantees memory ordering among participating threads. Threads within the warp that wish to communicate via memory can store to memory, execute bar.warp.sync, and then safely read values stored by other threads in the warp.

For more information, see PTX ISA
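
A minimal sketch with the Python builder, assuming an active Context, Location, and insertion point:

```python
from mlir import ir
from mlir.dialects import arith
from mlir.dialects import _nvvm_ops_gen as nvvm

# Synchronize all 32 lanes of the warp; every participating thread must
# execute the barrier with the same mask value.
i32 = ir.IntegerType.get_signless(32)
mask = arith.constant(i32, -1)  # 0xFFFFFFFF
nvvm.bar_warp_sync(mask)
```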

OPERATION_NAME = 'nvvm.bar.warp.sync'
_ODS_REGIONS = (0, True)
mask() _ods_ir
mlir.dialects._nvvm_ops_gen.bar_warp_sync(mask, *, loc=None, ip=None) SyncWarpOp
class mlir.dialects._nvvm_ops_gen.Tcgen05AllocOp(addr, nCols, *, group=None, loc=None, ip=None)

Bases: _ods_ir

The tcgen05.alloc Op allocates tensor core memory for the amount specified by nCols and writes the destination address to the addr argument. The nCols operand specifies the number of columns to be allocated and it must be a power-of-two. For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.alloc'
_ODS_REGIONS = (0, True)
addr() _ods_ir
nCols() _ods_ir
group() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_alloc(addr, n_cols, *, group=None, loc=None, ip=None) Tcgen05AllocOp
class mlir.dialects._nvvm_ops_gen.Tcgen05CommitOp(addr, *, multicastMask=None, group=None, loc=None, ip=None)

Bases: _ods_ir

The tcgen05.commit makes the mbarrier object, specified by the operand addr, track the completion of all the prior async-tcgen05 operations initiated by the executing thread. The multicast variants allow signaling on the mbarrier objects of multiple CTAs within the cluster. Operand multicastMask, when present, specifies the destination CTAs in the cluster such that each bit position in the 16-bit multicastMask operand corresponds to the nvvm.read.ptx.sreg.ctaid of the destination CTA. For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.commit'
_ODS_REGIONS = (0, True)
addr() _ods_ir
multicastMask() _ods_ir | None
group() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_commit(addr, *, multicast_mask=None, group=None, loc=None, ip=None) Tcgen05CommitOp
class mlir.dialects._nvvm_ops_gen.Tcgen05CpOp(shape, taddr, smem_desc, *, group=None, multicast=None, srcFormat=None, loc=None, ip=None)

Bases: _ods_ir

Instruction tcgen05.cp initiates an asynchronous copy operation from shared memory to the location specified by the address operand taddr in the Tensor Memory. The 64-bit register operand smem_desc specifies the matrix descriptor representing the source matrix in the shared memory that needs to be copied.

Example:

nvvm.tcgen05.cp %taddr, %smem_desc {
  group = #nvvm.tcgen05_group<cta_2>,
  shape = #nvvm.tcgen05_cp_shape<shape_64x128b>,
  multicast = #nvvm.tcgen05_cp_multicast<warpx2_01_23>,
  srcFormat = #nvvm.tcgen05_cp_src_fmt<b6x16_p32>
}

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.cp'
_ODS_REGIONS = (0, True)
taddr() _ods_ir
smem_desc() _ods_ir
shape() _ods_ir
group() _ods_ir
multicast() _ods_ir
srcFormat() _ods_ir | None
mlir.dialects._nvvm_ops_gen.tcgen05_cp(shape, taddr, smem_desc, *, group=None, multicast=None, src_format=None, loc=None, ip=None) Tcgen05CpOp
class mlir.dialects._nvvm_ops_gen.Tcgen05DeallocOp(taddr, nCols, *, group=None, loc=None, ip=None)

Bases: _ods_ir

The tcgen05.dealloc Op de-allocates the tensor core memory specified by taddr, which must be from a previous tensor memory allocation. The nCols operand specifies the number of columns to be de-allocated, and it must be a power-of-two. For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.dealloc'
_ODS_REGIONS = (0, True)
taddr() _ods_ir
nCols() _ods_ir
group() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_dealloc(taddr, n_cols, *, group=None, loc=None, ip=None) Tcgen05DeallocOp
class mlir.dialects._nvvm_ops_gen.Tcgen05FenceOp(kind, *, loc=None, ip=None)

Bases: _ods_ir

The tcgen05.fence<before> orders all prior async tcgen05 operations with respect to the subsequent tcgen05 and execution ordering operations. The tcgen05.fence<after> orders all subsequent async tcgen05 operations with respect to the prior tcgen05 and execution ordering operations.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.fence'
_ODS_REGIONS = (0, True)
kind() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_fence(kind, *, loc=None, ip=None) Tcgen05FenceOp
class mlir.dialects._nvvm_ops_gen.Tcgen05LdOp(res, shape, tmemAddr, *, pack=None, offset=None, loc=None, ip=None)

Bases: _ods_ir

Instruction tcgen05.ld asynchronously loads data from the Tensor Memory at the location specified by the 32-bit address operand tmemAddr into the destination register res, collectively across all threads of the warps.

The shape and the num attribute together determine the total dimension of the data which is loaded from the Tensor Memory. The shape attribute indicates the base dimension of data to be accessed as described in the Data Movement Shape. The num attribute indicates the repeat factor on the base dimension, resulting in the total dimension of the data that is accessed.

The shape 16x32bx2 performs two accesses into Tensor Memory of the shape 16x32b. The base address of the first access is specified by tmemAddr and the base address of the second access is specified by tmemAddr + offset, where offset is an immediate argument.

The unit attribute pack can be used to pack two 16-bit elements from adjacent columns into a single 32-bit element during the load.

The following table describes the size of the vector for various combinations of num and shape attributes:

|=====================================================================|
| num/shape      |     16x32bx2/16x64b/32x32b |  16x128b   | 16x256b  |
|=====================================================================|
| x1             |          1                 |    2       |    4     |
| x2             |          2                 |    4       |    8     |
| x4             |          4                 |    8       |    16    |
| x8             |          8                 |    16      |    32    |
| x16            |          16                |    32      |    64    |
| x32            |          32                |    64      |    128   |
| x64            |          64                |    128     |    NA    |
| x128           |          128               |    NA      |    NA    |
|=====================================================================|

Example:

nvvm.tcgen05.ld %tmemAddr, %offset pack {
  shape = #nvvm.tcgen05_ldst_shape<shape_16x32bx2>,
} : <2xi32>

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.ld'
_ODS_REGIONS = (0, True)
tmemAddr() _ods_ir
offset() _ods_ir | None
pack() bool
shape() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_ld(res, shape, tmem_addr, *, pack=None, offset=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.Tcgen05MmaSmemDescOp(res, startAddr, leadingDimOffset, strideDimOffset, baseOffset, leadingDimMode, swizzleMode, *, loc=None, ip=None)

Bases: _ods_ir

The nvvm.tcgen05_mma_smem_desc constructs a Shared Memory descriptor for tcgen05.mma. This descriptor is a 64-bit value which describes the properties of multiplicand matrix in shared memory including its location in the shared memory of the current CTA.

+-----------+------+------------------------------------------------------+
| Bit-field | Size | Description                                          |
+-----------+------+------------------------------------------------------+
| 0-13      | 14   | Matrix start address                                 |
| 14-15     | 2    | Reserved                                             |
| 16-29     | 14   | Leading dim relative-offset (or) absolute-address    |
| 30-31     | 2    | Reserved                                             |
| 32-45     | 14   | Stride dimension byte offset                         |
| 46-48     | 3    | Fixed constant value of 0b001                        |
| 49-51     | 3    | Matrix base offset                                   |
| 52        | 1    | Leading dimension stride mode:                       |
|           |      |   0: byte offset relative                            |
|           |      |   1: byte address absolute                           |
| 53-60     | 8    | Fixed constant value of 0b00000000                   |
| 61-63     | 3    | Swizzling mode:                                      |
|           |      |   0: No swizzling                                    |
|           |      |   1: 128-Byte with 32B atomic swizzling              |
|           |      |   2: 128-Byte swizzling                              |
|           |      |   4: 64-Byte swizzling                               |
|           |      |   6: 32-Byte swizzling                               |
|           |      |   (Values 3, 5 and 7 are invalid)                    |
+-----------+------+------------------------------------------------------+

Example:

%desc = nvvm.tcgen05.mma_smem_desc (%startAddr, %leadingDimOffset, %strideDimOffset,
                                    %baseOffset, %leadingDimMode, %swizzleMode) : (i32, i32, i32, i8, i1, i8) -> i64

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.mma_smem_desc'
_ODS_REGIONS = (0, True)
startAddr() _ods_ir
leadingDimOffset() _ods_ir
strideDimOffset() _ods_ir
baseOffset() _ods_ir
leadingDimMode() _ods_ir
swizzleMode() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_mma_smem_desc(res, start_addr, leading_dim_offset, stride_dim_offset, base_offset, leading_dim_mode, swizzle_mode, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.Tcgen05RelinquishAllocPermitOp(*, group=None, loc=None, ip=None)

Bases: _ods_ir

The tcgen05.relinquish_alloc_permit Op specifies that the CTA of the executing thread is relinquishing the right to allocate Tensor Memory. So, it is illegal for a CTA to perform tcgen05.alloc after any of its constituent threads execute tcgen05.relinquish_alloc_permit. For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.relinquish_alloc_permit'
_ODS_REGIONS = (0, True)
group() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_relinquish_alloc_permit(*, group=None, loc=None, ip=None) Tcgen05RelinquishAllocPermitOp
class mlir.dialects._nvvm_ops_gen.Tcgen05ShiftOp(taddr, *, group=None, loc=None, ip=None)

Bases: _ods_ir

The tcgen05.shift is an asynchronous instruction which initiates the shifting of 32-byte elements downwards across all the rows, except the last, by one row. The operand taddr specifies the base address of the matrix in Tensor Memory whose rows must be down shifted.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.shift'
_ODS_REGIONS = (0, True)
taddr() _ods_ir
group() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_shift(taddr, *, group=None, loc=None, ip=None) Tcgen05ShiftOp
class mlir.dialects._nvvm_ops_gen.Tcgen05StOp(shape, tmemAddr, val, *, unpack=None, offset=None, loc=None, ip=None)

Bases: _ods_ir

Instruction tcgen05.st asynchronously stores data from the source register val into the Tensor Memory at the location specified by the 32-bit address operand tmemAddr, collectively across all threads of the warps.

The shape and the num attribute together determine the total dimension of the data which is stored to the Tensor Memory. The shape indicates the base dimension of data to be accessed. The num attribute indicates the repeat factor on the base dimension, resulting in the total dimension of the data that is accessed.

The shape 16x32bx2 performs two accesses into Tensor Memory of the shape 16x32b. The base address of the first access is specified by tmemAddr and the base address of the second access is specified by tmemAddr + offset, where offset is an immediate argument.

The unit attribute unpack can be used to unpack a 32-bit element in the register into two 16-bit elements and store them in adjacent columns.

The following table describes the size of the vector for various combinations of num and shape attributes:

|=====================================================================|
| num/shape      |     16x32bx2/16x64b/32x32b |  16x128b   | 16x256b  |
|=====================================================================|
| x1             |          1                 |    2       |    4     |
| x2             |          2                 |    4       |    8     |
| x4             |          4                 |    8       |    16    |
| x8             |          8                 |    16      |    32    |
| x16            |          16                |    32      |    64    |
| x32            |          32                |    64      |    128   |
| x64            |          64                |    128     |    NA    |
| x128           |          128               |    NA      |    NA    |
|=====================================================================|

Example:

nvvm.tcgen05.st %tmemAddr, %val, %offset unpack {
  shape = #nvvm.tcgen05_ldst_shape<shape_16x32bx2>
} : <2xi32>

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.st'
_ODS_REGIONS = (0, True)
tmemAddr() _ods_ir
val() _ods_ir
offset() _ods_ir | None
unpack() bool
shape() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_st(shape, tmem_addr, val, *, unpack=None, offset=None, loc=None, ip=None) Tcgen05StOp
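
The MLIR example above can be reproduced through the Python builder. The sketch below mirrors it; the function wrapper, argument types, and the !llvm.ptr<6> tensor-memory address space are illustrative assumptions:

from mlir import ir
from mlir.dialects import func, nvvm

with ir.Context(), ir.Location.unknown():
    module = ir.Module.create()
    with ir.InsertionPoint(module.body):
        tmem_ptr = ir.Type.parse("!llvm.ptr<6>")  # assumed Tensor Memory space
        i32 = ir.IntegerType.get_signless(32)
        vec2xi32 = ir.VectorType.get([2], i32)
        f = func.FuncOp("st_16x32bx2", ((tmem_ptr, vec2xi32, i32), ()))
        with ir.InsertionPoint(f.add_entry_block()):
            shape = ir.Attribute.parse(
                "#nvvm.tcgen05_ldst_shape<shape_16x32bx2>")
            taddr, val, offset = f.arguments
            # 16x32bx2 makes two accesses: at taddr and at taddr + offset.
            nvvm.tcgen05_st(shape, taddr, val, unpack=True, offset=offset)
            func.ReturnOp([])
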
class mlir.dialects._nvvm_ops_gen.Tcgen05WaitOp(kind, *, loc=None, ip=None)

Bases: _ods_ir

The tcgen05.wait<load> causes the executing thread to block until all prior tcgen05.ld operations issued by the executing thread have completed. Similarly, the tcgen05.wait<store> causes the executing thread to block until all prior tcgen05.st operations issued by the executing thread have completed. For more information, see PTX ISA

OPERATION_NAME = 'nvvm.tcgen05.wait'
_ODS_REGIONS = (0, True)
kind() _ods_ir
mlir.dialects._nvvm_ops_gen.tcgen05_wait(kind, *, loc=None, ip=None) Tcgen05WaitOp
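
Example (a sketch; the attribute mnemonic below is an assumption, not verified — check the enum definitions exported by mlir.dialects.nvvm for the exact spelling):

from mlir import ir
from mlir.dialects import nvvm

with ir.Context(), ir.Location.unknown():
    module = ir.Module.create()
    with ir.InsertionPoint(module.body):
        # Block until all prior tcgen05.ld ops of this thread complete.
        # The "#nvvm.tcgen05_wait_kind<load>" mnemonic is assumed.
        kind = ir.Attribute.parse("#nvvm.tcgen05_wait_kind<load>")
        nvvm.tcgen05_wait(kind)
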
class mlir.dialects._nvvm_ops_gen.ThreadIdXOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.tid.x'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_tid_x(res, *, range=None, loc=None, ip=None) _ods_ir
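
Example (a minimal sketch that materializes the x thread id as an i32 value; the same pattern applies to the y and z variants below, and the optional range attribute can bound the result):

from mlir import ir
from mlir.dialects import nvvm

with ir.Context(), ir.Location.unknown():
    module = ir.Module.create()
    with ir.InsertionPoint(module.body):
        i32 = ir.IntegerType.get_signless(32)
        tid_x = nvvm.read_ptx_sreg_tid_x(i32)  # reads %tid.x
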
class mlir.dialects._nvvm_ops_gen.ThreadIdYOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.tid.y'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_tid_y(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.ThreadIdZOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.tid.z'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_tid_z(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.VoteSyncOp(res, mask, pred, kind, *, loc=None, ip=None)

Bases: _ods_ir

The vote.sync op causes the executing thread to wait until all non-exited threads corresponding to membermask have executed vote.sync with the same qualifiers and the same membermask value before resuming execution.

The vote operation kinds are:

  • any: True if the source predicate is True for some thread in membermask.

  • all: True if the source predicate is True for all non-exited threads in membermask.

  • uni: True if the source predicate has the same value in all non-exited threads in membermask.

  • ballot: The destination result is a 32-bit integer into which the predicate from each thread in membermask is copied, at the bit position corresponding to that thread's lane id.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.vote.sync'
_ODS_REGIONS = (0, True)
mask() _ods_ir
pred() _ods_ir
kind() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.vote_sync(res, mask, pred, kind, *, loc=None, ip=None) _ods_ir
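
Example (a sketch of the ballot form; the VoteSyncKind enum spelling is an assumption — check the enums exported by mlir.dialects.nvvm):

from mlir import ir
from mlir.dialects import arith, func, nvvm

with ir.Context(), ir.Location.unknown():
    module = ir.Module.create()
    with ir.InsertionPoint(module.body):
        i1 = ir.IntegerType.get_signless(1)
        i32 = ir.IntegerType.get_signless(32)
        f = func.FuncOp("warp_ballot", ((i1,), (i32,)))
        with ir.InsertionPoint(f.add_entry_block()):
            mask = arith.constant(i32, -1)  # all 32 lanes participate
            # Enum member name assumed for illustration.
            res = nvvm.vote_sync(i32, mask, f.arguments[0],
                                 nvvm.VoteSyncKind.ballot)
            func.ReturnOp([res])
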
class mlir.dialects._nvvm_ops_gen.WMMALoadOp(res, ptr, stride, m, n, k, layout, eltype, frag, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.wmma.load'
_ODS_REGIONS = (0, True)
ptr() _ods_ir
stride() _ods_ir
m() _ods_ir
n() _ods_ir
k() _ods_ir
layout() _ods_ir
eltype() _ods_ir
frag() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.wmma_load(res, ptr, stride, m, n, k, layout, eltype, frag, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.WMMAMmaOp(res, m, n, k, layoutA, layoutB, eltypeA, eltypeB, args, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.wmma.mma'
_ODS_REGIONS = (0, True)
args() _ods_ir
m() _ods_ir
n() _ods_ir
k() _ods_ir
layoutA() _ods_ir
layoutB() _ods_ir
eltypeA() _ods_ir
eltypeB() _ods_ir
res() _ods_ir
mlir.dialects._nvvm_ops_gen.wmma_mma(res, m, n, k, layout_a, layout_b, eltype_a, eltype_b, args, *, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.WMMAStoreOp(ptr, m, n, k, layout, eltype, args, stride, *, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.wmma.store'
_ODS_REGIONS = (0, True)
ptr() _ods_ir
args() _ods_ir
stride() _ods_ir
m() _ods_ir
n() _ods_ir
k() _ods_ir
layout() _ods_ir
eltype() _ods_ir
mlir.dialects._nvvm_ops_gen.wmma_store(ptr, m, n, k, layout, eltype, args, stride, *, loc=None, ip=None) WMMAStoreOp
class mlir.dialects._nvvm_ops_gen.WarpDimOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.nwarpid'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_nwarpid(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.WarpIdOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.warpid'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_warpid(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.WarpSizeOp(res, *, range=None, loc=None, ip=None)

Bases: _ods_ir

OPERATION_NAME = 'nvvm.read.ptx.sreg.warpsize'
_ODS_REGIONS = (0, True)
range() _ods_ir | None
res() _ods_ir
mlir.dialects._nvvm_ops_gen.read_ptx_sreg_warpsize(res, *, range=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.WgmmaFenceAlignedOp(*, loc=None, ip=None)

Bases: _ods_ir

Enforce an ordering of register accesses between warpgroup level matrix multiplication and other operations.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.wgmma.fence.aligned'
_ODS_REGIONS = (0, True)
mlir.dialects._nvvm_ops_gen.wgmma_fence_aligned(*, loc=None, ip=None) WgmmaFenceAlignedOp
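
Example (a minimal sketch; the op takes no operands):

from mlir import ir
from mlir.dialects import nvvm

with ir.Context(), ir.Location.unknown():
    module = ir.Module.create()
    with ir.InsertionPoint(module.body):
        nvvm.wgmma_fence_aligned()  # fence before the first wgmma.mma_async
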
class mlir.dialects._nvvm_ops_gen.WgmmaGroupSyncAlignedOp(*, loc=None, ip=None)

Bases: _ods_ir

Commits all prior uncommitted warpgroup level matrix multiplication operations.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.wgmma.commit.group.sync.aligned'
_ODS_REGIONS = (0, True)
mlir.dialects._nvvm_ops_gen.wgmma_commit_group_sync_aligned(*, loc=None, ip=None) WgmmaGroupSyncAlignedOp
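
Example (a sketch; assumes an active insertion point as in the fence example above):

nvvm.wgmma_commit_group_sync_aligned()  # commit prior wgmma ops as one group
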
class mlir.dialects._nvvm_ops_gen.WgmmaMmaAsyncOp(results_, inouts, descriptorA, descriptorB, shape, typeA, typeB, typeD, scaleD, scaleA, scaleB, layoutA, layoutB, *, satfinite=None, loc=None, ip=None)

Bases: _ods_ir

The warpgroup (128 threads) level matrix multiply and accumulate operation has either of the following forms, where matrix D is called the accumulator:

  • D = A * B + D

  • D = A * B, where the input from the accumulator D is disabled.

Supported shapes:

|--------------|--------------|------------|--------------|---------------|
|              |              |            |              |f16+=e4m3*e4m3 |
|              |              |            |              |f16+=e5m2*e5m2 |
|f32+=tf32*tf32|f16+=f16 *f16 | s32+=s8*s8 |s32 += b1 * b1|f16+=e5m2*e4m3 |
|              |f32+=f16 *f16 | s32+=u8*u8 |              |f16+=e4m3*e5m2 |
|              |f32+=bf16*bf16| s32+=s8*u8 |              |f32+=e4m3*e4m3 |
|              |              | s32+=u8*s8 |              |f32+=e5m2*e5m2 |
|              |              |            |              |f32+=e5m2*e4m3 |
|              |              |            |              |f32+=e4m3*e5m2 |
|--------------|--------------|------------|--------------|---------------|
|   .m64n8k8   |  .m64n8k16   | .m64n8k32  | .m64n8k256   | .m64n8k32     |
|   .m64n16k8  |  .m64n16k16  | .m64n16k32 | .m64n16k256  | .m64n16k32    |
|   .m64n24k8  |  .m64n24k16  | .m64n24k32 | .m64n24k256  | .m64n24k32    |
|   .m64n32k8  |  .m64n32k16  | .m64n32k32 | .m64n32k256  | .m64n32k32    |
|   .m64n40k8  |  .m64n40k16  | .m64n48k32 | .m64n48k256  | .m64n40k32    |
|   .m64n48k8  |  .m64n48k16  | .m64n64k32 | .m64n64k256  | .m64n48k32    |
|   .m64n56k8  |  .m64n56k16  | .m64n80k32 | .m64n80k256  | .m64n56k32    |
|   .m64n64k8  |  .m64n64k16  | .m64n96k32 | .m64n96k256  | .m64n64k32    |
|   .m64n72k8  |  .m64n72k16  | .m64n112k32| .m64n112k256 | .m64n72k32    |
|   .m64n80k8  |  .m64n80k16  | .m64n128k32| .m64n128k256 | .m64n80k32    |
|   .m64n88k8  |  .m64n88k16  | .m64n144k32| .m64n144k256 | .m64n88k32    |
|   .m64n96k8  |  .m64n96k16  | .m64n160k32| .m64n160k256 | .m64n96k32    |
|   .m64n104k8 |  .m64n104k16 | .m64n176k32| .m64n176k256 | .m64n104k32   |
|   .m64n112k8 |  .m64n112k16 | .m64n192k32| .m64n192k256 | .m64n112k32   |
|   .m64n120k8 |  .m64n120k16 | .m64n208k32| .m64n208k256 | .m64n120k32   |
|   .m64n128k8 |  .m64n128k16 | .m64n224k32| .m64n224k256 | .m64n128k32   |
|   .m64n136k8 |  .m64n136k16 | .m64n240k32| .m64n240k256 | .m64n136k32   |
|   .m64n144k8 |  .m64n144k16 | .m64n256k32| .m64n256k256 | .m64n144k32   |
|   .m64n152k8 |  .m64n152k16 |            |              | .m64n152k32   |
|   .m64n160k8 |  .m64n160k16 |            |              | .m64n160k32   |
|   .m64n168k8 |  .m64n168k16 |            |              | .m64n168k32   |
|   .m64n176k8 |  .m64n176k16 |            |              | .m64n176k32   |
|   .m64n184k8 |  .m64n184k16 |            |              | .m64n184k32   |
|   .m64n192k8 |  .m64n192k16 |            |              | .m64n192k32   |
|   .m64n200k8 |  .m64n200k16 |            |              | .m64n200k32   |
|   .m64n208k8 |  .m64n208k16 |            |              | .m64n208k32   |
|   .m64n216k8 |  .m64n216k16 |            |              | .m64n216k32   |
|   .m64n224k8 |  .m64n224k16 |            |              | .m64n224k32   |
|   .m64n232k8 |  .m64n232k16 |            |              | .m64n232k32   |
|   .m64n240k8 |  .m64n240k16 |            |              | .m64n240k32   |
|   .m64n248k8 |  .m64n248k16 |            |              | .m64n248k32   |
|   .m64n256k8 |  .m64n256k16 |            |              | .m64n256k32   |
|--------------|--------------|------------|--------------|---------------|

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.wgmma.mma_async'
_ODS_REGIONS = (0, True)
inouts() _ods_ir
descriptorA() _ods_ir
descriptorB() _ods_ir
shape() _ods_ir
typeA() _ods_ir
typeB() _ods_ir
typeD() _ods_ir
scaleD() _ods_ir
scaleA() _ods_ir
scaleB() _ods_ir
layoutA() _ods_ir
layoutB() _ods_ir
satfinite() _ods_ir | None
results_() _ods_ir
mlir.dialects._nvvm_ops_gen.wgmma_mma_async(results_, inouts, descriptor_a, descriptor_b, shape, type_a, type_b, type_d, scale_d, scale_a, scale_b, layout_a, layout_b, *, satfinite=None, loc=None, ip=None) _ods_ir
class mlir.dialects._nvvm_ops_gen.WgmmaWaitGroupSyncOp(group, *, loc=None, ip=None)

Bases: _ods_ir

Signals the completion of preceding warpgroup operations: the executing thread waits until at most group of the most recent committed wgmma-groups are still pending.

For more information, see PTX ISA

OPERATION_NAME = 'nvvm.wgmma.wait.group.sync.aligned'
_ODS_REGIONS = (0, True)
group() _ods_ir
mlir.dialects._nvvm_ops_gen.wgmma_wait_group_sync_aligned(group, *, loc=None, ip=None) WgmmaWaitGroupSyncOp
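
Example (a sketch assuming an active insertion point as in the examples above; group 0 waits until no committed wgmma-group is pending, which together with the fence and commit ops forms the usual wgmma synchronization sequence):

nvvm.wgmma_fence_aligned()              # order register accesses first
# ... emit nvvm.wgmma.mma_async operations here ...
nvvm.wgmma_commit_group_sync_aligned()  # commit them as one wgmma-group
nvvm.wgmma_wait_group_sync_aligned(0)   # block until every group completes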