mlir.dialects._amdgpu_ops_gen¶
Module Contents¶
- mlir.dialects._amdgpu_ops_gen._ods_ir¶
- class mlir.dialects._amdgpu_ops_gen._Dialect(descriptor: object)¶
Bases: _ods_ir
- DIALECT_NAMESPACE = 'amdgpu'¶
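The builders in this module are typically reached through the user-facing mlir.dialects.amdgpu module, which re-exports this generated module. A minimal setup sketch follows (assumptions: the mlir Python bindings are built with the AMDGPU dialect enabled; the func dialect is only used to give the example ops somewhere to live). Later sketches on this page assume a Context, Location, and InsertionPoint like the ones established here.
from mlir import ir
from mlir.dialects import amdgpu, func

with ir.Context(), ir.Location.unknown():
    module = ir.Module.create()
    with ir.InsertionPoint(module.body):
        # An empty kernel-like function to host the ops built by later sketches.
        fn = func.FuncOp("amdgpu_examples", ([], []))
        with ir.InsertionPoint(fn.add_entry_block()):
            amdgpu.lds_barrier()  # any builder documented below can be called here
            func.ReturnOp([])
    print(module)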
- class mlir.dialects._amdgpu_ops_gen.DPPOp(old, src, kind, *, permArgument=None, row_mask=None, bank_mask=None, bound_ctrl=None, results=None, loc=None, ip=None)¶
Bases: _ods_ir
This operation represents DPP functionality in a GPU program. DPP provides the following operations:
- Full crossbar in a group of four (quad_perm)
- Wavefront shift left by one lane (wave_shl)
- Wavefront shift right by one lane (wave_shr)
- Wavefront rotate right by one lane (wave_ror)
- Wavefront rotate left by one lane (wave_rol)
- Row shift left by 1–15 lanes (row_shl)
- Row shift right by 1–15 lanes (row_shr)
- Row rotate right by 1–15 lanes (row_ror)
- Reverse within a row (row_mirror)
- Reverse within a half-row (row_half_mirror)
- Broadcast the 15th lane of each row to the next row (row_bcast)
- Broadcast lane 31 to rows 2 and 3 (row_bcast)
- OPERATION_NAME = 'amdgpu.dpp'¶
- _ODS_REGIONS = (0, True)¶
- old() _ods_ir¶
- src() _ods_ir¶
- kind() _ods_ir¶
- permArgument() _ods_ir | None¶
- row_mask() _ods_ir¶
- bank_mask() _ods_ir¶
- bound_ctrl() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._amdgpu_ops_gen.dpp(old, src, kind, *, perm_argument=None, row_mask=None, bank_mask=None, bound_ctrl=None, results=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._amdgpu_ops_gen.ExtPackedFp8Op(res, source, index, *, loc=None, ip=None)¶
Bases: _ods_ir
Extend one or two 8-bit floats in source[index] to a 32-bit float or two floats and return them.
This rather unusual signature arises from the fact that AMD GPUs cannot easily work with sub 32-bit quantities, so the compiler intrinsics for extending 8-bit floats (which are, currently, the only way to work with this operation) take packed vectors of 4 such floats.
If the passed-in vector has fewer than four elements, or the input is scalar, the remaining values in the <4 x i8> will be filled with undefined values as needed.
- OPERATION_NAME = 'amdgpu.ext_packed_fp8'¶
- _ODS_REGIONS = (0, True)¶
- source() _ods_ir¶
- index() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.ext_packed_fp8(res, source, index, *, loc=None, ip=None) _ods_ir¶
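A hedged sketch of the ext_packed_fp8 builder: extend one byte of a packed <4 x f8E4M3FN> vector to f32. It assumes the Context/Location/module-body InsertionPoint from the setup sketch above, that ir.Float8E4M3FNType is available in the bindings, and that the generated builder accepts a plain Python int for the index attribute.
from mlir import ir
from mlir.dialects import amdgpu, func

f32 = ir.F32Type.get()
fp8 = ir.Float8E4M3FNType.get()
packed = ir.VectorType.get([4], fp8)

fn = func.FuncOp("ext_fp8_example", ([packed], [f32]))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    (src,) = entry.arguments
    # Extend element 0 of the packed fp8 vector; the first argument is the
    # result type, per the signature documented above.
    ext = amdgpu.ext_packed_fp8(f32, src, 0)
    func.ReturnOp([ext])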
- class mlir.dialects._amdgpu_ops_gen.FatRawBufferCastOp(source, *, validBytes=None, cacheSwizzleStride=None, boundsCheck=None, resetOffset=None, results=None, loc=None, ip=None)¶
Bases: _ods_ir
Wraps the memory pointed to by source as a raw buffer fat pointer, or, in LLVM terms, a ptr addrspace(7), returning a memref that has the same sizes and layout but the #amdgpu.address_space<fat_raw_buffer> address space.
This memref can be used with standard memref operations like memref.load, memref.store, and memref.atomicrmw, which will be lowered to the relevant buffer intrinsics. (vector.masked_load/store will work once there’s backend support for lowering them, and then this document will be updated.)
If validBytes is given, it is the number of bytes that will be valid as an offset to out. If it is not provided, this will be inferred from the size of the memref during lowering. This size is max_{d = 0 upto rank(source)} (sizes[d] * strides[d]) * sizeof(element type).
The flags of the buffer descriptor will be set up to enable raw usage - for example, stride = 0, add_tid = 0, and so on. The boundsCheck property determines whether bounds checking is enabled (on architectures where this can be controlled - that is, on RDNA chips).
If cacheSwizzleStride is provided, L1 cache swizzling will be enabled on architectures that support it. This swizzling, unlike the main swizzling mode (whose usage makes a buffer non-raw), does not affect index calculation, but does affect cache behavior. Mixing access between cache-swizzled raw buffers and other forms of memory access, like ordinary pointer loads or unswizzled buffer pointers, can cause incorrect behavior and must be avoided.
This operation preserves the sizes, strides, and offset of the input memref - they’ll be added in by memref.load later. However, if resetOffset is set, that offset will be added to the base pointer. If the value of the memref’s offset is not uniform (independent of the lane/thread ID), this will lead to substantially decreased performance due to the need for a waterfall loop on the base address of the buffer resource.
- OPERATION_NAME = 'amdgpu.fat_raw_buffer_cast'¶
- _ODS_OPERAND_SEGMENTS = [1, 0, 0]¶
- _ODS_REGIONS = (0, True)¶
- source() _ods_ir¶
- validBytes() _ods_ir | None¶
- cacheSwizzleStride() _ods_ir | None¶
- boundsCheck() _ods_ir¶
- resetOffset() bool¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._amdgpu_ops_gen.fat_raw_buffer_cast(source, *, valid_bytes=None, cache_swizzle_stride=None, bounds_check=None, reset_offset=None, results=None, loc=None, ip=None) _ods_ir¶
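A hedged sketch of fat_raw_buffer_cast: wrap a plain global memref as a fat raw buffer so that ordinary memref operations on it lower to buffer intrinsics. It assumes the setup sketch's Context/Location/InsertionPoint and that the result type is inferred when results is omitted (suggested by the optional results parameter).
from mlir import ir
from mlir.dialects import amdgpu, func

f32 = ir.F32Type.get()
src_ty = ir.MemRefType.get([1024], f32)

fn = func.FuncOp("buffer_cast_example", ([src_ty], []))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    (src,) = entry.arguments
    # Cast to the #amdgpu.address_space<fat_raw_buffer> space; bounds checking
    # stays enabled and the memref's offset is folded into the base pointer.
    # Standard memref.load/store/atomicrmw on `fat` then lower to buffer intrinsics.
    fat = amdgpu.fat_raw_buffer_cast(src, bounds_check=True, reset_offset=True)
    func.ReturnOp([])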
- class mlir.dialects._amdgpu_ops_gen.GatherToLDSOp(src, srcIndices, dst, dstIndices, transferType, *, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.gather_to_lds op is a wrapper around the global_load_lds instructions.
Operands:
- $src: global memory (including fat buffer) memref to read from.
- $srcIndices: indices into $src to read from for this thread.
- $dst: LDS memory memref to write to.
- $dstIndices: base indices into $dst to write to for the subgroup of this thread.
- $transferType: type of the data to be transferred by each thread. This is used to determine the size of the data to be transferred and the number of threads in the subgroup. The transfer type must be a scalar type or a vector type with a single element type.
The elements gathered by the subgroup will be written contiguously in order of lane ID starting at $dst[$dstIndices]. Byte-sized (ex. i8) or short-sized (ex. i16) types will be zero-padded/extended to 32 bits before being written. 96-bit types (ex. vector<3xf32>) will be zero-padded to 128 bits before being written. Only the offsets held by lane 0 are used.
The $dst, along with its indices, points to the memory location the subgroup of this thread will write to.
Note: only supported on gfx9 and gfx10.
- OPERATION_NAME = 'amdgpu.gather_to_lds'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- srcIndices() _ods_ir¶
- dst() _ods_ir¶
- dstIndices() _ods_ir¶
- transferType() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.gather_to_lds(src, src_indices, dst, dst_indices, transfer_type, *, loc=None, ip=None) GatherToLDSOp¶
- class mlir.dialects._amdgpu_ops_gen.LDSBarrierOp(*, loc=None, ip=None)¶
Bases: _ods_ir
amdgpu.lds_barrier is both a barrier (all workitems in a workgroup must reach the barrier before any of them may proceed past it) and a wait for all operations that affect the Local Data Store (LDS) issued from that workgroup to complete before the workgroup may continue. Since the LDS is per-workgroup memory, this barrier may be used, for example, to ensure all workitems have written data to LDS before any workitem attempts to read from it.
Note that lds_barrier does not force reads to or from global memory to complete before execution continues. Therefore, it should be used when operations on global memory can be issued far in advance of when their results are used (for example, by writing them to LDS).
WARNING: On architectures that do not support the BackOffBarrier feature (those which will implement this barrier by emitting inline assembly), use of this operation will impede the usability of memory watches (including breakpoints set on variables) when debugging.
- OPERATION_NAME = 'amdgpu.lds_barrier'¶
- _ODS_REGIONS = (0, True)¶
- mlir.dialects._amdgpu_ops_gen.lds_barrier(*, loc=None, ip=None) LDSBarrierOp¶
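Since the op has no operands or results, the builder sketch is a single call (assuming an active InsertionPoint inside a kernel body, as in the setup sketch):
from mlir.dialects import amdgpu

# ... workitems write their tiles to LDS above this point ...
amdgpu.lds_barrier()  # all prior LDS writes become visible to the whole workgroup
# ... workitems may now read each other's LDS data ...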
- class mlir.dialects._amdgpu_ops_gen.MFMAOp(m, n, k, sourceA, sourceB, destC, *, blocks=None, cbsz=None, abid=None, blgp=None, reducePrecision=None, negateA=None, negateB=None, negateC=None, results=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.mfma op is an MLIR wrapper around intrinsics for various mfma instructions in the CDNA architecture, which perform multiple outer products in order to allow fast matrix multiplication.
The wrapper will select an appropriate mfma instruction, if one is available, based on the provided m, k, n, and nBlks attributes, along with the types of the source and destination arguments.
For information on the layouts of the input and output matrices (which are stored in sourceA, sourceB, destC, and destD), see the CDNA ISA documentation.
The cbsz, abid, and blgp parameters control how the lanes of the wave are permuted when matrix data is being loaded: blgp can be any number of fixed permutations, cbsz specifies the log_2 of the number of chunks the lanes holding sourceA are split into, and abid selects one of those chunks.
Note, this wrapper allows specifying vector<4Kxi8> arguments to MFMA intrinsics that take an integer type of width 4K. For example, one can provide a vector<4xi8> as an argument to an MFMA instruction that logically takes 4 i8s but whose intrinsics are specified to take an i32. In these cases, the bytes in the vector will be concatenated in little-endian order (that is, v[0] will go to arg[7:0], v[1] to arg[15:8] and so on).
The negateA, negateB, and negateC flags are only supported for double-precision operations on gfx94x.
Example:
%0 = amdgpu.mfma 16x16x16 %matA * %matB + %matC : vector<4xf16>, vector<4xf16>, vector<4xf32>
%1 = amdgpu.mfma 32x32x1 %matD * %matE + %matF { abid = 1 : i32, cbsz = 1 : i32, blocks = 2 : i32 } blgp = bcast_second_32 : f32, f32, vector<32xf32>
- OPERATION_NAME = 'amdgpu.mfma'¶
- _ODS_REGIONS = (0, True)¶
- sourceA() _ods_ir¶
- sourceB() _ods_ir¶
- destC() _ods_ir¶
- m() _ods_ir¶
- n() _ods_ir¶
- k() _ods_ir¶
- blocks() _ods_ir¶
- cbsz() _ods_ir¶
- abid() _ods_ir¶
- blgp() _ods_ir¶
- reducePrecision() bool¶
- negateA() bool¶
- negateB() bool¶
- negateC() bool¶
- destD() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.mfma(m, n, k, source_a, source_b, dest_c, *, blocks=None, cbsz=None, abid=None, blgp=None, reduce_precision=None, negate_a=None, negate_b=None, negate_c=None, results=None, loc=None, ip=None) _ods_ir¶
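A hedged sketch of the mfma builder that mirrors the first assembly example above (16x16x16 with vector<4xf16> inputs and a vector<4xf32> accumulator). It assumes the setup sketch's Context/Location/InsertionPoint, that the integer attributes accept plain Python ints, and that the destination type is inferred from dest_c when results is omitted.
from mlir import ir
from mlir.dialects import amdgpu, func

f16 = ir.F16Type.get()
f32 = ir.F32Type.get()
vec_ab = ir.VectorType.get([4], f16)
vec_c = ir.VectorType.get([4], f32)

fn = func.FuncOp("mfma_example", ([vec_ab, vec_ab, vec_c], [vec_c]))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    mat_a, mat_b, mat_c = entry.arguments
    # m, n, k select the 16x16x16 f16 MFMA variant.
    d = amdgpu.mfma(16, 16, 16, mat_a, mat_b, mat_c)
    func.ReturnOp([d])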
- class mlir.dialects._amdgpu_ops_gen.MemoryCounterWaitOp(*, load=None, store=None, ds=None, exp=None, loc=None, ip=None)¶
Bases: _ods_ir
Wait for the specified counters to be less-than or equal-to the provided values before continuing.
Counters can lower to different instructions on different architectures, including clamping to some HW-supported max value or combining multiple counters into one.
- OPERATION_NAME = 'amdgpu.memory_counter_wait'¶
- _ODS_REGIONS = (0, True)¶
- load() _ods_ir | None¶
- store() _ods_ir | None¶
- ds() _ods_ir | None¶
- exp() _ods_ir | None¶
- mlir.dialects._amdgpu_ops_gen.memory_counter_wait(*, load=None, store=None, ds=None, exp=None, loc=None, ip=None) MemoryCounterWaitOp¶
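A hedged sketch (assuming an active InsertionPoint and that the optional counters are integer attributes the generated builder accepts as plain ints): wait until no loads or stores are outstanding, leaving the DS and export counters unconstrained.
from mlir.dialects import amdgpu

# Block until the load and store counters have drained to zero.
amdgpu.memory_counter_wait(load=0, store=0)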
- class mlir.dialects._amdgpu_ops_gen.PackedScaledTruncOp(res, source, scale, index, *, existing=None, loc=None, ip=None)¶
Bases: _ods_ir
Scale and round the inputs source (which is undefined if not specified) into the low or high word (bottom two or top two) elements of the returned vector, keeping the other two elements of existing unchanged if present (or undefined if it was not passed in).
The reason for this odd signature is that AMD GPUs cannot easily work with sub-registers, and so the conversion intrinsics take 32-bit wide packed vectors of float values.
- OPERATION_NAME = 'amdgpu.packed_scaled_trunc'¶
- _ODS_REGIONS = (0, True)¶
- source() _ods_ir¶
- scale() _ods_ir¶
- existing() _ods_ir | None¶
- index() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.packed_scaled_trunc(res, source, scale, index, *, existing=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._amdgpu_ops_gen.PackedStochRoundFp8Op(res, source, stochiasticParam, storeIndex, *, existing=None, loc=None, ip=None)¶
Bases: _ods_ir
Round the input source, adding in stochiasticParam, and place it into the storeIndex'th element of res.
If existing is passed in, elements of res other than the one at storeIndex are copied from existing.
The reason for this odd signature is that AMD GPUs cannot easily work with sub-registers, and so the conversion intrinsics (which are currently the only way to work with 8-bit float types) take packed vectors of 4 8-bit values.
- OPERATION_NAME = 'amdgpu.packed_stoch_round_fp8'¶
- _ODS_REGIONS = (0, True)¶
- source() _ods_ir¶
- stochiasticParam() _ods_ir¶
- existing() _ods_ir | None¶
- storeIndex() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.packed_stoch_round_fp8(res, source, stochiastic_param, store_index, *, existing=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._amdgpu_ops_gen.PackedTrunc2xFp8Op(res, sourceA, wordIndex, *, sourceB=None, existing=None, loc=None, ip=None)¶
Bases: _ods_ir
Round the inputs sourceA and sourceB (which is undefined if not specified) into the low or high word (bottom two or top two) elements of the returned vector, keeping the other two elements of existing unchanged if present (or undefined if it was not passed in).
The reason for this odd signature is that AMD GPUs cannot easily work with sub-registers, and so the conversion intrinsics (which are currently the only way to work with 8-bit float types) take packed vectors of 4 8-bit values.
- OPERATION_NAME = 'amdgpu.packed_trunc_2xfp8'¶
- _ODS_OPERAND_SEGMENTS = [1, 0, 0]¶
- _ODS_REGIONS = (0, True)¶
- sourceA() _ods_ir¶
- sourceB() _ods_ir | None¶
- existing() _ods_ir | None¶
- wordIndex() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.packed_trunc_2xfp8(res, source_a, word_index, *, source_b=None, existing=None, loc=None, ip=None) _ods_ir¶
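A hedged sketch of packed_trunc_2xfp8: round two f32 values into the low word of a <4 x f8E4M3FN> result, leaving the upper two elements undefined since existing is omitted. It assumes the setup sketch's Context/Location/InsertionPoint and that ir.Float8E4M3FNType is available in the bindings.
from mlir import ir
from mlir.dialects import amdgpu, func

f32 = ir.F32Type.get()
fp8 = ir.Float8E4M3FNType.get()
packed = ir.VectorType.get([4], fp8)

fn = func.FuncOp("trunc_2xfp8_example", ([f32, f32], [packed]))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    a, b = entry.arguments
    # word_index = 0 selects the bottom two elements of the packed result.
    res = amdgpu.packed_trunc_2xfp8(packed, a, 0, source_b=b)
    func.ReturnOp([res])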
- class mlir.dialects._amdgpu_ops_gen.PermlaneSwapOp(src, row_length, *, fetch_inactive=None, bound_ctrl=None, results=None, loc=None, ip=None)¶
Bases: _ods_ir
High-level wrapper on rocdl.permlane{16,32}.swap variants for permutations on rows of lanes in a subgroup.
Supports arbitrary int/float/vector types, which will be repacked to i32 and one or more rocdl.permlane_swap ops during lowering. Supported lane permutations:
- Swap the data between odd and even rows of 16 lanes
- Swap the data between the first 32 lanes and the last 32 lanes
Example:
%0 = amdgpu.permlane_swap %src 16 : f16
%1 = amdgpu.permlane_swap %src 32 { fetch_inactive = true, bound_ctrl = true } : f16
Operands:
- $src: Vector register to permute across lanes of the subgroup.
- $row_length: The length of a row to permute in number of lanes (valid values are 16 and 32).
- $fetch_inactive: Optional. Used to determine behavior of a fetch from a disabled lane.
  - fetch_inactive = false: If the source lane is disabled, use bound_ctrl to determine the source value.
  - fetch_inactive = true: If the source lane is disabled, fetch the source value anyway (ignoring bound_ctrl).
- $bound_ctrl: Optional. Used to determine what a thread should do if its source operand is from a disabled lane: use the value zero, or disable the write.
  - bound_ctrl = false: Do not write when the source is from a disabled lane.
  - bound_ctrl = true: Use zero as input if the source is from a disabled lane.
Note: Lowering is only supported on gfx950 and up.
- OPERATION_NAME = 'amdgpu.permlane_swap'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- row_length() _ods_ir¶
- fetch_inactive() _ods_ir¶
- bound_ctrl() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._amdgpu_ops_gen.permlane_swap(src, row_length, *, fetch_inactive=None, bound_ctrl=None, results=None, loc=None, ip=None) _ods_ir¶
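A hedged sketch of permlane_swap, mirroring the assembly examples above. It assumes the setup sketch's Context/Location/InsertionPoint, that row_length is an integer attribute the builder accepts as an int, and that the result type is inferred from the source when results is omitted.
from mlir import ir
from mlir.dialects import amdgpu, func

f16 = ir.F16Type.get()
fn = func.FuncOp("permlane_swap_example", ([f16], [f16]))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    (val,) = entry.arguments
    # Swap data between odd and even rows of 16 lanes.
    swapped = amdgpu.permlane_swap(val, 16)
    # Swap the two halves of the wave, with the modifiers from the second example.
    swapped2 = amdgpu.permlane_swap(swapped, 32, fetch_inactive=True, bound_ctrl=True)
    func.ReturnOp([swapped2])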
- class mlir.dialects._amdgpu_ops_gen.RawBufferAtomicCmpswapOp(src, cmp, memref, indices, *, boundsCheck=None, indexOffset=None, sgprOffset=None, results=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.raw_buffer_atomic_cmpswap op is a wrapper around the buffer-based atomic compare-and-swap available on AMD GPUs.
The index into the buffer is computed as for memref.store with the addition of indexOffset (which is used to aid in emitting vectorized code) and, if present, sgprOffset (which is added after bounds checks and includes any non-zero offset on the memref type).
All indexing components are given in terms of the memref’s element size, not the byte lengths required by the intrinsic.
Out of bounds atomic operations are ignored in hardware.
See amdgpu.raw_buffer_load for a description of how the underlying instruction is constructed.
- OPERATION_NAME = 'amdgpu.raw_buffer_atomic_cmpswap'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- cmp() _ods_ir¶
- memref() _ods_ir¶
- indices() _ods_ir¶
- sgprOffset() _ods_ir | None¶
- boundsCheck() _ods_ir¶
- indexOffset() _ods_ir | None¶
- value() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.raw_buffer_atomic_cmpswap(src, cmp, memref, indices, *, bounds_check=None, index_offset=None, sgpr_offset=None, results=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._amdgpu_ops_gen.RawBufferAtomicFaddOp(value, memref, indices, *, boundsCheck=None, indexOffset=None, sgprOffset=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.raw_buffer_atomic_fadd op is a wrapper around the buffer-based atomic floating point addition available on the MI-* series of AMD GPUs.
The index into the buffer is computed as for memref.store with the addition of indexOffset (which is used to aid in emitting vectorized code) and, if present, sgprOffset (which is added after bounds checks and includes any non-zero offset on the memref type).
All indexing components are given in terms of the memref’s element size, not the byte lengths required by the intrinsic.
Out of bounds atomic operations are ignored in hardware.
See amdgpu.raw_buffer_load for a description of how the underlying instruction is constructed.
- OPERATION_NAME = 'amdgpu.raw_buffer_atomic_fadd'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- value() _ods_ir¶
- memref() _ods_ir¶
- indices() _ods_ir¶
- sgprOffset() _ods_ir | None¶
- boundsCheck() _ods_ir¶
- indexOffset() _ods_ir | None¶
- mlir.dialects._amdgpu_ops_gen.raw_buffer_atomic_fadd(value, memref, indices, *, bounds_check=None, index_offset=None, sgpr_offset=None, loc=None, ip=None) RawBufferAtomicFaddOp¶
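A hedged sketch of raw_buffer_atomic_fadd: atomically add an f32 into one element of a buffer. It assumes the setup sketch's Context/Location/InsertionPoint; the indices are i32 values, following the raw-buffer convention used by these ops.
from mlir import ir
from mlir.dialects import amdgpu, func

f32 = ir.F32Type.get()
i32 = ir.IntegerType.get_signless(32)
buf_ty = ir.MemRefType.get([1024], f32)

fn = func.FuncOp("atomic_fadd_example", ([buf_ty, f32, i32], []))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    buf, val, i = entry.arguments
    # No results; the builder returns the RawBufferAtomicFaddOp itself.
    amdgpu.raw_buffer_atomic_fadd(val, buf, [i], bounds_check=True)
    func.ReturnOp([])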
- class mlir.dialects._amdgpu_ops_gen.RawBufferAtomicFmaxOp(value, memref, indices, *, boundsCheck=None, indexOffset=None, sgprOffset=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.raw_buffer_atomic_fmax op is a wrapper around the buffer-based atomic floating point max available on AMD GPUs (except GFX9).
The index into the buffer is computed as for memref.store with the addition of indexOffset (which is used to aid in emitting vectorized code) and, if present, sgprOffset (which is added after bounds checks and includes any non-zero offset on the memref type).
All indexing components are given in terms of the memref’s element size, not the byte lengths required by the intrinsic.
Out of bounds atomic operations are ignored in hardware.
See amdgpu.raw_buffer_load for a description of how the underlying instruction is constructed.
- OPERATION_NAME = 'amdgpu.raw_buffer_atomic_fmax'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- value() _ods_ir¶
- memref() _ods_ir¶
- indices() _ods_ir¶
- sgprOffset() _ods_ir | None¶
- boundsCheck() _ods_ir¶
- indexOffset() _ods_ir | None¶
- mlir.dialects._amdgpu_ops_gen.raw_buffer_atomic_fmax(value, memref, indices, *, bounds_check=None, index_offset=None, sgpr_offset=None, loc=None, ip=None) RawBufferAtomicFmaxOp¶
- class mlir.dialects._amdgpu_ops_gen.RawBufferAtomicSmaxOp(value, memref, indices, *, boundsCheck=None, indexOffset=None, sgprOffset=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.raw_buffer_atomic_smax op is a wrapper around the buffer-based atomic signed integer max available on AMD GPUs.
The index into the buffer is computed as for memref.store with the addition of indexOffset (which is used to aid in emitting vectorized code) and, if present, sgprOffset (which is added after bounds checks and includes any non-zero offset on the memref type).
All indexing components are given in terms of the memref’s element size, not the byte lengths required by the intrinsic.
Out of bounds atomic operations are ignored in hardware.
See amdgpu.raw_buffer_load for a description of how the underlying instruction is constructed.
- OPERATION_NAME = 'amdgpu.raw_buffer_atomic_smax'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- value() _ods_ir¶
- memref() _ods_ir¶
- indices() _ods_ir¶
- sgprOffset() _ods_ir | None¶
- boundsCheck() _ods_ir¶
- indexOffset() _ods_ir | None¶
- mlir.dialects._amdgpu_ops_gen.raw_buffer_atomic_smax(value, memref, indices, *, bounds_check=None, index_offset=None, sgpr_offset=None, loc=None, ip=None) RawBufferAtomicSmaxOp¶
- class mlir.dialects._amdgpu_ops_gen.RawBufferAtomicUminOp(value, memref, indices, *, boundsCheck=None, indexOffset=None, sgprOffset=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.raw_buffer_atomic_umin op is a wrapper around the buffer-based atomic unsigned integer min available on AMD GPUs.
The index into the buffer is computed as for memref.store with the addition of indexOffset (which is used to aid in emitting vectorized code) and, if present, sgprOffset (which is added after bounds checks and includes any non-zero offset on the memref type).
All indexing components are given in terms of the memref’s element size, not the byte lengths required by the intrinsic.
Out of bounds atomic operations are ignored in hardware.
See amdgpu.raw_buffer_load for a description of how the underlying instruction is constructed.
- OPERATION_NAME = 'amdgpu.raw_buffer_atomic_umin'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- value() _ods_ir¶
- memref() _ods_ir¶
- indices() _ods_ir¶
- sgprOffset() _ods_ir | None¶
- boundsCheck() _ods_ir¶
- indexOffset() _ods_ir | None¶
- mlir.dialects._amdgpu_ops_gen.raw_buffer_atomic_umin(value, memref, indices, *, bounds_check=None, index_offset=None, sgpr_offset=None, loc=None, ip=None) RawBufferAtomicUminOp¶
- class mlir.dialects._amdgpu_ops_gen.RawBufferLoadOp(value, memref, indices, *, boundsCheck=None, indexOffset=None, sgprOffset=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.raw_buffer_load op is a wrapper around the buffer load intrinsics available on AMD GPUs, including extensions in newer GPUs.
The index into the buffer is computed as for memref.load with the addition of indexOffset and sgprOffset (which may or may not be considered in bounds checks and includes any offset present on the memref type if it’s non-zero).
All indices and offsets are in units of the memref’s data type and are converted to bytes during lowering.
When a load is out of bounds, the instruction returns zero. Partially out-of-bounds accesses have chipset-dependent behavior: whether reading 2 elements starting at index 7 of a memref<8xf32> returns the last element in the first vector component depends on the architecture.
The memref struct is converted into a buffer resource (a V#) and the arguments are translated to intrinsic arguments as follows:
- The base address of the buffer is the base address of the memref
- The stride is 0 to enable raw mode
- The number of records is the size of the memref, in bytes. In the case of dynamically-shaped memrefs, this is computed at runtime as max_d (size(d) * stride(d)) * sizeof(elementType(memref))
- The offset enable bit is 1, the index enable bit is 0.
- The thread ID addition bit is off
- If boundsCheck is false and the target chipset is RDNA, OOB_SELECT is set to 2 to disable bounds checks, otherwise it is 3
- The cache coherency bits are off
- OPERATION_NAME = 'amdgpu.raw_buffer_load'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- memref() _ods_ir¶
- indices() _ods_ir¶
- sgprOffset() _ods_ir | None¶
- boundsCheck() _ods_ir¶
- indexOffset() _ods_ir | None¶
- value() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.raw_buffer_load(value, memref, indices, *, bounds_check=None, index_offset=None, sgpr_offset=None, loc=None, ip=None) _ods_ir¶
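A hedged sketch of raw_buffer_load on a 2-D buffer; the first positional argument is the result type and the indices are i32 values. Assumes the setup sketch's Context/Location/InsertionPoint.
from mlir import ir
from mlir.dialects import amdgpu, func

f32 = ir.F32Type.get()
i32 = ir.IntegerType.get_signless(32)
buf_ty = ir.MemRefType.get([64, 64], f32)

fn = func.FuncOp("raw_buffer_load_example", ([buf_ty, i32, i32], [f32]))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    buf, i, j = entry.arguments
    # Out-of-bounds loads return zero when bounds checking is enabled.
    v = amdgpu.raw_buffer_load(f32, buf, [i, j], bounds_check=True)
    func.ReturnOp([v])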
- class mlir.dialects._amdgpu_ops_gen.RawBufferStoreOp(value, memref, indices, *, boundsCheck=None, indexOffset=None, sgprOffset=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.raw_buffer_store op is a wrapper around the buffer store intrinsics available on AMD GPUs, including extensions in newer GPUs.
The store index is computed as in memref.store with the addition of indexOffset (which is included for uniformity with atomics and may be useful when writing vectorized code) and sgprOffset (which is added after bounds checks and implicitly includes the offset of the memref type if non-zero). All index components are in terms of the elements of the memref, not bytes, and are scaled up appropriately.
Out of bounds stores are ignored in hardware. Whether a vector write that includes some in-bounds and some out-of-bounds components is partially completed is chipset-dependent.
See amdgpu.raw_buffer_load for a description of how the underlying instruction is constructed.
- OPERATION_NAME = 'amdgpu.raw_buffer_store'¶
- _ODS_OPERAND_SEGMENTS¶
- _ODS_REGIONS = (0, True)¶
- value() _ods_ir¶
- memref() _ods_ir¶
- indices() _ods_ir¶
- sgprOffset() _ods_ir | None¶
- boundsCheck() _ods_ir¶
- indexOffset() _ods_ir | None¶
- mlir.dialects._amdgpu_ops_gen.raw_buffer_store(value, memref, indices, *, bounds_check=None, index_offset=None, sgpr_offset=None, loc=None, ip=None) RawBufferStoreOp¶
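A hedged sketch of raw_buffer_store, the mirror image of the load sketch above (same assumptions).
from mlir import ir
from mlir.dialects import amdgpu, func

f32 = ir.F32Type.get()
i32 = ir.IntegerType.get_signless(32)
buf_ty = ir.MemRefType.get([64, 64], f32)

fn = func.FuncOp("raw_buffer_store_example", ([buf_ty, f32, i32, i32], []))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    buf, val, i, j = entry.arguments
    # Out-of-bounds stores are dropped in hardware when bounds checking is on.
    amdgpu.raw_buffer_store(val, buf, [i, j], bounds_check=True)
    func.ReturnOp([])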
- class mlir.dialects._amdgpu_ops_gen.ScaledExtPacked816Op(res, source, scale, blockSize, firstScaleLane, firstScaleByte, *, loc=None, ip=None)¶
Bases: _ods_ir
The scales applied to the input microfloats are stored in two bytes which come from the scales input provided in a half of the wave identified by firstScaleLane. The pair of bytes used is selected by firstScaleByte. The 16 vectors in consecutive lanes starting from firstScaleLane (which we’ll call the scale vectors) will be used by both halves of the wave (with lane L reading from the L % 16’th scale vector), but each half will use a different byte.
When the block size is 32, firstScaleByte can be either 0 or 2, selecting halves of the scale vectors. Lanes 0-15 will read from firstScaleByte and lanes 16-31 will read from firstScaleByte + 1. For example:
// Input: 8-element vector of F8E4M3FN, converting to F32
// Lanes 0-15 read from byte 0, lanes 16-31 read from byte 1
%result = amdgpu.scaled_ext_packed816 %source scale(%scales) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN>, vector<4xf8E8M0FNU> -> vector<8xf32>
// Input: 16-element vector of F6E2M3FN, converting to F16
// Lanes 0-15 read from byte 2, lanes 16-31 read from byte 3
%result = amdgpu.scaled_ext_packed816 %source scale(%scales) blockSize(32) firstScaleLane(1) firstScaleByte(2) : vector<16xf6E2M3FN>, vector<4xf8E8M0FNU> -> vector<16xf16>
However, when the block size is 16, firstScaleByte can be 0 or 1. Lanes 0-15 read from the firstScaleByte’th element of the scale vectors, while lanes 16-31 read from firstScaleByte + 2. For example:
// Input: 8-element vector of F8E5M2, converting to BF16
// Lanes 0-15 read from byte 0, lanes 16-31 read from byte 2 (0+2)
%result = amdgpu.scaled_ext_packed816 %source scale(%scales) blockSize(16) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xbf16>
// Input: 16-element vector of F6E3M2FN, converting to F32
// Lanes 0-15 read from byte 1, lanes 16-31 read from byte 3 (1+2)
%result = amdgpu.scaled_ext_packed816 %source scale(%scales) blockSize(16) firstScaleLane(1) firstScaleByte(1) : vector<16xf6E3M2FN>, vector<4xf8E8M0FNU> -> vector<16xf32>
Note: the layout for the scales generally mirrors the one the WMMA instructions use for matrix scales. These selection operands allow one to choose portions of the matrix to convert.
Available on gfx1250+.
- OPERATION_NAME = 'amdgpu.scaled_ext_packed816'¶
- _ODS_REGIONS = (0, True)¶
- source() _ods_ir¶
- scale() _ods_ir¶
- blockSize() _ods_ir¶
- firstScaleLane() _ods_ir¶
- firstScaleByte() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.scaled_ext_packed816(res, source, scale, block_size, first_scale_lane, first_scale_byte, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._amdgpu_ops_gen.ScaledExtPackedOp(res, source, scale, index, *, loc=None, ip=None)¶
Bases: _ods_ir
Extend and scale two packed floats in source[index] to two floats and return them.
This rather unusual signature arises from the fact that AMD GPUs cannot easily work with sub 32-bit quantities, so the compiler intrinsics for extending 8-bit floats (which are, currently, the only way to work with this operation) take packed vectors of 2 such floats.
If the passed-in vector has fewer than two elements, or the input is scalar, the remaining values in the <2 x i8> will be filled with undefined values as needed.
- OPERATION_NAME = 'amdgpu.scaled_ext_packed'¶
- _ODS_REGIONS = (0, True)¶
- source() _ods_ir¶
- scale() _ods_ir¶
- index() _ods_ir¶
- res() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.scaled_ext_packed(res, source, scale, index, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._amdgpu_ops_gen.ScaledMFMAOp(m, n, k, sourceA, sourceB, destC, scalesA, scalesB, scalesIdxA, scalesIdxB, *, results=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.scaled_mfma op is an MLIR wrapper around intrinsics for various scaled versions of mfma instructions in the CDNA architecture, which perform multiple outer products in order to allow fast matrix multiplication.
The wrapper will select an appropriate mfma instruction, if one is available, based on the provided m, k, n, and nBlks attributes, along with the types of the source and destination arguments.
Note, this wrapper allows specifying vector<4Kxi8> arguments to MFMA intrinsics that take an integer type of width 4K. For example, one can provide a vector<4xi8> as an argument to an MFMA instruction that logically takes 4 i8s but whose intrinsics are specified to take an i32. In these cases, the bytes in the vector will be concatenated in little-endian order (that is, v[0] will go to arg[7:0], v[1] to arg[15:8] and so on).
This wrapper takes inspiration from amdgpu.mfma, but has some key differences:
- amdgpu.scaled_mfma operates on fp4 (f4E2M1FN), fp6 (f6E2M3FN and f6E3M2FN) and fp8 (f8E4M3FN and f8E5M2) types using either M=N=16, K=128 or M=N=32, K=64 as their tile size.
- amdgpu.scaled_mfma does not support broadcasting. So, cbsz, abid, and blgp are omitted from this wrapper.
- The negateA, negateB, and negateC flags in amdgpu.mfma are only supported for double-precision operations on gfx94x and so are not included here.
Example:
%0 = amdgpu.scaled_mfma 32x32x64 (%arg0[0] * %arg1) * (%arg0[1] * %arg1) + %arg2 : vector<4xf8E8M0FNU>, vector<32xf6E2M3FN>, f8E8M0FNU, vector<32xf6E2M3FN>, vector<16xf32>
- OPERATION_NAME = 'amdgpu.scaled_mfma'¶
- _ODS_REGIONS = (0, True)¶
- sourceA() _ods_ir¶
- sourceB() _ods_ir¶
- destC() _ods_ir¶
- scalesA() _ods_ir¶
- scalesB() _ods_ir¶
- m() _ods_ir¶
- n() _ods_ir¶
- k() _ods_ir¶
- scalesIdxA() _ods_ir¶
- scalesIdxB() _ods_ir¶
- destD() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.scaled_mfma(m, n, k, source_a, source_b, dest_c, scales_a, scales_b, scales_idx_a, scales_idx_b, *, results=None, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._amdgpu_ops_gen.SchedBarrierOp(opts, *, loc=None, ip=None)¶
Bases: _ods_ir
amdgpu.sched_barrier serves as a barrier that can be configured to restrict movements of instructions through it as defined by sched_barrier_opts.
- OPERATION_NAME = 'amdgpu.sched_barrier'¶
- _ODS_REGIONS = (0, True)¶
- opts() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.sched_barrier(opts, *, loc=None, ip=None) SchedBarrierOp¶
- class mlir.dialects._amdgpu_ops_gen.SwizzleBitModeOp(src, and_mask, or_mask, xor_mask, *, results=None, loc=None, ip=None)¶
Bases: _ods_ir
High-level wrapper on the bitmode rocdl.ds_swizzle op; masks are represented as separate fields so the user won’t need to do manual bitpacking.
Supports arbitrary int/float/vector types, which will be repacked to i32 and one or more rocdl.ds_swizzle ops during lowering.
- OPERATION_NAME = 'amdgpu.swizzle_bitmode'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- and_mask() _ods_ir¶
- or_mask() _ods_ir¶
- xor_mask() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._amdgpu_ops_gen.swizzle_bitmode(src, and_mask, or_mask, xor_mask, *, results=None, loc=None, ip=None) _ods_ir¶
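A hedged sketch of swizzle_bitmode (assumes the setup sketch's Context/Location/InsertionPoint, that the three masks are integer attributes the builder accepts as plain ints, and that the result type is inferred from src). The mask values follow the usual ds_swizzle bit-mode semantics: here and_mask = 0x1f keeps the lane ID, or_mask = 0 adds nothing, and xor_mask = 1 swaps neighbouring lanes.
from mlir import ir
from mlir.dialects import amdgpu, func

f32 = ir.F32Type.get()
fn = func.FuncOp("swizzle_example", ([f32], [f32]))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    (val,) = entry.arguments
    # new_lane = ((lane & and_mask) | or_mask) ^ xor_mask
    swapped = amdgpu.swizzle_bitmode(val, 0x1F, 0x0, 0x1)
    func.ReturnOp([swapped])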
- class mlir.dialects._amdgpu_ops_gen.TransposeLoadOp(result, src, srcIndices, *, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.transpose_load op is a wrapper around the ds_read_tr instructions. The transpose load op represents a subgroup load from LDS memory, where the subgroup of threads collectively reads a matrix from the source memref, with each thread reading a vector of the matrix, and gets a transposed matrix as the result. That is, each thread reads a vector of the col-major matrix at different indices, and the thread’s read result is a vector of the corresponding row of the transposed matrix.
This op is a direct wrapper around the ROCDL ds_read_tr family of intrinsics. Please refer to the CDNA4 ISA documentation for more details about its exact semantics.
Format example:
%0 = amdgpu.transpose_load %src[%srcIndices] : memref<128x256xf16> -> vector<4xf16>
Operands:
- $src: LDS memref to read from.
- $srcIndices: indices into $src to read from for this thread.
- $result: target register this transpose load instruction will write to.
Note: Lowering is only supported on gfx950 and up.
- OPERATION_NAME = 'amdgpu.transpose_load'¶
- _ODS_REGIONS = (0, True)¶
- src() _ods_ir¶
- srcIndices() _ods_ir¶
- result() _ods_ir¶
Shortcut to get an op result if it has only one (throws an error otherwise).
- mlir.dialects._amdgpu_ops_gen.transpose_load(result, src, src_indices, *, loc=None, ip=None) _ods_ir¶
- class mlir.dialects._amdgpu_ops_gen.WMMAOp(m, n, k, sourceA, sourceB, destC, *, subwordOffset=None, unsignedA=None, unsignedB=None, clamp=None, results=None, loc=None, ip=None)¶
Bases: _ods_ir
The amdgpu.wmma op is an MLIR wrapper around intrinsics for various wmma instructions in the AMDGPU architecture, which perform matrix multiplication.
On gfx11/RDNA3, wmma intrinsics have M=N=K=16 dimensions.
On gfx12/RDNA4, wmma intrinsics have M=N=16 dimensions and support K=16 for all element types, and K=32 for i4 sources.
On gfx1250, wmma intrinsics have M=N=16 and K dimensions of 4, 32, 64, or 128, depending on the element types.
On gfx11/RDNA3, when emitting an f16->f16 (or bf16->bf16) wmma, the output is a 16xf16 (or 16xbf16) vector containing only 8 valid values:
- If subwordOffset is 0, then the output is stored at indices 0, 2, 4, …, 14.
- If subwordOffset is 1, then the output is stored at indices 1, 3, 5, …, 15.
On gfx12/RDNA4 and gfx1250, the result is instead returned as a vector where all the values are valid and the subwordOffset must be 0, as it cannot be used.
unsignedA and unsignedB flag that the int8 LLVM inputs are unsigned.
The clamp flag is used to saturate the output of type T to numeric_limits<T>::max() in case of overflow.
Example:
%0 = amdgpu.wmma 16x16x16 %matA * %matB + %matC : vector<8xf16>, vector<8xf16>, vector<8xf16>
%1 = amdgpu.wmma 16x16x64 %matD * %matE + %matF : vector<32xi8>, vector<8xf32>, vector<8xf32>
%2 = amdgpu.wmma 16x16x128 %matG * %matH + %matI : vector<64xf4E2M1FN>, vector<64xf4E2M1FN>, vector<8xf32>
%3 = amdgpu.wmma 16x16x4 %matJ * %matK + %matL : vector<2xf32>, vector<2xf32>, vector<8xf32>
- OPERATION_NAME = 'amdgpu.wmma'¶
- _ODS_REGIONS = (0, True)¶
- sourceA() _ods_ir¶
- sourceB() _ods_ir¶
- destC() _ods_ir¶
- m() _ods_ir¶
- n() _ods_ir¶
- k() _ods_ir¶
- subwordOffset() _ods_ir¶
- unsignedA() bool¶
- unsignedB() bool¶
- clamp() bool¶
- destD() _ods_ir¶
- mlir.dialects._amdgpu_ops_gen.wmma(m, n, k, source_a, source_b, dest_c, *, subword_offset=None, unsigned_a=None, unsigned_b=None, clamp=None, results=None, loc=None, ip=None) _ods_ir¶
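A hedged sketch of the wmma builder, mirroring the first assembly example above (16x16x16 with vector<8xf16> operands). Assumes the setup sketch's Context/Location/InsertionPoint and that the result type is inferred from dest_c when results is omitted.
from mlir import ir
from mlir.dialects import amdgpu, func

f16 = ir.F16Type.get()
vec = ir.VectorType.get([8], f16)

fn = func.FuncOp("wmma_example", ([vec, vec, vec], [vec]))
entry = fn.add_entry_block()
with ir.InsertionPoint(entry):
    a, b, c = entry.arguments
    # 16x16x16 f16 WMMA accumulating into the f16 matrix c.
    d = amdgpu.wmma(16, 16, 16, a, b, c)
    func.ReturnOp([d])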