< Device has ECC support enabled
< Number of asynchronous engines
< Device can map host memory with cudaHostAlloc/cudaHostGetDevicePointer
< Device can access host registered memory at the same virtual address as the CPU
< Clock frequency in kilohertz
< Compute mode (See ::cudaComputeMode)
< Device supports Compute Preemption
< Device can possibly execute multiple kernels concurrently
< Device can coherently access managed memory concurrently with the CPU
< Device supports launching cooperative kernels via ::cudaLaunchCooperativeKernel
< Device can participate in cooperative kernels launched via ::cudaLaunchCooperativeKernelMultiDevice
< Device can concurrently copy memory and execute a kernel. Deprecated; use asyncEngineCount instead.
< Host can directly access managed memory on the device without migration.
< Device supports caching globals in L1
< Link between the device and the host supports native atomic operations
< Device is integrated as opposed to discrete
< Device is on a multi-GPU board
< Specifies whether there is a run time limit on kernels
< Size of L2 cache in bytes
< Device supports caching locals in L1
< 8-byte locally unique identifier. Value is undefined on TCC and non-Windows platforms
< LUID device node mask. Value is undefined on TCC and non-Windows platforms
< Major compute capability
< Device supports allocating managed memory on this system
< Maximum size of each dimension of a grid
< Maximum 1D surface size
< Maximum 1D layered surface dimensions
< Maximum 2D surface dimensions
< Maximum 2D layered surface dimensions
< Maximum 3D surface dimensions
< Maximum Cubemap surface dimensions
< Maximum Cubemap layered surface dimensions
< Maximum 1D texture size
< Maximum 1D layered texture dimensions
< Maximum size for 1D textures bound to linear memory
< Maximum 1D mipmapped texture size
< Maximum 2D texture dimensions
< Maximum 2D texture dimensions if texture gather operations have to be performed
< Maximum 2D layered texture dimensions
< Maximum dimensions (width, height, pitch) for 2D textures bound to pitched memory
< Maximum 2D mipmapped texture dimensions
< Maximum 3D texture dimensions
< Maximum alternate 3D texture dimensions
< Maximum Cubemap texture dimensions
< Maximum Cubemap layered texture dimensions
< Maximum size of each dimension of a block
< Maximum number of threads per block
< Maximum resident threads per multiprocessor
< Maximum pitch in bytes allowed by memory copies
< Global memory bus width in bits
< Peak memory clock frequency in kilohertz
< Minor compute capability
< Unique identifier for a group of devices on the same multi-GPU board
< Number of multiprocessors on device
< ASCII string identifying device
< Device supports coherently accessing pageable memory without calling cudaHostRegister on it
< Device accesses pageable memory via the host's page tables
< PCI bus ID of the device
< PCI device ID of the device
< PCI domain ID of the device
< 32-bit registers available per block
< 32-bit registers available per multiprocessor
< Shared memory available per block in bytes
< Per-device maximum shared memory per block usable via a special opt-in
< Shared memory available per multiprocessor in bytes
< Ratio of single precision performance (in floating-point operations per second) to double precision performance
< Device supports stream priorities
< Alignment requirements for surfaces
< 1 if device is a Tesla device using TCC driver, 0 otherwise
< Alignment requirement for textures
< Pitch alignment requirement for texture references bound to pitched memory
< Constant memory available on device in bytes
< Global memory available on device in bytes
< Device shares a unified address space with the host
< 16-byte unique identifier
< Warp size in threads
CUDA device properties
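The attributes above are the documented fields of ::cudaDeviceProp. A minimal sketch of querying them with cudaGetDeviceProperties and printing a few representative fields (compute capability, multiprocessor count, memory sizes, warp size, cooperative-launch support) might look like the following; the exact fields printed are an illustrative selection, not an exhaustive dump:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Enumerate visible CUDA devices; bail out if none are present.
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        std::printf("No CUDA devices found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        std::printf("Device %d: %s\n", dev, prop.name);           // ASCII device name
        std::printf("  Compute capability: %d.%d\n",
                    prop.major, prop.minor);                      // major/minor
        std::printf("  Multiprocessors: %d\n",
                    prop.multiProcessorCount);
        std::printf("  Global memory: %zu bytes\n",
                    prop.totalGlobalMem);
        std::printf("  Shared memory per block: %zu bytes\n",
                    prop.sharedMemPerBlock);
        std::printf("  Warp size: %d threads\n", prop.warpSize);
        std::printf("  Max threads per block: %d\n",
                    prop.maxThreadsPerBlock);
        std::printf("  Cooperative launch: %s\n",
                    prop.cooperativeLaunch ? "yes" : "no");       // ::cudaLaunchCooperativeKernel
    }
    return 0;
}
```

Note that cudaGetDeviceProperties can be comparatively expensive; when only a single attribute is needed, cudaDeviceGetAttribute is the lighter-weight alternative.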