Essentials
Initialization
CUDA.functional
— Method
functional(show_reason=false)
Check if the package has been configured successfully and is ready to use.
This call is intended for packages that support conditionally using an available GPU. If you do not check whether CUDA is functional before using it, actual use of GPU functionality may warn or error.
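As a sketch of the pattern this enables, a package might select between GPU and CPU arrays as follows (the helper name gpu_or_cpu is illustrative, not part of the API):

```julia
using CUDA

# Hypothetical helper: upload to the GPU only when CUDA is usable.
# Passing `show_reason=true` logs why initialization failed, if it did.
function gpu_or_cpu(x::AbstractArray)
    if CUDA.functional(true)
        return CuArray(x)  # GPU path
    else
        return x           # CPU fallback
    end
end
```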
CUDA.has_cuda
— Function
has_cuda()::Bool
Check whether the local system provides an installation of the CUDA driver and toolkit. Use this function if your code loads packages that require CUDA.jl.
Note that CUDA-dependent packages might still fail to load if the installation is broken, so it's recommended to guard against that and print a warning to inform the user:
using CUDA
if has_cuda()
    try
        using CuArrays
    catch ex
        @warn "CUDA is installed, but CuArrays.jl fails to load" exception=(ex,catch_backtrace())
    end
end
CUDA.has_cuda_gpu
— Function
has_cuda_gpu()::Bool
Check whether the local system provides an installation of the CUDA driver and toolkit, and whether it contains a CUDA-capable GPU. See has_cuda for more details.
Note that this function initializes the CUDA API in order to check for the number of GPUs.
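For example, a program might use it to pick where its data lives up front; a minimal sketch, assuming CUDA.jl is loaded:

```julia
using CUDA

# Generate data on the GPU when one is available, otherwise on the CPU.
# Note: this call may initialize the CUDA API.
xs = has_cuda_gpu() ? CUDA.rand(Float32, 1024) : rand(Float32, 1024)
```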
Global state
CUDA.context
— Function
context(ptr)
Identify the context a CUDA memory buffer was allocated in.
context()::CuContext
Get or create a CUDA context for the current thread (as opposed to current_context, which may return nothing if there is no context bound to the current thread).
CUDA.context!
— Function
context!(ctx::CuContext)
context!(ctx::CuContext) do ... end
Bind the current host thread to the context ctx. Returns the previously-bound context. If used with do-block syntax, the change is only temporary.
Note that the contexts used with this call should be previously acquired by calling context, and not arbitrary contexts created by calling the CuContext constructor.
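A minimal sketch of the do-block form, assuming a functional GPU; the context is obtained via context() as recommended:

```julia
using CUDA

ctx = context()  # get (or create) the primary context for this thread

# Temporarily bind the host thread to `ctx`; the previously-bound
# context is restored when the block returns.
context!(ctx) do
    @assert current_context() == ctx
end
```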
CUDA.device
— Function
device(::CuContext)
Returns the device for a context.
device(ptr)
Identify the device a CUDA memory buffer was allocated on.
device()::CuDevice
Get the CUDA device for the current thread, similar to how context() works compared to current_context().
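For example, assuming a functional GPU:

```julia
using CUDA

dev = device()   # CuDevice for the current thread
CUDA.name(dev)   # human-readable device name
```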
CUDA.device!
— Function
device!(dev::Integer)
device!(dev::CuDevice)
device!(dev) do ... end
Sets dev as the current active device for the calling host thread. Devices can be specified by integer id, or as a CuDevice (slightly faster). Both functions can be used with do-block syntax, in which case the device is only changed temporarily, without changing the default device used to initialize new threads or tasks.
Calling this function at the start of a session will make sure CUDA is initialized (i.e., a primary context will be created and activated).
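As a sketch, the do-block form can be used to run work on each device in turn without permanently switching the active device (assumes at least one functional GPU):

```julia
using CUDA

for dev in devices()           # iterate over all CUDA devices
    device!(dev) do
        a = CUDA.ones(Float32, 256)
        @assert sum(a) == 256  # trivial work on this device
    end
end
```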
CUDA.device_reset!
— Function
device_reset!(dev::CuDevice=device())
Reset the CUDA state associated with a device. This call will release the underlying context, at which point any objects allocated in that context will be invalidated.
CUDA.stream
— Function
stream()
Get the CUDA stream that should be used as the default one for the currently executing task.
CUDA.stream!
— Function
stream!(::CuStream)
stream!(::CuStream) do ... end
Change the default CUDA stream for the currently executing task, temporarily if using the do-block version of this function.
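A sketch of temporarily redirecting work to a dedicated stream (assumes a functional GPU):

```julia
using CUDA

s = CuStream()  # create a new stream

# Operations launched inside the block are ordered on `s`; the
# previous task-default stream is restored afterwards.
stream!(s) do
    a = CUDA.rand(Float32, 1024)
    b = a .+ 1
    synchronize()  # wait for outstanding work on the task-default stream
end
```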