APIUtils

Public

CUDA.APIUtils.with_workspace — Method
with_workspace([cache], bytesize) do workspace
    ...
end

Create a GPU workspace vector of bytesize bytes (where bytesize is either a number, or a callable that returns the size), and pass it to the do block. Afterwards, the buffer is freed. To cache the workspace instead, pass any previous instance as the first argument; it will then be resized as needed rather than reallocated.

This helper protects against the rare but real issue where the workspace size getter returns different results depending on GPU memory pressure, which may change after the workspace is initially allocated (since that allocation can itself trigger a GC collection).
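As an illustrative sketch (assuming a CUDA-capable device; the size getter `bufsize` is a hypothetical stand-in for a library's buffer-size query, not part of the API), the size is passed as a callable so it can be re-queried whenever the workspace has to be (re)allocated:

```julia
using CUDA

# Hypothetical size getter: libraries such as CUSOLVER report a required
# workspace size that can vary with current memory pressure.
bufsize() = 1024 * sizeof(Float32)

CUDA.APIUtils.with_workspace(bufsize) do workspace
    # `workspace` is a device byte buffer of at least bufsize() bytes;
    # hand its pointer to the library call here.
    @assert sizeof(workspace) >= bufsize()
end
# at this point the buffer has been freed again
```

To reuse the buffer across calls, keep the previous instance around and pass it as the optional first argument, as described above.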

See also: with_workspaces, if you need both a GPU and CPU workspace.

CUDA.APIUtils.with_workspaces — Method
with_workspaces([cache_gpu], [cache_cpu], size_gpu, size_cpu) do workspace_gpu, workspace_cpu
    ...
end

Create GPU and CPU workspace vectors of size_gpu and size_cpu bytes respectively (each either a number, or a callable that returns the size), and pass them to the do block. Afterwards, the buffers are freed. To cache the workspaces instead, pass any previous instances as the first arguments; they will then be resized as needed rather than reallocated.

This helper protects against the rare but real issue where the workspace size getters return different results depending on memory pressure, which may change after the workspaces are initially allocated (since that allocation can itself trigger a GC collection).
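A minimal sketch of the dual-workspace pattern (assuming a CUDA-capable device; the size getters `gpu_size` and `cpu_size` are hypothetical stand-ins for a library's buffer-size queries, such as those of the 64-bit CUSOLVER interfaces, which report both a device and a host buffer size):

```julia
using CUDA

# Hypothetical size getters for an API that needs both a device and a
# host scratch buffer.
gpu_size() = 4096
cpu_size() = 512

CUDA.APIUtils.with_workspaces(gpu_size, cpu_size) do workspace_gpu, workspace_cpu
    # `workspace_gpu` lives in device memory, `workspace_cpu` in host
    # memory; pass both pointers to the library call here.
    @assert sizeof(workspace_gpu) >= gpu_size()
    @assert sizeof(workspace_cpu) >= cpu_size()
end
# both buffers have been freed again at this point
```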

See also: with_workspace, if you only need a GPU workspace.


Private