CUDA and Backend Utilities¶
Utilities across backends¶
chainer.backend.copyto | Copies the elements of an ndarray to those of another one.
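A minimal sketch of using copyto with plain NumPy arrays (the same call is meant to work across backends, e.g. with a CuPy source and a NumPy destination):

```python
import numpy
from chainer import backend

src = numpy.arange(4, dtype=numpy.float32)
dst = numpy.empty_like(src)

# Copies the element values of src into the preallocated dst.
backend.copyto(dst, src)
assert numpy.array_equal(dst, src)
```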
CUDA¶
Device, context and memory management on CuPy.
Note
The package chainer.cuda has been renamed to chainer.backends.cuda as of v4.0.0, but the previous module path chainer.cuda is also available.
Chainer uses CuPy (with a very thin wrapper) to exploit the speed of GPU computation. The following modules and classes defined in CuPy are imported into the chainer.backends.cuda module for convenience (refer to this table when reading Chainer's source code).
imported name | original name
---|---
chainer.backends.cuda.cupy | cupy
chainer.backends.cuda.cupyx | cupyx
chainer.backends.cuda.ndarray | cupy.ndarray
chainer.backends.cuda.cupy.cuda | cupy.cuda
chainer.backends.cuda.Device | cupy.cuda.Device
chainer.backends.cuda.Event | cupy.cuda.Event
chainer.backends.cuda.Stream | cupy.cuda.Stream
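The aliases can be used interchangeably with the originals; a minimal sketch, assuming CuPy and a CUDA device are available:

```python
from chainer.backends import cuda

# cuda.cupy is an alias of the cupy module, and cuda.ndarray is an
# alias of cupy.ndarray.
x = cuda.cupy.arange(4, dtype=cuda.cupy.float32)
assert isinstance(x, cuda.ndarray)

# cuda.Device is an alias of cupy.cuda.Device and works as a context
# manager that selects the current device.
with cuda.Device(0):
    y = cuda.cupy.zeros_like(x)
```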
Chainer replaces the default allocator of CuPy with its memory pool implementation. This allows device memory to be reused across multiple forward/backward computations, as well as for the temporary arrays created by consecutive elementwise operations.
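The pool can be observed through CuPy's memory pool API; a minimal sketch of the reuse behavior, assuming a GPU is available:

```python
import cupy

pool = cupy.get_default_memory_pool()

x = cupy.zeros((1024, 1024), dtype=cupy.float32)
del x  # the memory returns to the pool, not to the device driver

# used_bytes() drops while total_bytes() keeps the pool's high-water
# mark, so the next allocation is served without another cudaMalloc.
print(pool.used_bytes(), pool.total_bytes())
```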
Devices¶
chainer.backends.cuda.get_device | Gets the device from a device object, an ID integer or an array object.
chainer.backends.cuda.get_device_from_id | Gets the device from an ID integer.
chainer.backends.cuda.get_device_from_array | Gets the device from a list of CuPy arrays or a single CuPy array.
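A minimal sketch of how these helpers are typically used, assuming a GPU with device ID 0:

```python
from chainer.backends import cuda

# Device objects returned by these helpers work as context managers.
with cuda.get_device_from_id(0):
    x = cuda.cupy.empty((10,), dtype=cuda.cupy.float32)

# Recover the device that an array is allocated on.
device = cuda.get_device_from_array(x)
assert device.id == 0
```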
CuPy array allocation and copy¶
chainer.backends.cuda.copy | Copies a cupy.ndarray object using the default stream.
chainer.backends.cuda.to_cpu | Copies the given GPU array to host CPU.
chainer.backends.cuda.to_gpu | Copies the given CPU array to the specified device.
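A minimal sketch of a host-to-device round trip, assuming a GPU with device ID 0:

```python
import numpy
from chainer.backends import cuda

x_cpu = numpy.arange(6, dtype=numpy.float32)

# Host to device; the device argument selects the target GPU.
x_gpu = cuda.to_gpu(x_cpu, device=0)

# Device back to host.
y_cpu = cuda.to_cpu(x_gpu)
assert numpy.array_equal(x_cpu, y_cpu)
```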
Kernel definition utilities¶
chainer.backends.cuda.memoize | Makes a function memoizing the result for each argument and device.
chainer.backends.cuda.clear_memo | Clears the memoized results for all functions decorated by memoize.
chainer.backends.cuda.elementwise | Creates an elementwise kernel function.
chainer.backends.cuda.raw | Creates a raw kernel function.
chainer.backends.cuda.reduce | Creates a global reduction kernel function.
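For instance, elementwise compiles a kernel on first use and memoizes it per device for later calls; a minimal sketch with a hypothetical squared-difference kernel, assuming a GPU is available:

```python
import numpy
from chainer.backends import cuda

# Compiled on first use and memoized per device.
squared_diff = cuda.elementwise(
    'T x, T y',               # input parameters
    'T z',                    # output parameter
    'z = (x - y) * (x - y)',  # elementwise operation
    'squared_diff')           # kernel name

x = cuda.to_gpu(numpy.arange(5, dtype=numpy.float32))
y = cuda.to_gpu(numpy.full(5, 2, dtype=numpy.float32))
z = squared_diff(x, y)
```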
CPU/GPU generic code support¶
chainer.backends.cuda.get_array_module | Gets an appropriate array module, numpy or cupy, from the given arrays.
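This lets a single function body handle both NumPy and CuPy inputs; a minimal sketch with a hypothetical softplus helper:

```python
from chainer.backends import cuda

def softplus(x):
    # xp is the numpy module for NumPy arrays and cupy for CuPy
    # arrays, so the same code runs on CPU and GPU.
    xp = cuda.get_array_module(x)
    return xp.log1p(xp.exp(x))
```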
cuDNN support¶
chainer.backends.cuda.set_max_workspace_size | Sets the workspace size for cuDNN.
chainer.backends.cuda.get_max_workspace_size | Gets the workspace size for cuDNN.
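A minimal sketch of adjusting the workspace size (the 32 MiB figure is an arbitrary example; a larger workspace can let cuDNN choose faster convolution algorithms at the cost of device memory):

```python
from chainer.backends import cuda

# Allow cuDNN to use up to 32 MiB of workspace per operation.
cuda.set_max_workspace_size(32 * 1024 * 1024)
assert cuda.get_max_workspace_size() == 32 * 1024 * 1024
```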
iDeep¶
iDeep is a module that provides a NumPy-like API and DNN acceleration using MKL-DNN for Intel CPUs. See Tips and FAQs and Performance Best Practices for details.
chainer.backends.intel64.is_ideep_available | Returns whether iDeep is available.
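A minimal sketch of guarding Chainer's use_ideep configuration on availability:

```python
import chainer
from chainer.backends import intel64

# Enable iDeep acceleration only when the module is installed;
# otherwise Chainer keeps its default NumPy path.
if intel64.is_ideep_available():
    chainer.config.use_ideep = 'auto'
```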