Configuring Chainer

Chainer provides global settings that affect the behavior of some of its functionality. These settings can be configured through the unified configuration system, which provides a transparent way to manage the configuration for each process and for each thread.

The configuration is managed by two global objects: chainer.global_config and chainer.config.

  • The global_config object maintains the configuration shared in the Python process. This is an instance of the GlobalConfig class. It can be used just as a plain object, and users can freely set any attributes on it.

  • The config object, on the other hand, maintains the configuration for the current thread. This is an instance of the LocalConfig class. It behaves like a thread-local object, and any attribute modifications are only visible to the current thread.

If no value is set in config for a given key, global_config is transparently looked up instead. Thanks to this transparent lookup, users can always read any configuration through config: the thread-local value is used if available, and the global default otherwise.

The following entries of the configuration are currently provided by Chainer. Some entries support environment variables to set the default values. Note that the default values are set in the global config.

Configuration Keys

  • cudnn_deterministic (default: False)

    Flag to configure deterministic computations in cuDNN APIs.

    If it is True, convolution functions that use cuDNN run in deterministic mode (i.e., the computation is reproducible). Otherwise, the results of convolution functions using cuDNN may be non-deterministic in exchange for better performance.
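
    As a sketch, deterministic convolutions can be requested for a limited scope; this assumes a CUDA-capable GPU with cuDNN available:

        import chainer
        import chainer.links as L
        from chainer.backends import cuda

        conv = L.Convolution2D(3, 16, ksize=3).to_gpu()
        x = cuda.cupy.random.rand(1, 3, 32, 32).astype('f')

        with chainer.using_config('cudnn_deterministic', True):
            y = conv(x)  # cuDNN selects reproducible (possibly slower) algorithms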

  • debug (default: False)

    Debug mode flag.

    If it is True, Chainer runs in debug mode. Enabling debug mode may introduce some performance overhead. See Debug Mode for more information.

    You can change the default value to True by setting CHAINER_DEBUG environment variable to 1.

  • dtype (default: numpy.float32)

    Default floating point data type.

    Chainer uses this dtype to construct arrays when the dtype is not specified (e.g. initializers).

    You can change the default value by setting CHAINER_DTYPE environment variable to float16, float32 or float64.
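
    A minimal sketch, assuming the default initializer picks up the configured dtype when the parameters are created:

        import numpy as np
        import chainer
        import chainer.links as L

        with chainer.using_config('dtype', np.float16):
            link = L.Linear(3, 2)   # parameters are created without an explicit dtype
        print(link.W.dtype)         # float16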

  • enable_backprop (default: True)

    Flag to enable backpropagation support.

    If it is True, computational graphs are created during forward passes by FunctionNodes, allowing backpropagation to start from any Variable in the graph. Otherwise, computational graphs are not created and memory consumption is reduced, so calling backward() on the result of a function will not compute gradients for any of its inputs.
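
    A minimal sketch of the difference:

        import numpy as np
        import chainer
        import chainer.functions as F

        x = chainer.Variable(np.array([1.0, 2.0], dtype=np.float32))

        with chainer.using_config('enable_backprop', False):
            y = F.sum(x * 2)   # no computational graph is recorded
        y.backward()
        print(x.grad)          # None: there is no graph to backpropagate through

    The chainer.no_backprop_mode() context manager is a shortcut for switching this flag to False.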

  • keep_graph_on_report (default: False)

    Flag to configure whether or not to let report() keep the computational graph.

    If it is False, report() does not keep the computational graph when a Variable object is reported. This means that report() stores a copy of the Variable object, purged from the computational graph. If it is True, report() stores the Variable object as is, with the computational graph left attached.

    You can change the default value to True by setting CHAINER_KEEP_GRAPH_ON_REPORT environment variable to 1.

  • train (default: True)

    Training mode flag.

    If it is True, Chainer runs in training mode. Otherwise, it runs in the testing (evaluation) mode.

    This configuration is used by Functions and Links that need to behave differently between the training phase and the evaluation (inference) phase. One example is chainer.links.BatchNormalization, which updates its statistics using the input data only when train is True. Another example is chainer.functions.dropout(), which does nothing when train is False.

    Generally, you are responsible for changing the configuration to False during evaluation. If you are using Trainer with the Evaluator extension, the train configuration is automatically switched to False during evaluation in the training loop.

    Note that this configuration does not reduce memory consumption or affect the creation of the computational graphs required to compute gradients; that is controlled by enable_backprop.
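
    For example, dropout behaves differently in the two modes (a sketch):

        import numpy as np
        import chainer
        import chainer.functions as F

        x = np.ones((1, 5), dtype=np.float32)

        print(F.dropout(x, ratio=0.5).array)       # some elements dropped, the rest scaled
        with chainer.using_config('train', False):
            print(F.dropout(x, ratio=0.5).array)   # identical to x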

  • type_check (default: True)

    Type checking mode flag.

    If it is True, Chainer checks the types (data types and shapes) of inputs on Function applications. Otherwise, it skips type checking.

    You can change the default value to False by setting CHAINER_TYPE_CHECK environment variable to 0.
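
    With type checking enabled (the default), an invalid input is rejected before the computation runs; a sketch:

        import numpy as np
        import chainer.functions as F
        from chainer.utils import type_check

        a = np.ones((2, 3), dtype=np.float32)
        b = np.ones((3, 2), dtype=np.float32)
        try:
            F.concat((a, b), axis=0)   # second axes do not match (3 vs. 2)
        except type_check.InvalidType as e:
            print(type(e).__name__)

    Setting type_check to False skips this validation and its overhead.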

  • use_cudnn (default: 'auto')

    Flag to configure whether or not to use cuDNN.

    This is a ternary flag with 'always', 'auto', and 'never' as its allowed values. The meaning of each value is as follows.

    • If it is 'always', Chainer will try to use cuDNN everywhere if possible.

    • If it is 'auto', Chainer will use cuDNN only if it is known that the usage does not degrade the performance.

    • If it is 'never', Chainer will never use cuDNN anywhere.

    You can change the default value by setting CHAINER_USE_CUDNN environment variable to any of 'always', 'auto' or 'never'.
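
    For example, to forbid cuDNN within a specific scope:

        import chainer

        with chainer.using_config('use_cudnn', 'never'):
            pass  # GPU functions here fall back to non-cuDNN implementations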

  • use_ideep (default: 'never')

    Flag to configure whether or not to use iDeep.

    This is a ternary flag with 'always', 'auto', and 'never' as its allowed values. The meaning of each value is as follows.

    • If it is 'always', Chainer will try to use iDeep everywhere if possible.

    • If it is 'auto', Chainer will use iDeep only if it is known that the usage does not degrade the performance.

    • If it is 'never', Chainer will never use iDeep anywhere.

    You can change the default value by setting CHAINER_USE_IDEEP environment variable to any of 'always', 'auto' or 'never'.

    Note that, regardless of this configuration, optimizers use iDeep if and only if the link has been manually converted to iDeep (e.g., by model.to_intel64()).
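
    A sketch of the typical usage, assuming the iDeep package (ideep4py) is installed:

        import numpy as np
        import chainer
        import chainer.links as L

        model = L.Linear(4, 3)
        model.to_intel64()   # convert the parameters to iDeep arrays

        with chainer.using_config('use_ideep', 'auto'):
            y = model(np.ones((1, 4), dtype=np.float32))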

  • lazy_grad_sum (default: False)

    Flag to control the behavior of gradient accumulation.

    If it is True, gradients are accumulated in batch for performance. Otherwise gradients are accumulated one by one.

    You can change the default value to True by setting CHAINER_LAZY_GRAD_SUM environment variable to 1.

  • use_cudnn_tensor_core (default: 'auto')

    Flag to configure whether or not to enable Tensor Core operations in cuDNN.

    This is a ternary flag with 'always', 'auto', and 'never' as its allowed values. The meaning of each value is as follows.

    • If it is 'always', Chainer uses cuDNN’s Tensor Core operations.

    • If it is 'never', Chainer does not use cuDNN’s Tensor Core operations.

    • If it is 'auto', Chainer checks the cuDNN version, the data type of the input, and the compute capability of the GPU in use, and decides whether or not to use cuDNN’s Tensor Core operations.

  • autotune (default: False)

    Autotune for convolutional networks flag.

    If it is True, Chainer uses the cuDNN autotune feature to find the fastest convolution algorithms for chainer.links.Convolution2D, ConvolutionND, Deconvolution2D, and DeconvolutionND links.
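
    For example (a sketch; it takes effect only when cuDNN is actually used):

        import chainer

        with chainer.using_config('autotune', True), \
                chainer.using_config('use_cudnn', 'always'):
            pass  # run the convolutional model here; cuDNN benchmarks and caches algorithms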

  • cudnn_fast_batch_normalization (default: False)

    Flag to configure whether or not to use the fast batch normalization implementation in cuDNN.

    If it is True, Chainer will try to use the fast implementation by setting cuDNN’s batch normalization mode to CUDNN_BATCHNORM_SPATIAL_PERSISTENT.

    You can change the default value to True by setting the CHAINER_CUDNN_FAST_BATCH_NORMALIZATION environment variable to 1.

  • in_recomputing (default: False)

    This flag is automatically set by chainer.functions.forget() and not intended to be changed by users. You can use this flag when implementing your own Link to avoid updating the internal states during recomputation done by chainer.functions.forget(). See the documentation of chainer.functions.forget() for details.
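
    A sketch of a user-defined link that skips a state update during recomputation (the link and attribute names are illustrative):

        import chainer

        class CountingLink(chainer.Link):
            """Counts genuine forward passes, ignoring recomputation by forget()."""

            def __init__(self):
                super().__init__()
                self.n_calls = 0

            def __call__(self, x):
                if not chainer.config.in_recomputing:
                    self.n_calls += 1   # update internal state only on the real pass
                return x * 2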

User-defined Keys

Users can also define their own configurations. There are two ways:

  1. Use Chainer’s configuration objects. In this case, it is strongly recommended to prefix the name with “user_” to avoid conflicts with configuration keys introduced to Chainer in the future (see the example below).

  2. Use your own configuration objects. Users can define their own configuration objects using chainer.configuration.GlobalConfig and chainer.configuration.LocalConfig. In this case, there is no need to worry about name conflicts.
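
For example, with the first approach, a key prefixed with “user_” can be set on the global configuration and read back through the thread-local one (user_my_feature is a hypothetical key):

>>> chainer.global_config.user_my_feature = False
>>> chainer.config.user_my_feature
False
>>> with chainer.using_config('user_my_feature', True):
...     print(chainer.config.user_my_feature)
True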

Changing Configuration

If you want to share a setting within the process, set an attribute on the global configuration. Thanks to the transparent lookup, the value is then also visible through the local config.

>>> chainer.global_config.train
True
>>> chainer.config.train
True

>>> chainer.global_config.train = False

>>> chainer.global_config.train
False
>>> chainer.config.train
False

If you set an attribute to the local configuration, the value is only visible to the current thread.

>>> chainer.global_config.train
True
>>> chainer.config.train
True

>>> chainer.config.train = False

>>> chainer.global_config.train
True
>>> chainer.config.train
False

If you want to temporarily modify the configuration for a specific scope, you can use using_config(). For example, if you only want to enable debug mode in a fragment of code, write as follows.

>>> with chainer.using_config('debug', True):
...     pass  # code running in debug mode

If you want to switch to the test mode for an evaluation, you can do that in the same way.

>>> # Do training here
>>> with chainer.using_config('train', False):
...     pass  # Perform evaluation here

Note that Evaluator automatically switches to the test mode, so you do not need to switch it manually inside the loss function used for evaluation.

You can also make your own code behave differently in training and test modes as follows.

if chainer.config.train:
    pass  # code only running in the training mode
else:
    pass  # code only running in the test mode

chainer.global_config

Global configuration of Chainer.

chainer.config

Thread-local configuration of Chainer.

chainer.using_config

Context manager to temporarily change the thread-local configuration.

chainer.configuration.GlobalConfig

The class of the global configuration object (global_config).

chainer.configuration.LocalConfig

The class of the thread-local configuration object (config).

Environment Variables

Here are the environment variables Chainer uses.

CHAINER_SEED

Default seed value of random number generators for CUDA. If it is not set, the seed value is generated using the Python random module. Set an integer value in decimal format.

CHAINER_DATASET_ROOT

Default directory path to store the downloaded datasets. See Datasets for details.
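
Note that these variables generally need to be set before chainer is imported, since most defaults are read at import time. A sketch, with illustrative values (the dataset path is hypothetical):

import os
os.environ['CHAINER_SEED'] = '0'
os.environ['CHAINER_DATASET_ROOT'] = '/path/to/datasets'

import chainer  # the defaults above are picked up when Chainer initializes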

CHAINER_CUDNN

Set 0 to completely disable cuDNN in Chainer. In this case, cuDNN will not be used regardless of CHAINER_USE_CUDNN and chainer.config.use_cudnn configuration. Otherwise cuDNN is enabled automatically.

CHAINER_USE_CUDNN

Used as the default value for chainer.config.use_cudnn configuration. The value must be any of 'always', 'auto' or 'never'. If CHAINER_CUDNN is set to 0, this environment variable has no effect. See Configuring Chainer for details.

CHAINER_CUDNN_FAST_BATCH_NORMALIZATION

Used as the default value for chainer.config.cudnn_fast_batch_normalization configuration. Set 1 to enable the fast batch normalization implementation in cuDNN. See Configuring Chainer for details.

CHAINER_USE_IDEEP

Used as the default value for chainer.config.use_ideep configuration. The value must be any of 'always', 'auto' or 'never'. See Configuring Chainer for details.

CHAINER_LAZY_GRAD_SUM

Used as the default value for chainer.config.lazy_grad_sum configuration. Set 1 to enable batch accumulation of gradients. See Configuring Chainer for details.

CHAINER_DTYPE

Used as the default value for chainer.config.dtype configuration. The value must be any of 'float16', 'float32' or 'float64'. See Configuring Chainer for details.

CHAINER_TYPE_CHECK

Used as the default value for chainer.config.type_check configuration. Set 0 to disable type checking. Otherwise type checking is enabled automatically. See Configuring Chainer and Type checking utilities for details.

CHAINER_DEBUG

Used as the default value for chainer.config.debug configuration. Set 1 to enable debug mode. It is disabled by default. In debug mode, Chainer performs various runtime checks that can help debug user’s code at the cost of some overhead. See Configuring Chainer and Debug Mode for details.

CHAINER_KEEP_GRAPH_ON_REPORT

Used as the default value for chainer.config.keep_graph_on_report configuration. Set 1 to let report() keep the computational graph. See Configuring Chainer for details.

CHAINER_PYTHON_350_FORCE

Set 1 to force using Chainer with Python 3.5.0. Note that Chainer does not work with Python 3.5.0. Use Python 3.5.1+ or other supported versions (see Installation).

The following environment variables are only effective when running unit tests.

CHAINER_TEST_GPU_LIMIT

Number of GPUs available for unit tests. When running unit tests, test cases that require more GPUs than the specified value will be skipped. Set 0 to skip all test cases that require a GPU. See Unit Testing for details.

CHAINER_TEST_RANDOM_NONDETERMINISTIC

Set 1 to use non-fixed seed for random number generators, even for test cases annotated with fix_random.