chainer.training.updaters.StandardUpdater

class chainer.training.updaters.StandardUpdater(iterator, optimizer, converter=<function concat_examples>, device=None, loss_func=None, loss_scale=None, auto_new_epoch=True)[source]

Standard implementation of Updater.

This is the standard implementation of Updater. It accepts one or more training datasets and one or more optimizers. The default update routine assumes that there is only one training dataset and one optimizer. Users can override this update routine by inheriting this class and overriding the update_core() method. Each batch is converted to input arrays by chainer.dataset.concat_examples() by default; a different conversion function can be supplied through the converter argument.
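
For instance, a minimal training setup with a single iterator and a single optimizer might look like the following sketch; the model, dataset, and hyperparameters are illustrative placeholders, not part of this API:

    import chainer
    import chainer.functions as F
    import chainer.links as L
    from chainer import training

    # A small model for illustration only; any chainer.Chain works here.
    class MLP(chainer.Chain):
        def __init__(self):
            super(MLP, self).__init__()
            with self.init_scope():
                self.l1 = L.Linear(None, 100)
                self.l2 = L.Linear(100, 10)

        def __call__(self, x):
            return self.l2(F.relu(self.l1(x)))

    train, _ = chainer.datasets.get_mnist()
    train_iter = chainer.iterators.SerialIterator(train, batch_size=128)

    model = L.Classifier(MLP())
    optimizer = chainer.optimizers.SGD()
    optimizer.setup(model)

    # The single iterator and optimizer are each registered under the name 'main'.
    updater = training.updaters.StandardUpdater(train_iter, optimizer, device=-1)
    trainer = training.Trainer(updater, (20, 'epoch'), out='result')
    trainer.run()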

Parameters
  • iterator – Dataset iterator for the training dataset. It can also be a dictionary that maps strings to iterators. If this is just an iterator, then the iterator is registered by the name 'main'.

  • optimizer – Optimizer to update parameters. It can also be a dictionary that maps strings to optimizers. If this is just an optimizer, then the optimizer is registered by the name 'main'.

  • converter – Converter function to build input arrays. Each batch extracted by the main iterator and the device option are passed to this function. chainer.dataset.concat_examples() is used by default.

  • device – Device to which the training data is sent. A negative value indicates the host memory (CPU).

  • loss_func – Loss function. The target link of the main optimizer is used by default.

  • loss_scale (float) – Loss scaling factor. Loss scaling is a useful technique to mitigate the vanishing-gradient problem that tends to occur when a low-precision data type such as float16 is used during training. If a loss scaling factor is set, the gradient of the loss value is multiplied by the factor before backprop starts, and the factor is propagated to every gradient in the computational graph along the backprop. The gradients of the parameters are divided by the factor just before the parameters are updated. See the sketch after this list.

  • auto_new_epoch (bool) – If True, new_epoch() of the main optimizer is automatically called when the is_new_epoch attribute of the main iterator is True.
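
As a hedged sketch of the converter, device, and loss_scale options; train_iter and optimizer are assumed to be set up as in the example above, and the values are arbitrary illustrative choices:

    from chainer.dataset import concat_examples
    from chainer import training

    updater = training.updaters.StandardUpdater(
        train_iter,
        optimizer,
        converter=concat_examples,  # the default converter, shown for clarity
        device=0,                   # send batches to GPU 0; -1 keeps them on the CPU
        loss_scale=128.0,           # arbitrary factor, useful for float16 training
    )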

Variables
  • converter – Converter function.

  • loss_func – Loss function. If it is None, the target link of the main optimizer is used instead.

  • device – Device to which the training data is sent.

  • iteration – Current number of completed updates.

  • auto_new_epoch – If True, new_epoch() is automatically called by update_core(). In this case, the use_auto_new_epoch attribute of each optimizer is also set to True. If update_core() is overridden, the implementation must call new_epoch() of each optimizer itself; a sketch of such an override follows this list.
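
A minimal sketch of such an override, assuming mini-batches that the converter turns into tuples such as (x, t) pairs:

    from chainer import training

    class MyUpdater(training.updaters.StandardUpdater):

        def update_core(self):
            iterator = self.get_iterator('main')
            optimizer = self.get_optimizer('main')
            batch = iterator.next()
            in_arrays = self.converter(batch, self.device)

            # optimizer.update() computes the loss, runs backprop, and
            # updates the parameters of the target link.
            optimizer.update(optimizer.target, *in_arrays)

            # Because update_core() is overridden, new_epoch() must be
            # called explicitly when auto_new_epoch is in effect.
            if self.auto_new_epoch and iterator.is_new_epoch:
                optimizer.new_epoch(auto=True)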

Methods

connect_trainer(trainer)[source]

Connects the updater to the trainer that will call it.

The typical usage of this method is to register additional links to the reporter of the trainer. This method is called at the end of the initialization of Trainer. The default implementation does nothing.

Parameters

trainer (Trainer) – Trainer object to which the updater is registered.
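
A sketch of the typical override, in which an auxiliary link held by the updater is registered with the trainer's reporter; self.aux_model and the 'aux' prefix are hypothetical names used only for illustration:

    from chainer import training

    class MyUpdater(training.updaters.StandardUpdater):

        def connect_trainer(self, trainer):
            # self.aux_model is a hypothetical link stored by this updater;
            # its reported values will appear under the 'aux' prefix.
            trainer.reporter.add_observer('aux', self.aux_model)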

finalize()[source]

Finalizes the updater object.

This method calls the finalize method of each iterator that this updater holds. It is called at the end of the training loop.

get_all_optimizers()[source]

Gets a dictionary of all optimizers for this updater.

Returns

Dictionary that maps names to optimizers.

Return type

dict

get_iterator(name)[source]

Gets the dataset iterator of the given name.

Parameters

name (str) – Name of the dataset iterator.

Returns

Corresponding dataset iterator.

Return type

Iterator

get_optimizer(name)[source]

Gets the optimizer of the given name.

Parameters

name (str) – Name of the optimizer.

Returns

Corresponding optimizer.

Return type

Optimizer
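
A sketch of named registration and retrieval; gen_iter, opt_gen, and opt_dis are hypothetical objects set up elsewhere, and with multiple optimizers update_core() would normally be overridden as well, since the default routine only uses the entries named 'main':

    from chainer import training

    updater = training.updaters.StandardUpdater(
        {'main': gen_iter},
        {'gen': opt_gen, 'dis': opt_dis})

    opt = updater.get_optimizer('gen')
    opts = updater.get_all_optimizers()   # {'gen': opt_gen, 'dis': opt_dis}
    it = updater.get_iterator('main')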

serialize(serializer)[source]

Serializes the current state of the updater object.

update()[source]

Updates the parameters of the target model.

This method implements an update formula for the training task, including data loading, forward/backward computations, and actual updates of parameters.

This method is called once at each iteration of the training loop.
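
A sketch of driving the updater by hand instead of through a Trainer; updater is assumed to be constructed as in the earlier examples:

    # Run twenty epochs' worth of updates without a Trainer.
    while updater.epoch < 20:
        updater.update()   # one iteration: load a batch, forward/backward, update
    print('finished after', updater.iteration, 'updates')
    updater.finalize()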

update_core()[source]

The core update routine called by update(). The default implementation fetches a mini-batch from the main iterator, converts it with converter, and passes the resulting arrays to the update method of the main optimizer. Override this method to customize the update routine.

Attributes

epoch
epoch_detail
is_new_epoch
previous_epoch_detail