tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager

Class ExponentialUpdateLossScaleManager

Inherits From: LossScaleManager

Defined in tensorflow/contrib/mixed_precision/python/loss_scale_manager.py.

Loss scale manager that uses an exponential update strategy.

In general, the strategy increases the loss scale by a greater-than-one factor after a run of consecutive steps with finite gradients. Similarly, it decreases the loss scale by a less-than-one factor once enough steps with non-finite (nan or inf) gradients have accumulated. An update is not applied if its result would be less than 1 or would overflow the float32 dynamic range.

The counts of finite and non-finite steps are cleared every time the loss scale changes. The condition for decreasing the loss scale is looser than the one for increasing it, since the former does not require the steps to be consecutive.
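
The following pure-Python sketch illustrates the documented update rule. It is an illustration only (the class and counter names are hypothetical), not the actual implementation, which expresses the same logic with TensorFlow ops and variables:

class _ExponentialUpdateSketch(object):
  """Illustrative only: mirrors the update rule described above."""

  def __init__(self, init_loss_scale, incr_every_n_steps,
               decr_every_n_nan_or_inf=2, incr_ratio=2, decr_ratio=0.8):
    self.loss_scale = float(init_loss_scale)
    self.incr_every_n_steps = incr_every_n_steps
    self.decr_every_n_nan_or_inf = decr_every_n_nan_or_inf
    self.incr_ratio = incr_ratio
    self.decr_ratio = decr_ratio
    self.num_good_steps = 0  # consecutive steps with finite gradients
    self.num_bad_steps = 0   # accumulated steps with nan/inf gradients

  def update_loss_scale(self, finite_grads):
    FLOAT32_MAX = 3.4028235e38
    if finite_grads:
      self.num_good_steps += 1
      if self.num_good_steps >= self.incr_every_n_steps:
        new_scale = self.loss_scale * self.incr_ratio
        if new_scale <= FLOAT32_MAX:  # skip updates that overflow float32
          self.loss_scale = new_scale
        # Both counters are cleared whenever an update is attempted.
        self.num_good_steps = 0
        self.num_bad_steps = 0
    else:
      self.num_good_steps = 0  # an increase needs *consecutive* finite steps
      self.num_bad_steps += 1  # a decrease only needs *accumulated* bad steps
      if self.num_bad_steps >= self.decr_every_n_nan_or_inf:
        new_scale = self.loss_scale * self.decr_ratio
        if new_scale >= 1:  # skip updates whose result is less than 1
          self.loss_scale = new_scale
        self.num_good_steps = 0
        self.num_bad_steps = 0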

__init__

__init__(
    init_loss_scale,
    incr_every_n_steps,
    decr_every_n_nan_or_inf=2,
    incr_ratio=2,
    decr_ratio=0.8
)

Constructor of exponential-update loss scale manager.

Args:

  • init_loss_scale: A Python float. The loss scale to use at the beginning.
  • incr_every_n_steps: Increases the loss scale every n consecutive steps with finite gradients.
  • decr_every_n_nan_or_inf: Decreases the loss scale every n accumulated steps with nan or inf gradients.
  • incr_ratio: The greater-than-one multiplier to use when increasing the loss scale.
  • decr_ratio: The less-than-one multiplier to use when decreasing the loss scale.

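A brief usage sketch (TF 1.x graph mode): the manager is typically handed to tf.contrib.mixed_precision.LossScaleOptimizer, which scales the loss, unscales the gradients, and updates the loss scale each step. The hyperparameter values below are illustrative only, and `loss` is assumed to be a scalar loss tensor defined by the surrounding model:

import tensorflow as tf

loss_scale_manager = tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager(
    init_loss_scale=2 ** 15,     # start large; overflows will shrink it
    incr_every_n_steps=2000,     # x2 after 2000 consecutive finite steps
    decr_every_n_nan_or_inf=2,   # x0.8 after 2 accumulated nan/inf steps
    incr_ratio=2,
    decr_ratio=0.8)

opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
opt = tf.contrib.mixed_precision.LossScaleOptimizer(opt, loss_scale_manager)
train_op = opt.minimize(loss)
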
Methods

tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager.get_loss_scale

get_loss_scale()

Returns the loss scale.
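
A minimal sketch of inspecting the scale during training, reusing `loss_scale_manager` from the constructor example above (in graph mode the returned value is a tensor, so it must be evaluated in a session):

scale_t = loss_scale_manager.get_loss_scale()
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(scale_t))  # 32768.0 with init_loss_scale=2 ** 15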

tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager.update_loss_scale

update_loss_scale(finite_grads)

Updates the loss scale based on whether the gradients are finite in the current step.
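
When not going through LossScaleOptimizer, the update can be driven by hand. A hedged sketch, reusing `loss_scale_manager` and `loss` from the examples above with a plain (unwrapped) optimizer and float32 gradients assumed:

plain_opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
scale = loss_scale_manager.get_loss_scale()
scaled_loss = loss * tf.cast(scale, loss.dtype)   # scale up the loss
grads_and_vars = plain_opt.compute_gradients(scaled_loss)
grads = [g / tf.cast(scale, g.dtype)              # scale gradients back down
         for g, _ in grads_and_vars if g is not None]
finite = tf.reduce_all(
    tf.stack([tf.reduce_all(tf.is_finite(g)) for g in grads]))
# Returns an op that, when executed, bumps the counters and possibly rescales.
update_op = loss_scale_manager.update_loss_scale(finite)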