tf.mixed_precision.experimental.DynamicLossScale

Loss scale that dynamically adjusts itself.

Inherits From: LossScale

tf.mixed_precision.experimental.DynamicLossScale(
    initial_loss_scale=(2 ** 15), increment_period=2000, multiplier=2.0
)

Dynamic loss scaling works by adjusting the loss scale as training progresses. The goal is to keep the loss scale as high as possible without overflowing the gradients. As long as the gradients do not overflow, raising the loss scale never hurts.

The algorithm starts by setting the loss scale to initial_loss_scale. Every increment_period consecutive steps in which the gradients are finite, the loss scale is multiplied by multiplier. However, if a NaN or Inf gradient is found, the gradients for that step are not applied, and the loss scale is divided by multiplier. This process tends to keep the loss scale as high as possible without gradients overflowing.
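The algorithm above can be sketched in plain Python. This is an illustrative sketch, not the TensorFlow implementation; the class and method names here are hypothetical, and the real class operates on tensors rather than Python floats:

```python
class DynamicLossScaleSketch:
    """Illustrative sketch of dynamic loss scaling (not the TF implementation)."""

    def __init__(self, initial_loss_scale=2 ** 15, increment_period=2000,
                 multiplier=2.0):
        self.loss_scale = initial_loss_scale
        self.increment_period = increment_period
        self.multiplier = multiplier
        self.num_good_steps = 0  # consecutive steps with finite gradients

    def update(self, grads_are_finite):
        """Adjusts the scale; returns True if this step's gradients may be applied."""
        if grads_are_finite:
            self.num_good_steps += 1
            if self.num_good_steps >= self.increment_period:
                # Gradients were finite for a full period: raise the scale.
                self.loss_scale *= self.multiplier
                self.num_good_steps = 0
            return True
        # Overflow: lower the scale and signal that this step should be skipped.
        self.loss_scale = max(1.0, self.loss_scale / self.multiplier)
        self.num_good_steps = 0
        return False
```

With increment_period=2, for example, two consecutive finite steps double the scale, while a single nonfinite step halves it and skips that step's gradient application.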

Args:

initial_loss_scale: A Python float. The loss scale to start with. It is better to start this at a very high number, because a loss scale that is too high gets lowered far more quickly than a loss scale that is too low gets raised.
increment_period: Increases the loss scale every increment_period consecutive steps in which finite gradients are encountered. If a nonfinite gradient is encountered, the count is reset back to zero.
multiplier: The multiplier to use when increasing or decreasing the loss scale.

Attributes:

increment_period
initial_loss_scale
multiplier

Methods

__call__

__call__()

Returns the current loss scale as a scalar float32 tensor.

from_config

@classmethod
from_config(
    config
)

Creates the LossScale from its config.

get_config

get_config()

Returns the config of this loss scale.
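The from_config/get_config pair gives a serialization round-trip. A minimal sketch of that contract, assuming the config is a plain dict of constructor arguments (the class name here is hypothetical):

```python
class LossScaleConfigSketch:
    """Illustrative config round-trip (not the TF implementation)."""

    def __init__(self, initial_loss_scale=2 ** 15, increment_period=2000,
                 multiplier=2.0):
        self.initial_loss_scale = initial_loss_scale
        self.increment_period = increment_period
        self.multiplier = multiplier

    def get_config(self):
        # Serialize the constructor arguments as a plain dict.
        return {
            'initial_loss_scale': self.initial_loss_scale,
            'increment_period': self.increment_period,
            'multiplier': self.multiplier,
        }

    @classmethod
    def from_config(cls, config):
        # Rebuild an equivalent instance from the dict.
        return cls(**config)
```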

update

update(
    grads
)

Updates the loss scale based on whether the gradients are finite in the current step. If any gradient is nonfinite (NaN or Inf), the loss scale is decreased and the gradients for that step should not be applied.
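A hedged sketch of the overflow-handling half of this update, assuming plain Python floats for gradients (the real method operates on tensors, and the period-based increase is omitted here for brevity; the function name is illustrative):

```python
import math


def update_sketch(loss_scale, grads, multiplier=2.0):
    """Returns (new_loss_scale, should_apply_gradients) for one step."""
    finite = all(math.isfinite(g) for g in grads)
    if finite:
        # Finite gradients: keep the scale; the caller applies the gradients.
        return loss_scale, True
    # NaN/Inf found: skip this step's gradients and lower the scale.
    return max(1.0, loss_scale / multiplier), False
```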