Loss scale that dynamically adjusts itself.
Inherits From: `LossScale`
```python
tf.compat.v1.mixed_precision.DynamicLossScale(
    initial_loss_scale=(2 ** 15), increment_period=2000, multiplier=2.0
)
```
Dynamic loss scaling works by adjusting the loss scale as training progresses. The goal is to keep the loss scale as high as possible without overflowing the gradients. As long as the gradients do not overflow, raising the loss scale never hurts.
The algorithm starts by setting the loss scale to an initial value (`initial_loss_scale`). Every `increment_period` consecutive steps in which the gradients are finite, the loss scale is multiplied by `multiplier`. However, if a NaN or Inf gradient is found, the gradients for that step are not applied, and the loss scale is divided by `multiplier` instead. This process tends to keep the loss scale as high as possible without the gradients overflowing.
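For illustration, a minimal sketch of constructing a `DynamicLossScale` and wrapping an optimizer with it; the choice of `tf.compat.v1.mixed_precision.MixedPrecisionLossScaleOptimizer` and the learning rate are illustrative assumptions, not prescribed by this page:

```python
import tensorflow as tf

# Starts at 2**15, doubles after 2000 consecutive finite-gradient steps,
# and is reduced by the same factor when a NaN/Inf gradient appears.
loss_scale = tf.compat.v1.mixed_precision.DynamicLossScale(
    initial_loss_scale=2 ** 15, increment_period=2000, multiplier=2.0)

# One common way to apply it: wrap a TF1-style optimizer so the loss is
# scaled and the gradients are unscaled (and the scale updated) for you.
opt = tf.compat.v1.train.GradientDescentOptimizer(0.01)
opt = tf.compat.v1.mixed_precision.MixedPrecisionLossScaleOptimizer(
    opt, loss_scale)
```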
Attributes |
---|---
`increment_period` | Number of consecutive finite-gradient steps after which the loss scale is increased.
`initial_loss_scale` | The loss scale to use at the start of training.
`multiplier` | The factor by which the loss scale is increased or decreased.
Methods
from_config

```python
@classmethod
from_config(
    config
)
```

Creates the LossScale from its config.
get_config

```python
get_config()
```

Returns the config of this loss scale.
update

```python
update(
    grads
)
```

Updates the loss scale based on whether the gradients are finite in the current step.
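A sketch of calling `update` directly under eager execution; `increment_period=1` is chosen here so a single finite step is enough to raise the scale. `update` returns an update op (None in eager mode) and a boolean indicating whether the step's gradients should be applied:

```python
import tensorflow as tf

loss_scale = tf.compat.v1.mixed_precision.DynamicLossScale(
    initial_loss_scale=4.0, increment_period=1, multiplier=2.0)

grads = [tf.constant([1.0, 2.0])]           # finite gradients
update_op, should_apply = loss_scale.update(grads)
# should_apply is True: the gradients were finite and may be applied,
# and the loss scale has been doubled from 4.0 to 8.0.

bad_grads = [tf.constant([float('nan')])]   # non-finite gradients
update_op, should_apply = loss_scale.update(bad_grads)
# should_apply is False: skip applying gradients; the scale is halved.
```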
__call__

```python
__call__()
```

Returns the current loss scale as a scalar float32 tensor.
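`__call__` simply reads the current scale, so the object can be used like a function when scaling a loss by hand; the `loss` tensor below is a stand-in for a real model loss:

```python
import tensorflow as tf

loss_scale = tf.compat.v1.mixed_precision.DynamicLossScale()
print(float(loss_scale()))         # 32768.0, i.e. the default 2**15

loss = tf.constant(3.0)            # stand-in for a real model loss
scaled_loss = loss * loss_scale()  # scale before computing gradients
```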