Class LARSOptimizer
Inherits From: Optimizer
Defined in tensorflow/contrib/opt/python/training/lars_optimizer.py
Layer-wise Adaptive Rate Scaling for large batch training.
Introduced by "Large Batch Training of Convolutional Networks" by Y. You, I. Gitman, and B. Ginsburg. (https://arxiv.org/abs/1708.03888)
Implements the LARS learning rate scheme presented in the paper above. This optimizer is useful when scaling the batch size up to 32K without significant performance degradation. It is recommended to use the optimizer in conjunction with:
- Gradual learning rate warm-up
- Linear learning rate scaling
- Poly rule learning rate decay
Note: LARS scaling is currently only enabled for dense tensors. Sparse tensors fall back to the default momentum optimizer.
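A minimal usage sketch, assuming a TF 1.x graph-mode setup where tf.contrib is available; the toy model and the poly-rule decay constants below are illustrative assumptions, not part of the API:

import tensorflow as tf

# Hypothetical toy model; any graph-mode loss works here.
x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])
pred = tf.layers.dense(x, 1)
loss = tf.losses.mean_squared_error(y, pred)

global_step = tf.train.get_or_create_global_step()
# Poly rule decay of the base learning rate, per the recommendation above
# (a warm-up phase would be layered on top in practice).
lr = tf.train.polynomial_decay(0.1, global_step,
                               decay_steps=10000,
                               end_learning_rate=0.0, power=2.0)

optimizer = tf.contrib.opt.LARSOptimizer(
    learning_rate=lr,
    momentum=0.9,
    weight_decay=0.0001,
    # Batch-norm and bias variables are typically excluded from LARS scaling.
    skip_list=['batch_normalization', 'bias'])
train_op = optimizer.minimize(loss, global_step=global_step)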
__init__
__init__(
    learning_rate,
    momentum=0.9,
    weight_decay=0.0001,
    eeta=0.001,
    epsilon=0.0,
    name='LARSOptimizer',
    skip_list=None,
    use_nesterov=False
)
Construct a new LARS Optimizer.
Args:
learning_rate: A Tensor or floating point value. The base learning rate.
momentum: A floating point value. Momentum hyperparameter.
weight_decay: A floating point value. Weight decay hyperparameter.
eeta: LARS coefficient as used in the paper. Defaults to the LARS coefficient from the paper. (eeta / weight_decay) determines the highest scaling factor in LARS.
epsilon: Optional epsilon parameter to be set in models that have very small gradients. Default set to 0.0.
name: Optional name prefix for variables and ops created by LARSOptimizer.
skip_list: List of strings to enable skipping variables from LARS scaling. If any of the strings in skip_list is a substring of var.name, variable 'var' is skipped from LARS scaling. For a typical classification model with batch normalization, the skip_list is ['batch_normalization', 'bias'].
use_nesterov: When set to True, Nesterov momentum is enabled.
Raises:
ValueError: If a hyperparameter is set to a nonsensical value.
Methods
tf.contrib.opt.LARSOptimizer.apply_gradients
apply_gradients(
    grads_and_vars,
    global_step=None,
    name=None
)
Apply gradients to variables.
This is the second part of minimize(). It returns an Operation that applies gradients.
Args:
grads_and_vars: List of (gradient, variable) pairs as returned by compute_gradients().
global_step: Optional Variable to increment by one after the variables have been updated.
name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.
Returns:
An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step.
Raises:
TypeError: If grads_and_vars is malformed.
ValueError: If none of the variables have gradients.
RuntimeError: If you should use _distributed_apply() instead.
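A hedged sketch of the usual pattern for processing gradients between compute_gradients() and apply_gradients(), continuing the overview example above; the clipping step and its threshold are illustrative assumptions, not part of LARS:

grads_and_vars = optimizer.compute_gradients(loss)
# Hypothetical processing step: clip each dense gradient by norm,
# skipping variables that received no gradient.
processed = [(tf.clip_by_norm(g, 1.0), v)
             for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(processed, global_step=global_step)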
tf.contrib.opt.LARSOptimizer.compute_gradients
compute_gradients(
    loss,
    var_list=None,
    gate_gradients=GATE_OP,
    aggregation_method=None,
    colocate_gradients_with_ops=False,
    grad_loss=None
)
Compute gradients of loss for the variables in var_list.
This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable.
Args:
loss: A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable.
var_list: Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method: Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
grad_loss: Optional. A Tensor holding the gradient computed for loss.
Returns:
A list of (gradient, variable) pairs. Variable is always present, but gradient can be None.
Raises:
TypeError: If var_list contains anything else than Variable objects.
ValueError: If some arguments are invalid.
RuntimeError: If called with eager execution enabled and loss is not callable.
Eager Compatibility
When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored.
tf.contrib.opt.LARSOptimizer.compute_lr
compute_lr(
    grad,
    var
)
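This method has no docstring in the source. As a hedged sketch, the per-variable scaled learning rate follows the trust-ratio rule from the paper; the exact contrib implementation may differ in edge-case handling:

import tensorflow as tf

def lars_scaled_lr(learning_rate, grad, var,
                   eeta=0.001, weight_decay=0.0001, epsilon=0.0):
  # Trust ratio from the paper:
  #   eeta * ||w|| / (||g|| + weight_decay * ||w|| + epsilon)
  # Falls back to 1.0 when either norm is zero (an assumption for this sketch).
  w_norm = tf.norm(var)
  g_norm = tf.norm(grad)
  trust_ratio = tf.where(
      tf.logical_and(tf.greater(w_norm, 0.0), tf.greater(g_norm, 0.0)),
      eeta * w_norm / (g_norm + weight_decay * w_norm + epsilon),
      1.0)
  return learning_rate * trust_ratio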
tf.contrib.opt.LARSOptimizer.get_name
get_name()
tf.contrib.opt.LARSOptimizer.get_slot
get_slot(
    var,
    name
)
Return a slot named name created for var by the Optimizer.
Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them.
Use get_slot_names() to get the list of slot names created by the Optimizer.
Args:
var: A variable passed to minimize() or apply_gradients().
name: A string.
Returns:
The Variable for the slot if it was created, None otherwise.
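A hedged sketch of retrieving the per-variable accumulators, continuing the overview example above; the slot name 'momentum' is an assumption based on the underlying momentum update:

for v in tf.trainable_variables():
  slot = optimizer.get_slot(v, 'momentum')  # assumed slot name
  if slot is not None:
    print(v.name, '->', slot.name)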
tf.contrib.opt.LARSOptimizer.get_slot_names
get_slot_names()
Return a list of the names of slots created by the Optimizer.
See get_slot().
Returns:
A list of strings.
tf.contrib.opt.LARSOptimizer.minimize
minimize(
    loss,
    global_step=None,
    var_list=None,
    gate_gradients=GATE_OP,
    aggregation_method=None,
    colocate_gradients_with_ops=False,
    name=None,
    grad_loss=None
)
Add operations to minimize loss by updating var_list.
This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function.
Args:
loss: A Tensor containing the value to minimize.
global_step: Optional Variable to increment by one after the variables have been updated.
var_list: Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method: Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
name: Optional name for the returned operation.
grad_loss: Optional. A Tensor holding the gradient computed for loss.
Returns:
An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step.
Raises:
ValueError: If some of the variables are not Variable objects.
Eager Compatibility
When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled.
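A hedged eager-mode sketch, assuming a TF 1.x build with eager support and that the contrib optimizer handles resource variables; the quadratic loss is purely illustrative:

import tensorflow as tf
tf.enable_eager_execution()

w = tf.Variable([1.0, 2.0])
opt = tf.contrib.opt.LARSOptimizer(learning_rate=0.1)
# In eager mode, loss must be a zero-argument callable.
opt.minimize(lambda: tf.reduce_sum(tf.square(w)))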
tf.contrib.opt.LARSOptimizer.variables
variables()
A list of variables which encode the current state of the Optimizer.
.
Includes slot variables and additional global variables created by the optimizer in the current default graph.
Returns:
A list of variables.
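Continuing the overview example, a hedged sketch that enumerates the optimizer's state (slot variables plus any extras created in the default graph):

# Assumes `optimizer` and `train_op` from the overview sketch above.
for v in optimizer.variables():
  print(v.name, v.shape)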