tf.contrib.layers.optimize_loss(
loss,
global_step,
learning_rate,
optimizer,
gradient_noise_scale=None,
gradient_multipliers=None,
clip_gradients=None,
learning_rate_decay_fn=None,
update_ops=None,
variables=None,
name=None,
summaries=None,
colocate_gradients_with_ops=False,
increment_global_step=True
)
Defined in `tensorflow/contrib/layers/python/layers/optimizers.py`.
Given a loss and parameters for the optimizer, returns a training op.
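For orientation, here is a minimal sketch of the basic call, assuming TensorFlow 1.x (where `tf.contrib` is available); the toy linear-regression loss and the placeholder/variable names are illustrative only, not part of this API:

```python
import tensorflow as tf  # assumes TensorFlow 1.x, where tf.contrib exists

# Toy linear-regression loss; these names are illustrative.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
y = tf.placeholder(tf.float32, shape=[None, 1], name="y")
w = tf.get_variable("w", shape=[3, 1])
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

# Fetch (or create) the step counter that optimize_loss will increment.
global_step = tf.train.get_or_create_global_step()

# One op that computes gradients, applies them, and bumps global_step.
train_op = tf.contrib.layers.optimize_loss(
    loss=loss,
    global_step=global_step,
    learning_rate=0.01,
    optimizer='SGD')
```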
Various ways of passing optimizers include (each is demonstrated in the sketch after this list):

- By string, specifying the name of the optimizer. See OPTIMIZER_CLS_NAMES for the full list. E.g. `optimize_loss(..., optimizer='Adam')`.
- By a function taking the learning rate `Tensor` as its argument and returning an `Optimizer` instance. E.g. `optimize_loss(..., optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5))`. Alternatively, if `learning_rate` is `None`, the function takes no arguments. E.g. `optimize_loss(..., learning_rate=None, optimizer=lambda: tf.train.MomentumOptimizer(0.5, momentum=0.5))`.
- By a subclass of `Optimizer` having a single-argument constructor (the argument is the learning rate), such as `AdamOptimizer` or `AdagradOptimizer`. E.g. `optimize_loss(..., optimizer=tf.train.AdagradOptimizer)`.
- By an instance of a subclass of `Optimizer`. E.g. `optimize_loss(..., optimizer=tf.train.AdagradOptimizer(0.5))`.
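A sketch of the four options side by side, reusing the `loss` and `global_step` from the example above; in a real graph you would build only one of these `train_op`s:

```python
# 1. By name; the string must appear in OPTIMIZER_CLS_NAMES.
train_op = tf.contrib.layers.optimize_loss(
    loss, global_step, learning_rate=0.01, optimizer='Adam')

# 2. By a function mapping the learning-rate Tensor to an Optimizer.
train_op = tf.contrib.layers.optimize_loss(
    loss, global_step, learning_rate=0.01,
    optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5))

#    ... or, with learning_rate=None, a zero-argument function.
train_op = tf.contrib.layers.optimize_loss(
    loss, global_step, learning_rate=None,
    optimizer=lambda: tf.train.MomentumOptimizer(0.5, momentum=0.5))

# 3. By an Optimizer subclass with a single-argument constructor.
train_op = tf.contrib.layers.optimize_loss(
    loss, global_step, learning_rate=0.1,
    optimizer=tf.train.AdagradOptimizer)

# 4. By an already-constructed Optimizer instance; the instance
#    carries its own learning rate, so learning_rate may be None.
train_op = tf.contrib.layers.optimize_loss(
    loss, global_step, learning_rate=None,
    optimizer=tf.train.AdagradOptimizer(0.5))
```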
Args:

- `loss`: Scalar `Tensor`.
- `global_step`: Scalar int `Tensor`, a step counter to update on each step unless `increment_global_step` is `False`. If not supplied, it will be fetched from the default graph (see `tf.train.get_global_step` for details). If it has not been created, no step will be incremented with each weight update. `learning_rate_decay_fn` requires `global_step`.
- `learning_rate`: float or `Tensor`, magnitude of the update per training step. Can be `None`.
- `optimizer`: string, class, or optimizer instance, used as the trainer. A string should be the name of an optimizer, like 'SGD', 'Adam', or 'Adagrad'; the full list is in the OPTIMIZER_CLS_NAMES constant. A class should be a subclass of `tf.Optimizer` that implements the `compute_gradients` and `apply_gradients` functions. An optimizer instance should be an instantiation of a `tf.Optimizer` subclass and have `compute_gradients` and `apply_gradients` functions.
- `gradient_noise_scale`: float or `None`; adds zero-mean normal noise scaled by this value.
- `gradient_multipliers`: dict of variables or variable names to floats. If present, gradients for the specified variables will be multiplied by the given constant.
- `clip_gradients`: float, callable, or `None`. If a float is provided, global clipping is applied to prevent the norm of the gradients from exceeding this value. Alternatively, a callable can be provided, e.g. `adaptive_clipping`. This callable takes a `list` of `(gradients, variables)` `tuple`s and returns the same thing with the gradients modified. (See the sketch after this list for a clipping example.)
- `learning_rate_decay_fn`: function; takes `learning_rate` and `global_step` `Tensor`s and returns a `Tensor`. Can be used to implement any learning rate decay function, for example `tf.train.exponential_decay`. Ignored if `learning_rate` is not supplied.
- `update_ops`: list of update `Operation`s to execute at each step. If `None`, uses the elements of the UPDATE_OPS collection. The order of execution between `update_ops` and `loss` is non-deterministic.
- `variables`: list of variables to optimize, or `None` to use all trainable variables.
- `name`: The name for this operation, used to scope operations and summaries.
- `summaries`: List of internal quantities to visualize on TensorBoard. If not set, the loss, the learning rate, and the global norm of the gradients will be reported. The complete list of possible values is in OPTIMIZER_SUMMARIES.
- `colocate_gradients_with_ops`: If `True`, try colocating gradients with the corresponding op.
- `increment_global_step`: Whether to increment `global_step`. If your model calls `optimize_loss` multiple times per training step (e.g. to optimize different parts of the model), use this argument to avoid incrementing `global_step` more times than necessary.
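As referenced in the `clip_gradients` entry above, here is a sketch combining several of these arguments; the decay constants (10000 steps, factor 0.96), the clipping threshold, and the multiplier on `w` (from the earlier example) are all illustrative choices, not defaults:

```python
train_op = tf.contrib.layers.optimize_loss(
    loss=loss,
    global_step=global_step,
    learning_rate=0.1,
    optimizer='SGD',
    clip_gradients=5.0,  # clip the global gradient norm at 5.0
    # Decay the rate exponentially as global_step grows.
    learning_rate_decay_fn=lambda lr, step: tf.train.exponential_decay(
        lr, step, decay_steps=10000, decay_rate=0.96),
    # Halve the gradient flowing into `w` (hypothetical choice).
    gradient_multipliers={w: 0.5})
```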
Returns:
Training op.
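A short usage sketch: running the returned op performs one full update step. In the contrib implementation the returned tensor also evaluates to the loss (an implementation detail, not guaranteed by this doc), so `sess.run(train_op)` doubles as a loss readout. The feed values below are random and purely illustrative:

```python
import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        # Random batches, purely illustrative.
        batch_x = np.random.rand(32, 3).astype(np.float32)
        batch_y = np.random.rand(32, 1).astype(np.float32)
        # Applies one update; also evaluates to the loss value
        # (implementation detail of contrib's optimize_loss).
        loss_value = sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
```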
Raises:
- `ValueError`: if:
  - `loss` is an invalid type or shape.
  - `global_step` is an invalid type or shape.
  - `learning_rate` is an invalid type or value.
  - `optimizer` has the wrong type.
  - `clip_gradients` is neither float nor callable.
  - `learning_rate` and `learning_rate_decay_fn` are supplied, but no `global_step` is available.
  - `gradients` is empty.