Optimizer that implements the Adam algorithm.
Inherits From: Optimizer
tf.keras.optimizers.Adam(
learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False,
name='Adam', **kwargs
)
Adam optimization is a stochastic gradient descent method that is based on adaptive estimation of first-order and second-order moments. According to Kingma et al., 2014 (Adam: A Method for Stochastic Optimization), the method is "*computationally efficient, has little memory requirement, invariant to diagonal rescaling of gradients, and is well suited for problems that are large in terms of data/parameters*".
For the AMSGrad variant, see On the Convergence of Adam and Beyond (Reddi et al., 2018).
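A minimal usage sketch (the model below is illustrative, not part of this API):

import tensorflow as tf

# Standalone construction with explicit (default) hyperparameters.
opt = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7)

# Typical Keras use: pass the optimizer (or the string "adam") to compile().
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=opt, loss="mse")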
Args
learning_rate: A Tensor or a floating point value. The learning rate.
beta_1: A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
beta_2: A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.
epsilon: A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper. See the update rule sketched after this list.
amsgrad: Boolean. Whether to apply the AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and Beyond".
name: Optional name for the operations created when applying gradients. Defaults to "Adam".
**kwargs: Keyword arguments. Allowed to be {clipnorm, clipvalue, lr, decay}. clipnorm clips gradients by norm; clipvalue clips gradients by value; decay is included for backward compatibility to allow time-inverse decay of the learning rate; lr is included for backward compatibility, and learning_rate is recommended instead.
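For reference, a sketch of the update rule this optimizer applies, written in the "epsilon hat" form from just before Section 2.1 of Kingma and Ba (g_t is the gradient, alpha is learning_rate, and the hatted epsilon is the epsilon argument above):

% Adam update in the "epsilon hat" formulation (Kingma & Ba, just before Sec. 2.1).
\begin{aligned}
m_t &= \beta_1\, m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2\, v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\alpha_t &= \alpha \cdot \sqrt{1 - \beta_2^t} \,/\, (1 - \beta_1^t) \\
\theta_t &= \theta_{t-1} - \alpha_t\, m_t \,/\, \big(\sqrt{v_t} + \hat{\epsilon}\big)
\end{aligned}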
Attributes
iterations: Variable. The number of training steps this Optimizer has run.
weights: Returns variables of this Optimizer based on the order created.
add_slot
add_slot(
var, slot_name, initializer='zeros'
)
Add a new slot variable for var.
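add_slot is mainly used when implementing an optimizer subclass. A minimal sketch, assuming the OptimizerV2 base class documented here (the class name SGDWithMomentum and the slot name "momentum" are illustrative; serialization via get_config is omitted):

import tensorflow as tf

class SGDWithMomentum(tf.keras.optimizers.Optimizer):
    def __init__(self, learning_rate=0.01, momentum=0.9, name="SGDWithMomentum", **kwargs):
        super().__init__(name, **kwargs)
        self._set_hyper("learning_rate", learning_rate)
        self._set_hyper("momentum", momentum)

    def _create_slots(self, var_list):
        # One zero-initialized "momentum" slot per trained variable.
        for var in var_list:
            self.add_slot(var, "momentum")

    def _resource_apply_dense(self, grad, var, apply_state=None):
        var_dtype = var.dtype.base_dtype
        lr = self._get_hyper("learning_rate", var_dtype)
        mu = self._get_hyper("momentum", var_dtype)
        m = self.get_slot(var, "momentum")
        m_t = m.assign(mu * m + grad)     # update the slot in place
        return var.assign_sub(lr * m_t)   # apply the momentum step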
add_weight
add_weight(
name, shape, dtype=None, initializer='zeros', trainable=None,
synchronization=tf.VariableSynchronization.AUTO,
aggregation=tf.compat.v1.VariableAggregation.NONE
)
apply_gradients
apply_gradients(
grads_and_vars, name=None
)
Apply gradients to variables.
This is the second part of minimize(). It returns an Operation that applies gradients.
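A sketch of computing gradients with tf.GradientTape and applying them with apply_gradients (the clipping step and variable are illustrative):

import tensorflow as tf

opt = tf.keras.optimizers.Adam(learning_rate=0.001)
var = tf.Variable(2.0)

with tf.GradientTape() as tape:
    loss = var ** 2
grads = tape.gradient(loss, [var])
grads = [tf.clip_by_norm(g, 1.0) for g in grads]   # optional gradient processing
opt.apply_gradients(zip(grads, [var]))
print(opt.iterations.numpy())  # 1: apply_gradients incremented iterations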
Args
grads_and_vars: List of (gradient, variable) pairs.
name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.
Returns
An Operation that applies the specified gradients. The iterations will be automatically increased by 1.
Raises
TypeError: If grads_and_vars is malformed.
ValueError: If none of the variables have gradients.
from_config
@classmethod
from_config(
config, custom_objects=None
)
Creates an optimizer from its config.
This method is the reverse of get_config, capable of instantiating the same optimizer from the config dictionary.
Args
config: A Python dictionary, typically the output of get_config.
custom_objects: A Python dictionary mapping names to additional Python objects used to create this optimizer, such as a function used for a hyperparameter.
Returns
An optimizer instance.
get_config
get_config()
Returns the config of the optimizer.
An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.
Returns
A Python dictionary.
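A sketch of the get_config / from_config round trip (values are illustrative):

import tensorflow as tf

opt = tf.keras.optimizers.Adam(learning_rate=0.01, beta_1=0.8)
config = opt.get_config()        # a plain, serializable dict
print(config["learning_rate"])   # 0.01

# Rebuild an equivalent (stateless) optimizer from the config.
opt2 = tf.keras.optimizers.Adam.from_config(config)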
get_gradients
get_gradients(
loss, params
)
Returns gradients of loss with respect to params.
Args
loss: Loss tensor.
params: List of variables.
Returns
List of gradient tensors.
Raises
ValueError: In case any gradient cannot be computed (e.g. if gradient function not implemented).
get_slot
get_slot(
var, slot_name
)
get_slot_names
get_slot_names()
A list of names for this optimizer's slots.
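A sketch of inspecting Adam's slot variables after one update step (the variable is illustrative; Adam keeps per-variable first- and second-moment slots, plus an extra slot when amsgrad=True):

import tensorflow as tf

opt = tf.keras.optimizers.Adam()
var = tf.Variable(1.0)
opt.minimize(lambda: var ** 2, var_list=[var])

print(opt.get_slot_names())    # e.g. ['m', 'v']
m = opt.get_slot(var, 'm')     # first-moment accumulator for `var`
v = opt.get_slot(var, 'v')     # second-moment accumulator for `var`
print(m.numpy(), v.numpy())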
get_updates
get_updates(
loss, params
)
get_weights
get_weights()
minimize
minimize(
loss, var_list, grad_loss=None, name=None
)
Minimize loss by updating var_list.
This method simply computes gradients using tf.GradientTape and calls apply_gradients(). If you want to process the gradients before applying them, call tf.GradientTape and apply_gradients() explicitly instead of using this function.
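A minimal sketch of minimize with a callable loss (the variable and loss below are illustrative):

import tensorflow as tf

opt = tf.keras.optimizers.Adam(learning_rate=0.1)
var = tf.Variable(4.0)

# `loss` must be a zero-argument callable so minimize() can re-evaluate it
# under its own GradientTape on every call.
loss = lambda: (var - 1.0) ** 2
for _ in range(5):
    opt.minimize(loss, var_list=[var])
print(var.numpy())  # moves toward 1.0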
Args
loss: A callable taking no arguments which returns the value to minimize.
var_list: List or tuple of Variable objects to update to minimize loss, or a callable returning the list or tuple of Variable objects. Use a callable when the variable list would otherwise be incomplete before minimize, since the variables are created the first time loss is called.
grad_loss: Optional. A Tensor holding the gradient computed for loss.
name: Optional name for the returned operation.
Returns
An Operation that updates the variables in var_list. The iterations will be automatically increased by 1.
Raises
ValueError: If some of the variables are not Variable objects.
set_weights
set_weights(
weights
)
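A sketch of copying optimizer state with get_weights / set_weights (both optimizers must already have created their weights, e.g. by applying at least one gradient; the variables here are illustrative):

import tensorflow as tf

def one_step(opt, var):
    # Forces creation of the iteration counter and the Adam slot variables.
    opt.minimize(lambda: var ** 2, var_list=[var])

src, dst = tf.keras.optimizers.Adam(), tf.keras.optimizers.Adam()
v1, v2 = tf.Variable(1.0), tf.Variable(1.0)
one_step(src, v1)
one_step(dst, v2)

# get_weights() returns numpy values in creation order (iterations first);
# set_weights() expects values with the same structure and shapes.
dst.set_weights(src.get_weights())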
variables
variables()
Returns variables of this Optimizer based on the order created.