Class Adagrad
Inherits From: Optimizer
Defined in tensorflow/python/keras/optimizers.py.
Adagrad optimizer.
Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller its subsequent updates become.
It is recommended to leave the parameters of this optimizer at their default values.
Arguments
lr: float >= 0. Initial learning rate.
epsilon: float >= 0. If `None`, defaults to `K.epsilon()`.
decay: float >= 0. Learning rate decay over each update.
References
- [Adaptive Subgradient Methods for Online Learning and Stochastic Optimization](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)
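The scaling behaviour can be sketched in a few lines of NumPy (an illustration of the idea above, not the library's implementation; the function and variable names are made up for the example):

```python
import numpy as np

def adagrad_step(param, grad, accum, lr=0.01, epsilon=1e-7):
    # Accumulate the squared gradient seen so far for each parameter.
    accum = accum + grad ** 2
    # Parameters with a large accumulator take a smaller effective step.
    param = param - lr * grad / (np.sqrt(accum) + epsilon)
    return param, accum

param = np.zeros(2)
accum = np.zeros(2)
for _ in range(100):
    grad = np.array([1.0, 0.01])  # coordinate 0 sees large gradients, coordinate 1 small ones
    param, accum = adagrad_step(param, grad, accum)

# Effective per-coordinate learning rate: much smaller for the frequently
# updated coordinate 0 than for coordinate 1.
print(0.01 / (np.sqrt(accum) + 1e-7))
```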
__init__
__init__(
    lr=0.01,
    epsilon=None,
    decay=0.0,
    **kwargs
)
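A minimal usage sketch, assuming a hypothetical toy model and random data (in practice the default arguments above are usually left as-is):

```python
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adagrad

# Hypothetical toy classifier trained on random data.
model = Sequential([Dense(10, activation='softmax', input_shape=(20,))])
model.compile(optimizer=Adagrad(lr=0.01, epsilon=None, decay=0.0),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

x = np.random.random((32, 20))
y = np.eye(10)[np.random.randint(0, 10, size=32)]
model.fit(x, y, epochs=1, verbose=0)
```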
Methods
tf.keras.optimizers.Adagrad.from_config
from_config(
    cls,
    config
)
tf.keras.optimizers.Adagrad.get_config
get_config()
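A minimal sketch of the round trip between get_config and from_config; the exact keys in the returned dict depend on the TF/Keras version documented here:

```python
from tensorflow.keras.optimizers import Adagrad

opt = Adagrad(lr=0.05, decay=1e-4)
config = opt.get_config()            # plain dict of hyperparameters, e.g. 'lr', 'epsilon', 'decay'
restored = Adagrad.from_config(config)
print(restored.get_config())         # an equivalent optimizer with the same hyperparameters
```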
tf.keras.optimizers.Adagrad.get_gradients
get_gradients(
    loss,
    params
)
Returns gradients of loss with respect to params.
Arguments:
loss: Loss tensor.
params: List of variables.
Returns:
List of gradient tensors.
Raises:
ValueError: In case any gradient cannot be computed (e.g. if gradient function not implemented).
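A minimal sketch, assuming the graph-mode (TF 1.x-style) Keras backend where get_gradients returns symbolic gradient tensors; the model and loss below are hypothetical:

```python
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adagrad

# Hypothetical model with a symbolic mean-squared-error loss.
model = Sequential([Dense(1, input_shape=(4,))])
opt = Adagrad(lr=0.01)

y_true = K.placeholder(shape=(None, 1))
loss = K.mean(K.square(model.output - y_true))

# One gradient tensor per trainable weight, in the same order as the params list.
grads = opt.get_gradients(loss, model.trainable_weights)
```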
tf.keras.optimizers.Adagrad.get_updates
get_updates(
    loss,
    params
)
tf.keras.optimizers.Adagrad.get_weights
get_weights()
Returns the current value of the weights of the optimizer.
Returns:
A list of numpy arrays.
tf.keras.optimizers.Adagrad.set_weights
set_weights(weights)
Sets the weights of the optimizer from Numpy arrays.
Should only be called after computing the gradients (otherwise the optimizer has no weights).
Arguments:
weights: a list of Numpy arrays. The number of arrays and their shapes must match those of the optimizer's weights (i.e. it should match the output of get_weights).
Raises:
ValueError: In case of incompatible weight shapes.
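A minimal sketch of transferring optimizer state with get_weights and set_weights; the helper and data are hypothetical, and (as noted above) both optimizers must have performed at least one update before their weights exist:

```python
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adagrad

def make_model():
    m = Sequential([Dense(1, input_shape=(4,))])
    m.compile(optimizer=Adagrad(lr=0.01), loss='mse')
    return m

x = np.random.random((16, 4))
y = np.random.random((16, 1))

source = make_model()
source.fit(x, y, epochs=1, verbose=0)    # creates the source optimizer's weights
state = source.optimizer.get_weights()   # list of numpy arrays (Adagrad accumulators, etc.)

target = make_model()
target.fit(x, y, epochs=1, verbose=0)    # the target optimizer must also have weights
target.optimizer.set_weights(state)      # copy the accumulators across
```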