tf.keras.optimizers.Adadelta

Optimizer that implements the Adadelta algorithm.

Inherits From: Optimizer

tf.keras.optimizers.Adadelta(
    learning_rate=0.001, rho=0.95, epsilon=1e-07, name='Adadelta', **kwargs
)

Adadelta optimization is a stochastic gradient descent method based on an adaptive learning rate per dimension, designed to address two drawbacks: 1) the continual decay of learning rates throughout training, and 2) the need for a manually selected global learning rate.

Two accumulation steps are required: 1) the accumulation of squared gradients, and 2) the accumulation of squared updates.

Initialization:

$$E[g^2]_0 := 0 \quad \text{(Initialize gradient 2nd order moment vector)}$$
$$E[\Delta x^2]_0 := 0 \quad \text{(Initialize 2nd order variable update vector)}$$

$$t := t + 1$$
$$E[g^2]_t := \rho * E[g^2]_{t-1} + (1 - \rho) * g_t^2$$
$$\Delta x_t = -RMS[\Delta x]_{t-1} * g_t / RMS[g]_t$$
$$E[\Delta x^2]_t := \rho * E[\Delta x^2]_{t-1} + (1 - \rho) * \Delta x_t^2$$
$$x_t := x_{t-1} + \Delta x_t$$

where $RMS[y]_t := \sqrt{E[y^2]_t + \epsilon}$, with $\epsilon$ the constant from the constructor.
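
As a worked illustration of these equations, here is a small NumPy sketch of a single Adadelta step; it mirrors the update rule above and is not the TensorFlow implementation:

import numpy as np

def adadelta_step(x, g, E_g2, E_dx2, lr=1.0, rho=0.95, eps=1e-7):
    # Accumulate the squared gradient.
    E_g2 = rho * E_g2 + (1 - rho) * g ** 2
    # Scale the gradient by the ratio of the two RMS terms.
    dx = -np.sqrt(E_dx2 + eps) / np.sqrt(E_g2 + eps) * g
    # Accumulate the squared update.
    E_dx2 = rho * E_dx2 + (1 - rho) * dx ** 2
    # Apply the update; lr=1.0 recovers the paper's original form.
    x = x + lr * dx
    return x, E_g2, E_dx2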

Reference: M. D. Zeiler, "ADADELTA: An Adaptive Learning Rate Method" (https://arxiv.org/abs/1212.5701).
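
A minimal usage sketch with the Keras training API; the model and loss below are illustrative placeholders:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
# learning_rate=1.0 matches the exact form in the original paper;
# 0.001 is the Keras default.
model.compile(
    optimizer=tf.keras.optimizers.Adadelta(learning_rate=1.0),
    loss='mse'
)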

Args:

learning_rate: A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. To match the exact form in the original paper, use 1.0.
rho: A Tensor or floating point value. The decay rate.
epsilon: A small constant for numerical stability.
name: Optional name prefix for the operations created when applying gradients. Defaults to 'Adadelta'.
**kwargs: Keyword arguments. Allowed to be one of "clipnorm" (float, clips gradients by norm) or "clipvalue" (float, clips gradients by value).

Attributes:

iterations: Variable. The number of training steps this Optimizer has run.
weights: Returns variables of this Optimizer based on the order created.

Methods

add_slot

add_slot(
    var, slot_name, initializer='zeros'
)

Add a new slot variable for var.

add_weight

add_weight(
    name, shape, dtype=None, initializer='zeros', trainable=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.compat.v1.VariableAggregation.NONE
)

apply_gradients

apply_gradients(
    grads_and_vars, name=None
)

Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies gradients.

Args:

grads_and_vars: List of (gradient, variable) pairs.
name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.

Returns:

An Operation that applies the specified gradients. The iterations will be automatically increased by 1.

Raises:

TypeError: If grads_and_vars is malformed.
ValueError: If none of the variables have gradients.
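
A short sketch of the workflow this method supports, using a toy variable and loss as stand-ins for a real model:

import tensorflow as tf

opt = tf.keras.optimizers.Adadelta(learning_rate=1.0)
var = tf.Variable(2.0)

with tf.GradientTape() as tape:
    loss = var ** 2

grads = tape.gradient(loss, [var])
# Gradients can be transformed here (e.g. clipped) before applying.
opt.apply_gradients(zip(grads, [var]))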

from_config

@classmethod
from_config(
    config, custom_objects=None
)

Creates an optimizer from its config.

This method is the reverse of get_config, capable of instantiating the same optimizer from the config dictionary.

Arguments:

config: A Python dictionary, typically the output of get_config.
custom_objects: A Python dictionary mapping names to additional Python objects used to create this optimizer, such as a function used for a hyperparameter.

Returns:

An optimizer instance.

get_config

get_config()

Returns the config of the optimizer.

An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.

Returns:

Python dictionary.
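
A small sketch of the get_config/from_config round trip:

opt = tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.9)
config = opt.get_config()  # plain dict: 'learning_rate', 'rho', 'epsilon', ...
restored = tf.keras.optimizers.Adadelta.from_config(config)
print(restored.get_config()['rho'])  # 0.9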

get_gradients

get_gradients(
    loss, params
)

Returns gradients of loss with respect to params.

Arguments:

loss: Loss tensor.
params: List of variables.

Returns:

List of gradient tensors.

Raises:

ValueError: In case any gradient cannot be computed (e.g. if the gradient function is not implemented).

get_slot

get_slot(
    var, slot_name
)

Returns the slot variable for var with the given slot_name.

get_slot_names

get_slot_names()

A list of names for this optimizer's slots.
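
A short sketch of inspecting slot variables after one update (slots are created on the first apply_gradients call); the slot names are queried rather than assumed:

opt = tf.keras.optimizers.Adadelta()
var = tf.Variable(1.0)
with tf.GradientTape() as tape:
    loss = var ** 2
opt.apply_gradients(zip(tape.gradient(loss, [var]), [var]))

print(opt.get_slot_names())  # Adadelta's two accumulators
for name in opt.get_slot_names():
    print(name, opt.get_slot(var, name).numpy())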

get_updates

get_updates(
    loss, params
)

get_weights

get_weights()

Returns the current weights of the optimizer.

minimize

minimize(
    loss, var_list, grad_loss=None, name=None
)

Minimize loss by updating var_list.

This method simply computes the gradients using tf.GradientTape and calls apply_gradients(). If you want to process the gradients before applying them, use tf.GradientTape and apply_gradients() explicitly instead of this function.

Args:

loss: A callable taking no arguments which returns the value to minimize.
var_list: List or tuple of Variable objects to update to minimize loss, or a callable returning the list or tuple of Variable objects. Use a callable when the variable list would otherwise be incomplete before minimize is called, since the variables are created the first time loss is called.
grad_loss: Optional. A Tensor holding the gradient computed for loss.
name: Optional name for the returned operation.

Returns:

An Operation that updates the variables in var_list. The iterations will be automatically increased by 1.

Raises:

ValueError: If some of the variables are not Variable objects.
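
A minimal sketch; loss is passed as a callable so the optimizer can evaluate it under its own gradient tape:

opt = tf.keras.optimizers.Adadelta(learning_rate=1.0)
var = tf.Variable(2.0)
opt.minimize(lambda: var ** 2, var_list=[var])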

set_weights

set_weights(
    weights
)

Sets the weights of the optimizer from a list of arrays, in the order returned by get_weights.
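
A sketch of copying optimizer state between two instances, assuming both have already created their state by applying gradients to identically shaped variables:

opt_a = tf.keras.optimizers.Adadelta()
opt_b = tf.keras.optimizers.Adadelta()
var_a, var_b = tf.Variable(1.0), tf.Variable(1.0)
for opt, v in ((opt_a, var_a), (opt_b, var_b)):
    with tf.GradientTape() as tape:
        loss = v ** 2
    opt.apply_gradients(zip(tape.gradient(loss, [v]), [v]))
# Transfer state; the list order must match get_weights().
opt_b.set_weights(opt_a.get_weights())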

variables

variables()

Returns the variables of this Optimizer, in the order they were created.