tf.keras.optimizers.Adagrad

Optimizer that implements the Adagrad algorithm.

Inherits From: Optimizer

tf.keras.optimizers.Adagrad(
    learning_rate=0.001, initial_accumulator_value=0.1, epsilon=1e-07,
    name='Adagrad', **kwargs
)

Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the updates.

Initialization:

$$accum_{g_0} := \text{initial\_accumulator\_value}$$

Update step:

$$t := t + 1$$
$$accum_{g_t} := accum_{g_{t-1}} + g^2$$
$$\theta_t := \theta_{t-1} - lr * g / (\sqrt{accum_{g_t}} + \epsilon)$$
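
For example, a minimal usage sketch (the variable and loss below are illustrative, not part of the API):

import tensorflow as tf

opt = tf.keras.optimizers.Adagrad(learning_rate=0.1)
var = tf.Variable(10.0)
loss = lambda: (var ** 2) / 2.0   # d(loss)/d(var) = var

# As the squared-gradient accumulator for `var` grows, the effective
# step size shrinks.
for _ in range(3):
    opt.minimize(loss, var_list=[var])
    print(var.numpy())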

References:

Duchi et al., 2011, "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization" (http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf).

Args:

learning_rate: A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate.
initial_accumulator_value: A floating point value. Starting value for the accumulators; must be non-negative.
epsilon: A small floating point value used to avoid a zero denominator.
name: Optional name prefix for the operations created when applying gradients. Defaults to 'Adagrad'.
**kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value.

Attributes:

iterations: Variable. The number of training steps this Optimizer has run.
weights: Returns the variables of this Optimizer, in the order they were created.

Raises:

Methods

add_slot

add_slot(
    var, slot_name, initializer='zeros'
)

Add a new slot variable for var.
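
add_slot is mainly useful when implementing a custom optimizer subclass; a hypothetical sketch of the slot-creation hook is shown below (a full subclass would also implement the gradient-application methods, omitted here):

import tensorflow as tf

class MyOptimizer(tf.keras.optimizers.Optimizer):
    # Called once with the variables being optimized; creates one extra
    # state variable ("slot") per trainable variable, analogous to
    # Adagrad's accumulator.
    def _create_slots(self, var_list):
        for var in var_list:
            self.add_slot(var, 'accumulator', initializer='zeros')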

add_weight

add_weight(
    name, shape, dtype=None, initializer='zeros', trainable=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.compat.v1.VariableAggregation.NONE
)

apply_gradients

apply_gradients(
    grads_and_vars, name=None
)

Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies gradients.

Args:

grads_and_vars: List of (gradient, variable) pairs.
name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.

Returns:

An Operation that applies the specified gradients. The iterations will be automatically increased by 1.

Raises:

TypeError: If grads_and_vars is malformed.
ValueError: If none of the variables have gradients.
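
For example, a minimal sketch of computing gradients with tf.GradientTape and applying them (the variable and loss are illustrative):

import tensorflow as tf

opt = tf.keras.optimizers.Adagrad(learning_rate=0.1)
var = tf.Variable(1.0)

with tf.GradientTape() as tape:
    loss = var ** 2
grads = tape.gradient(loss, [var])

# grads_and_vars is an iterable of (gradient, variable) pairs.
opt.apply_gradients(zip(grads, [var]))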

from_config

@classmethod
from_config(
    config, custom_objects=None
)

Creates an optimizer from its config.

This method is the reverse of get_config, capable of instantiating the same optimizer from the config dictionary.

Arguments:

config: A Python dictionary, typically the output of get_config.
custom_objects: A Python dictionary mapping names to additional Python objects used to create this optimizer, such as a function used for a hyperparameter.

Returns:

An optimizer instance.

get_config

get_config()

Returns the config of the optimizer.

An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.

Returns:

Python dictionary.
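
For example, a sketch of the serialization round trip through get_config and from_config (only the configuration is restored, not slot or iteration state):

import tensorflow as tf

opt = tf.keras.optimizers.Adagrad(learning_rate=0.05, initial_accumulator_value=0.2)
config = opt.get_config()   # plain, serializable Python dict

# Rebuild an equivalent, freshly initialized optimizer from the dict.
restored = tf.keras.optimizers.Adagrad.from_config(config)
print(restored.get_config())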

get_gradients

get_gradients(
    loss, params
)

Returns gradients of loss with respect to params.

Arguments:

loss: Loss tensor.
params: List of variables.

Returns:

List of gradient tensors.

Raises:

ValueError: In case any gradient cannot be computed (e.g. if the gradient function is not implemented).

get_slot

get_slot(
    var, slot_name
)

get_slot_names

get_slot_names()

A list of names for this optimizer's slots.
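
For example, a sketch of inspecting the slot Adagrad creates; in current releases Adagrad keeps a single slot named 'accumulator' holding the per-parameter sum of squared gradients:

import tensorflow as tf

var = tf.Variable([1.0, 2.0])
opt = tf.keras.optimizers.Adagrad(learning_rate=0.1, initial_accumulator_value=0.1)
opt.minimize(lambda: tf.reduce_sum(var ** 2), var_list=[var])  # forces slot creation

print(opt.get_slot_names())               # ['accumulator']
accum = opt.get_slot(var, 'accumulator')  # per-parameter accumulated squared gradients
print(accum.numpy())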

get_updates

get_updates(
    loss, params
)

get_weights

get_weights()

minimize

minimize(
    loss, var_list, grad_loss=None, name=None
)

Minimize loss by updating var_list.

This method simply computes the gradients using tf.GradientTape and calls apply_gradients(). If you want to process the gradients before applying them, call tf.GradientTape and apply_gradients() explicitly instead of using this function.

Args:

loss: A callable taking no arguments which returns the value to minimize.
var_list: List or tuple of Variable objects to update to minimize loss, or a callable returning the list or tuple of Variable objects. Use a callable when the variable list would otherwise be incomplete before minimize is called, since the variables are created the first time loss is called.
grad_loss: Optional. A Tensor holding the gradient computed for loss.
name: Optional name for the returned operation.

Returns:

An Operation that updates the variables in var_list. The iterations will be automatically increased by 1.

Raises:

ValueError: If some of the variables are not Variable objects.
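
For example, a minimal sketch (under eager execution, loss must be a callable taking no arguments):

import tensorflow as tf

var = tf.Variable(10.0)
opt = tf.keras.optimizers.Adagrad(learning_rate=0.1)
loss = lambda: (var ** 2) / 2.0   # a callable returning the value to minimize

opt.minimize(loss, var_list=[var])
print(opt.iterations.numpy())     # 1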

set_weights

set_weights(
    weights
)
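
For example, a sketch of snapshotting and restoring optimizer state; the optimizer must have created its variables (e.g. via a first update) before get_weights/set_weights are meaningful:

import tensorflow as tf

var = tf.Variable([1.0, 2.0])
opt = tf.keras.optimizers.Adagrad(learning_rate=0.1)
opt.minimize(lambda: tf.reduce_sum(var ** 2), var_list=[var])  # creates iteration counter and slot

state = opt.get_weights()   # list of numpy arrays (iteration count, then slot variables)

# ... more training ...

opt.set_weights(state)      # restore the earlier state; shapes must match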

variables

variables()

Returns the variables of this Optimizer, in the order they were created.