tf.keras.optimizers.RMSprop

Optimizer that implements the RMSprop algorithm.

Inherits From: Optimizer

tf.keras.optimizers.RMSprop(
    learning_rate=0.001, rho=0.9, momentum=0.0, epsilon=1e-07, centered=False,
    name='RMSprop', **kwargs
)

The RMSprop update rule is:

$$mean\_square_t = \rho \cdot mean\_square_{t-1} + (1 - \rho) \cdot gradient^2$$
$$mom_t = momentum \cdot mom_{t-1} + \frac{learning\_rate \cdot gradient}{\sqrt{mean\_square_t + \epsilon}}$$
$$variable_t := variable_{t-1} - mom_t$$

This implementation of RMSprop uses plain momentum, not Nesterov momentum.

The centered version additionally maintains a moving average of the gradients, and uses that average to estimate the variance:

$$mean\_grad_t = \rho \cdot mean\_grad_{t-1} + (1 - \rho) \cdot gradient$$
$$mean\_square_t = \rho \cdot mean\_square_{t-1} + (1 - \rho) \cdot gradient^2$$
$$mom_t = momentum \cdot mom_{t-1} + \frac{learning\_rate \cdot gradient}{\sqrt{mean\_square_t - mean\_grad_t^2 + \epsilon}}$$
$$variable_t := variable_{t-1} - mom_t$$

References: see http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf (pdf).
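As a quick illustration (a sketch, not part of the reference itself), the optimizer can be constructed with the arguments shown in the signature above and handed to a Keras model; the tiny model below is purely hypothetical:

import tensorflow as tf
# Construct the optimizer; these are the default hyperparameters from the signature above.
opt = tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9, momentum=0.0,
                                  epsilon=1e-07, centered=False)
# Any Keras model can take the optimizer in compile(); this one exists only for the example.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=opt, loss='mse')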

Args:

    learning_rate: A float value or a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. Defaults to 0.001.
    rho: Discounting factor for the moving average of squared gradients. Defaults to 0.9.
    momentum: A float, the momentum factor used in the update rule above. Defaults to 0.0.
    epsilon: A small constant for numerical stability. Defaults to 1e-07.
    centered: Boolean. If True, gradients are normalized by the estimated variance of the gradient (the centered variant above); if False, by the uncentered second moment. Defaults to False.
    name: Optional name prefix for the operations created when applying gradients. Defaults to 'RMSprop'.
    **kwargs: Keyword arguments, e.g. gradient-clipping options such as clipnorm or clipvalue.

Attributes:

    iterations: Variable. The number of training steps this optimizer has run.
    weights: Returns the variables of this optimizer in the order they were created.

Methods

add_slot

add_slot(
    var, slot_name, initializer='zeros'
)

Add a new slot variable for var.
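A minimal sketch of add_slot in isolation; the slot name 'custom' is purely illustrative, and the slot is read back with get_slot (documented further down):

import tensorflow as tf
opt = tf.keras.optimizers.RMSprop()
var = tf.Variable(1.0)
# Create a zero-initialized slot variable tracked by the optimizer for `var`.
opt.add_slot(var, 'custom', initializer='zeros')
# Slots are addressed by (variable, slot_name).
print(opt.get_slot(var, 'custom').numpy())  # 0.0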

add_weight

add_weight(
    name, shape, dtype=None, initializer='zeros', trainable=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.compat.v1.VariableAggregation.NONE
)

apply_gradients

apply_gradients(
    grads_and_vars, name=None
)

Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies gradients.
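For instance, a sketch of the tape-then-apply pattern (the variable, loss, and clipping step below are illustrative):

import tensorflow as tf
opt = tf.keras.optimizers.RMSprop(learning_rate=0.1)
var = tf.Variable(2.0)
# Compute the gradient explicitly with a tape, e.g. to clip or otherwise
# transform it before it is applied.
with tf.GradientTape() as tape:
    loss = var ** 2
grads = tape.gradient(loss, [var])
grads = [tf.clip_by_norm(g, 1.0) for g in grads]  # optional processing step
opt.apply_gradients(zip(grads, [var]))
print(opt.iterations.numpy())  # 1 -- incremented automatically by apply_gradients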

Args:

    grads_and_vars: List of (gradient, variable) pairs.
    name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.

Returns:

An Operation that applies the specified gradients. The iterations will be automatically increased by 1.

Raises:

from_config

@classmethod
from_config(
    config, custom_objects=None
)

Creates an optimizer from its config.

This method is the reverse of get_config, capable of instantiating the same optimizer from the config dictionary.

Arguments:

    config: A Python dictionary, typically the output of get_config.
    custom_objects: A Python dictionary mapping names to additional Python objects used to create this optimizer, such as a function used for a hyperparameter.

Returns:

An optimizer instance.

get_config

get_config()

Returns the config of the optimizer.

An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.

Returns:

Python dictionary.
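A brief round-trip sketch combining get_config with from_config (documented above); the hyperparameter values are illustrative:

import tensorflow as tf
opt = tf.keras.optimizers.RMSprop(learning_rate=0.01, rho=0.95)
config = opt.get_config()  # a plain, serializable Python dict
# The dict can be stored (e.g. as JSON) and used later to rebuild an
# equivalent optimizer without its slot variables or iteration count.
restored = tf.keras.optimizers.RMSprop.from_config(config)
print(restored.get_config()['learning_rate'])  # 0.01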

get_gradients

get_gradients(
    loss, params
)

Returns gradients of loss with respect to params.

Arguments:

    loss: Loss tensor.
    params: List of variables.

Returns:

List of gradient tensors.

Raises:

get_slot

get_slot(
    var, slot_name
)

get_slot_names

get_slot_names()

A list of names for this optimizer's slots.
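A sketch of what the slot names might look like; the exact names ('rms', 'momentum', 'mg') are an assumption for this particular configuration:

import tensorflow as tf
opt = tf.keras.optimizers.RMSprop(momentum=0.9, centered=True)
var = tf.Variable(1.0)
opt.minimize(lambda: var ** 2, [var])    # slots are created when gradients are first applied
print(opt.get_slot_names())              # assumed: ['rms', 'momentum', 'mg']
print(opt.get_slot(var, 'rms').numpy())  # the moving average of the squared gradient for var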

get_updates

get_updates(
    loss, params
)

get_weights

get_weights()

minimize

minimize(
    loss, var_list, grad_loss=None, name=None
)

Minimize loss by updating var_list.

This method simply computes the gradients using tf.GradientTape and calls apply_gradients(). If you want to process the gradients before applying them, use tf.GradientTape to compute them and call apply_gradients() explicitly instead of using this function.
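A minimal sketch of the one-call path (the scalar variable and loss are illustrative):

import tensorflow as tf
opt = tf.keras.optimizers.RMSprop(learning_rate=0.1)
var = tf.Variable(5.0)
# `loss` is a zero-argument callable; minimize() tapes it, differentiates with
# respect to var_list, and applies a single RMSprop update.
opt.minimize(lambda: (var - 3.0) ** 2, var_list=[var])
print(var.numpy())  # moved from 5.0 toward the minimum at 3.0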

Args:

    loss: A callable taking no arguments which returns the value to minimize.
    var_list: List or tuple of tf.Variable objects to update to minimize loss, or a callable returning the list or tuple of variables. Use a callable when the variables are not created until the first call to loss.
    grad_loss: Optional. A Tensor holding the gradient computed for loss.
    name: Optional name for the returned operation.

Returns:

An Operation that updates the variables in var_list. The iterations will be automatically increased by 1.

Raises:

set_weights

set_weights(
    weights
)
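A hedged sketch of copying optimizer state between two instances with get_weights and set_weights; it assumes both optimizers have already created their weights (here by taking one step each) so the shapes line up:

import tensorflow as tf
opt1 = tf.keras.optimizers.RMSprop()
opt2 = tf.keras.optimizers.RMSprop()
var1 = tf.Variable(1.0)
var2 = tf.Variable(1.0)
opt1.minimize(lambda: var1 ** 2, [var1])  # creates opt1's slots and advances its iterations
opt2.minimize(lambda: var2 ** 2, [var2])  # build opt2's weights so set_weights can match shapes
opt2.set_weights(opt1.get_weights())      # opt2 now mirrors opt1's iteration count and slot values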

variables

variables()

Returns the variables of this Optimizer in the order in which they were created.