tf.keras.optimizers.SGD

Stochastic gradient descent and momentum optimizer.

Inherits From: Optimizer

tf.keras.optimizers.SGD(
    learning_rate=0.01, momentum=0.0, nesterov=False, name='SGD', **kwargs
)

Update rule when momentum is 0:

theta(t+1) = theta(t) - learning_rate * gradient

where the gradient is evaluated at theta(t).

Update rule when momentum is larger than 0:

v(t+1) = momentum * v(t) - learning_rate * gradient
theta(t+1) = theta(t) + v(t+1)

If `nesterov` is False, the gradient is evaluated at theta(t). If `nesterov` is True, the gradient is evaluated at theta(t) + momentum * v(t), and the variables always store theta + momentum * v instead of theta.
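
As a minimal illustration of the momentum = 0 rule (a sketch assuming eager execution, the TF 2.x default), one step with an explicitly supplied gradient:

import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.1)
theta = tf.Variable(1.0)
grad = tf.constant(2.0)            # a gradient "evaluated at theta(t)"
opt.apply_gradients([(grad, theta)])
print(theta.numpy())               # 1.0 - 0.1 * 2.0 = 0.8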

Some of the args below are hyperparameters, where a hyperparameter is defined as a scalar Tensor, a regular Python value, or a callable (which will be evaluated when apply_gradients is called) returning a scalar Tensor or a Python value.

References

For `nesterov = True`, see [Sutskever et al., 2013](http://jmlr.org/proceedings/papers/v28/sutskever13.pdf).

Arguments:

learning_rate: float hyperparameter >= 0. The learning rate. Defaults to 0.01.
momentum: float hyperparameter >= 0 that accelerates SGD in the relevant direction and dampens oscillations. Defaults to 0, i.e. vanilla gradient descent.
nesterov: boolean. Whether to apply Nesterov momentum. Defaults to False.
name: Optional name prefix for the operations created when applying gradients. Defaults to 'SGD'.
**kwargs: Keyword arguments. Allowed to be clipnorm, clipvalue, lr, or decay. clipnorm clips gradients by norm; clipvalue clips gradients by value; decay is included for backward compatibility to allow time inverse decay of the learning rate; lr is included for backward compatibility (learning_rate is recommended instead).
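
As a short sketch of the clipping keyword arguments described above (clipnorm shown; clipvalue is analogous):

import tensorflow as tf

# Each gradient is clipped to norm 1.0 before the update is applied.
opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, clipnorm=1.0)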

Attributes:

iterations: Variable. The number of training steps this optimizer has run.
weights: Returns the variables of this Optimizer based on the order created.

Eager Compatibility

When eager execution is enabled, learning_rate can be a callable that takes no arguments and returns the actual value to use. This can be useful for changing these values across different invocations of optimizer functions.
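
A minimal sketch of this pattern (the variable names are illustrative):

import tensorflow as tf

lr = tf.Variable(0.1)
opt = tf.keras.optimizers.SGD(learning_rate=lambda: lr.read_value())

var = tf.Variable(1.0)
opt.minimize(lambda: var ** 2, var_list=[var])   # this step uses lr = 0.1
lr.assign(0.01)
opt.minimize(lambda: var ** 2, var_list=[var])   # this step uses lr = 0.01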

Methods

add_slot

add_slot(
    var, slot_name, initializer='zeros'
)

Add a new slot variable for var.
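
A small sketch of the slot mechanism; slots are normally created for you on the first apply_gradients call, so this is only an illustration:

import tensorflow as tf

opt = tf.keras.optimizers.SGD(momentum=0.9)
var = tf.Variable([1.0, 2.0])
slot = opt.add_slot(var, 'momentum')   # zero-initialized, same shape/dtype as var
print(slot.numpy())                    # [0. 0.]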

add_weight

add_weight(
    name, shape, dtype=None, initializer='zeros', trainable=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.compat.v1.VariableAggregation.NONE
)

apply_gradients

apply_gradients(
    grads_and_vars, name=None
)

Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies gradients.
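
A minimal sketch of the compute-then-apply pattern this method supports (assuming eager execution):

import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
w = tf.Variable([3.0, 4.0])

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(w ** 2)
grads = tape.gradient(loss, [w])
grads = [tf.clip_by_norm(g, 1.0) for g in grads]   # optional gradient post-processing
opt.apply_gradients(zip(grads, [w]))
print(opt.iterations.numpy())                      # 1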

Args:

grads_and_vars: List of (gradient, variable) pairs.
name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.

Returns:

An Operation that applies the specified gradients. The optimizer's iterations counter is automatically incremented by 1.

Raises:

TypeError: If grads_and_vars is malformed.
ValueError: If none of the variables have gradients.

from_config

@classmethod
from_config(
    config, custom_objects=None
)

Creates an optimizer from its config.

This method is the reverse of get_config, capable of instantiating the same optimizer from the config dictionary.

Arguments:

config: A Python dictionary, typically the output of get_config.
custom_objects: A Python dictionary mapping names to additional Python objects used to create this optimizer, such as a function used for a hyperparameter.

Returns:

An optimizer instance.

get_config

get_config()

Returns the config of the optimizer.

An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.

Returns:

Python dictionary.
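
A short sketch of the get_config / from_config round trip:

import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.05, momentum=0.9, nesterov=True)
config = opt.get_config()              # e.g. {'name': 'SGD', 'learning_rate': 0.05, ...}

# Rebuild an equivalent optimizer (without any saved slot/iteration state).
restored = tf.keras.optimizers.SGD.from_config(config)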

get_gradients

get_gradients(
    loss, params
)

Returns gradients of loss with respect to params.

Arguments:

loss: Loss tensor.
params: List of variables.

Returns:

List of gradient tensors.

Raises:

ValueError: In case any gradient cannot be computed (e.g. if the gradient function is not implemented).

get_slot

get_slot(
    var, slot_name
)

get_slot_names

get_slot_names()

A list of names for this optimizer's slots.
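
A small sketch of get_slot_names and get_slot, assuming momentum > 0 so that a 'momentum' slot exists after the first step:

import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
var = tf.Variable(1.0)
opt.minimize(lambda: var ** 2, var_list=[var])   # first step creates the slots

print(opt.get_slot_names())                      # ['momentum']
print(opt.get_slot(var, 'momentum').numpy())     # the velocity tracked for var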

get_updates

get_updates(
    loss, params
)

get_weights

get_weights()

minimize

minimize(
    loss, var_list, grad_loss=None, name=None
)

Minimize loss by updating var_list.

This method simply computes the gradients using tf.GradientTape and calls apply_gradients(). If you want to process the gradients before applying them, use tf.GradientTape and apply_gradients() explicitly instead of this function.
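
A minimal sketch; note that loss must be a callable taking no arguments:

import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.1)
var = tf.Variable(2.0)
opt.minimize(lambda: (var - 1.0) ** 2, var_list=[var])
print(var.numpy())   # 2.0 - 0.1 * 2 * (2.0 - 1.0) = 1.8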

Args:

loss: A callable taking no arguments which returns the value to minimize.
var_list: List or tuple of Variable objects to update to minimize loss, or a callable returning the list or tuple of Variable objects. Use a callable when the variable list would otherwise be incomplete before minimize is called, since the variables are created the first time loss is called.
grad_loss: Optional. A Tensor holding the gradient computed for loss.
name: Optional name for the returned operation.

Returns:

An Operation that updates the variables in var_list. The optimizer's iterations counter is automatically incremented by 1.

Raises:

ValueError: If some of the variables are not Variable objects.

set_weights

set_weights(
    weights
)
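
A hedged sketch of saving and restoring optimizer state with get_weights / set_weights; the target optimizer must already have matching variables, which is why a step is run before restoring:

import tensorflow as tf

var = tf.Variable(1.0)

opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
opt.minimize(lambda: var ** 2, var_list=[var])
state = opt.get_weights()                        # [iterations, momentum slot values]

opt2 = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
opt2.minimize(lambda: var ** 2, var_list=[var])  # create matching variables first
opt2.set_weights(state)                          # overwrite them with the saved state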

variables

variables()

Returns the variables of this Optimizer, in the order they were created.
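
A quick sketch:

import tensorflow as tf

opt = tf.keras.optimizers.SGD(momentum=0.9)
var = tf.Variable(1.0)
opt.minimize(lambda: var ** 2, var_list=[var])
print(opt.variables())   # [iterations, momentum slot for var], in creation order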