tf.compat.v1.tpu.CrossShardOptimizer

An optimizer that averages gradients across TPU shards.

Inherits From: Optimizer

tf.compat.v1.tpu.CrossShardOptimizer(
    opt, reduction=losses.Reduction.MEAN, name='CrossShardOptimizer',
    group_assignment=None
)

Args:

opt: An existing Optimizer to encapsulate.
reduction: The reduction to apply to the shard losses.
name: Optional name prefix for the operations created when applying gradients. Defaults to "CrossShardOptimizer".
group_assignment: Optional 2d int32 lists with shape [num_groups, num_replicas_per_group] which describes how to apply the optimizer to subgroups.

Raises:

ValueError: If reduction is not a valid cross-shard reduction.
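
For example, a minimal sketch of wrapping a standard v1 optimizer for TPU training; the learning rate and the surrounding TPUEstimator model_fn are illustrative assumptions, not part of this API:

import tensorflow.compat.v1 as tf

# Wrap an ordinary v1 optimizer so that gradient contributions are
# combined across TPU shards before being applied.
optimizer = tf.tpu.CrossShardOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=0.01))

# Inside a TPUEstimator model_fn, it is used like any other v1 optimizer:
# train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())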

Methods

apply_gradients

apply_gradients(
    grads_and_vars, global_step=None, name=None
)

Apply gradients to variables.

Calls tpu_ops.cross_replica_sum() to sum gradient contributions across replicas, and then applies the real optimizer.

Args:

grads_and_vars: List of (gradient, variable) pairs as returned by compute_gradients().
global_step: Optional Variable to increment by one after the variables have been updated.
name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.

Returns:

An Operation that applies the gradients. If global_step was not None, that operation also increments global_step.

Raises:

ValueError: If the grads_and_vars is malformed.
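
Conceptually, the summation step looks like the rough sketch below; this is an illustration only, not the actual implementation, and it ignores None gradients, the reduction scaling, and group_assignment:

# grads_and_vars: list of (gradient, variable) pairs from compute_gradients().
summed_grads_and_vars = [
    (tf.tpu.cross_replica_sum(grad), var)
    for grad, var in grads_and_vars
]
# ...after which the wrapped optimizer applies the summed gradients,
# roughly: wrapped_opt.apply_gradients(summed_grads_and_vars, global_step, name)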

compute_gradients

compute_gradients(
    loss, var_list=None, **kwargs
)

Compute gradients of "loss" for the variables in "var_list".

This simply wraps compute_gradients() from the real optimizer. The gradients are aggregated later, in apply_gradients(), so that the user can modify them first if needed, for example by clipping with a per-replica global norm. Clipping with the global norm of the aggregated gradients can behave badly, because one replica's unusually large gradients can drown out the contributions of the other replicas.

When the CrossShardOptimizer is constructed with reduction == losses.Reduction.MEAN (the default), this function scales the loss by 1.0 / num_shards before computing the gradients. Assuming the optimizer uses the default implementation of compute_gradients(), the gradients of the scaled loss are then scaled by 1.0 / num_shards relative to the gradients of the original loss. This scaling factor matters because apply_gradients() sums gradients across shards rather than averaging them: with num_shards = 8, for example, each replica computes gradients of loss / 8, and the cross-shard sum of those gradients equals the average of the unscaled per-replica gradients. The same factor must be taken into account when clipping the norm of the gradients or performing other postprocessing.

Args:

loss: A Tensor containing the value to minimize.
var_list: Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
**kwargs: Keyword arguments for compute_gradients().

Returns:

A list of (gradient, variable) pairs.

Raises:

ValueError: If not within a tpu_shard_context or group_assignment is invalid.
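
For example, a sketch of per-replica clipping between the two calls; the clip norm, loss, and global step are illustrative assumptions:

grads_and_vars = optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)

# Clip on each replica, before apply_gradients() sums across shards.
# With the default MEAN reduction these gradients are already scaled
# by 1.0 / num_shards.
clipped_grads, _ = tf.clip_by_global_norm(grads, clip_norm=1.0)
train_op = optimizer.apply_gradients(
    zip(clipped_grads, variables),
    global_step=tf.train.get_global_step())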

get_name

get_name()

get_slot

get_slot(
    *args, **kwargs
)

Return a slot named "name" created for "var" by the Optimizer.

This simply wraps the get_slot() from the actual optimizer.

Args:

*args: Arguments for get_slot().
**kwargs: Keyword arguments for get_slot().

Returns:

The Variable for the slot if it was created, None otherwise.

get_slot_names

get_slot_names(
    *args, **kwargs
)

Return a list of the names of slots created by the Optimizer.

This simply wraps the get_slot_names() from the actual optimizer.

Args:

*args: Arguments for get_slot_names().
**kwargs: Keyword arguments for get_slot_names().

Returns:

A list of strings.
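
For example, when wrapping an optimizer that creates slots (tf.train.MomentumOptimizer here is an illustrative assumption):

opt = tf.tpu.CrossShardOptimizer(
    tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9))
# Once apply_gradients() has created the slot variables:
# opt.get_slot_names()  ==> ['momentum']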

minimize

minimize(
    loss, global_step=None, var_list=None, gate_gradients=GATE_OP,
    aggregation_method=None, colocate_gradients_with_ops=False, name=None,
    grad_loss=None
)

Add operations to minimize loss by updating var_list.

This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function.

Args:

loss: A Tensor containing the value to minimize.
global_step: Optional Variable to increment by one after the variables have been updated.
var_list: Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method: Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
name: Optional name for the returned operation.
grad_loss: Optional. A Tensor holding the gradient computed for loss.

Returns:

An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step.

Raises:

ValueError: If some of the variables are not Variable objects.

Eager Compatibility

When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled.
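
A minimal sketch of this calling convention; the variable and loss function are illustrative assumptions:

v = tf.Variable(2.0)
loss_fn = lambda: v * v  # zero-argument callable computing the loss to minimize
# optimizer.minimize(loss_fn)  # gradients taken w.r.t. trainable variables (here, v)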

variables

variables()

Forwards the variables from the underlying optimizer.

Class Variables

GATE_GRAPH = 2
GATE_NONE = 0
GATE_OP = 1