Defined in tensorflow/_api/v1/distribute/__init__.py.
Library for running a computation across multiple devices.
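As a quick orientation, here is a minimal sketch (assuming a single host with zero or more GPUs; MirroredStrategy falls back to the CPU when no GPU is visible) showing how a strategy is created and how variables built inside its scope are distributed:

```python
import tensorflow as tf

# Create a synchronous data-parallel strategy for this machine. With no
# constructor arguments, MirroredStrategy uses all visible GPUs, or the
# CPU if none are available.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables created inside the strategy's scope are mirrored onto every
# replica and kept in sync during training.
with strategy.scope():
    weight = tf.Variable(1.0, name="weight")
```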
Classes
class InputContext: A class wrapping information needed by an input function.
class InputReplicationMode: Replication mode for the input function.
class MirroredStrategy: Mirrors variables to distribute computation across multiple devices and machines.
class ReduceOp: Indicates how a set of values should be reduced.
class ReplicaContext: tf.distribute.Strategy API available when in a replica context (see the sketch after this list).
class Server: An in-process TensorFlow server, for use in distributed training.
class Strategy: A state and compute distribution policy over a list of devices.
class StrategyExtended: Additional APIs for algorithms that need to be distribution-aware.
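The classes above cooperate as follows: code handed to a Strategy runs once per replica in a replica context, where tf.distribute.get_replica_context() returns a ReplicaContext, and per-replica results are combined with a ReduceOp. A minimal sketch, assuming this API version's experimental_run_v2 entry point (renamed run in later releases) and the two-argument form of Strategy.reduce:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def replica_fn():
    # Code passed to the strategy executes in a replica context; the
    # ReplicaContext identifies this replica within the sync group.
    ctx = tf.distribute.get_replica_context()
    return ctx.replica_id_in_sync_group

# Run replica_fn once per replica; the result is a per-replica value.
per_replica_ids = strategy.experimental_run_v2(replica_fn)

# Combine per-replica values with a ReduceOp (later releases add a
# required `axis` argument to reduce).
total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_ids)
```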
Functions
get_loss_reduction(...): Returns the tf.distribute.ReduceOp corresponding to the last loss reduction.
get_replica_context(...): Returns the current tf.distribute.ReplicaContext or None (see the sketch after this list).
get_strategy(...): Returns the current tf.distribute.Strategy object.
has_strategy(...): Returns whether there is a current non-default tf.distribute.Strategy.
in_cross_replica_context(...): Returns True if in a cross-replica context.
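The context-query functions above can be seen together in a short sketch; the assertions reflect the documented behavior of the default (single-replica) context versus a MirroredStrategy scope:

```python
import tensorflow as tf

# Outside any strategy scope: the default strategy is in effect, so there is
# no non-default strategy and we are in the default (single) replica context.
assert not tf.distribute.has_strategy()
assert not tf.distribute.in_cross_replica_context()
assert tf.distribute.get_replica_context() is not None  # default replica context

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # Inside the scope: the enclosing strategy is current and we are in a
    # cross-replica context, so there is no replica context to return.
    assert tf.distribute.has_strategy()
    assert tf.distribute.get_strategy() is strategy
    assert tf.distribute.in_cross_replica_context()
    assert tf.distribute.get_replica_context() is None
```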