Defined in tensorflow/contrib/distribute/__init__.py
A distributed computation library for TensorFlow.

See tensorflow/contrib/distribute/README.md for an overview and examples.
Classes
class AllReduceCrossDeviceOps: Reduction using all-reduce.
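The all-reduce pattern behind this class can be sketched in plain Python. This is an illustration of the ring-all-reduce idea only, not the TensorFlow implementation; the function name and the use of plain lists as stand-ins for per-device tensors are assumptions for the sketch.

```python
def ring_all_reduce(tensors):
    """Sum equal-length 'tensors' (plain lists), one per simulated device.

    Each tensor is split into one chunk per device. A reduce-scatter pass
    leaves each device holding the full sum for one chunk; an all-gather
    pass then circulates those summed chunks until every device holds the
    complete result.
    """
    n = len(tensors)
    size = len(tensors[0])
    assert size % n == 0, "sketch assumes length divisible by device count"
    chunk = size // n
    # data[d][c] is device d's current copy of chunk c.
    data = [[t[c * chunk:(c + 1) * chunk] for c in range(n)] for t in tensors]

    # Reduce-scatter: at step s, device d sends chunk (d - s) % n to the
    # next device on the ring, which accumulates it into its own copy.
    for s in range(n - 1):
        for d in range(n):
            c, dst = (d - s) % n, (d + 1) % n
            data[dst][c] = [a + b for a, b in zip(data[dst][c], data[d][c])]

    # All-gather: at step s, device d forwards chunk (d + 1 - s) % n, the
    # fully reduced chunk it most recently received, to the next device.
    for s in range(n - 1):
        for d in range(n):
            c, dst = (d + 1 - s) % n, (d + 1) % n
            data[dst][c] = list(data[d][c])

    # Reassemble one flat result per device.
    return [[x for c in dev for x in c] for dev in data]
```

Each device sends and receives only its share of the data at every step, which is why all-reduce scales better than funneling everything through a single device.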
class CollectiveAllReduceStrategy: Distribution strategy that uses collective ops for all-reduce.
class CrossDeviceOps: Base class for cross-device reduction and broadcasting algorithms.
class DistributeConfig: A config tuple for distribution strategies.
class DistributionStrategy: A list of devices with a state & compute distribution policy.
class DistributionStrategyExtended: Additional APIs for algorithms that need to be distribution-aware.
class MirroredStrategy: Mirrors variables to distribute across multiple devices and machines.
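The mirroring idea can be sketched as a toy plain-Python model; the class name, method names, and the list-of-floats stand-in for per-device copies are all hypothetical, not the TensorFlow implementation. Each device keeps its own copy of a variable, and the copies stay identical because every replica applies the same cross-device-reduced gradient.

```python
class ToyMirroredVariable:
    """One copy of a variable per simulated device. The copies stay in
    sync because each update is computed from the reduced (here: averaged)
    per-replica gradients and then applied identically on every device."""

    def __init__(self, value, n_devices):
        self.copies = [float(value)] * n_devices

    def apply_gradients(self, per_replica_grads, lr=0.1):
        # Cross-device reduction: average the gradients from all replicas.
        g = sum(per_replica_grads) / len(per_replica_grads)
        # Identical update on each device keeps the mirrors identical.
        self.copies = [v - lr * g for v in self.copies]

v = ToyMirroredVariable(1.0, n_devices=2)
v.apply_gradients([1.0, 3.0])  # averaged gradient is 2.0
```

Because the update is deterministic and identical everywhere, no copy ever needs to be re-synchronized from a master copy.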
class Monitor: Executes training steps, recovers and checkpoints.
class MultiWorkerAllReduce: All-reduce algorithms for distributed TensorFlow.
class OneDeviceStrategy: A distribution strategy for running on a single device.
class ParameterServerStrategy: A parameter server DistributionStrategy.
class ReductionToOneDeviceCrossDeviceOps: Always reduces to one device first, then broadcasts.
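The reduce-to-one-then-broadcast pattern can be sketched in a few lines of plain Python; the function name is hypothetical and plain lists stand in for per-device tensors.

```python
def reduce_to_one_then_broadcast(device_tensors):
    """Sum per-device tensors (plain lists) on one device, then broadcast.

    Simpler than all-reduce, but the single reduce device becomes a
    bandwidth bottleneck as the number of devices grows.
    """
    # Step 1: gather everything on one device and reduce (elementwise sum).
    total = [sum(vals) for vals in zip(*device_tensors)]
    # Step 2: broadcast the reduced tensor back to every device.
    return [list(total) for _ in device_tensors]
```

The trade-off versus all-reduce is simplicity against scalability: all traffic flows through one device instead of being spread around a ring.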
class ReplicaContext: tf.distribute.Strategy API when in a replica context.
class StandardInputStep: Step with a standard implementation of input handling.
class StandardSingleLossStep: A step function that implements a training step for a feed-forward network.
class Step: Interface for performing each step of a training algorithm.
class TPUStrategy: TPU distribution strategy implementation.
class UpdateContext: Context manager when you are in update() or update_non_slot().
Functions
get_cross_replica_context(...): Returns the current tf.distribute.Strategy if in a cross-replica context.
get_distribution_strategy(...): Returns the current tf.distribute.Strategy object.
get_loss_reduction(...): Returns the tf.distribute.ReduceOp corresponding to the last loss reduction.
get_replica_context(...): Returns the current tf.distribute.ReplicaContext or None.
has_distribution_strategy(...): Returns whether there is a current non-default tf.distribute.Strategy.
in_cross_replica_context(...): Returns True if in a cross-replica context.
require_replica_context(...): Verifies we are in the given replica_ctx replica context.
run_standard_tensorflow_server(...): Starts a standard TensorFlow server.