Class MirroredStrategy
Inherits From: Strategy
Defined in tensorflow/python/distribute/mirrored_strategy.py
Mirrors variables to distribute across multiple devices and machines.
This strategy uses one replica per device and sync replication for its multi-GPU version.
The multi-worker version will be added in the future.
Args:
devices: a list of device strings.
cross_device_ops: optional, a descendant of CrossDeviceOps. If this is not set, NCCL will be used by default.
__init__
__init__(
devices=None,
cross_device_ops=None
)
Initialize self. See help(type(self)) for accurate signature.
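For example, a strategy mirrored across two local GPUs can be constructed as follows. This is a minimal sketch; the device strings are illustrative and must exist on the machine.

import tensorflow as tf

# Mirror computation across two local GPUs (illustrative device names).
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])

# With two mirrored devices, gradients are aggregated over two replicas.
print(strategy.num_replicas_in_sync)  # 2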
Properties
extended
tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync
Returns number of replicas over which gradients are aggregated.
Methods
tf.distribute.MirroredStrategy.__deepcopy__
__deepcopy__(memo)
tf.distribute.MirroredStrategy.experimental_finalize
experimental_finalize()
Any final actions to be done at the end of all computations.
In eager mode, it executes any finalize actions as a side effect. In graph mode, it creates the finalize ops and returns them.
For example, TPU shutdown ops.
Returns:
A list of ops to execute.
tf.distribute.MirroredStrategy.experimental_initialize
experimental_initialize()
Any initialization to be done before running any computations.
In eager mode, it executes any initialization as a side effect. In graph mode, it creates the initialization ops and returns them.
For example, TPU initialize_system ops.
Returns:
A list of ops to execute.
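As a sketch of the calling pattern (assuming TF 1.x graph mode): for MirroredStrategy these hooks may produce no ops, but initialization ops are run before the computation and finalize ops after it.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
init_ops = strategy.experimental_initialize()   # a list of ops in graph mode
# ... build the rest of the graph here ...
finalize_ops = strategy.experimental_finalize()

with tf.Session() as sess:
  sess.run(init_ops)
  # ... run training steps ...
  sess.run(finalize_ops)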
tf.distribute.MirroredStrategy.experimental_run
experimental_run(
fn,
input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator.
When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica.
Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator.
fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group.
IMPORTANT: Depending on the DistributionStrategy being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args:
fn: function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator: (Optional) input iterator from which the inputs are taken.
Returns:
Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
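A minimal sketch of the calling pattern, assuming eager execution; step_fn, the dataset, and the batch size are illustrative.

import tensorflow as tf

tf.enable_eager_execution()

strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.from_tensors([[1.]]).repeat(8).batch(
    strategy.num_replicas_in_sync)
iterator = strategy.make_dataset_iterator(dataset)
iterator.initialize()

def step_fn(inputs):
  # Each replica receives its own slice of the global batch.
  ctx = tf.distribute.get_replica_context()
  return inputs * tf.cast(ctx.replica_id_in_sync_group + 1, tf.float32)

per_replica_outputs = strategy.experimental_run(step_fn, iterator)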
tf.distribute.MirroredStrategy.make_dataset_iterator
make_dataset_iterator(dataset)
Makes an iterator for input provided via dataset.
Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers).
If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator, which provides more control to the user and does not try to divide a batch across replicas. The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker, etc.
Args:
dataset: tf.data.Dataset that will be distributed evenly across all replicas.
Returns:
A tf.distribute.InputIterator which returns inputs for each step of the computation. The user should call initialize on the returned iterator.
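A minimal sketch in graph mode; the dataset and the per-replica batch size of 8 are illustrative.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
global_batch_size = 8 * strategy.num_replicas_in_sync
dataset = tf.data.Dataset.from_tensors([1.]).repeat(100).batch(global_batch_size)
iterator = strategy.make_dataset_iterator(dataset)

with tf.Session() as sess:
  sess.run(iterator.initialize())
  # iterator.get_next() now yields a per-replica slice of each global batch.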
tf.distribute.MirroredStrategy.make_input_fn_iterator
make_input_fn_iterator(
input_fn,
replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function.
The input_fn should take a tf.distribute.InputContext object where information about input sharding can be accessed:
def input_fn(input_context):
  d = tf.data.Dataset.from_tensors([[1.]]).repeat()
  return d.shard(input_context.num_input_pipelines,
                 input_context.input_pipeline_id)

with strategy.scope():
  iterator = strategy.make_input_fn_iterator(input_fn)
  replica_results = strategy.extended.call_for_each_replica(
      replica_fn, iterator.get_next())
Args:
input_fn: A function that returns a tf.data.Dataset. This function is expected to take a tf.distribute.InputContext object.
replication_mode: an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently.
Returns:
An iterator object that can be initialized and from which the next element can be fetched.
tf.distribute.MirroredStrategy.reduce
reduce(
reduce_op,
value
)
Reduce value across replicas.
Args:
reduce_op: A tf.distribute.ReduceOp value specifying how values should be combined.
value: A "per replica" value to be combined into a single tensor.
Returns:
A Tensor.
tf.distribute.MirroredStrategy.scope
scope()
Returns a context manager selecting this Strategy as current.
Inside a with strategy.scope(): code block, this thread will use a variable creator set by strategy, and will enter its "cross-replica context".
Returns:
A context manager.
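A minimal sketch of the intended usage; the variable shown is illustrative.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  # Variables created here use the strategy's variable creator and are
  # mirrored across the strategy's devices.
  v = tf.Variable(1.0)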
tf.distribute.MirroredStrategy.update_config_proto
update_config_proto(config_proto)
Returns a copy of config_proto modified for use with this strategy.
The updated config contains settings needed to run this strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
Args:
config_proto: a tf.ConfigProto object.
Returns:
The updated copy of the config_proto.